diff -Nru a/Documentation/DMA-mapping.txt b/Documentation/DMA-mapping.txt --- a/Documentation/DMA-mapping.txt Tue Mar 12 13:58:15 2002 +++ b/Documentation/DMA-mapping.txt Tue Mar 12 13:58:15 2002 @@ -8,7 +8,7 @@ Most of the 64bit platforms have special hardware that translates bus addresses (DMA addresses) into physical addresses. This is similar to how page tables and/or a TLB translates virtual addresses to physical -addresses on a cpu. This is needed so that e.g. PCI devices can +addresses on a CPU. This is needed so that e.g. PCI devices can access with a Single Address Cycle (32bit DMA address) any page in the 64bit physical address space. Previously in Linux those 64bit platforms had to set artificial limits on the maximum RAM size in the @@ -37,7 +37,7 @@ What memory is DMA'able? The first piece of information you must know is what kernel memory can -be used with the DMA mapping facilitites. There has been an unwritten +be used with the DMA mapping facilities. There has been an unwritten set of rules regarding this, and this text is an attempt to finally write them down. @@ -106,7 +106,7 @@ 3) Ignore this device and do not initialize it. It is recommended that your driver print a kernel KERN_WARNING message -when you end up performing either #2 or #2. In this manner, if a user +when you end up performing either #2 or #3. In this manner, if a user of your driver reports that performance is bad or that the device is not even detected, you can ask them for the kernel messages to find out exactly why. @@ -146,7 +146,7 @@ If your 64-bit device is going to be an enormous consumer of DMA mappings, this can be problematic since the DMA mappings are a finite resource on many platforms. Please see the "DAC Addressing -for Address Space Hungry Devices" setion near the end of this +for Address Space Hungry Devices" section near the end of this document for how to handle this case. Finally, if your device can only drive the low 24-bits of @@ -205,7 +205,7 @@ - Consistent DMA mappings which are usually mapped at driver initialization, unmapped at the end and for which the hardware should - guarantee that the device and the cpu can access the data + guarantee that the device and the CPU can access the data in parallel and will see updates made by each other without any explicit software flushing. @@ -222,12 +222,12 @@ - Device firmware microcode executed out of main memory. - The invariant these examples all require is that any cpu store + The invariant these examples all require is that any CPU store to memory is immediately visible to the device, and vice versa. Consistent mappings guarantee this. IMPORTANT: Consistent DMA memory does not preclude the usage of - proper memory barriers. The cpu may reorder stores to + proper memory barriers. The CPU may reorder stores to consistent memory just as it may normal memory. Example: if it is important for the device to see the first word of a descriptor updated before the second, you must do @@ -284,7 +284,7 @@ the pci_pool interface, described below. The consistent DMA mapping interfaces, for non-NULL dev, will always -return a DMA address which is SAC (Single Address Cycle) addressible. +return a DMA address which is SAC (Single Address Cycle) addressable. Even if the device indicates (via PCI dma mask) that it may address the upper 32-bits and thus perform DAC cycles, consistent allocation will still only return 32-bit PCI addresses for DMA. 
This is true @@ -622,7 +622,7 @@ Note that for streaming type mappings you must either use these interfaces, or the dynamic mapping interfaces above. You may not mix usage of both for the same device. Such an act is illegal and is -guarenteed to put a banana in your tailpipe. +guaranteed to put a banana in your tailpipe. However, consistent mappings may in fact be used in conjunction with these interfaces. Remember that, as defined, consistent mappings are @@ -637,7 +637,7 @@ use the following interfaces if this routine fails. Next, DMA addresses using this API are kept track of using the -dma64_addr_t type. It is guarenteed to be big enough to hold any +dma64_addr_t type. It is guaranteed to be big enough to hold any DAC address the platform layer will give to you from the following routines. If you have consistent mappings as well, you still use plain dma_addr_t to keep track of those. @@ -745,7 +745,7 @@ PCI_DMA_FROMDEVICE); It really should be self-explanatory. We treat the ADDR and LEN -seperately, because it is possible for an implementation to only +separately, because it is possible for an implementation to only need the address in order to perform the unmap operation. Platform Issues diff -Nru a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking --- a/Documentation/filesystems/Locking Tue Mar 12 13:58:15 2002 +++ b/Documentation/filesystems/Locking Tue Mar 12 13:58:15 2002 @@ -121,10 +121,16 @@ --------------------------- file_system_type --------------------------- prototypes: - struct super_block *(*read_super) (struct super_block *, void *, int); + struct super_block *(*get_sb) (struct file_system_type *, int, char *, void *); + void (*kill_sb) (struct super_block *); locking rules: -may block BKL ->s_lock mount_sem -yes yes yes maybe + may block BKL +get_sb yes yes +kill_sb yes yes + +->get_sb() returns error or a locked superblock (exclusive on ->s_umount). +->kill_sb() takes a locked superblock, does all shutdown work on it, +unlocks and drops the reference. --------------------------- address_space_operations -------------------------- prototypes: diff -Nru a/Documentation/filesystems/porting b/Documentation/filesystems/porting --- a/Documentation/filesystems/porting Tue Mar 12 13:58:15 2002 +++ b/Documentation/filesystems/porting Tue Mar 12 13:58:15 2002 @@ -99,3 +99,20 @@ ->link() callers hold ->i_sem on the object we are linking to. Some of your problems might be over... + +--- +[mandatory] + +new file_system_type method - kill_sb(superblock). If you are converting +an existing filesystem, set it according to ->fs_flags: + FS_REQUIRES_DEV - kill_block_super + FS_LITTER - kill_litter_super + neither - kill_anon_super +FS_LITTER is gone - just remove it from fs_flags. + +--- +[mandatory] + + FS_SINGLE is gone (actually, that had happened back when ->get_sb() +went in - and hadn't been documented ;-/). Just remove it from fs_flags +(and see ->get_sb() entry for other actions). diff -Nru a/Documentation/ia64/IRQ-redir.txt b/Documentation/ia64/IRQ-redir.txt --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/Documentation/ia64/IRQ-redir.txt Tue Mar 12 13:58:16 2002 @@ -0,0 +1,69 @@ +IRQ affinity on IA64 platforms +------------------------------ + 07.01.2002, Erich Focht + + +By writing to /proc/irq/IRQ#/smp_affinity the interrupt routing can be +controlled. The behavior on IA64 platforms is slightly different from +that described in Documentation/IRQ-affinity.txt for i386 systems. 
+ +Because of the usage of SAPIC mode and physical destination mode the +IRQ target is one particular CPU and cannot be a mask of several +CPUs. Only the first non-zero bit is taken into account. + + +Usage examples: + +The target CPU has to be specified as a hexadecimal CPU mask. The +first non-zero bit is the selected CPU. This format has been kept for +compatibility reasons with i386. + +Set the delivery mode of interrupt 41 to fixed and route the +interrupts to CPU #3 (logical CPU number) (2^3=0x08): + echo "8" >/proc/irq/41/smp_affinity + +Set the default route for IRQ number 41 to CPU 6 in lowest priority +delivery mode (redirectable): + echo "r 40" >/proc/irq/41/smp_affinity + +The output of the command + cat /proc/irq/IRQ#/smp_affinity +gives the target CPU mask for the specified interrupt vector. If the CPU +mask is preceeded by the character "r", the interrupt is redirectable +(i.e. lowest priority mode routing is used), otherwise its route is +fixed. + + + +Initialization and default behavior: + +If the platform features IRQ redirection (info provided by SAL) all +IO-SAPIC interrupts are initialized with CPU#0 as their default target +and the routing is the so called "lowest priority mode" (actually +fixed SAPIC mode with hint). The XTP chipset registers are used as hints +for the IRQ routing. Currently in Linux XTP registers can have three +values: + - minimal for an idle task, + - normal if any other task runs, + - maximal if the CPU is going to be switched off. +The IRQ is routed to the CPU with lowest XTP register value, the +search begins at the default CPU. Therefore most of the interrupts +will be handled by CPU #0. + +If the platform doesn't feature interrupt redirection IOSAPIC fixed +routing is used. The target CPUs are distributed in a round robin +manner. IRQs will be routed only to the selected target CPUs. Check +with + cat /proc/interrupts + + + +Comments: + +On large (multi-node) systems it is recommended to route the IRQs to +the node to which the corresponding device is connected. +For systems like the NEC AzusA we get IRQ node-affinity for free. This +is because usually the chipsets on each node redirect the interrupts +only to their own CPUs (as they cannot see the XTP registers on the +other nodes). + diff -Nru a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt --- a/Documentation/kernel-parameters.txt Tue Mar 12 13:58:15 2002 +++ b/Documentation/kernel-parameters.txt Tue Mar 12 13:58:15 2002 @@ -467,6 +467,14 @@ whatever the firmware may have done. + usepirqmask [IA-32] Honor the possible IRQ mask + stored in the BIOS $PIR table. This is + needed on some systems with broken + BIOSes, notably some HP Pavilion N5400 + and Omnibook XE3 notebooks. This will + have no effect if ACPI IRQ routing is + enabled. + pd. [PARIDE] pf. [PARIDE] diff -Nru a/Documentation/video4linux/API.html b/Documentation/video4linux/API.html --- a/Documentation/video4linux/API.html Tue Mar 12 13:58:15 2002 +++ b/Documentation/video4linux/API.html Tue Mar 12 13:58:15 2002 @@ -105,7 +105,7 @@ heightThe height of the image capture. chromakeyA host order RGB32 value for the chroma key. flagsAdditional capture flags. -clipsA list of clipping rectangles. (Set only) +clipsA list of clipping rectangles. (Set only) clipcountThe number of clipping rectangles. (Set only)

@@ -120,6 +120,7 @@

Merely setting the window does not enable capturing. Overlay capturing +(i.e. PCI-PCI transfer to the frame buffer of the video card) is activated by passing the VIDIOCCAPTURE ioctl a value of 1, and disabled by passing it a value of 0.

@@ -310,9 +311,10 @@

Reading Images

-Each call to the read syscall returns the next available image from -the device. It is up to the caller to set the format and then to pass a -suitable size buffer and length to the function. Not all devices will support +Each call to the read syscall returns the next available image +from the device. It is up to the caller to set format and size (using +the VIDIOCSPICT and VIDIOCSWIN ioctls) and then to pass a suitable +size buffer and length to the function. Not all devices will support read operations.
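As an illustrative aside (an editor's sketch, not part of the patch), a minimal read()-based capture using the V4L ioctls named above could look like the code below; the device node /dev/video0, the 320x240 size and the RGB565 palette are placeholder assumptions, and error handling is kept to a bare minimum.

/*
 * Illustrative sketch only: read()-based capture with the first-generation
 * V4L API.  /dev/video0, 320x240 and RGB565 are placeholder assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev.h>

int main(void)
{
    struct video_picture pict;
    struct video_window win;
    unsigned char *buf;
    size_t size;
    int fd;

    fd = open("/dev/video0", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* 1. Set the image format (VIDIOCSPICT). */
    if (ioctl(fd, VIDIOCGPICT, &pict) < 0) {
        perror("VIDIOCGPICT");
        return 1;
    }
    pict.palette = VIDEO_PALETTE_RGB565;
    pict.depth = 16;
    if (ioctl(fd, VIDIOCSPICT, &pict) < 0)
        perror("VIDIOCSPICT");

    /* 2. Set the capture size (VIDIOCSWIN). */
    memset(&win, 0, sizeof(win));
    win.width = 320;
    win.height = 240;
    if (ioctl(fd, VIDIOCSWIN, &win) < 0)
        perror("VIDIOCSWIN");

    /* 3. Pass a suitably sized buffer to read(); one read() returns one frame. */
    size = 320 * 240 * 2;               /* 16 bits per pixel */
    buf = malloc(size);
    if (read(fd, buf, size) < 0)
        perror("read");

    free(buf);
    close(fd);
    return 0;
}

The buffer handed to read() must be at least width * height * bytes-per-pixel for the chosen palette, since each call returns one complete frame.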

A second way to handle image capture is via the mmap interface if supported. @@ -329,16 +331,39 @@ offsetsThe offset of each frame

-Once the mmap has been made the VIDIOCMCAPTURE ioctl sets the image size -you wish to use (which should match or be below the initial query size). -Having done so it will begin capturing to the memory mapped buffer. Whenever -a buffer is "used" by the program it should called VIDIOCSYNC to free this -frame up and continue. to add:VIDIOCSYNC takes the frame number -you are freeing as its argument. When the buffer is unmapped or all the -buffers are full capture ceases. While capturing to memory the driver will -make a "best effort" attempt to capture to screen as well if requested. This -normally means all frames that "miss" memory mapped capture will go to the -display. +Once the mmap has been made, the VIDIOCMCAPTURE ioctl starts the +capture to a frame using the format and image size specified in the +video_mmap (which should match or be below the initial query size). +When the VIDIOCMCAPTURE ioctl returns, the frame is not +captured yet; the driver has just instructed the hardware to start the +capture. The application has to use the VIDIOCSYNC ioctl to wait +until the capture of a frame is finished. VIDIOCSYNC takes the frame +number you want to wait for as its argument. +

+It is allowed to call VIDIOCMCAPTURE multiple times (with different +frame numbers in video_mmap->frame of course) and thus have multiple +outstanding capture requests. A simple way to do double-buffering +using this feature looks like this: +

+/* setup everything */
+VIDIOCMCAPTURE(0)
+while (whatever) {
+   VIDIOCMCAPTURE(1)
+   VIDIOCSYNC(0)
+   /* process frame 0 while the hardware captures frame 1 */
+   VIDIOCMCAPTURE(0)
+   VIDIOCSYNC(1)
+   /* process frame 1 while the hardware captures frame 0 */
+}
+
+Note that you are not limited to only two frames. The API +allows up to 32 frames; the VIDIOCGMBUF ioctl returns the number of +frames the driver granted. Thus it is possible to build deeper queues +to avoid losing frames on load peaks. +

+While capturing to memory the driver will make a "best effort" attempt +to capture to screen as well if requested. This normally means all +frames that "miss" memory mapped capture will go to the display.
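A fleshed-out version of the double-buffering loop sketched above might look like the following (again an editor's illustration, not part of the patch); it assumes /dev/video0, a 320x240 RGB565 capture, and a driver that grants at least two frames.

/*
 * Illustrative sketch only: mmap()-based double buffering with
 * VIDIOCMCAPTURE/VIDIOCSYNC.  /dev/video0, 320x240 and RGB565 are
 * placeholder assumptions; the driver is assumed to grant >= 2 frames.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev.h>

int main(void)
{
    struct video_mbuf mbuf;
    struct video_mmap vmap;
    unsigned char *base;
    int i, frame = 0;
    int fd;

    fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* How many frames does the driver grant, and how large is the buffer? */
    if (ioctl(fd, VIDIOCGMBUF, &mbuf) < 0) {
        perror("VIDIOCGMBUF");
        return 1;
    }

    base = mmap(NULL, mbuf.size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    vmap.width = 320;
    vmap.height = 240;
    vmap.format = VIDEO_PALETTE_RGB565;

    /* Prime the first capture request. */
    vmap.frame = frame;
    ioctl(fd, VIDIOCMCAPTURE, &vmap);

    for (i = 0; i < 100; i++) {
        int next = (frame + 1) % mbuf.frames;

        /* Queue the next frame, then wait for the current one to finish. */
        vmap.frame = next;
        ioctl(fd, VIDIOCMCAPTURE, &vmap);
        if (ioctl(fd, VIDIOCSYNC, &frame) < 0)
            perror("VIDIOCSYNC");

        /* The captured image is at base + mbuf.offsets[frame]; process it
         * here while the hardware is already filling frame 'next'. */

        frame = next;
    }

    munmap(base, mbuf.size);
    close(fd);
    return 0;
}

Queuing the next frame before waiting on the current one is what keeps the hardware busy while the application processes data.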

A final ioctl exists to allow a device to obtain related devices if a driver has multiple components (for example video0 may not be associated diff -Nru a/MAINTAINERS b/MAINTAINERS --- a/MAINTAINERS Tue Mar 12 13:58:14 2002 +++ b/MAINTAINERS Tue Mar 12 13:58:14 2002 @@ -709,14 +709,12 @@ S: Supported IDE DRIVER [GENERAL] -P: Andre Hedrick -M: andre@linux-ide.org -M: andre@linuxdiskcert.org +P: Martin Dalecki +M: martin@dalecki.de +I: pl_PL.ISO8859-2, de_DE.ISO8859-15, (en_US.ISO8859-1) L: linux-kernel@vger.kernel.org -W: http://www.kernel.org/pub/linux/kernel/people/hedrick/ -W: http://www.linux-ide.org/ -W: http://www.linuxdiskcert.org/ -S: Maintained +W: http://www.dalecki.de +S: Developement IDE/ATAPI CDROM DRIVER P: Jens Axboe diff -Nru a/Makefile b/Makefile --- a/Makefile Tue Mar 12 13:58:14 2002 +++ b/Makefile Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ VERSION = 2 PATCHLEVEL = 5 -SUBLEVEL = 6 -EXTRAVERSION = +SUBLEVEL = 7 +EXTRAVERSION =-pre1 KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION) @@ -246,14 +246,14 @@ include arch/$(ARCH)/Makefile -export CPPFLAGS CFLAGS AFLAGS +export CPPFLAGS CFLAGS CFLAGS_KERNEL AFLAGS AFLAGS_KERNEL export NETWORKS DRIVERS LIBS HEAD LDFLAGS LINKFLAGS MAKEBOOT ASFLAGS .S.s: - $(CPP) $(AFLAGS) -traditional -o $*.s $< + $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -traditional -o $*.s $< .S.o: - $(CC) $(AFLAGS) -traditional -c -o $*.o $< + $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -traditional -c -o $*.o $< Version: dummy @rm -f include/linux/compile.h diff -Nru a/arch/alpha/defconfig b/arch/alpha/defconfig --- a/arch/alpha/defconfig Tue Mar 12 13:58:15 2002 +++ b/arch/alpha/defconfig Tue Mar 12 13:58:15 2002 @@ -255,7 +255,6 @@ # CONFIG_IDEDMA_PCI_WIP is not set # CONFIG_BLK_DEV_IDEDMA_TIMEOUT is not set # CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_AEC62XX is not set # CONFIG_AEC62XX_TUNING is not set CONFIG_BLK_DEV_ALI15X3=y diff -Nru a/arch/arm/def-configs/badge4 b/arch/arm/def-configs/badge4 --- a/arch/arm/def-configs/badge4 Tue Mar 12 13:58:16 2002 +++ b/arch/arm/def-configs/badge4 Tue Mar 12 13:58:16 2002 @@ -15,7 +15,14 @@ # Code maturity level options # CONFIG_EXPERIMENTAL=y -# CONFIG_OBSOLETE is not set + +# +# General setup +# +CONFIG_NET=y +# CONFIG_SYSVIPC is not set +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y # # Loadable module support @@ -27,6 +34,7 @@ # # System Type # +# CONFIG_ARCH_ADIFCC is not set # CONFIG_ARCH_ANAKIN is not set # CONFIG_ARCH_ARCA5K is not set # CONFIG_ARCH_CLPS7500 is not set @@ -36,6 +44,7 @@ # CONFIG_ARCH_CAMELOT is not set # CONFIG_ARCH_FOOTBRIDGE is not set # CONFIG_ARCH_INTEGRATOR is not set +# CONFIG_ARCH_IOP310 is not set # CONFIG_ARCH_L7200 is not set # CONFIG_ARCH_RPC is not set CONFIG_ARCH_SA1100=y @@ -48,15 +57,23 @@ # # Archimedes/A5000 Implementations (select only ONE) # +# CONFIG_ARCH_ARC is not set +# CONFIG_ARCH_A5K is not set # # Footbridge Implementations # +# CONFIG_ARCH_CATS is not set +# CONFIG_ARCH_PERSONAL_SERVER is not set +# CONFIG_ARCH_EBSA285_ADDIN is not set +# CONFIG_ARCH_EBSA285_HOST is not set +# CONFIG_ARCH_NETWINDER is not set # # SA11x0 Implementations # # CONFIG_SA1100_ASSABET is not set +# CONFIG_ASSABET_NEPONSET is not set # CONFIG_SA1100_ADSBITSY is not set # CONFIG_SA1100_BRUTUS is not set # CONFIG_SA1100_CERF is not set @@ -78,6 +95,7 @@ # CONFIG_SA1100_OMNIMETER is not set # CONFIG_SA1100_PANGOLIN is not set # CONFIG_SA1100_PLEB is not set +# CONFIG_SA1100_PT_SYSTEM3 is not set # CONFIG_SA1100_SHANNON is not set # 
CONFIG_SA1100_SHERMAN is not set # CONFIG_SA1100_SIMPAD is not set @@ -85,15 +103,23 @@ # CONFIG_SA1100_VICTOR is not set # CONFIG_SA1100_XP860 is not set # CONFIG_SA1100_YOPY is not set +# CONFIG_SA1100_STORK is not set CONFIG_SA1111=y CONFIG_FORCE_MAX_ZONEORDER=9 -CONFIG_SA1100_USB=m -CONFIG_SA1100_USB_NETLINK=m -CONFIG_SA1100_USB_CHAR=m +# CONFIG_SA1100_USB is not set +# CONFIG_SA1100_USB_NETLINK is not set +# CONFIG_SA1100_USB_CHAR is not set +# CONFIG_H3600_SLEEVE is not set # # CLPS711X/EP721X Implementations # +# CONFIG_ARCH_AUTCPU12 is not set +# CONFIG_ARCH_CDB89712 is not set +# CONFIG_ARCH_CLEP7312 is not set +# CONFIG_ARCH_EDB7211 is not set +# CONFIG_ARCH_P720T is not set +# CONFIG_ARCH_FORTUNET is not set # CONFIG_ARCH_EP7211 is not set # CONFIG_ARCH_EP7212 is not set # CONFIG_ARCH_ACORN is not set @@ -117,6 +143,7 @@ # CONFIG_CPU_ARM1020 is not set # CONFIG_CPU_SA110 is not set CONFIG_CPU_SA1100=y +# CONFIG_XSCALE_PMU is not set # CONFIG_ARM_THUMB is not set CONFIG_DISCONTIGMEM=y @@ -126,6 +153,7 @@ # CONFIG_PCI is not set CONFIG_ISA=y # CONFIG_ISA_DMA is not set +# CONFIG_FIQ is not set CONFIG_CPU_FREQ=y CONFIG_HOTPLUG=y @@ -134,13 +162,11 @@ # CONFIG_PCMCIA=y CONFIG_PCMCIA_PROBE=y +# CONFIG_I82092 is not set # CONFIG_I82365 is not set # CONFIG_TCIC is not set +# CONFIG_PCMCIA_CLPS6700 is not set CONFIG_PCMCIA_SA1100=y -CONFIG_NET=y -# CONFIG_SYSVIPC is not set -# CONFIG_BSD_PROCESS_ACCT is not set -CONFIG_SYSCTL=y # # At least one math emulation must be selected @@ -153,6 +179,8 @@ CONFIG_BINFMT_ELF=y CONFIG_BINFMT_MISC=m # CONFIG_PM is not set +# CONFIG_PREEMPT is not set +# CONFIG_APM is not set CONFIG_ARTHUR=m CONFIG_CMDLINE="init=/linuxrc root=/dev/mtdblock3" # CONFIG_LEDS is not set @@ -161,14 +189,23 @@ # # Parallel port support # -# CONFIG_PARPORT is not set +CONFIG_PARPORT=m +# CONFIG_PARPORT_PC is not set +# CONFIG_PARPORT_ARC is not set +# CONFIG_PARPORT_AMIGA is not set +# CONFIG_PARPORT_MFC3 is not set +# CONFIG_PARPORT_ATARI is not set +# CONFIG_PARPORT_GSC is not set +# CONFIG_PARPORT_SUNBPP is not set +# CONFIG_PARPORT_OTHER is not set +# CONFIG_PARPORT_1284 is not set # # Memory Technology Devices (MTD) # CONFIG_MTD=y CONFIG_MTD_DEBUG=y -CONFIG_MTD_DEBUG_VERBOSE=1 +CONFIG_MTD_DEBUG_VERBOSE=0 CONFIG_MTD_PARTITIONS=y # CONFIG_MTD_REDBOOT_PARTS is not set # CONFIG_MTD_BOOTLDR_PARTS is not set @@ -205,6 +242,9 @@ # CONFIG_MTD_ROM is not set # CONFIG_MTD_ABSENT is not set # CONFIG_MTD_OBSOLETE_CHIPS is not set +# CONFIG_MTD_AMDSTD is not set +# CONFIG_MTD_SHARP is not set +# CONFIG_MTD_JEDEC is not set # # Mapping drivers for chip access @@ -212,17 +252,21 @@ # CONFIG_MTD_PHYSMAP is not set # CONFIG_MTD_NORA is not set # CONFIG_MTD_ARM_INTEGRATOR is not set +# CONFIG_MTD_CDB89712 is not set CONFIG_MTD_SA1100=y +# CONFIG_MTD_2PARTS_IPAQ is not set +# CONFIG_MTD_DC21285 is not set # CONFIG_MTD_IQ80310 is not set +# CONFIG_MTD_EPXA10DB is not set +# CONFIG_MTD_PCI is not set # # Self-contained MTD device drivers # +# CONFIG_MTD_PMC551 is not set # CONFIG_MTD_SLRAM is not set -CONFIG_MTD_MTDRAM=m -CONFIG_MTDRAM_TOTAL_SIZE=4096 -CONFIG_MTDRAM_ERASE_SIZE=128 -CONFIG_MTD_BLKMTD=m +# CONFIG_MTD_MTDRAM is not set +# CONFIG_MTD_BLKMTD is not set # # Disk-On-Chip Device Drivers @@ -241,22 +285,35 @@ # Plug and Play configuration # # CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set # # Block devices # # CONFIG_BLK_DEV_FD is not set # CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# 
CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_CISS_SCSI_TAPE is not set +# CONFIG_BLK_DEV_DAC960 is not set CONFIG_BLK_DEV_LOOP=y CONFIG_BLK_DEV_NBD=m -CONFIG_BLK_DEV_RAM=y -CONFIG_BLK_DEV_RAM_SIZE=4096 +# CONFIG_BLK_DEV_RAM is not set # CONFIG_BLK_DEV_INITRD is not set # # Multi-device support (RAID and LVM) # # CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set # # Networking options @@ -325,9 +382,16 @@ # # Ethernet (1000 Mbit) # -# CONFIG_ACENIC_OMIT_TIGON_I is not set +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set # CONFIG_FDDI is not set # CONFIG_HIPPI is not set +# CONFIG_PLIP is not set # CONFIG_PPP is not set # CONFIG_SLIP is not set @@ -336,15 +400,23 @@ # CONFIG_NET_RADIO=y # CONFIG_STRIP is not set -# CONFIG_WAVELAN is not set # CONFIG_ARLAN is not set # CONFIG_AIRONET4500 is not set +# CONFIG_AIRONET4500_NONCS is not set +# CONFIG_AIRONET4500_PROC is not set + +# +# Wireless ISA/PCI cards support +# +# CONFIG_WAVELAN is not set # CONFIG_AIRO is not set CONFIG_HERMES=y # -# Wireless Pcmcia cards support +# Wireless Pcmcia/Cardbus cards support # +CONFIG_PCMCIA_NETWAVE=m +CONFIG_PCMCIA_WAVELAN=m CONFIG_PCMCIA_HERMES=y CONFIG_AIRO_CS=m CONFIG_NET_WIRELESS=y @@ -354,6 +426,7 @@ # # CONFIG_TR is not set # CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set # CONFIG_SHAPER is not set # @@ -369,14 +442,15 @@ CONFIG_PCMCIA_3C574=m CONFIG_PCMCIA_FMVJ18X=m CONFIG_PCMCIA_PCNET=y -CONFIG_PCMCIA_AXNET=m CONFIG_PCMCIA_NMCLAN=m CONFIG_PCMCIA_SMC91C92=m CONFIG_PCMCIA_XIRC2PS=m +CONFIG_PCMCIA_AXNET=m +# CONFIG_ARCNET_COM20020_CS is not set +# CONFIG_PCMCIA_IBMTR is not set CONFIG_NET_PCMCIA_RADIO=y CONFIG_PCMCIA_RAYCS=m -CONFIG_PCMCIA_NETWAVE=m -CONFIG_PCMCIA_WAVELAN=m +# CONFIG_AIRONET4500_CS is not set # # Amateur Radio support @@ -392,9 +466,16 @@ # IrDA protocols # CONFIG_IRLAN=y +# CONFIG_IRNET is not set CONFIG_IRCOMM=y CONFIG_IRDA_ULTRA=y -# CONFIG_IRDA_OPTIONS is not set + +# +# IrDA options +# +# CONFIG_IRDA_CACHE_LAST_LSAP is not set +# CONFIG_IRDA_FAST_RR is not set +# CONFIG_IRDA_DEBUG is not set # # Infrared-port device drivers @@ -440,23 +521,36 @@ # CONFIG_BLK_DEV_HD is not set CONFIG_BLK_DEV_IDEDISK=m # CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_IDEDISK_STROKE is not set # CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set # CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set # CONFIG_BLK_DEV_IDECS is not set CONFIG_BLK_DEV_IDECD=m -CONFIG_BLK_DEV_IDETAPE=m +# CONFIG_BLK_DEV_IDETAPE is not set CONFIG_BLK_DEV_IDEFLOPPY=m CONFIG_BLK_DEV_IDESCSI=m +# CONFIG_IDE_TASK_IOCTL is not set # # IDE chipset support/bugfixes # # CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set # CONFIG_IDE_CHIPSETS is not set # CONFIG_IDEDMA_AUTO is not set # CONFIG_DMA_NONPCI is not set # CONFIG_BLK_DEV_IDE_MODES is not set # CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not 
set # # SCSI support @@ -469,7 +563,7 @@ CONFIG_BLK_DEV_SD=y CONFIG_SD_EXTRA_DEVS=40 CONFIG_CHR_DEV_ST=m -CONFIG_CHR_DEV_OSST=m +# CONFIG_CHR_DEV_OSST is not set CONFIG_BLK_DEV_SR=m # CONFIG_BLK_DEV_SR_VENDOR is not set CONFIG_SR_EXTRA_DEVS=2 @@ -478,7 +572,6 @@ # # Some SCSI devices (e.g. CD jukebox) support multiple LUNs # -# CONFIG_SCSI_DEBUG_QUEUES is not set # CONFIG_SCSI_MULTI_LUN is not set # CONFIG_SCSI_CONSTANTS is not set # CONFIG_SCSI_LOGGING is not set @@ -496,8 +589,10 @@ # CONFIG_SCSI_DPT_I2O is not set # CONFIG_SCSI_ADVANSYS is not set # CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set # CONFIG_SCSI_MEGARAID is not set # CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_DMX3191D is not set # CONFIG_SCSI_DTC3280 is not set # CONFIG_SCSI_EATA is not set # CONFIG_SCSI_EATA_DMA is not set @@ -505,10 +600,12 @@ # CONFIG_SCSI_FUTURE_DOMAIN is not set # CONFIG_SCSI_GDTH is not set # CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_PPA is not set +# CONFIG_SCSI_IMM is not set # CONFIG_SCSI_NCR53C406A is not set -# CONFIG_SCSI_NCR53C7xx_sync is not set -# CONFIG_SCSI_NCR53C7xx_FAST is not set -# CONFIG_SCSI_NCR53C7xx_DISCONNECT is not set +# CONFIG_SCSI_NCR53C7xx is not set # CONFIG_SCSI_PAS16 is not set # CONFIG_SCSI_PCI2000 is not set # CONFIG_SCSI_PCI2220I is not set @@ -528,11 +625,11 @@ # # I2O device support # -CONFIG_I2O=m -CONFIG_I2O_BLOCK=m -CONFIG_I2O_LAN=m -CONFIG_I2O_SCSI=m -CONFIG_I2O_PROC=m +# CONFIG_I2O is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set # # ISDN subsystem @@ -540,40 +637,75 @@ # CONFIG_ISDN is not set # -# Input core support +# Input device support # -CONFIG_INPUT=m -CONFIG_INPUT_KEYBDEV=m +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set # CONFIG_INPUT_MOUSEDEV is not set # CONFIG_INPUT_JOYDEV is not set # CONFIG_INPUT_EVDEV is not set +# CONFIG_GAMEPORT is not set +CONFIG_SOUND_GAMEPORT=y +# CONFIG_GAMEPORT_NS558 is not set +# CONFIG_GAMEPORT_L4 is not set +# CONFIG_INPUT_EMU10K1 is not set +# CONFIG_GAMEPORT_PCIGAME is not set +# CONFIG_GAMEPORT_FM801 is not set +# CONFIG_GAMEPORT_CS461x is not set +# CONFIG_SERIO is not set +# CONFIG_SERIO_SERPORT is not set # # Character devices # # CONFIG_VT is not set # CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not set # CONFIG_SERIAL_NONSTANDARD is not set # # Serial drivers # +# CONFIG_SERIAL_ANAKIN is not set +# CONFIG_SERIAL_ANAKIN_CONSOLE is not set +# CONFIG_SERIAL_AMBA is not set +# CONFIG_SERIAL_AMBA_CONSOLE is not set +# CONFIG_SERIAL_CLPS711X is not set +# CONFIG_SERIAL_CLPS711X_CONSOLE is not set +# CONFIG_SERIAL_21285 is not set +# CONFIG_SERIAL_21285_OLD is not set +# CONFIG_SERIAL_21285_CONSOLE is not set +# CONFIG_SERIAL_UART00 is not set +# CONFIG_SERIAL_UART00_CONSOLE is not set CONFIG_SERIAL_SA1100=y CONFIG_SERIAL_SA1100_CONSOLE=y CONFIG_SA1100_DEFAULT_BAUDRATE=115200 # CONFIG_SERIAL_8250 is not set +# CONFIG_SERIAL_8250_CONSOLE is not set +# CONFIG_ATOMWIDE_SERIAL is not set +# CONFIG_DUALSP_SERIAL is not set +# CONFIG_SERIAL_8250_EXTENDED is not set +# CONFIG_SERIAL_8250_MANY_PORTS is not set +# CONFIG_SERIAL_8250_SHARE_IRQ is not set +# CONFIG_SERIAL_8250_DETECT_IRQ is not set +# CONFIG_SERIAL_8250_MULTIPORT is not set +# CONFIG_SERIAL_8250_RSA is not set CONFIG_SERIAL_CORE=y CONFIG_SERIAL_CORE_CONSOLE=y CONFIG_UNIX98_PTYS=y CONFIG_UNIX98_PTY_COUNT=256 +# CONFIG_PRINTER is not set +# CONFIG_PPDEV is not 
set # # I2C support # CONFIG_I2C=m CONFIG_I2C_ALGOBIT=m +# CONFIG_I2C_PHILIPSPAR is not set CONFIG_I2C_ELV=m CONFIG_I2C_VELLEMAN=m +# CONFIG_I2C_BIT_SA1100_GPIO is not set CONFIG_I2C_ALGOPCF=m CONFIG_I2C_ELEKTOR=m CONFIG_I2C_CHARDEV=m @@ -582,11 +714,14 @@ # # L3 serial bus support # -CONFIG_L3=m +CONFIG_L3=y +# CONFIG_L3_ALGOBIT is not set +# CONFIG_L3_BIT_SA1100_GPIO is not set # # Other L3 adapters # +CONFIG_L3_SA1111=y # CONFIG_BIT_SA1100_GPIO is not set # @@ -594,17 +729,6 @@ # # CONFIG_BUSMOUSE is not set # CONFIG_MOUSE is not set - -# -# Joysticks -# -# CONFIG_INPUT_GAMEPORT is not set -# CONFIG_INPUT_SERIO is not set - -# -# Joysticks -# -# CONFIG_INPUT_IFORCE_USB is not set # CONFIG_QIC02_TAPE is not set # @@ -618,6 +742,8 @@ # CONFIG_PCWATCHDOG is not set # CONFIG_ACQUIRE_WDT is not set # CONFIG_ADVANTECH_WDT is not set +# CONFIG_21285_WATCHDOG is not set +# CONFIG_977_WATCHDOG is not set CONFIG_SA1100_WATCHDOG=m # CONFIG_EUROTECH_WDT is not set # CONFIG_IB700_WDT is not set @@ -626,6 +752,7 @@ # CONFIG_60XX_WDT is not set # CONFIG_W83877F_WDT is not set # CONFIG_MACHZ_WDT is not set +# CONFIG_INTEL_RNG is not set # CONFIG_NVRAM is not set CONFIG_RTC=m CONFIG_SA1100_RTC=m @@ -647,7 +774,51 @@ # # Multimedia devices # -# CONFIG_VIDEO_DEV is not set +CONFIG_VIDEO_DEV=y + +# +# Video For Linux +# +CONFIG_VIDEO_PROC_FS=y +# CONFIG_I2C_PARPORT is not set + +# +# Video Adapters +# +# CONFIG_VIDEO_BT848 is not set +# CONFIG_VIDEO_PMS is not set +# CONFIG_VIDEO_BWQCAM is not set +# CONFIG_VIDEO_CQCAM is not set +# CONFIG_VIDEO_CPIA is not set +# CONFIG_VIDEO_SAA5249 is not set +# CONFIG_TUNER_3036 is not set +# CONFIG_VIDEO_STRADIS is not set +# CONFIG_VIDEO_ZORAN is not set +# CONFIG_VIDEO_ZORAN_BUZ is not set +# CONFIG_VIDEO_ZORAN_DC10 is not set +# CONFIG_VIDEO_ZORAN_LML33 is not set +# CONFIG_VIDEO_ZR36120 is not set +# CONFIG_VIDEO_MEYE is not set +# CONFIG_VIDEO_CYBERPRO is not set + +# +# Radio Adapters +# +# CONFIG_RADIO_CADET is not set +# CONFIG_RADIO_RTRACK is not set +# CONFIG_RADIO_RTRACK2 is not set +# CONFIG_RADIO_AZTECH is not set +# CONFIG_RADIO_GEMTEK is not set +# CONFIG_RADIO_GEMTEK_PCI is not set +# CONFIG_RADIO_MAXIRADIO is not set +# CONFIG_RADIO_MAESTRO is not set +# CONFIG_RADIO_MIROPCM20 is not set +# CONFIG_RADIO_MIROPCM20_RDS is not set +# CONFIG_RADIO_SF16FMI is not set +# CONFIG_RADIO_TERRATEC is not set +# CONFIG_RADIO_TRUST is not set +# CONFIG_RADIO_TYPHOON is not set +# CONFIG_RADIO_ZOLTRIX is not set # # File systems @@ -656,15 +827,18 @@ # CONFIG_AUTOFS_FS is not set # CONFIG_AUTOFS4_FS is not set # CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_REISERFS_PROC_INFO is not set # CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set # CONFIG_AFFS_FS is not set # CONFIG_HFS_FS is not set # CONFIG_BFS_FS is not set CONFIG_EXT3_FS=m CONFIG_JBD=m # CONFIG_JBD_DEBUG is not set -CONFIG_FAT_FS=m -CONFIG_MSDOS_FS=m +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y # CONFIG_UMSDOS_FS is not set CONFIG_VFAT_FS=m # CONFIG_EFS_FS is not set @@ -672,12 +846,15 @@ CONFIG_JFFS2_FS=y CONFIG_JFFS2_FS_DEBUG=0 CONFIG_CRAMFS=m -# CONFIG_TMPFS is not set -# CONFIG_RAMFS is not set +CONFIG_TMPFS=y +CONFIG_RAMFS=y # CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_ZISOFS is not set CONFIG_MINIX_FS=m # CONFIG_VXFS_FS is not set # CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set # CONFIG_HPFS_FS is not set CONFIG_PROC_FS=y CONFIG_DEVFS_FS=y @@ -685,11 +862,14 @@ # CONFIG_DEVFS_DEBUG is not set # CONFIG_DEVPTS_FS is not set # 
CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set # CONFIG_ROMFS_FS is not set CONFIG_EXT2_FS=m # CONFIG_SYSV_FS is not set # CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set # CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set # # Network File Systems @@ -698,21 +878,43 @@ # CONFIG_INTERMEZZO_FS is not set CONFIG_NFS_FS=m CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set # CONFIG_NFSD is not set +# CONFIG_NFSD_V3 is not set CONFIG_SUNRPC=m CONFIG_LOCKD=m CONFIG_LOCKD_V4=y CONFIG_SMB_FS=m # CONFIG_SMB_NLS_DEFAULT is not set # CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set # CONFIG_ZISOFS_FS is not set -CONFIG_ZLIB_FS_INFLATE=m # # Partition Types # -# CONFIG_PARTITION_ADVANCED is not set +CONFIG_PARTITION_ADVANCED=y +# CONFIG_ACORN_PARTITION is not set +# CONFIG_OSF_PARTITION is not set +# CONFIG_AMIGA_PARTITION is not set +# CONFIG_ATARI_PARTITION is not set +# CONFIG_MAC_PARTITION is not set CONFIG_MSDOS_PARTITION=y +# CONFIG_BSD_DISKLABEL is not set +# CONFIG_MINIX_SUBPARTITION is not set +# CONFIG_SOLARIS_X86_PARTITION is not set +# CONFIG_UNIXWARE_DISKLABEL is not set +# CONFIG_LDM_PARTITION is not set +# CONFIG_SGI_PARTITION is not set +# CONFIG_ULTRIX_PARTITION is not set +# CONFIG_SUN_PARTITION is not set CONFIG_SMB_NLS=y CONFIG_NLS=y @@ -761,26 +963,56 @@ # Sound # CONFIG_SOUND=y + +# +# Open Sound System +# +CONFIG_SOUND_PRIME=y # CONFIG_SOUND_BT878 is not set +# CONFIG_SOUND_CMPCI is not set +# CONFIG_SOUND_EMU10K1 is not set +# CONFIG_MIDI_EMU10K1 is not set # CONFIG_SOUND_FUSION is not set # CONFIG_SOUND_CS4281 is not set +# CONFIG_SOUND_ES1370 is not set +# CONFIG_SOUND_ES1371 is not set # CONFIG_SOUND_ESSSOLO1 is not set # CONFIG_SOUND_MAESTRO is not set +# CONFIG_SOUND_MAESTRO3 is not set +# CONFIG_SOUND_ICH is not set +# CONFIG_SOUND_RME96XX is not set # CONFIG_SOUND_SONICVIBES is not set # CONFIG_SOUND_TRIDENT is not set # CONFIG_SOUND_MSNDCLAS is not set # CONFIG_SOUND_MSNDPIN is not set +# CONFIG_SOUND_VIA82CXXX is not set +# CONFIG_MIDI_VIA82CXXX is not set CONFIG_SOUND_SA1100=y -CONFIG_SOUND_UDA1341=m -CONFIG_SOUND_SA1111_UDA1341=m -CONFIG_SOUND_SA1100SSP=m +CONFIG_SOUND_UDA1341=y +# CONFIG_SOUND_ASSABET_UDA1341 is not set +# CONFIG_SOUND_H3600_UDA1341 is not set +# CONFIG_SOUND_PANGOLIN_UDA1341 is not set +CONFIG_SOUND_SA1111_UDA1341=y +# CONFIG_SOUND_STORK_UDA1341 is not set +# CONFIG_SOUND_SA1100SSP is not set +# CONFIG_SOUND_STORK_AC97 is not set # CONFIG_SOUND_OSS is not set +# CONFIG_SOUND_WAVEARTIST is not set # CONFIG_SOUND_TVMIXER is not set # +# Advanced Linux Sound Architecture +# +# CONFIG_SND is not set + +# # Multimedia Capabilities Port drivers # -# CONFIG_MCP is not set +CONFIG_MCP=y +CONFIG_MCP_SA1100=y +# CONFIG_MCP_UCB1200 is not set +# CONFIG_MCP_UCB1200_AUDIO is not set +# CONFIG_MCP_UCB1200_TS is not set # # USB support @@ -796,8 +1028,10 @@ # CONFIG_USB_LONG_TIMEOUT is not set # -# USB Controllers +# USB Host Controller Drivers # +# CONFIG_USB_EHCI_HCD is not set +# CONFIG_USB_OHCI_HCD is not set # CONFIG_USB_UHCI is not set # CONFIG_USB_UHCI_ALT is not set # CONFIG_USB_OHCI is not set @@ -810,24 +1044,23 @@ CONFIG_USB_BLUETOOTH=m CONFIG_USB_STORAGE=y CONFIG_USB_STORAGE_DEBUG=y -CONFIG_USB_STORAGE_DATAFAB=y -CONFIG_USB_STORAGE_FREECOM=y -CONFIG_USB_STORAGE_ISD200=y 
-CONFIG_USB_STORAGE_DPCM=y -CONFIG_USB_STORAGE_HP8200e=y -CONFIG_USB_STORAGE_SDDR09=y -CONFIG_USB_STORAGE_JUMPSHOT=y +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set CONFIG_USB_ACM=m CONFIG_USB_PRINTER=m # # USB Human Interface Devices (HID) # -CONFIG_USB_HID=m -# CONFIG_USB_HIDDEV is not set -CONFIG_USB_KBD=m -CONFIG_USB_MOUSE=m -CONFIG_USB_WACOM=m + +# +# Input core support is needed for USB HID +# # # USB Imaging devices @@ -841,10 +1074,15 @@ # # USB Multimedia devices # - -# -# Video4Linux support is needed for USB Multimedia device support -# +CONFIG_USB_IBMCAM=m +CONFIG_USB_OV511=m +CONFIG_USB_PWC=m +CONFIG_USB_SE401=m +# CONFIG_USB_STV680 is not set +CONFIG_USB_VICAM=m +CONFIG_USB_DSBR=m +CONFIG_USB_DABUSB=m +CONFIG_USB_KONICAWC=m # # USB Network adaptors @@ -858,18 +1096,20 @@ # # USB port drivers # +CONFIG_USB_USS720=m # # USB Serial Converter support # CONFIG_USB_SERIAL=m -# CONFIG_USB_SERIAL_GENERIC is not set +CONFIG_USB_SERIAL_GENERIC=y CONFIG_USB_SERIAL_BELKIN=m CONFIG_USB_SERIAL_WHITEHEAT=m CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m CONFIG_USB_SERIAL_EMPEG=m CONFIG_USB_SERIAL_FTDI_SIO=m CONFIG_USB_SERIAL_VISOR=m +# CONFIG_USB_SERIAL_IPAQ is not set CONFIG_USB_SERIAL_IR=m CONFIG_USB_SERIAL_EDGEPORT=m CONFIG_USB_SERIAL_KEYSPAN_PDA=m @@ -883,6 +1123,7 @@ # CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set # CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set CONFIG_USB_SERIAL_MCT_U232=m +# CONFIG_USB_SERIAL_KLSI is not set CONFIG_USB_SERIAL_PL2303=m CONFIG_USB_SERIAL_CYBERJACK=m CONFIG_USB_SERIAL_XIRCOM=m @@ -892,6 +1133,7 @@ # USB Miscellaneous drivers # CONFIG_USB_RIO500=m +# CONFIG_USB_AUERSWALD is not set # # Bluetooth support @@ -920,4 +1162,12 @@ CONFIG_DEBUG_BUGVERBOSE=y CONFIG_DEBUG_ERRORS=y CONFIG_DEBUG_LL=y -CONFIG_DEBUG_LL_SER3=y +# CONFIG_DEBUG_DC21285_PORT is not set +# CONFIG_DEBUG_CLPS711X_UART2 is not set + +# +# Library routines +# +# CONFIG_CRC32 is not set +CONFIG_ZLIB_INFLATE=y +CONFIG_ZLIB_DEFLATE=y diff -Nru a/arch/arm/def-configs/iq80310 b/arch/arm/def-configs/iq80310 --- a/arch/arm/def-configs/iq80310 Tue Mar 12 13:58:15 2002 +++ b/arch/arm/def-configs/iq80310 Tue Mar 12 13:58:15 2002 @@ -454,7 +454,6 @@ CONFIG_BLK_DEV_IDEPCI=y # CONFIG_IDEPCI_SHARE_IRQ is not set CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set CONFIG_IDEDMA_PCI_AUTO=y CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/arm/def-configs/jornada720 b/arch/arm/def-configs/jornada720 --- a/arch/arm/def-configs/jornada720 Tue Mar 12 13:58:15 2002 +++ b/arch/arm/def-configs/jornada720 Tue Mar 12 13:58:15 2002 @@ -1,11 +1,15 @@ # -# Automatically generated make config: don't edit +# Automatically generated by make menuconfig: don't edit # CONFIG_ARM=y # CONFIG_EISA is not set # CONFIG_SBUS is not set # CONFIG_MCA is not set CONFIG_UID16=y +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +# CONFIG_GENERIC_BUST_SPINLOCK is not set +# CONFIG_GENERIC_ISA_DMA is not set # # Code maturity level options @@ -23,24 +27,23 @@ # # System Type # +# CONFIG_ARCH_ANAKIN is not set # CONFIG_ARCH_ARCA5K is not set # CONFIG_ARCH_CLPS7500 is not set +# CONFIG_ARCH_CLPS711X is not set # CONFIG_ARCH_CO285 is not set # CONFIG_ARCH_EBSA110 is not set -# CONFIG_ARCH_L7200 is not set +# CONFIG_ARCH_CAMELOT is not set # CONFIG_ARCH_FOOTBRIDGE is not set 
# CONFIG_ARCH_INTEGRATOR is not set +# CONFIG_ARCH_L7200 is not set # CONFIG_ARCH_RPC is not set CONFIG_ARCH_SA1100=y -# CONFIG_ARCH_CLPS711X is not set +# CONFIG_ARCH_SHARK is not set # # Archimedes/A5000 Implementations # - -# -# Archimedes/A5000 Implementations (select only ONE) -# # CONFIG_ARCH_ARC is not set # CONFIG_ARCH_A5K is not set @@ -58,12 +61,18 @@ # # CONFIG_SA1100_ASSABET is not set # CONFIG_ASSABET_NEPONSET is not set +# CONFIG_SA1100_ADSBITSY is not set # CONFIG_SA1100_BRUTUS is not set # CONFIG_SA1100_CERF is not set -# CONFIG_SA1100_BITSY is not set +# CONFIG_SA1100_H3100 is not set +# CONFIG_SA1100_H3600 is not set +# CONFIG_SA1100_H3800 is not set +# CONFIG_SA1100_H3XXX is not set # CONFIG_SA1100_EXTENEX1 is not set +# CONFIG_SA1100_FLEXANET is not set # CONFIG_SA1100_FREEBIRD is not set # CONFIG_SA1100_GRAPHICSCLIENT is not set +# CONFIG_SA1100_GRAPHICSMASTER is not set CONFIG_SA1100_JORNADA720=y # CONFIG_SA1100_HUW_WEBPANEL is not set # CONFIG_SA1100_ITSY is not set @@ -72,70 +81,75 @@ # CONFIG_SA1100_OMNIMETER is not set # CONFIG_SA1100_PANGOLIN is not set # CONFIG_SA1100_PLEB is not set +# CONFIG_SA1100_SHANNON is not set # CONFIG_SA1100_SHERMAN is not set +# CONFIG_SA1100_SIMPAD is not set # CONFIG_SA1100_PFS168 is not set # CONFIG_SA1100_VICTOR is not set # CONFIG_SA1100_XP860 is not set # CONFIG_SA1100_YOPY is not set CONFIG_SA1111=y +CONFIG_FORCE_MAX_ZONEORDER=9 # CONFIG_SA1100_USB is not set # CONFIG_SA1100_USB_NETLINK is not set # CONFIG_SA1100_USB_CHAR is not set -# CONFIG_SA1100_FREQUENCY_SCALE is not set -# CONFIG_SA1100_VOLTAGE_SCALE is not set +# CONFIG_REGISTERS is not set # # CLPS711X/EP721X Implementations # +# CONFIG_ARCH_AUTCPU12 is not set +# CONFIG_ARCH_CDB89712 is not set +# CONFIG_ARCH_CLEP7312 is not set +# CONFIG_ARCH_EDB7211 is not set # CONFIG_ARCH_P720T is not set +# CONFIG_ARCH_EP7211 is not set +# CONFIG_ARCH_EP7212 is not set # CONFIG_ARCH_ACORN is not set # CONFIG_FOOTBRIDGE is not set # CONFIG_FOOTBRIDGE_HOST is not set # CONFIG_FOOTBRIDGE_ADDIN is not set CONFIG_CPU_32=y # CONFIG_CPU_26 is not set - -# -# Processor Type -# # CONFIG_CPU_32v3 is not set CONFIG_CPU_32v4=y # CONFIG_CPU_ARM610 is not set # CONFIG_CPU_ARM710 is not set # CONFIG_CPU_ARM720T is not set # CONFIG_CPU_ARM920T is not set +# CONFIG_CPU_ARM922T is not set +# CONFIG_CPU_ARM926T is not set # CONFIG_CPU_ARM1020 is not set # CONFIG_CPU_SA110 is not set CONFIG_CPU_SA1100=y +# CONFIG_ARM_THUMB is not set CONFIG_DISCONTIGMEM=y # # General setup # - -# -# Please ensure that you have read the help on the next option -# -# CONFIG_ANGELBOOT is not set # CONFIG_PCI is not set -# CONFIG_ISA is not set +CONFIG_ISA=y # CONFIG_ISA_DMA is not set +# CONFIG_CPU_FREQ is not set CONFIG_HOTPLUG=y # # PCMCIA/CardBus support # CONFIG_PCMCIA=y +# CONFIG_I82092 is not set # CONFIG_I82365 is not set # CONFIG_TCIC is not set # CONFIG_PCMCIA_CLPS6700 is not set CONFIG_PCMCIA_SA1100=y +# CONFIG_MERCURY_BACKPAQ is not set CONFIG_NET=y CONFIG_SYSVIPC=y # CONFIG_BSD_PROCESS_ACCT is not set CONFIG_SYSCTL=y CONFIG_FPE_NWFPE=y -# CONFIG_FPE_FASTFPE is not set +CONFIG_FPE_FASTFPE=y CONFIG_KCORE_ELF=y # CONFIG_KCORE_AOUT is not set CONFIG_BINFMT_AOUT=m @@ -145,10 +159,8 @@ # CONFIG_APM is not set # CONFIG_ARTHUR is not set CONFIG_CMDLINE="keepinitrd" -# CONFIG_PFS168_CMDLINE is not set # CONFIG_LEDS is not set -# CONFIG_ALIGNMENT_TRAP is not set -# CONFIG_UCB1200 is not set +CONFIG_ALIGNMENT_TRAP=y # # Parallel port support @@ -159,63 +171,72 @@ # Memory Technology Devices (MTD) # CONFIG_MTD=y -# 
CONFIG_MTD_DEBUG is not set - -# -# Disk-On-Chip Device Drivers -# -# CONFIG_MTD_DOC1000 is not set -# CONFIG_MTD_DOC2000 is not set -# CONFIG_MTD_DOC2001 is not set -# CONFIG_MTD_DOCPROBE is not set - -# -# RAM/ROM Device Drivers -# -# CONFIG_MTD_PMC551 is not set -# CONFIG_MTD_SLRAM is not set -# CONFIG_MTD_RAM is not set -# CONFIG_MTD_ROM is not set -# CONFIG_MTD_MTDRAM is not set +CONFIG_MTD_DEBUG=y +CONFIG_MTD_DEBUG_VERBOSE=1 +CONFIG_MTD_PARTITIONS=y +# CONFIG_MTD_REDBOOT_PARTS is not set +CONFIG_MTD_BOOTLDR_PARTS=y +# CONFIG_MTD_AFS_PARTS is not set +CONFIG_MTD_CHAR=m +CONFIG_MTD_BLOCK=y +# CONFIG_FTL is not set +# CONFIG_NFTL is not set # -# Linearly Mapped Flash Device Drivers +# RAM/ROM/Flash chip drivers # CONFIG_MTD_CFI=y -# CONFIG_MTD_CFI_ADV_OPTIONS is not set +# CONFIG_MTD_JEDECPROBE is not set +CONFIG_MTD_GEN_PROBE=y +CONFIG_MTD_CFI_ADV_OPTIONS=y +CONFIG_MTD_CFI_NOSWAP=y +# CONFIG_MTD_CFI_BE_BYTE_SWAP is not set +# CONFIG_MTD_CFI_LE_BYTE_SWAP is not set +CONFIG_MTD_CFI_GEOMETRY=y +# CONFIG_MTD_CFI_B1 is not set +CONFIG_MTD_CFI_B2=y +CONFIG_MTD_CFI_B4=y +CONFIG_MTD_CFI_I1=y +CONFIG_MTD_CFI_I2=y +# CONFIG_MTD_CFI_I4 is not set CONFIG_MTD_CFI_INTELEXT=y # CONFIG_MTD_CFI_AMDSTD is not set +# CONFIG_MTD_RAM is not set +# CONFIG_MTD_ROM is not set +# CONFIG_MTD_ABSENT is not set +# CONFIG_MTD_OBSOLETE_CHIPS is not set # CONFIG_MTD_AMDSTD is not set # CONFIG_MTD_SHARP is not set +# CONFIG_MTD_JEDEC is not set + +# +# Mapping drivers for chip access +# # CONFIG_MTD_PHYSMAP is not set # CONFIG_MTD_NORA is not set -# CONFIG_MTD_PNC2000 is not set -# CONFIG_MTD_RPXLITE is not set -# CONFIG_MTD_SC520CDP is not set -# CONFIG_MTD_SBC_MEDIAGX is not set -# CONFIG_MTD_ELAN_104NC is not set +# CONFIG_MTD_ARM_INTEGRATOR is not set +# CONFIG_MTD_CDB89712 is not set CONFIG_MTD_SA1100=y +# CONFIG_MTD_H3600_BACKPAQ is not set # CONFIG_MTD_DC21285 is not set # CONFIG_MTD_IQ80310 is not set -# CONFIG_MTD_CSTM_CFI_JEDEC is not set -# CONFIG_MTD_JEDEC is not set -# CONFIG_MTD_MIXMEM is not set -# CONFIG_MTD_OCTAGON is not set -# CONFIG_MTD_VMAX is not set # -# NAND Flash Device Drivers +# Self-contained MTD device drivers # -# CONFIG_MTD_NAND is not set -# CONFIG_MTD_NAND_SPIA is not set +# CONFIG_MTD_PMC551 is not set +# CONFIG_MTD_SLRAM is not set +# CONFIG_MTD_MTDRAM is not set +# CONFIG_MTD_BLKMTD is not set +# CONFIG_MTD_DOC1000 is not set +# CONFIG_MTD_DOC2000 is not set +# CONFIG_MTD_DOC2001 is not set +# CONFIG_MTD_DOCPROBE is not set # -# User Modules And Translation Layers +# NAND Flash Device Drivers # -CONFIG_MTD_CHAR=y -CONFIG_MTD_BLOCK=y -# CONFIG_FTL is not set -# CONFIG_NFTL is not set +# CONFIG_MTD_NAND is not set # # Plug and Play configuration @@ -246,6 +267,7 @@ # CONFIG_MD_RAID0 is not set # CONFIG_MD_RAID1 is not set # CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set # CONFIG_BLK_DEV_LVM is not set # @@ -258,7 +280,7 @@ # CONFIG_NETLINK_DEV is not set CONFIG_NETFILTER=y # CONFIG_NETFILTER_DEBUG is not set -# CONFIG_FILTER is not set +CONFIG_FILTER=y CONFIG_UNIX=y CONFIG_INET=y CONFIG_IP_MULTICAST=y @@ -282,10 +304,7 @@ # CONFIG_IPV6 is not set # CONFIG_KHTTPD is not set # CONFIG_ATM is not set - -# -# -# +# CONFIG_VLAN_8021Q is not set # CONFIG_IPX is not set # CONFIG_ATALK is not set # CONFIG_DECNET is not set @@ -318,7 +337,6 @@ # CONFIG_EQUALIZER is not set # CONFIG_TUN is not set # CONFIG_ETHERTAP is not set -# CONFIG_NET_SB1000 is not set # # Ethernet (10 or 100Mbit) @@ -329,18 +347,44 @@ # Ethernet (1000 Mbit) # # CONFIG_ACENIC is not set +# CONFIG_DL2K is not 
set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set # CONFIG_HAMACHI is not set # CONFIG_YELLOWFIN is not set # CONFIG_SK98LIN is not set # CONFIG_FDDI is not set # CONFIG_HIPPI is not set -# CONFIG_PPP is not set +# CONFIG_PLIP is not set +CONFIG_PPP=m +# CONFIG_PPP_MULTILINK is not set +# CONFIG_PPP_FILTER is not set +CONFIG_PPP_ASYNC=m +# CONFIG_PPP_SYNC_TTY is not set +CONFIG_PPP_DEFLATE=m +CONFIG_PPP_BSDCOMP=m +# CONFIG_PPPOE is not set # CONFIG_SLIP is not set # # Wireless LAN (non-hamradio) # -# CONFIG_NET_RADIO is not set +CONFIG_NET_RADIO=y +# CONFIG_STRIP is not set +CONFIG_WAVELAN=m +CONFIG_ARLAN=m +CONFIG_AIRONET4500=m +CONFIG_AIRONET4500_NONCS=m +# CONFIG_AIRONET4500_PNP is not set +# CONFIG_AIRONET4500_PCI is not set +# CONFIG_AIRONET4500_ISA is not set +# CONFIG_AIRONET4500_I365 is not set +# CONFIG_AIRONET4500_PROC is not set +# CONFIG_AIRO is not set +CONFIG_HERMES=m +CONFIG_PCMCIA_HERMES=m +CONFIG_AIRO_CS=m +CONFIG_NET_WIRELESS=y # # Token Ring devices @@ -366,11 +410,11 @@ CONFIG_PCMCIA_NMCLAN=m CONFIG_PCMCIA_SMC91C92=m CONFIG_PCMCIA_XIRC2PS=m +# CONFIG_PCMCIA_AXNET is not set # CONFIG_ARCNET_COM20020_CS is not set # CONFIG_PCMCIA_IBMTR is not set CONFIG_NET_PCMCIA_RADIO=y # CONFIG_PCMCIA_RAYCS is not set -# CONFIG_PCMCIA_HERMES is not set # CONFIG_PCMCIA_NETWAVE is not set CONFIG_PCMCIA_WAVELAN=m CONFIG_AIRONET4500_CS=m @@ -384,10 +428,6 @@ # IrDA (infrared) support # CONFIG_IRDA=m - -# -# IrDA protocols -# CONFIG_IRLAN=m # CONFIG_IRNET is not set CONFIG_IRCOMM=m @@ -397,28 +437,19 @@ # # Infrared-port device drivers # - -# -# SIR device drivers -# # CONFIG_IRTTY_SIR is not set # CONFIG_IRPORT_SIR is not set - -# -# FIR device drivers -# +# CONFIG_DONGLE is not set +# CONFIG_USB_IRDA is not set # CONFIG_NSC_FIR is not set # CONFIG_WINBOND_FIR is not set # CONFIG_TOSHIBA_FIR is not set # CONFIG_SMC_IRCC_FIR is not set +# CONFIG_ALI_FIR is not set +# CONFIG_VLSI_FIR is not set CONFIG_SA1100_FIR=m # -# Dongle support -# -# CONFIG_DONGLE is not set - -# # ATA/IDE/MFM/RLL support # CONFIG_IDE=m @@ -427,10 +458,6 @@ # IDE, ATA and ATAPI Block devices # CONFIG_BLK_DEV_IDE=m - -# -# Please see Documentation/ide.txt for help/info on IDE drives -# # CONFIG_BLK_DEV_HD_IDE is not set # CONFIG_BLK_DEV_HD is not set CONFIG_BLK_DEV_IDEDISK=m @@ -445,14 +472,10 @@ # CONFIG_BLK_DEV_COMMERIAL is not set # CONFIG_BLK_DEV_TIVO is not set CONFIG_BLK_DEV_IDECS=m -# CONFIG_BLK_DEV_IDECD is not set +CONFIG_BLK_DEV_IDECD=m # CONFIG_BLK_DEV_IDETAPE is not set # CONFIG_BLK_DEV_IDEFLOPPY is not set # CONFIG_BLK_DEV_IDESCSI is not set - -# -# IDE chipset support/bugfixes -# # CONFIG_BLK_DEV_CMD640 is not set # CONFIG_BLK_DEV_CMD640_ENHANCED is not set # CONFIG_BLK_DEV_ISAPNP is not set @@ -460,6 +483,9 @@ # CONFIG_IDEDMA_AUTO is not set # CONFIG_DMA_NONPCI is not set # CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set # # SCSI support @@ -483,7 +509,13 @@ # # Input core support # -# CONFIG_INPUT is not set +CONFIG_INPUT=y +# CONFIG_INPUT_KEYBDEV is not set +CONFIG_INPUT_MOUSEDEV=y +CONFIG_INPUT_MOUSEDEV_SCREEN_X=640 +CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240 +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set # # Character devices @@ -493,19 +525,37 @@ CONFIG_SERIAL=m # CONFIG_SERIAL_EXTENDED is not set # CONFIG_SERIAL_NONSTANDARD is not set + +# +# Serial drivers +# +# CONFIG_SERIAL_ANAKIN is not set +# CONFIG_SERIAL_ANAKIN_CONSOLE is not set +# CONFIG_SERIAL_AMBA is not set 
+# CONFIG_SERIAL_AMBA_CONSOLE is not set +# CONFIG_SERIAL_CLPS711X is not set +# CONFIG_SERIAL_CLPS711X_CONSOLE is not set +# CONFIG_SERIAL_21285 is not set +# CONFIG_SERIAL_21285_OLD is not set +# CONFIG_SERIAL_21285_CONSOLE is not set +# CONFIG_SERIAL_UART00 is not set +# CONFIG_SERIAL_UART00_CONSOLE is not set CONFIG_SERIAL_SA1100=y CONFIG_SERIAL_SA1100_CONSOLE=y CONFIG_SA1100_DEFAULT_BAUDRATE=115200 -# CONFIG_TOUCHSCREEN_UCB1200 is not set -# CONFIG_TOUCHSCREEN_BITSY is not set -CONFIG_PROFILER=m -# CONFIG_PFS168_SPI is not set -# CONFIG_PFS168_DTMF is not set -# CONFIG_PFS168_MISC is not set +# CONFIG_SERIAL_8250 is not set +# CONFIG_SERIAL_8250_CONSOLE is not set +# CONFIG_SERIAL_8250_EXTENDED is not set +# CONFIG_SERIAL_8250_MANY_PORTS is not set +# CONFIG_SERIAL_8250_SHARE_IRQ is not set +# CONFIG_SERIAL_8250_DETECT_IRQ is not set +# CONFIG_SERIAL_8250_MULTIPORT is not set +# CONFIG_SERIAL_8250_HUB6 is not set CONFIG_SERIAL_CORE=y CONFIG_SERIAL_CORE_CONSOLE=y CONFIG_UNIX98_PTYS=y CONFIG_UNIX98_PTY_COUNT=32 +# CONFIG_NEWTONKBD is not set # # I2C support @@ -513,6 +563,15 @@ # CONFIG_I2C is not set # +# L3 serial bus support +# +# CONFIG_L3 is not set +# CONFIG_L3_ALGOBIT is not set +# CONFIG_L3_BIT_SA1100_GPIO is not set +# CONFIG_L3_SA1111 is not set +# CONFIG_BIT_SA1100_GPIO is not set + +# # Mice # # CONFIG_BUSMOUSE is not set @@ -524,11 +583,33 @@ # # Joysticks # -# CONFIG_JOYSTICK is not set - -# -# Input core support is needed for joysticks -# +# CONFIG_INPUT_GAMEPORT is not set +# CONFIG_INPUT_NS558 is not set +# CONFIG_INPUT_LIGHTNING is not set +# CONFIG_INPUT_PCIGAME is not set +# CONFIG_INPUT_CS461X is not set +# CONFIG_INPUT_EMU10K1 is not set +# CONFIG_INPUT_SERIO is not set +# CONFIG_INPUT_SERPORT is not set +# CONFIG_INPUT_ANALOG is not set +# CONFIG_INPUT_A3D is not set +# CONFIG_INPUT_ADI is not set +# CONFIG_INPUT_COBRA is not set +# CONFIG_INPUT_GF2K is not set +# CONFIG_INPUT_GRIP is not set +# CONFIG_INPUT_INTERACT is not set +# CONFIG_INPUT_TMDC is not set +# CONFIG_INPUT_SIDEWINDER is not set +# CONFIG_INPUT_IFORCE_USB is not set +# CONFIG_INPUT_IFORCE_232 is not set +# CONFIG_INPUT_WARRIOR is not set +# CONFIG_INPUT_MAGELLAN is not set +# CONFIG_INPUT_SPACEORB is not set +# CONFIG_INPUT_SPACEBALL is not set +# CONFIG_INPUT_STINGER is not set +# CONFIG_INPUT_DB9 is not set +# CONFIG_INPUT_GAMECON is not set +# CONFIG_INPUT_TURBOGRAFX is not set # CONFIG_QIC02_TAPE is not set # @@ -553,12 +634,13 @@ # # PCMCIA character devices # -CONFIG_PCMCIA_SERIAL_CS=m +# CONFIG_PCMCIA_SERIAL_CS is not set # # Multimedia devices # # CONFIG_VIDEO_DEV is not set +# CONFIG_V4L2_DEV is not set # # File systems @@ -568,37 +650,45 @@ # CONFIG_AUTOFS4_FS is not set # CONFIG_REISERFS_FS is not set # CONFIG_REISERFS_CHECK is not set +# CONFIG_REISERFS_PROC_INFO is not set # CONFIG_ADFS_FS is not set # CONFIG_ADFS_FS_RW is not set # CONFIG_AFFS_FS is not set # CONFIG_HFS_FS is not set # CONFIG_BFS_FS is not set -CONFIG_FAT_FS=m -CONFIG_MSDOS_FS=m +# CONFIG_EXT3_FS is not set +# CONFIG_JBD is not set +# CONFIG_JBD_DEBUG is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set # CONFIG_UMSDOS_FS is not set -CONFIG_VFAT_FS=m +# CONFIG_VFAT_FS is not set # CONFIG_EFS_FS is not set # CONFIG_JFFS_FS is not set -# CONFIG_JFFS2_FS is not set -CONFIG_CRAMFS=y +CONFIG_JFFS2_FS=y +CONFIG_JFFS2_FS_DEBUG=2 +# CONFIG_CRAMFS is not set +# CONFIG_TMPFS is not set CONFIG_RAMFS=y -# CONFIG_ISO9660_FS is not set +CONFIG_ISO9660_FS=m # CONFIG_JOLIET is not set +# CONFIG_ZISOFS is not set # 
CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set # CONFIG_NTFS_FS is not set # CONFIG_NTFS_RW is not set # CONFIG_HPFS_FS is not set CONFIG_PROC_FS=y -# CONFIG_DEVFS_FS is not set -# CONFIG_DEVFS_MOUNT is not set -# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +# CONFIG_DRIVERFS_FS is not set CONFIG_DEVPTS_FS=y # CONFIG_QNX4FS_FS is not set # CONFIG_QNX4FS_RW is not set # CONFIG_ROMFS_FS is not set CONFIG_EXT2_FS=y # CONFIG_SYSV_FS is not set -# CONFIG_SYSV_FS_WRITE is not set # CONFIG_UDF_FS is not set # CONFIG_UDF_RW is not set # CONFIG_UFS_FS is not set @@ -608,6 +698,7 @@ # Network File Systems # # CONFIG_CODA_FS is not set +# CONFIG_INTERMEZZO_FS is not set CONFIG_NFS_FS=m CONFIG_NFS_V3=y # CONFIG_ROOT_NFS is not set @@ -616,8 +707,7 @@ CONFIG_SUNRPC=m CONFIG_LOCKD=m CONFIG_LOCKD_V4=y -CONFIG_SMB_FS=m -# CONFIG_SMB_NLS_DEFAULT is not set +# CONFIG_SMB_FS is not set # CONFIG_NCP_FS is not set # CONFIG_NCPFS_PACKET_SIGNING is not set # CONFIG_NCPFS_IOCTL_LOCKING is not set @@ -627,59 +717,22 @@ # CONFIG_NCPFS_SMALLDOS is not set # CONFIG_NCPFS_NLS is not set # CONFIG_NCPFS_EXTRAS is not set +# CONFIG_ZISOFS_FS is not set +# CONFIG_ZLIB_FS_INFLATE is not set # # Partition Types # # CONFIG_PARTITION_ADVANCED is not set CONFIG_MSDOS_PARTITION=y -CONFIG_SMB_NLS=y -CONFIG_NLS=y - -# -# Native Language Support -# -CONFIG_NLS_DEFAULT="iso8859-1" -CONFIG_NLS_CODEPAGE_437=y -# CONFIG_NLS_CODEPAGE_737 is not set -# CONFIG_NLS_CODEPAGE_775 is not set -# CONFIG_NLS_CODEPAGE_850 is not set -# CONFIG_NLS_CODEPAGE_852 is not set -# CONFIG_NLS_CODEPAGE_855 is not set -# CONFIG_NLS_CODEPAGE_857 is not set -# CONFIG_NLS_CODEPAGE_860 is not set -# CONFIG_NLS_CODEPAGE_861 is not set -# CONFIG_NLS_CODEPAGE_862 is not set -# CONFIG_NLS_CODEPAGE_863 is not set -# CONFIG_NLS_CODEPAGE_864 is not set -# CONFIG_NLS_CODEPAGE_865 is not set -# CONFIG_NLS_CODEPAGE_866 is not set -# CONFIG_NLS_CODEPAGE_869 is not set -# CONFIG_NLS_CODEPAGE_874 is not set -# CONFIG_NLS_CODEPAGE_932 is not set -# CONFIG_NLS_CODEPAGE_936 is not set -# CONFIG_NLS_CODEPAGE_949 is not set -# CONFIG_NLS_CODEPAGE_950 is not set -# CONFIG_NLS_ISO8859_1 is not set -# CONFIG_NLS_ISO8859_2 is not set -# CONFIG_NLS_ISO8859_3 is not set -# CONFIG_NLS_ISO8859_4 is not set -# CONFIG_NLS_ISO8859_5 is not set -# CONFIG_NLS_ISO8859_6 is not set -# CONFIG_NLS_ISO8859_7 is not set -# CONFIG_NLS_ISO8859_8 is not set -# CONFIG_NLS_ISO8859_9 is not set -# CONFIG_NLS_ISO8859_14 is not set -# CONFIG_NLS_ISO8859_15 is not set -# CONFIG_NLS_KOI8_R is not set -# CONFIG_NLS_UTF8 is not set +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set # # Console drivers # CONFIG_PC_KEYMAP=y # CONFIG_VGA_CONSOLE is not set -CONFIG_FB=y # # Frame-buffer support @@ -687,16 +740,17 @@ CONFIG_FB=y CONFIG_DUMMY_CONSOLE=y # CONFIG_FB_ACORN is not set +# CONFIG_FB_ANAKIN is not set # CONFIG_FB_CLPS711X is not set +# CONFIG_FB_SA1100 is not set CONFIG_FB_EPSON1356=y # CONFIG_FB_CYBER2000 is not set -# CONFIG_FB_SA1100 is not set # CONFIG_FB_VIRTUAL is not set CONFIG_FBCON_ADVANCED=y # CONFIG_FBCON_MFB is not set # CONFIG_FBCON_CFB2 is not set # CONFIG_FBCON_CFB4 is not set -CONFIG_FBCON_CFB8=y +# CONFIG_FBCON_CFB8 is not set CONFIG_FBCON_CFB16=y # CONFIG_FBCON_CFB24 is not set # CONFIG_FBCON_CFB32 is not set @@ -721,10 +775,10 @@ # Sound # CONFIG_SOUND=m -CONFIG_SOUND_UDA1341=m -# CONFIG_SOUND_SA1100_SSP is not set +# CONFIG_SOUND_BT878 is not set # CONFIG_SOUND_CMPCI is not set # CONFIG_SOUND_EMU10K1 is not set +# 
CONFIG_MIDI_EMU10K1 is not set # CONFIG_SOUND_FUSION is not set # CONFIG_SOUND_CS4281 is not set # CONFIG_SOUND_ES1370 is not set @@ -733,28 +787,121 @@ # CONFIG_SOUND_MAESTRO is not set # CONFIG_SOUND_MAESTRO3 is not set # CONFIG_SOUND_ICH is not set +# CONFIG_SOUND_RME96XX is not set # CONFIG_SOUND_SONICVIBES is not set # CONFIG_SOUND_TRIDENT is not set # CONFIG_SOUND_MSNDCLAS is not set # CONFIG_SOUND_MSNDPIN is not set # CONFIG_SOUND_VIA82CXXX is not set +# CONFIG_MIDI_VIA82CXXX is not set +CONFIG_SOUND_SA1100=m +# CONFIG_SOUND_UDA1341 is not set +# CONFIG_SOUND_ASSABET_UDA1341 is not set +# CONFIG_SOUND_H3600_UDA1341 is not set +# CONFIG_SOUND_PANGOLIN_UDA1341 is not set +# CONFIG_SOUND_SA1111_UDA1341 is not set +# CONFIG_SOUND_SA1100SSP is not set # CONFIG_SOUND_OSS is not set +# CONFIG_SOUND_WAVEARTIST is not set # CONFIG_SOUND_TVMIXER is not set # +# Multimedia Capabilities Port drivers +# +# CONFIG_MCP is not set +# CONFIG_MCP_SA1100 is not set +# CONFIG_MCP_UCB1200 is not set +# CONFIG_MCP_UCB1200_AUDIO is not set +# CONFIG_MCP_UCB1200_TS is not set + +# # USB support # # CONFIG_USB is not set +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set +# CONFIG_USB_OHCI_SA1111 is not set +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set +# CONFIG_USB_HID is not set +# CONFIG_USB_HIDDEV is not set +# CONFIG_USB_KBD is not set +# CONFIG_USB_MOUSE is not set +# CONFIG_USB_WACOM is not set +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set +# CONFIG_USB_RIO500 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set # # Kernel hacking # # CONFIG_NO_FRAME_POINTER is not set -CONFIG_DEBUG_ERRORS=y # CONFIG_DEBUG_USER is not set # 
CONFIG_DEBUG_INFO is not set -# CONFIG_MAGIC_SYSRQ is not set # CONFIG_NO_PGT_CACHE is not set +CONFIG_DEBUG_KERNEL=y +CONFIG_DEBUG_SLAB=y +# CONFIG_MAGIC_SYSRQ is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_DEBUG_WAITQ is not set +# CONFIG_DEBUG_BUGVERBOSE is not set +CONFIG_DEBUG_ERRORS=y CONFIG_DEBUG_LL=y # CONFIG_DEBUG_DC21285_PORT is not set # CONFIG_DEBUG_CLPS711X_UART2 is not set +# CONFIG_DEBUG_LL_SER3 is not set diff -Nru a/arch/arm/kernel/dma.c b/arch/arm/kernel/dma.c --- a/arch/arm/kernel/dma.c Tue Mar 12 13:58:15 2002 +++ b/arch/arm/kernel/dma.c Tue Mar 12 13:58:15 2002 @@ -286,3 +286,5 @@ EXPORT_SYMBOL(get_dma_residue); EXPORT_SYMBOL(set_dma_sg); EXPORT_SYMBOL(set_dma_speed); + +EXPORT_SYMBOL(dma_spin_lock); diff -Nru a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S --- a/arch/arm/kernel/entry-armv.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/kernel/entry-armv.S Tue Mar 12 13:58:15 2002 @@ -16,6 +16,7 @@ #include #include "entry-header.S" #include +#include #ifdef IOC_BASE @@ -681,12 +682,12 @@ /* * This routine must not corrupt r9 */ -#ifdef MULTI_CPU +#ifdef MULTI_ABORT ldr r4, .LCprocfns @ pass r0, r3 to mov lr, pc @ processor code ldr pc, [r4] @ call processor specific code #else - bl cpu_data_abort + bl CPU_ABORT_HANDLER #endif msr cpsr_c, r9 mov r2, sp @@ -799,7 +800,7 @@ .LCirq: .word __temp_irq .LCund: .word __temp_und .LCabt: .word __temp_abt -#ifdef MULTI_CPU +#ifdef MULTI_ABORT .LCprocfns: .word SYMBOL_NAME(processor) #endif .LCfp: .word SYMBOL_NAME(fp_enter) @@ -823,12 +824,12 @@ alignment_trap r7, r7, __temp_abt zero_fp mov r0, r2 @ remove once everyones in sync -#ifdef MULTI_CPU +#ifdef MULTI_ABORT ldr r4, .LCprocfns @ pass r0, r3 to mov lr, pc @ processor code ldr pc, [r4] @ call processor specific code #else - bl cpu_data_abort + bl CPU_ABORT_HANDLER #endif set_cpsr_c r2, #MODE_SVC @ Enable interrupts mov r2, sp diff -Nru a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c --- a/arch/arm/kernel/setup.c Tue Mar 12 13:58:14 2002 +++ b/arch/arm/kernel/setup.c Tue Mar 12 13:58:14 2002 @@ -76,6 +76,9 @@ #ifdef MULTI_TLB struct cpu_tlb_fns cpu_tlb; #endif +#ifdef MULTI_USER +struct cpu_user_fns cpu_user; +#endif unsigned char aux_device_present; char elf_platform[ELF_PLATFORM_SIZE]; @@ -247,6 +250,9 @@ #endif #ifdef MULTI_TLB cpu_tlb = *list->tlb; +#endif +#ifdef MULTI_USER + cpu_user = *list->user; #endif printk("Processor: %s %s revision %d\n", diff -Nru a/arch/arm/mach-integrator/pci.c b/arch/arm/mach-integrator/pci.c --- a/arch/arm/mach-integrator/pci.c Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mach-integrator/pci.c Tue Mar 12 13:58:15 2002 @@ -113,7 +113,6 @@ extern void pci_v3_init(void *); struct hw_pci integrator_pci __initdata = { - mem_offset: 0x40000000, swizzle: integrator_swizzle, map_irq: integrator_map_irq, setup: pci_v3_setup, diff -Nru a/arch/arm/mach-integrator/pci_v3.c b/arch/arm/mach-integrator/pci_v3.c --- a/arch/arm/mach-integrator/pci_v3.c Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mach-integrator/pci_v3.c Tue Mar 12 13:58:15 2002 @@ -435,7 +435,7 @@ resource[1] = &non_mem; resource[2] = &pre_mem; - return 0; + return 1; } /* @@ -529,8 +529,10 @@ { int ret = 0; - if (nr == 0) + if (nr == 0) { + sys->mem_offset = 0x40000000; ret = pci_v3_setup_resources(sys->resource); + } return ret; } @@ -634,7 +636,6 @@ void __init pci_v3_postinit(void) { unsigned int pci_cmd; - int ret; pci_cmd = PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | PCI_COMMAND_INVALIDATE; diff -Nru a/arch/arm/mach-iop310/iq80310-time.c 
b/arch/arm/mach-iop310/iq80310-time.c --- a/arch/arm/mach-iop310/iq80310-time.c Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mach-iop310/iq80310-time.c Tue Mar 12 13:58:15 2002 @@ -47,23 +47,44 @@ u_long b0, b1, b2, b3, val; b0 = *la0; b1 = *la1; b2 = *la2; b3 = *la3; - b0 = (((b0 & 0x20) >> 1) | (b0 & 0x1f)); - b1 = (((b1 & 0x20) >> 1) | (b1 & 0x1f)); - b2 = (((b2 & 0x20) >> 1) | (b2 & 0x1f)); + b0 = (((b0 & 0x40) >> 1) | (b0 & 0x1f)); + b1 = (((b1 & 0x40) >> 1) | (b1 & 0x1f)); + b2 = (((b2 & 0x40) >> 1) | (b2 & 0x1f)); b3 = (b3 & 0x0f); val = ((b0 << 0) | (b1 << 6) | (b2 << 12) | (b3 << 18)); return val; } -/* IRQs are disabled before entering here from do_gettimeofday() */ +/* + * IRQs are disabled before entering here from do_gettimeofday(). + * Note that the counter may wrap. When it does, 'elapsed' will + * be small, but we will have a pending interrupt. + */ static unsigned long iq80310_gettimeoffset (void) { - unsigned long elapsed, usec; + unsigned long elapsed, usec, tmp1; + unsigned int stat1, stat2; - /* We need elapsed timer ticks since last interrupt */ + stat1 = *(volatile u8 *)IQ80310_INT_STAT; elapsed = iq80310_read_timer(); + stat2 = *(volatile u8 *)IQ80310_INT_STAT; - /* Now convert them to usec */ + /* + * If an interrupt was pending before we read the timer, + * we've already wrapped. Factor this into the time. + * If an interrupt was pending after we read the timer, + * it may have wrapped between checking the interrupt + * status and reading the timer. Re-read the timer to + * be sure its value is after the wrap. + */ + if (stat1 & 1) + elapsed += LATCH; + else if (stat2 & 1) + elapsed = LATCH + iq80310_read_timer(); + + /* + * Now convert them to usec. + */ usec = (unsigned long)(elapsed*tick)/LATCH; return usec; @@ -92,9 +113,7 @@ * * -DS */ - irq_exit(smp_processor_id(), irq); do_timer(regs); - irq_enter(smp_processor_id(), irq); } extern unsigned long (*gettimeoffset)(void); @@ -116,4 +135,3 @@ *timer_en |= 2; *timer_en |= 1; } - diff -Nru a/arch/arm/mach-sa1100/Makefile b/arch/arm/mach-sa1100/Makefile --- a/arch/arm/mach-sa1100/Makefile Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mach-sa1100/Makefile Tue Mar 12 13:58:15 2002 @@ -14,69 +14,105 @@ obj-m := obj-n := obj- := +led-y := leds.o -export-objs := assabet.o dma.o flexanet.o freebird.o generic.o h3600.o \ - huw_webpanel.o irq.o pcipool.o sa1111.o sa1111-pcibuf.o \ - yopy.o usb_ctl.o usb_recv.o usb_send.o +export-objs := dma.o generic.o irq.o pcipool.o sa1111.o sa1111-pcibuf.o \ + usb_ctl.o usb_recv.o usb_send.o pm.o # This needs to be cleaned up. We probably need to have SA1100 # and SA1110 config symbols. # # We link the CPU support next, so that RAM timings can be tuned. ifeq ($(CONFIG_CPU_FREQ),y) -obj-$(CONFIG_SA1100_ASSABET) += cpu-sa1110.o -obj-$(CONFIG_SA1100_CERF) += cpu-sa1110.o -obj-$(CONFIG_SA1100_PT_SYSTEM3) += cpu-sa1110.o -obj-$(CONFIG_SA1100_LART) += cpu-sa1100.o +obj-$(CONFIG_SA1100_ASSABET) += cpu-sa1110.o +obj-$(CONFIG_SA1100_CERF) += cpu-sa1110.o +obj-$(CONFIG_SA1100_LART) += cpu-sa1100.o +obj-$(CONFIG_SA1100_PT_SYSTEM3) += cpu-sa1110.o endif # Next, the SA1111 stuff. 
-obj-$(CONFIG_SA1111) += sa1111.o -obj-$(CONFIG_USB_OHCI_SA1111) += sa1111-pcibuf.o pcipool.o +obj-$(CONFIG_SA1111) += sa1111.o +obj-$(CONFIG_USB_OHCI_SA1111) += sa1111-pcibuf.o pcipool.o # Specific board support -obj-$(CONFIG_SA1100_ADSBITSY) += adsbitsy.o -obj-$(CONFIG_SA1100_ASSABET) += assabet.o -obj-$(CONFIG_ASSABET_NEPONSET) += neponset.o -obj-$(CONFIG_SA1100_BRUTUS) += brutus.o -obj-$(CONFIG_SA1100_CERF) += cerf.o -obj-$(CONFIG_SA1100_EMPEG) += empeg.o -obj-$(CONFIG_SA1100_FLEXANET) += flexanet.o -obj-$(CONFIG_SA1100_FREEBIRD) += freebird.o -obj-$(CONFIG_SA1100_GRAPHICSCLIENT) += graphicsclient.o -obj-$(CONFIG_SA1100_GRAPHICSMASTER) += graphicsmaster.o -obj-$(CONFIG_SA1100_H3600) += h3600.o -obj-$(CONFIG_SA1100_HUW_WEBPANEL) += huw_webpanel.o -obj-$(CONFIG_SA1100_ITSY) += itsy.o -obj-$(CONFIG_SA1100_JORNADA720) += jornada720.o -obj-$(CONFIG_SA1100_LART) += lart.o -obj-$(CONFIG_SA1100_NANOENGINE) += nanoengine.o -obj-$(CONFIG_SA1100_OMNIMETER) += omnimeter.o -obj-$(CONFIG_SA1100_PANGOLIN) += pangolin.o -obj-$(CONFIG_SA1100_PFS168) += pfs168.o -obj-$(CONFIG_SA1100_PLEB) += pleb.o -obj-$(CONFIG_SA1100_SHANNON) += shannon.o -obj-$(CONFIG_SA1100_SHERMAN) += sherman.o -obj-$(CONFIG_SA1100_PT_SYSTEM3) += system3.o -obj-$(CONFIG_SA1100_SIMPAD) += simpad.o -obj-$(CONFIG_SA1100_VICTOR) += victor.o -obj-$(CONFIG_SA1100_XP860) += xp860.o -obj-$(CONFIG_SA1100_YOPY) += yopy.o +obj-$(CONFIG_SA1100_ADSBITSY) += adsbitsy.o +led-$(CONFIG_SA1100_ADSBITSY) += leds-adsbitsy.o + +obj-$(CONFIG_SA1100_ASSABET) += assabet.o +export-objs += assabet.o +led-$(CONFIG_SA1100_ASSABET) += leds-assabet.o +obj-$(CONFIG_ASSABET_NEPONSET) += neponset.o + +obj-$(CONFIG_SA1100_BADGE4) += badge4.o +export-objs += badge4.o + +obj-$(CONFIG_SA1100_BRUTUS) += brutus.o +led-$(CONFIG_SA1100_BRUTUS) += leds-brutus.o + +obj-$(CONFIG_SA1100_CERF) += cerf.o +led-$(CONFIG_SA1100_CERF) += leds-cerf.o + +obj-$(CONFIG_SA1100_EMPEG) += empeg.o + +obj-$(CONFIG_SA1100_FLEXANET) += flexanet.o +export-objs += flexanet.o +led-$(CONFIG_SA1100_FLEXANET) += leds-flexanet.o + +obj-$(CONFIG_SA1100_FREEBIRD) += freebird.o +export-objs += freebird.o + +obj-$(CONFIG_SA1100_GRAPHICSCLIENT) += graphicsclient.o +led-$(CONFIG_SA1100_GRAPHICSCLIENT) += leds-graphicsclient.o + +obj-$(CONFIG_SA1100_GRAPHICSMASTER) += graphicsmaster.o +led-$(CONFIG_SA1100_GRAPHICSMASTER) += leds-graphicsmaster.o + +obj-$(CONFIG_SA1100_H3600) += h3600.o +export-objs += h3600.o + +obj-$(CONFIG_SA1100_HUW_WEBPANEL) += huw_webpanel.o +export-objs += huw_webpanel.o + +obj-$(CONFIG_SA1100_ITSY) += itsy.o + +obj-$(CONFIG_SA1100_JORNADA720) += jornada720.o + +obj-$(CONFIG_SA1100_LART) += lart.o +led-$(CONFIG_SA1100_LART) += leds-lart.o + +obj-$(CONFIG_SA1100_NANOENGINE) += nanoengine.o + +obj-$(CONFIG_SA1100_OMNIMETER) += omnimeter.o + +obj-$(CONFIG_SA1100_PANGOLIN) += pangolin.o + +obj-$(CONFIG_SA1100_PFS168) += pfs168.o +led-$(CONFIG_SA1100_PFS168) += leds-pfs168.o + +obj-$(CONFIG_SA1100_PLEB) += pleb.o + +obj-$(CONFIG_SA1100_PT_SYSTEM3) += system3.o +led-$(CONFIG_SA1100_PT_SYSTEM3) += leds-system3.o + +obj-$(CONFIG_SA1100_SHANNON) += shannon.o + +obj-$(CONFIG_SA1100_SHERMAN) += sherman.o + +obj-$(CONFIG_SA1100_SIMPAD) += simpad.o +led-$(CONFIG_SA1100_SIMPAD) += leds-simpad.o + +obj-$(CONFIG_SA1100_STORK) += stork.o +export-objs += stork.o + +obj-$(CONFIG_SA1100_VICTOR) += victor.o + +obj-$(CONFIG_SA1100_XP860) += xp860.o + +obj-$(CONFIG_SA1100_YOPY) += yopy.o +export-objs += yopy.o # LEDs support -leds-y := leds.o -leds-$(CONFIG_SA1100_ADSBITSY) += leds-adsbitsy.o 
-leds-$(CONFIG_SA1100_ASSABET) += leds-assabet.o -leds-$(CONFIG_SA1100_BRUTUS) += leds-brutus.o -leds-$(CONFIG_SA1100_CERF) += leds-cerf.o -leds-$(CONFIG_SA1100_FLEXANET) += leds-flexanet.o -leds-$(CONFIG_SA1100_GRAPHICSCLIENT) += leds-graphicsclient.o -leds-$(CONFIG_SA1100_GRAPHICSMASTER) += leds-graphicsmaster.o -leds-$(CONFIG_SA1100_LART) += leds-lart.o -leds-$(CONFIG_SA1100_PFS168) += leds-pfs168.o -leds-$(CONFIG_SA1100_SIMPAD) += leds-simpad.o -leds-$(CONFIG_SA1100_PT_SYSTEM3) += leds-system3.o -obj-$(CONFIG_LEDS) += $(leds-y) +obj-$(CONFIG_LEDS) += $(led-y) # SA1110 USB client support list-multi += sa1100usb_core.o diff -Nru a/arch/arm/mach-sa1100/cpu-sa1110.c b/arch/arm/mach-sa1100/cpu-sa1110.c --- a/arch/arm/mach-sa1100/cpu-sa1110.c Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mach-sa1100/cpu-sa1110.c Tue Mar 12 13:58:14 2002 @@ -3,7 +3,7 @@ * * Copyright (C) 2001 Russell King * - * $Id: cpu-sa1110.c,v 1.6 2001/10/22 11:53:47 rmk Exp $ + * $Id: cpu-sa1110.c,v 1.8 2002/01/09 17:13:27 rmk Exp $ * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -69,13 +69,23 @@ }; static struct sdram_params samsung_k4s641632d_tc75 __initdata = { - rows: 14, - tck: 9, - trcd: 27, - trp: 20, - twr: 9, - refresh: 64000, - cas_latency: 3, + rows: 14, + tck: 9, + trcd: 27, + trp: 20, + twr: 9, + refresh: 64000, + cas_latency: 3, +}; + +static struct sdram_params samsung_km416s4030ct __initdata = { + rows: 13, + tck: 8, + trcd: 24, /* 3 CLKs */ + trp: 24, /* 3 CLKs */ + twr: 16, /* Trdl: 2 CLKs */ + refresh: 64000, + cas_latency: 3, }; static struct sdram_params sdram_params; @@ -273,6 +283,8 @@ if (machine_is_pt_system3()) sdram = &samsung_k4s641632d_tc75; + if (machine_is_h3100()) + sdram = &samsung_km416s4030ct; if (sdram) { printk(KERN_DEBUG "SDRAM: tck: %d trcd: %d trp: %d" diff -Nru a/arch/arm/mach-sa1100/pm.c b/arch/arm/mach-sa1100/pm.c --- a/arch/arm/mach-sa1100/pm.c Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mach-sa1100/pm.c Tue Mar 12 13:58:15 2002 @@ -20,6 +20,7 @@ * in the platform specific files. */ #include +#include #include #include #include @@ -27,6 +28,7 @@ #include #include #include +#include #include #include @@ -210,3 +212,5 @@ __initcall(pm_init); #endif + +EXPORT_SYMBOL(pm_do_suspend); diff -Nru a/arch/arm/mach-sa1100/system3.c b/arch/arm/mach-sa1100/system3.c --- a/arch/arm/mach-sa1100/system3.c Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mach-sa1100/system3.c Tue Mar 12 13:58:15 2002 @@ -99,10 +99,9 @@ */ static struct map_desc system3_io_desc[] __initdata = { - /* virtual physical length domain r w c b */ - { 0xe8000000, 0x00000000, 0x01000000, DOMAIN_IO, 0, 1, 0, 0 }, /* Flash bank 0 */ - { 0xf3000000, PT_CPLD_BASE, 0x00100000, DOMAIN_IO, 0, 1, 0, 0 }, /* System Registers */ - { 0xf4000000, PT_SA1111_BASE, 0x00100000, DOMAIN_IO, 0, 1, 0, 0 }, /* SA-1111 */ + /* virtual physical length domain r w c b */ + { 0xf3000000, PT_CPLD_BASE, 0x00100000, DOMAIN_IO, 0, 1, 0, 0 }, /* System Registers */ + { 0xf4000000, PT_SA1111_BASE, 0x00100000, DOMAIN_IO, 0, 1, 0, 0 }, /* SA-1111 */ LAST_DESC }; diff -Nru a/arch/arm/mm/abort-ev4.S b/arch/arm/mm/abort-ev4.S --- a/arch/arm/mm/abort-ev4.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/abort-ev4.S Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ #include #include /* - * Function: armv4_early_abort + * Function: v4_early_abort * * Params : r2 = address of aborted instruction * : r3 = saved SPSR @@ -18,7 +18,7 @@ * picture. Unfortunately, this does happen. We live with it. 
*/ .align 5 -ENTRY(armv4_early_abort) +ENTRY(v4_early_abort) mrc p15, 0, r1, c5, c0, 0 @ get FSR mrc p15, 0, r0, c6, c0, 0 @ get FAR ldr r3, [r2] @ read aborted ARM instruction diff -Nru a/arch/arm/mm/abort-ev4t.S b/arch/arm/mm/abort-ev4t.S --- a/arch/arm/mm/abort-ev4t.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/abort-ev4t.S Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ #include #include /* - * Function: armv4t_early_abort + * Function: v4t_early_abort * * Params : r2 = address of aborted instruction * : r3 = saved SPSR @@ -18,7 +18,7 @@ * picture. Unfortunately, this does happen. We live with it. */ .align 5 -ENTRY(armv4t_early_abort) +ENTRY(v4t_early_abort) mrc p15, 0, r1, c5, c0, 0 @ get FSR mrc p15, 0, r0, c6, c0, 0 @ get FAR tst r3, #PSR_T_BIT diff -Nru a/arch/arm/mm/abort-ev5ej.S b/arch/arm/mm/abort-ev5ej.S --- a/arch/arm/mm/abort-ev5ej.S Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/abort-ev5ej.S Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ #include #include /* - * Function: armv5ej_early_abort + * Function: v5ej_early_abort * * Params : r2 = address of aborted instruction * : r3 = saved SPSR @@ -18,7 +18,7 @@ * picture. Unfortunately, this does happen. We live with it. */ .align 5 -ENTRY(armv5ej_early_abort) +ENTRY(v5ej_early_abort) mrc p15, 0, r1, c5, c0, 0 @ get FSR mrc p15, 0, r0, c6, c0, 0 @ get FAR tst r3, #PSR_J_BIT diff -Nru a/arch/arm/mm/abort-lv4t.S b/arch/arm/mm/abort-lv4t.S --- a/arch/arm/mm/abort-lv4t.S Tue Mar 12 13:58:16 2002 +++ b/arch/arm/mm/abort-lv4t.S Tue Mar 12 13:58:16 2002 @@ -1,7 +1,7 @@ #include #include /* - * Function: armv4t_late_abort + * Function: v4t_late_abort * * Params : r2 = address of aborted instruction * : r3 = saved SPSR @@ -17,7 +17,7 @@ * abort here if the I-TLB and D-TLB aren't seeing the same * picture. Unfortunately, this does happen. We live with it. */ -ENTRY(armv4t_late_abort) +ENTRY(v4t_late_abort) tst r3, #PSR_T_BIT @ check for thumb mode mrc p15, 0, r1, c5, c0, 0 @ get FSR mrc p15, 0, r0, c6, c0, 0 @ get FAR diff -Nru a/arch/arm/mm/copypage-v3.S b/arch/arm/mm/copypage-v3.S --- a/arch/arm/mm/copypage-v3.S Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/copypage-v3.S Tue Mar 12 13:58:14 2002 @@ -20,7 +20,7 @@ * * FIXME: do we need to handle cache stuff... */ -ENTRY(armv3_copy_user_page) +ENTRY(v3_copy_user_page) stmfd sp!, {r4, lr} @ 2 mov r2, #PAGE_SZ/64 @ 1 ldmia r1!, {r3, r4, ip, lr} @ 4+1 @@ -42,7 +42,7 @@ * * FIXME: do we need to handle cache stuff... */ -ENTRY(armv3_clear_user_page) +ENTRY(v3_clear_user_page) str lr, [sp, #-4]! mov r1, #PAGE_SZ/64 @ 1 mov r2, #0 @ 1 @@ -57,3 +57,8 @@ bne 1b @ 1 ldr pc, [sp], #4 + .section ".text.init", #alloc, #execinstr + +ENTRY(v3_user_fns) + .long v3_clear_user_page + .long v3_copy_user_page diff -Nru a/arch/arm/mm/copypage-v4.S b/arch/arm/mm/copypage-v4.S --- a/arch/arm/mm/copypage-v4.S Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/copypage-v4.S Tue Mar 12 13:58:14 2002 @@ -26,7 +26,7 @@ * instruction. If your processor does not supply this, you have to write your * own copy_user_page that does the right thing. */ -ENTRY(armv4_copy_user_page) +ENTRY(v4_copy_user_page) stmfd sp!, {r4, lr} @ 2 mov r2, #PAGE_SZ/64 @ 1 ldmia r1!, {r3, r4, ip, lr} @ 4 @@ -51,7 +51,7 @@ * * Same story as above. */ -ENTRY(armv4_clear_user_page) +ENTRY(v4_clear_user_page) str lr, [sp, #-4]! 
mov r1, #PAGE_SZ/64 @ 1 mov r2, #0 @ 1 @@ -68,3 +68,10 @@ bne 1b @ 1 mcr p15, 0, r1, c7, c10, 4 @ 1 drain WB ldr pc, [sp], #4 + + .section ".text.init", #alloc, #execinstr + +ENTRY(v4_user_fns) + .long v4_clear_user_page + .long v4_copy_user_page + diff -Nru a/arch/arm/mm/copypage-v4mc.S b/arch/arm/mm/copypage-v4mc.S --- a/arch/arm/mm/copypage-v4mc.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/copypage-v4mc.S Tue Mar 12 13:58:15 2002 @@ -26,7 +26,7 @@ * instruction. If your processor does not supply this, you have to write your * own copy_user_page that does the right thing. */ -ENTRY(armv4_mc_copy_user_page) +ENTRY(v4_mc_copy_user_page) stmfd sp!, {r4, lr} @ 2 mov r4, r0 mov r0, r1 @@ -53,7 +53,7 @@ * * Same story as above. */ -ENTRY(armv4_mc_clear_user_page) +ENTRY(v4_mc_clear_user_page) str lr, [sp, #-4]! mov r1, #PAGE_SZ/64 @ 1 mov r2, #0 @ 1 @@ -69,3 +69,10 @@ subs r1, r1, #1 @ 1 bne 1b @ 1 ldr pc, [sp], #4 + + .section ".text.init", #alloc, #execinstr + +ENTRY(v4_mc_user_fns) + .long v4_mc_clear_user_page + .long v4_mc_copy_user_page + diff -Nru a/arch/arm/mm/copypage-v5te.S b/arch/arm/mm/copypage-v5te.S --- a/arch/arm/mm/copypage-v5te.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/copypage-v5te.S Tue Mar 12 13:58:15 2002 @@ -32,7 +32,7 @@ * page. We rely on the mini-cache being smaller than one page, so we'll * cycle through the complete cache anyway. */ -ENTRY(armv5te_copy_user_page) +ENTRY(v5te_mc_copy_user_page) stmfd sp!, {r4, r5, lr} mov r5, r0 mov r0, r1 @@ -62,7 +62,7 @@ * r0 = destination * r1 = virtual user address of ultimate destination page */ -ENTRY(armv5te_clear_user_page) +ENTRY(v5te_mc_clear_user_page) str lr, [sp, #-4]! mov r1, #PAGE_SZ/32 mov r2, #0 @@ -77,3 +77,9 @@ subs r1, r1, #1 bne 1b ldr pc, [sp], #4 + + .section ".text.init", #alloc, #execinstr + +ENTRY(v5te_mc_user_fns) + .long v5te_mc_clear_user_page + .long v5te_mc_copy_user_page diff -Nru a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c --- a/arch/arm/mm/fault-armv.c Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/fault-armv.c Tue Mar 12 13:58:15 2002 @@ -181,7 +181,7 @@ static void make_coherent(struct vm_area_struct *vma, unsigned long addr, struct page *page) { - struct vm_area_struct *mpnt; + struct list_head *l; struct mm_struct *mm = vma->vm_mm; unsigned long pgoff = (addr - vma->vm_start) >> PAGE_SHIFT; int aliases = 0; @@ -191,9 +191,11 @@ * space, then we need to handle them specially to maintain * cache coherency. */ - for (mpnt = page->mapping->i_mmap_shared; mpnt; - mpnt = mpnt->vm_next_share) { + list_for_each(l, &page->mapping->i_mmap_shared) { + struct vm_area_struct *mpnt; unsigned long off; + + mpnt = list_entry(l, struct vm_area_struct, shared); /* * If this VMA is not in our MM, we can ignore it. diff -Nru a/arch/arm/mm/init.c b/arch/arm/mm/init.c --- a/arch/arm/mm/init.c Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/init.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * linux/arch/arm/mm/init.c * - * Copyright (C) 1995-2000 Russell King + * Copyright (C) 1995-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -46,7 +46,7 @@ #define TABLE_OFFSET 0 #endif -#define TABLE_SIZE ((TABLE_OFFSET + PTRS_PER_PTE) * sizeof(void *)) +#define TABLE_SIZE ((TABLE_OFFSET + PTRS_PER_PTE) * sizeof(pte_t)) static unsigned long totalram_pages; extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; @@ -319,7 +319,7 @@ * and can only be in node 0. 
*/ reserve_bootmem_node(pgdat, __pa(swapper_pg_dir), - PTRS_PER_PGD * sizeof(void *)); + PTRS_PER_PGD * sizeof(pgd_t)); #endif /* * And don't forget to reserve the allocator bitmap, diff -Nru a/arch/arm/mm/mm-armv.c b/arch/arm/mm/mm-armv.c --- a/arch/arm/mm/mm-armv.c Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/mm-armv.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * linux/arch/arm/mm/mm-armv.c * - * Copyright (C) 1998-2000 Russell King + * Copyright (C) 1998-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -82,9 +82,6 @@ init_pgd = pgd_offset_k(0); if (vectors_base() == 0) { - init_pmd = pmd_offset(init_pgd, 0); - init_pte = pte_offset(init_pmd, 0); - /* * This lock is here just to satisfy pmd_alloc and pte_lock */ @@ -172,11 +169,14 @@ static inline void alloc_init_section(unsigned long virt, unsigned long phys, int prot) { - pmd_t pmd; + pmd_t *pmdp, pmd; - pmd_val(pmd) = phys | prot; + pmdp = pmd_offset(pgd_offset_k(virt), virt); + if (virt & (1 << PMD_SHIFT)) + pmdp++; - set_pmd(pmd_offset(pgd_offset_k(virt), virt), pmd); + pmd_val(pmd) = phys | prot; + set_pmd(pmdp, pmd); } /* @@ -189,18 +189,19 @@ static inline void alloc_init_page(unsigned long virt, unsigned long phys, int domain, int prot) { - pmd_t *pmdp; + pmd_t *pmdp, pmd; pte_t *ptep; pmdp = pmd_offset(pgd_offset_k(virt), virt); if (pmd_none(*pmdp)) { - pte_t *ptep = alloc_bootmem_low_pages(2 * PTRS_PER_PTE * - sizeof(pte_t)); + ptep = alloc_bootmem_low_pages(2 * PTRS_PER_PTE * + sizeof(pte_t)); - ptep += PTRS_PER_PTE; - - set_pmd(pmdp, __mk_pmd(ptep, PMD_TYPE_TABLE | PMD_DOMAIN(domain))); + pmd_val(pmd) = __pa(ptep) | PMD_TYPE_TABLE | PMD_DOMAIN(domain); + set_pmd(pmdp, pmd); + pmd_val(pmd) += 256 * sizeof(pte_t); + set_pmd(pmdp + 1, pmd); } ptep = pte_offset_kernel(pmdp, virt); @@ -266,11 +267,11 @@ length -= PAGE_SIZE; } - while (length >= PGDIR_SIZE) { + while (length >= (PGDIR_SIZE / 2)) { alloc_init_section(virt, virt + off, prot_sect); - virt += PGDIR_SIZE; - length -= PGDIR_SIZE; + virt += (PGDIR_SIZE / 2); + length -= (PGDIR_SIZE / 2); } while (length >= PAGE_SIZE) { @@ -462,42 +463,4 @@ for (node = 0; node < numnodes; node++) free_unused_memmap_node(node, mi); -} - -/* - * PTE table allocation cache. - * - * This is a move away from our custom 2K page allocator. We now use the - * slab cache to keep track of these objects. - * - * With this, it is questionable as to whether the PGT cache gains us - * anything. We may be better off dropping the PTE stuff from our PGT - * cache implementation. - */ -kmem_cache_t *pte_cache; - -/* - * The constructor gets called for each object within the cache when the - * cache page is created. Note that if slab tries to misalign the blocks, - * we BUG() loudly. 
- */ -static void pte_cache_ctor(void *pte, kmem_cache_t *cache, unsigned long flags) -{ - unsigned long block = (unsigned long)pte; - - if (block & 2047) - BUG(); - - memzero(pte, 2 * PTRS_PER_PTE * sizeof(pte_t)); - cpu_cache_clean_invalidate_range(block, block + - PTRS_PER_PTE * sizeof(pte_t), 0); -} - -void __init pgtable_cache_init(void) -{ - pte_cache = kmem_cache_create("pte-cache", - 2 * PTRS_PER_PTE * sizeof(pte_t), 0, 0, - pte_cache_ctor, NULL); - if (!pte_cache) - BUG(); } diff -Nru a/arch/arm/mm/proc-arm1020.S b/arch/arm/mm/proc-arm1020.S --- a/arch/arm/mm/proc-arm1020.S Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/proc-arm1020.S Tue Mar 12 13:58:14 2002 @@ -499,7 +499,9 @@ */ .align 5 ENTRY(cpu_arm1020_set_pte) - str r1, [r0], #-1024 @ linux version + tst r0, #2048 + streq r0, [r0, -r0] @ BUG_ON + str r1, [r0], #-2048 @ linux version eor r1, r1, #LPTE_PRESENT | LPTE_YOUNG | LPTE_WRITE | LPTE_DIRTY @@ -608,7 +610,7 @@ */ .type arm1020_processor_functions, #object arm1020_processor_functions: - .word armv4t_early_abort + .word v4t_early_abort .word cpu_arm1020_check_bugs .word cpu_arm1020_proc_init .word cpu_arm1020_proc_fin @@ -635,10 +637,6 @@ .word cpu_arm1020_set_pmd .word cpu_arm1020_set_pte - /* misc */ - .word armv4_clear_user_page - .word armv4_copy_user_page - .size arm1020_processor_functions, . - arm1020_processor_functions .type cpu_arm1020_info, #object @@ -672,4 +670,5 @@ .long cpu_arm1020_info .long arm1020_processor_functions .long v4wbi_tlb_fns + .long v4_user_fns .size __arm1020_proc_info, . - __arm1020_proc_info diff -Nru a/arch/arm/mm/proc-arm2,3.S b/arch/arm/mm/proc-arm2,3.S --- a/arch/arm/mm/proc-arm2,3.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/proc-arm2,3.S Tue Mar 12 13:58:15 2002 @@ -342,6 +342,7 @@ .long cpu_arm2_info .long SYMBOL_NAME(arm2_processor_functions) .long 0 + .long 0 .long 0x41560250 .long 0xfffffff0 @@ -353,6 +354,7 @@ .long cpu_arm250_info .long SYMBOL_NAME(arm250_processor_functions) .long 0 + .long 0 .long 0x41560300 .long 0xfffffff0 @@ -364,3 +366,5 @@ .long cpu_arm3_info .long SYMBOL_NAME(arm3_processor_functions) .long 0 + .long 0 + diff -Nru a/arch/arm/mm/proc-arm6,7.S b/arch/arm/mm/proc-arm6,7.S --- a/arch/arm/mm/proc-arm6,7.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/proc-arm6,7.S Tue Mar 12 13:58:15 2002 @@ -274,7 +274,9 @@ .align 5 ENTRY(cpu_arm6_set_pte) ENTRY(cpu_arm7_set_pte) - str r1, [r0], #-1024 @ linux version + tst r0, #2048 + streq r0, [r0, -r0] @ BUG_ON + str r1, [r0], #-2048 @ linux version eor r1, r1, #LPTE_PRESENT | LPTE_YOUNG | LPTE_WRITE | LPTE_DIRTY @@ -373,10 +375,6 @@ .word cpu_arm6_set_pmd .word cpu_arm6_set_pte - /* other */ - .word armv3_clear_user_page - .word armv3_copy_user_page - .size arm6_processor_functions, . - arm6_processor_functions /* @@ -412,10 +410,6 @@ .word cpu_arm7_set_pmd .word cpu_arm7_set_pte - /* other */ - .word armv3_clear_user_page - .word armv3_copy_user_page - .size arm7_processor_functions, . - arm7_processor_functions .type cpu_arm6_info, #object @@ -465,6 +459,7 @@ .long cpu_arm6_info .long arm6_processor_functions .long v3_tlb_fns + .long v3_user_fns .size __arm6_proc_info, . - __arm6_proc_info .type __arm610_proc_info, #object @@ -479,6 +474,7 @@ .long cpu_arm610_info .long arm6_processor_functions .long v3_tlb_fns + .long v3_user_fns .size __arm610_proc_info, . - __arm610_proc_info .type __arm7_proc_info, #object @@ -493,6 +489,7 @@ .long cpu_arm7_info .long arm7_processor_functions .long v3_tlb_fns + .long v3_user_fns .size __arm7_proc_info, . 
- __arm7_proc_info .type __arm710_proc_info, #object @@ -507,4 +504,5 @@ .long cpu_arm710_info .long arm7_processor_functions .long v3_tlb_fns + .long v3_user_fns .size __arm710_proc_info, . - __arm710_proc_info diff -Nru a/arch/arm/mm/proc-arm720.S b/arch/arm/mm/proc-arm720.S --- a/arch/arm/mm/proc-arm720.S Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/proc-arm720.S Tue Mar 12 13:58:14 2002 @@ -136,7 +136,9 @@ */ .align 5 ENTRY(cpu_arm720_set_pte) - str r1, [r0], #-1024 @ linux version + tst r0, #2048 + streq r0, [r0, -r0] @ BUG_ON + str r1, [r0], #-2048 @ linux version eor r1, r1, #LPTE_PRESENT | LPTE_YOUNG | LPTE_WRITE | LPTE_DIRTY @@ -199,7 +201,7 @@ */ .type arm720_processor_functions, #object ENTRY(arm720_processor_functions) - .word armv4t_late_abort + .word v4t_late_abort .word cpu_arm720_check_bugs .word cpu_arm720_proc_init .word cpu_arm720_proc_fin @@ -226,10 +228,6 @@ .word cpu_arm720_set_pmd .word cpu_arm720_set_pte - /* misc */ - .word armv4_clear_user_page - .word armv4_copy_user_page - .size arm720_processor_functions, . - arm720_processor_functions .type cpu_arm720_info, #object @@ -265,4 +263,5 @@ .long cpu_arm720_info @ info .long arm720_processor_functions .long v4_tlb_fns + .long v4_user_fns .size __arm720_proc_info, . - __arm720_proc_info diff -Nru a/arch/arm/mm/proc-arm920.S b/arch/arm/mm/proc-arm920.S --- a/arch/arm/mm/proc-arm920.S Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/proc-arm920.S Tue Mar 12 13:58:14 2002 @@ -420,7 +420,9 @@ */ .align 5 ENTRY(cpu_arm920_set_pte) - str r1, [r0], #-1024 @ linux version + tst r0, #2048 + streq r0, [r0, -r0] @ BUG_ON + str r1, [r0], #-2048 @ linux version eor r1, r1, #LPTE_PRESENT | LPTE_YOUNG | LPTE_WRITE | LPTE_DIRTY @@ -511,7 +513,7 @@ */ .type arm920_processor_functions, #object arm920_processor_functions: - .word armv4t_early_abort + .word v4t_early_abort .word cpu_arm920_check_bugs .word cpu_arm920_proc_init .word cpu_arm920_proc_fin @@ -538,10 +540,6 @@ .word cpu_arm920_set_pmd .word cpu_arm920_set_pte - /* misc */ - .word armv4_clear_user_page - .word armv4_copy_user_page - .size arm920_processor_functions, . - arm920_processor_functions .type cpu_arm920_info, #object @@ -575,4 +573,5 @@ .long cpu_arm920_info .long arm920_processor_functions .long v4wbi_tlb_fns + .long v4_user_fns .size __arm920_proc_info, . - __arm920_proc_info diff -Nru a/arch/arm/mm/proc-arm922.S b/arch/arm/mm/proc-arm922.S --- a/arch/arm/mm/proc-arm922.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/proc-arm922.S Tue Mar 12 13:58:15 2002 @@ -421,7 +421,9 @@ */ .align 5 ENTRY(cpu_arm922_set_pte) - str r1, [r0], #-1024 @ linux version + tst r0, #2048 + streq r0, [r0, -r0] @ BUG_ON + str r1, [r0], #-2048 @ linux version eor r1, r1, #LPTE_PRESENT | LPTE_YOUNG | LPTE_WRITE | LPTE_DIRTY @@ -512,7 +514,7 @@ */ .type arm922_processor_functions, #object arm922_processor_functions: - .word armv4t_early_abort + .word v4t_early_abort .word cpu_arm922_check_bugs .word cpu_arm922_proc_init .word cpu_arm922_proc_fin @@ -539,10 +541,6 @@ .word cpu_arm922_set_pmd .word cpu_arm922_set_pte - /* misc */ - .word armv4_clear_user_page - .word armv4_copy_user_page - .size arm922_processor_functions, . - arm922_processor_functions .type cpu_arm922_info, #object @@ -576,4 +574,5 @@ .long cpu_arm922_info .long arm922_processor_functions .long v4wbi_tlb_fns + .long v4_user_fns .size __arm922_proc_info, . 
- __arm922_proc_info diff -Nru a/arch/arm/mm/proc-arm926.S b/arch/arm/mm/proc-arm926.S --- a/arch/arm/mm/proc-arm926.S Tue Mar 12 13:58:14 2002 +++ b/arch/arm/mm/proc-arm926.S Tue Mar 12 13:58:14 2002 @@ -443,7 +443,9 @@ */ .align 5 ENTRY(cpu_arm926_set_pte) - str r1, [r0], #-1024 @ linux version + tst r0, #2048 + streq r0, [r0, -r0] @ BUG_ON + str r1, [r0], #-2048 @ linux version eor r1, r1, #LPTE_PRESENT | LPTE_YOUNG | LPTE_WRITE | LPTE_DIRTY @@ -549,7 +551,7 @@ */ .type arm926_processor_functions, #object arm926_processor_functions: - .word armv5ej_early_abort + .word v5ej_early_abort .word cpu_arm926_check_bugs .word cpu_arm926_proc_init .word cpu_arm926_proc_fin @@ -576,10 +578,6 @@ .word cpu_arm926_set_pmd .word cpu_arm926_set_pte - /* misc */ - .word armv4_clear_user_page - .word armv4_copy_user_page - .size arm926_processor_functions, . - arm926_processor_functions .type cpu_arm926_info, #object @@ -613,4 +611,5 @@ .long cpu_arm926_info .long arm926_processor_functions .long v4wbi_tlb_fns + .long v4_user_fns .size __arm926_proc_info, . - __arm926_proc_info diff -Nru a/arch/arm/mm/proc-sa110.S b/arch/arm/mm/proc-sa110.S --- a/arch/arm/mm/proc-sa110.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/proc-sa110.S Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * linux/arch/arm/mm/proc-sa110.S * - * Copyright (C) 1997-2000 Russell King + * Copyright (C) 1997-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -468,7 +468,9 @@ .align 5 ENTRY(cpu_sa110_set_pte) ENTRY(cpu_sa1100_set_pte) - str r1, [r0], #-1024 @ linux version + tst r0, #2048 + streq r0, [r0, -r0] @ BUG_ON + str r1, [r0], #-2048 @ linux version eor r1, r1, #L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_WRITE | L_PTE_DIRTY @@ -538,7 +540,7 @@ .type sa110_processor_functions, #object ENTRY(sa110_processor_functions) - .word armv4_early_abort + .word v4_early_abort .word cpu_sa110_check_bugs .word cpu_sa110_proc_init .word cpu_sa110_proc_fin @@ -565,10 +567,6 @@ .word cpu_sa110_set_pmd .word cpu_sa110_set_pte - /* misc */ - .word armv4_clear_user_page - .word armv4_copy_user_page - .size sa110_processor_functions, . - sa110_processor_functions .type cpu_sa110_info, #object @@ -610,10 +608,6 @@ .word cpu_sa1100_set_pmd .word cpu_sa1100_set_pte - /* misc */ - .word armv4_mc_clear_user_page - .word armv4_mc_copy_user_page - .size sa1100_processor_functions, . - sa1100_processor_functions cpu_sa1100_info: @@ -651,6 +645,7 @@ .long cpu_sa110_info .long sa110_processor_functions .long v4wb_tlb_fns + .long v4_user_fns .size __sa110_proc_info, . - __sa110_proc_info .type __sa1100_proc_info,#object @@ -665,6 +660,7 @@ .long cpu_sa1100_info .long sa1100_processor_functions .long v4wb_tlb_fns + .long v4_mc_user_fns .size __sa1100_proc_info, . - __sa1100_proc_info .type __sa1110_proc_info,#object @@ -679,4 +675,5 @@ .long cpu_sa1110_info .long sa1100_processor_functions .long v4wb_tlb_fns + .long v4_mc_user_fns .size __sa1110_proc_info, . 
- __sa1110_proc_info diff -Nru a/arch/arm/mm/proc-xscale.S b/arch/arm/mm/proc-xscale.S --- a/arch/arm/mm/proc-xscale.S Tue Mar 12 13:58:15 2002 +++ b/arch/arm/mm/proc-xscale.S Tue Mar 12 13:58:15 2002 @@ -602,7 +602,9 @@ */ .align 5 ENTRY(cpu_xscale_set_pte) - str r1, [r0], #-1024 @ linux version + tst r0, #2048 + streq r0, [r0, -r0] @ BUG_ON + str r1, [r0], #-2048 @ linux version bic r2, r1, #0xff0 orr r2, r2, #PTE_TYPE_EXT @ extended page @@ -695,7 +697,7 @@ .type xscale_processor_functions, #object ENTRY(xscale_processor_functions) - .word armv4t_early_abort + .word v4t_early_abort .word cpu_xscale_check_bugs .word cpu_xscale_proc_init .word cpu_xscale_proc_fin @@ -722,10 +724,6 @@ .word cpu_xscale_set_pmd .word cpu_xscale_set_pte - /* misc */ - .word armv5te_clear_user_page - .word armv5te_copy_user_page - .size xscale_processor_functions, . - xscale_processor_functions .type cpu_80200_info, #object @@ -765,6 +763,7 @@ .long cpu_80200_info .long xscale_processor_functions .long v4wbi_tlb_fns + .long v5te_mc_user_fns .size __80200_proc_info, . - __80200_proc_info .type __pxa250_proc_info,#object @@ -779,6 +778,7 @@ .long cpu_pxa250_info .long xscale_processor_functions .long v4wbi_tlb_fns + .long v5te_mc_user_fns .size __cotulla_proc_info, . - __cotulla_proc_info .size __pxa250_proc_info, . - __pxa250_proc_info diff -Nru a/arch/i386/defconfig b/arch/i386/defconfig --- a/arch/i386/defconfig Tue Mar 12 13:58:14 2002 +++ b/arch/i386/defconfig Tue Mar 12 13:58:14 2002 @@ -258,7 +258,6 @@ # CONFIG_IDEDMA_PCI_WIP is not set # CONFIG_BLK_DEV_IDEDMA_TIMEOUT is not set # CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_AEC62XX is not set # CONFIG_AEC62XX_TUNING is not set # CONFIG_BLK_DEV_ALI15X3 is not set diff -Nru a/arch/i386/kernel/apm.c b/arch/i386/kernel/apm.c --- a/arch/i386/kernel/apm.c Tue Mar 12 13:58:14 2002 +++ b/arch/i386/kernel/apm.c Tue Mar 12 13:58:14 2002 @@ -275,10 +275,11 @@ */ /* - * Define to always call the APM BIOS busy routine even if the clock was - * not slowed by the idle routine. + * Define as 1 to make the driver always call the APM BIOS busy + * routine even if the clock was not reported as slowed by the + * idle routine. Otherwise, define as 0. */ -#define ALWAYS_CALL_BUSY +#define ALWAYS_CALL_BUSY 1 /* * Define to make the APM BIOS calls zero all data segment registers (so @@ -380,7 +381,7 @@ static int set_pm_idle; static int suspends_pending; static int standbys_pending; -static int waiting_for_resume; +static int ignore_sys_suspend; static int ignore_normal_resume; static int bounce_interval = DEFAULT_BOUNCE_INTERVAL; @@ -471,6 +472,28 @@ }; #define ERROR_COUNT (sizeof(error_table)/sizeof(lookup_t)) +/** + * apm_error - display an APM error + * @str: information string + * @err: APM BIOS return code + * + * Write a meaningful log entry to the kernel log in the event of + * an APM error. + */ + +static void apm_error(char *str, int err) +{ + int i; + + for (i = 0; i < ERROR_COUNT; i++) + if (error_table[i].key == err) break; + if (i < ERROR_COUNT) + printk(KERN_NOTICE "apm: %s: %s\n", str, error_table[i].msg); + else + printk(KERN_NOTICE "apm: %s: unknown error code %#2.2x\n", + str, err); +} + /* * These are the actual BIOS calls. Depending on APM_ZERO_SEGS and * apm_info.allow_ints, we are being really paranoid here! 
Not only @@ -702,13 +725,13 @@ } /** - * apm_set_power_state - set system wide power state + * set_system_power_state - set system wide power state * @state: which state to enter * * Transition the entire system into a new APM power state. */ -static int apm_set_power_state(u_short state) +static int set_system_power_state(u_short state) { return set_power_state(APM_DEVICE_ALL, state); } @@ -725,7 +748,6 @@ static int apm_do_idle(void) { u32 eax; - int slowed; if (apm_bios_call_simple(APM_FUNC_IDLE, 0, 0, &eax)) { static unsigned long t; @@ -737,13 +759,8 @@ } return -1; } - slowed = (apm_info.bios.flags & APM_IDLE_SLOWS_CLOCK) != 0; -#ifdef ALWAYS_CALL_BUSY - clock_slowed = 1; -#else - clock_slowed = slowed; -#endif - return slowed; + clock_slowed = (apm_info.bios.flags & APM_IDLE_SLOWS_CLOCK) != 0; + return clock_slowed; } /** @@ -756,7 +773,7 @@ { u32 dummy; - if (clock_slowed) { + if (clock_slowed || ALWAYS_CALL_BUSY) { (void) apm_bios_call_simple(APM_FUNC_BUSY, 0, 0, &dummy); clock_slowed = 0; } @@ -771,7 +788,7 @@ #define IDLE_CALC_LIMIT (HZ * 100) #define IDLE_LEAKY_MAX 16 -static void (*sys_idle)(void); +static void (*original_pm_idle)(void); extern void default_idle(void); @@ -785,14 +802,13 @@ static void apm_cpu_idle(void) { - static int use_apm_idle = 0; - static unsigned int last_jiffies = 0; - static unsigned int last_stime = 0; + static int use_apm_idle; /* = 0 */ + static unsigned int last_jiffies; /* = 0 */ + static unsigned int last_stime; /* = 0 */ - int apm_is_idle = 0; + int apm_idle_done = 0; unsigned int jiffies_since_last_check = jiffies - last_jiffies; - unsigned int t1; - + unsigned int bucket; recalc: if (jiffies_since_last_check > IDLE_CALC_LIMIT) { @@ -810,7 +826,7 @@ last_stime = current->times.tms_stime; } - t1 = IDLE_LEAKY_MAX; + bucket = IDLE_LEAKY_MAX; while (!need_resched()) { if (use_apm_idle) { @@ -818,23 +834,24 @@ t = jiffies; switch (apm_do_idle()) { - case 0: apm_is_idle = 1; + case 0: apm_idle_done = 1; if (t != jiffies) { - if (t1) { - t1 = IDLE_LEAKY_MAX; + if (bucket) { + bucket = IDLE_LEAKY_MAX; continue; } - } else if (t1) { - t1--; + } else if (bucket) { + bucket--; continue; } break; - case 1: apm_is_idle = 1; + case 1: apm_idle_done = 1; break; + default: /* BIOS refused */ } } - if (sys_idle) - sys_idle(); + if (original_pm_idle) + original_pm_idle(); else default_idle(); jiffies_since_last_check = jiffies - last_jiffies; @@ -842,7 +859,7 @@ goto recalc; } - if (apm_is_idle) + if (apm_idle_done) apm_do_busy(); } @@ -890,7 +907,7 @@ if (apm_info.realmode_power_off) machine_real_restart(po_bios_call, sizeof(po_bios_call)); else - (void) apm_set_power_state(APM_STATE_OFF); + (void) set_system_power_state(APM_STATE_OFF); } /** @@ -1035,28 +1052,6 @@ return APM_SUCCESS; } -/** - * apm_error - display an APM error - * @str: information string - * @err: APM BIOS return code - * - * Write a meaningful log entry to the kernel log in the event of - * an APM error. 
- */ - -static void apm_error(char *str, int err) -{ - int i; - - for (i = 0; i < ERROR_COUNT; i++) - if (error_table[i].key == err) break; - if (i < ERROR_COUNT) - printk(KERN_NOTICE "apm: %s: %s\n", str, error_table[i].msg); - else - printk(KERN_NOTICE "apm: %s: unknown error code %#2.2x\n", - str, err); -} - #if defined(CONFIG_APM_DISPLAY_BLANK) && defined(CONFIG_VT) /** @@ -1198,9 +1193,9 @@ /* Vetoed */ if (vetoable) { if (apm_info.connection_version > 0x100) - apm_set_power_state(APM_STATE_REJECT); + set_system_power_state(APM_STATE_REJECT); err = -EBUSY; - waiting_for_resume = 0; + ignore_sys_suspend = 0; printk(KERN_WARNING "apm: suspend was vetoed.\n"); goto out; } @@ -1208,9 +1203,10 @@ } get_time_diff(); cli(); - err = apm_set_power_state(APM_STATE_SUSPEND); + err = set_system_power_state(APM_STATE_SUSPEND); reinit_timer(); set_time(); + ignore_normal_resume = 1; sti(); if (err == APM_NO_ERROR) err = APM_SUCCESS; @@ -1219,7 +1215,6 @@ err = (err == APM_SUCCESS) ? 0 : -EIO; pm_send_all(PM_RESUME, (void *)0); queue_event(APM_NORMAL_RESUME, NULL); - ignore_normal_resume = 1; out: spin_lock(&user_list_lock); for (as = user_list; as != NULL; as = as->next) { @@ -1237,7 +1232,7 @@ /* If needed, notify drivers here */ get_time_diff(); - err = apm_set_power_state(APM_STATE_STANDBY); + err = set_system_power_state(APM_STATE_STANDBY); if ((err != APM_SUCCESS) && (err != APM_NO_ERROR)) apm_error("standby", err); } @@ -1291,13 +1286,13 @@ case APM_USER_SUSPEND: #ifdef CONFIG_APM_IGNORE_USER_SUSPEND if (apm_info.connection_version > 0x100) - apm_set_power_state(APM_STATE_REJECT); + set_system_power_state(APM_STATE_REJECT); break; #endif case APM_SYS_SUSPEND: if (ignore_bounce) { if (apm_info.connection_version > 0x100) - apm_set_power_state(APM_STATE_REJECT); + set_system_power_state(APM_STATE_REJECT); break; } /* @@ -1308,9 +1303,9 @@ * sending a SUSPEND event until something else * happens! 
*/ - if (waiting_for_resume) + if (ignore_sys_suspend) return; - waiting_for_resume = 1; + ignore_sys_suspend = 1; queue_event(event, NULL); if (suspends_pending <= 0) (void) suspend(1); @@ -1319,7 +1314,7 @@ case APM_NORMAL_RESUME: case APM_CRITICAL_RESUME: case APM_STANDBY_RESUME: - waiting_for_resume = 0; + ignore_sys_suspend = 0; last_resume = jiffies; ignore_bounce = 1; if ((event != APM_NORMAL_RESUME) @@ -1363,7 +1358,7 @@ pending_count = 4; if (debug) printk(KERN_DEBUG "apm: setting state busy\n"); - err = apm_set_power_state(APM_STATE_BUSY); + err = set_system_power_state(APM_STATE_BUSY); if (err) apm_error("busy", err); } @@ -1972,7 +1967,7 @@ if (HZ != 100) idle_period = (idle_period * HZ) / 100; if (idle_threshold < 100) { - sys_idle = pm_idle; + original_pm_idle = pm_idle; pm_idle = apm_cpu_idle; set_pm_idle = 1; } @@ -1985,7 +1980,7 @@ int error; if (set_pm_idle) - pm_idle = sys_idle; + pm_idle = original_pm_idle; if (((apm_info.bios.flags & APM_BIOS_DISENGAGED) == 0) && (apm_info.connection_version > 0x0100)) { error = apm_engage_power_management(APM_DEVICE_ALL, 0); diff -Nru a/arch/i386/kernel/dmi_scan.c b/arch/i386/kernel/dmi_scan.c --- a/arch/i386/kernel/dmi_scan.c Tue Mar 12 13:58:15 2002 +++ b/arch/i386/kernel/dmi_scan.c Tue Mar 12 13:58:15 2002 @@ -492,6 +492,11 @@ MATCH(DMI_BIOS_VERSION, "A04"), MATCH(DMI_BIOS_DATE, "08/24/2000"), NO_MATCH } }, + { broken_apm_power, "Dell Inspiron 2500", { /* Handle problems with APM on Inspiron 2500 */ + MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"), + MATCH(DMI_BIOS_VERSION, "A12"), + MATCH(DMI_BIOS_DATE, "02/04/2002"), NO_MATCH + } }, { set_realmode_power_off, "Award Software v4.60 PGMA", { /* broken PM poweroff bios */ MATCH(DMI_BIOS_VENDOR, "Award Software International, Inc."), MATCH(DMI_BIOS_VERSION, "4.60 PGMA"), diff -Nru a/arch/i386/kernel/entry.S b/arch/i386/kernel/entry.S --- a/arch/i386/kernel/entry.S Tue Mar 12 13:58:15 2002 +++ b/arch/i386/kernel/entry.S Tue Mar 12 13:58:15 2002 @@ -717,6 +717,7 @@ .long SYMBOL_NAME(sys_fremovexattr) .long SYMBOL_NAME(sys_tkill) .long SYMBOL_NAME(sys_sendfile64) + .long SYMBOL_NAME(sys_futex) /* 240 */ .rept NR_syscalls-(.-sys_call_table)/4 .long SYMBOL_NAME(sys_ni_syscall) diff -Nru a/arch/i386/kernel/pci-i386.h b/arch/i386/kernel/pci-i386.h --- a/arch/i386/kernel/pci-i386.h Tue Mar 12 13:58:15 2002 +++ b/arch/i386/kernel/pci-i386.h Tue Mar 12 13:58:15 2002 @@ -18,6 +18,7 @@ #define PCI_NO_SORT 0x0100 #define PCI_BIOS_SORT 0x0200 #define PCI_NO_CHECKS 0x0400 +#define PCI_USE_PIRQ_MASK 0x0800 #define PCI_ASSIGN_ROMS 0x1000 #define PCI_BIOS_IRQ_SCAN 0x2000 #define PCI_ASSIGN_ALL_BUSSES 0x4000 diff -Nru a/arch/i386/kernel/pci-irq.c b/arch/i386/kernel/pci-irq.c --- a/arch/i386/kernel/pci-irq.c Tue Mar 12 13:58:15 2002 +++ b/arch/i386/kernel/pci-irq.c Tue Mar 12 13:58:15 2002 @@ -570,6 +570,10 @@ * reported by the device if possible. 
*/ newirq = dev->irq; + if (!((1 << newirq) & mask)) { + if ( pci_probe & PCI_USE_PIRQ_MASK) newirq = 0; + else printk(KERN_WARNING "PCI: IRQ %i for device %s doesn't match PIRQ mask - try pci=usepirqmask\n", newirq, dev->slot_name); + } if (!newirq && assign) { for (i = 0; i < 16; i++) { if (!(mask & (1 << i))) @@ -588,7 +592,8 @@ irq = pirq & 0xf; DBG(" -> hardcoded IRQ %d\n", irq); msg = "Hardcoded"; - } else if (r->get && (irq = r->get(pirq_router_dev, dev, pirq))) { + } else if ( r->get && (irq = r->get(pirq_router_dev, dev, pirq)) && \ + ((!(pci_probe & PCI_USE_PIRQ_MASK)) || ((1 << irq) & mask)) ) { DBG(" -> got IRQ %d\n", irq); msg = "Found"; } else if (newirq && r->set && (dev->class >> 8) != PCI_CLASS_DISPLAY_VGA) { @@ -622,7 +627,9 @@ continue; if (info->irq[pin].link == pirq) { /* We refuse to override the dev->irq information. Give a warning! */ - if (dev2->irq && dev2->irq != irq) { + if ( dev2->irq && dev2->irq != irq && \ + (!(pci_probe & PCI_USE_PIRQ_MASK) || \ + ((1 << dev2->irq) & mask)) ) { printk(KERN_INFO "IRQ routing conflict for %s, have irq %d, want irq %d\n", dev2->slot_name, dev2->irq, irq); continue; diff -Nru a/arch/i386/kernel/pci-pc.c b/arch/i386/kernel/pci-pc.c --- a/arch/i386/kernel/pci-pc.c Tue Mar 12 13:58:15 2002 +++ b/arch/i386/kernel/pci-pc.c Tue Mar 12 13:58:15 2002 @@ -1343,6 +1343,9 @@ } else if (!strcmp(str, "assign-busses")) { pci_probe |= PCI_ASSIGN_ALL_BUSSES; return NULL; + } else if (!strcmp(str, "usepirqmask")) { + pci_probe |= PCI_USE_PIRQ_MASK; + return NULL; } else if (!strncmp(str, "irqmask=", 8)) { pcibios_irq_mask = simple_strtol(str+8, NULL, 0); return NULL; diff -Nru a/arch/ia64/Makefile b/arch/ia64/Makefile --- a/arch/ia64/Makefile Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/Makefile Tue Mar 12 13:58:14 2002 @@ -25,7 +25,7 @@ GCC_VERSION=$(shell $(CROSS_COMPILE)$(HOSTCC) -v 2>&1 | fgrep 'gcc version' | cut -f3 -d' ' | cut -f1 -d'.') ifneq ($(GCC_VERSION),2) - CFLAGS += -frename-registers --param max-inline-insns=400 + CFLAGS += -frename-registers --param max-inline-insns=2000 endif ifeq ($(CONFIG_ITANIUM_BSTEP_SPECIFIC),y) @@ -58,7 +58,7 @@ CFLAGS += -DBRINGUP SUBDIRS := arch/$(ARCH)/sn/kernel \ arch/$(ARCH)/sn/io \ - arch/$(ARCH)/sn/fprom \ + arch/$(ARCH)/sn/fakeprom \ $(SUBDIRS) CORE_FILES := arch/$(ARCH)/sn/kernel/sn.o \ arch/$(ARCH)/sn/io/sgiio.o \ diff -Nru a/arch/ia64/config.in b/arch/ia64/config.in --- a/arch/ia64/config.in Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/config.in Tue Mar 12 13:58:15 2002 @@ -119,6 +119,7 @@ source drivers/mtd/Config.in source drivers/pnp/Config.in source drivers/block/Config.in +source drivers/ieee1394/Config.in source drivers/message/i2o/Config.in source drivers/md/Config.in @@ -230,7 +231,7 @@ mainmenu_option next_comment comment 'Simulated drivers' - tristate 'Simulated Ethernet ' CONFIG_SIMETH + bool 'Simulated Ethernet ' CONFIG_SIMETH bool 'Simulated serial driver support' CONFIG_SIM_SERIAL if [ "$CONFIG_SCSI" != "n" ]; then bool 'Simulated SCSI disk' CONFIG_SCSI_SIM @@ -252,13 +253,20 @@ bool ' Disable VHPT' CONFIG_DISABLE_VHPT bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ -# early printk is currently broken for SMP: the secondary processors get stuck... 
-# bool ' Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK - + bool ' Early printk support (requires VGA!)' CONFIG_IA64_EARLY_PRINTK bool ' Debug memory allocations' CONFIG_DEBUG_SLAB bool ' Spinlock debugging' CONFIG_DEBUG_SPINLOCK bool ' Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG bool ' Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ + bool ' Built-in Kernel Debugger support' CONFIG_KDB + dep_tristate ' KDB modules' CONFIG_KDB_MODULES $CONFIG_KDB + if [ "$CONFIG_KDB" = "y" ]; then + bool ' KDB off by default' CONFIG_KDB_OFF + comment ' Load all symbols for debugging is required for KDB' + define_bool CONFIG_KALLSYMS y + else + bool ' Load all symbols for debugging' CONFIG_KALLSYMS + fi fi endmenu diff -Nru a/arch/ia64/defconfig b/arch/ia64/defconfig --- a/arch/ia64/defconfig Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/defconfig Tue Mar 12 13:58:14 2002 @@ -207,7 +207,6 @@ CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set # CONFIG_IDEDMA_PCI_AUTO is not set CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/ia64/dig/setup.c b/arch/ia64/dig/setup.c --- a/arch/ia64/dig/setup.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/dig/setup.c Tue Mar 12 13:58:15 2002 @@ -33,6 +33,7 @@ * is sufficient (the IDE driver will autodetect the drive geometry). */ char drive_info[4*16]; +extern int pcat_compat; unsigned char aux_device_present = 0xaa; /* XXX remove this when legacy I/O is gone */ @@ -81,13 +82,19 @@ screen_info.orig_video_ega_bx = 3; /* XXX fake */ } -void +void __init dig_irq_init (void) { - /* - * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support - * enabled. - */ - outb(0xff, 0xA1); - outb(0xff, 0x21); + if (pcat_compat) { + /* + * Disable the compatibility mode interrupts (8259 style), needs IN/OUT support + * enabled. + */ + printk("%s: Disabling PC-AT compatible 8259 interrupts\n", __FUNCTION__); + outb(0xff, 0xA1); + outb(0xff, 0x21); + } else { + printk("%s: System doesn't have PC-AT compatible dual-8259 setup. " + "Nothing to be done\n", __FUNCTION__); + } } diff -Nru a/arch/ia64/hp/hpsim_console.c b/arch/ia64/hp/hpsim_console.c --- a/arch/ia64/hp/hpsim_console.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/hp/hpsim_console.c Tue Mar 12 13:58:15 2002 @@ -1,15 +1,18 @@ /* * Platform dependent support for HP simulator. * - * Copyright (C) 1998, 1999 Hewlett-Packard Co - * Copyright (C) 1998, 1999 David Mosberger-Tang + * Copyright (C) 1998, 1999, 2002 Hewlett-Packard Co + * David Mosberger-Tang * Copyright (C) 1999 Vijay Chander */ +#include + #include #include #include #include #include +#include #include #include @@ -57,5 +60,5 @@ static kdev_t simcons_console_device (struct console *c) { - return MKDEV(TTY_MAJOR, 64 + c->index); + return mk_kdev(TTY_MAJOR, 64 + c->index); } diff -Nru a/arch/ia64/ia32/binfmt_elf32.c b/arch/ia64/ia32/binfmt_elf32.c --- a/arch/ia64/ia32/binfmt_elf32.c Tue Mar 12 13:58:16 2002 +++ b/arch/ia64/ia32/binfmt_elf32.c Tue Mar 12 13:58:16 2002 @@ -142,10 +142,11 @@ /* * Setup GDTD. Note: GDTD is the descrambled version of the pseudo-descriptor * format defined by Figure 3-11 "Pseudo-Descriptor Format" in the IA-32 - * architecture manual. + * architecture manual. Also note that the only fields that are not ignored are + * `base', `limit', 'G', `P' (must be 1) and `S' (must be 0). 
*/ - regs->r31 = IA32_SEG_UNSCRAMBLE(IA32_SEG_DESCRIPTOR(IA32_GDT_OFFSET, IA32_PAGE_SIZE - 1, 0, - 0, 0, 0, 0, 0, 0)); + regs->r31 = IA32_SEG_UNSCRAMBLE(IA32_SEG_DESCRIPTOR(IA32_GDT_OFFSET, IA32_PAGE_SIZE - 1, + 0, 0, 0, 1, 0, 0, 0)); /* Setup the segment selectors */ regs->r16 = (__USER_DS << 16) | __USER_DS; /* ES == DS, GS, FS are zero */ regs->r17 = (__USER_DS << 16) | __USER_CS; /* SS, CS; ia32_load_state() sets TSS and LDT */ @@ -206,6 +207,7 @@ set_personality(PER_LINUX32); current->thread.map_base = IA32_PAGE_OFFSET/3; current->thread.task_size = IA32_PAGE_OFFSET; /* use what Linux/x86 uses... */ + current->thread.flags |= IA64_THREAD_XSTACK; /* data must be executable */ set_fs(USER_DS); /* set addr limit for new TASK_SIZE */ } diff -Nru a/arch/ia64/ia32/ia32_entry.S b/arch/ia64/ia32/ia32_entry.S --- a/arch/ia64/ia32/ia32_entry.S Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/ia32/ia32_entry.S Tue Mar 12 13:58:14 2002 @@ -37,7 +37,7 @@ mov loc1=r16 // save ar.pfs across do_fork .body zxt4 out1=in1 // newsp - mov out3=0 // stacksize + mov out3=16 // stacksize (compensates for 16-byte scratch area) adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = ®s zxt4 out0=in0 // out0 = clone_flags br.call.sptk.many rp=do_fork @@ -98,7 +98,7 @@ ld8 r2=[r2] ;; mov r8=0 - tbit.nz p6,p0=r2,PT_TRACESYS_BIT + tbit.nz p6,p0=r2,PT_SYSCALLTRACE_BIT (p6) br.cond.spnt .ia32_strace_check_retval ;; // prevent RAW on r8 END(ia32_ret_from_clone) @@ -220,7 +220,7 @@ data8 sys32_pipe data8 sys32_times data8 sys32_ni_syscall /* old prof syscall holder */ - data8 sys_brk /* 45 */ + data8 sys32_brk /* 45 */ data8 sys_setgid /* 16-bit version */ data8 sys_getgid /* 16-bit version */ data8 sys32_signal diff -Nru a/arch/ia64/ia32/ia32_ioctl.c b/arch/ia64/ia32/ia32_ioctl.c --- a/arch/ia64/ia32/ia32_ioctl.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/ia32/ia32_ioctl.c Tue Mar 12 13:58:15 2002 @@ -3,12 +3,14 @@ * * Copyright (C) 2000 VA Linux Co * Copyright (C) 2000 Don Dugger - * Copyright (C) 2001 Hewlett-Packard Co + * Copyright (C) 2001-2002 Hewlett-Packard Co * David Mosberger-Tang */ #include #include +#include /* argh, msdos_fs.h isn't self-contained... */ + #include #include #include @@ -79,6 +81,38 @@ return ret; } + case IOCTL_NR(SIOCGIFCONF): + { + struct ifconf32 { + int ifc_len; + unsigned int ifc_ptr; + } ifconf32; + struct ifconf ifconf; + int i, n; + char *p32, *p64; + char buf[32]; /* sizeof IA32 ifreq structure */ + + if (copy_from_user(&ifconf32, P(arg), sizeof(ifconf32))) + return -EFAULT; + ifconf.ifc_len = ifconf32.ifc_len; + ifconf.ifc_req = P(ifconf32.ifc_ptr); + ret = DO_IOCTL(fd, SIOCGIFCONF, &ifconf); + ifconf32.ifc_len = ifconf.ifc_len; + if (copy_to_user(P(arg), &ifconf32, sizeof(ifconf32))) + return -EFAULT; + n = ifconf.ifc_len / sizeof(struct ifreq); + p32 = P(ifconf32.ifc_ptr); + p64 = P(ifconf32.ifc_ptr); + for (i = 0; i < n; i++) { + if (copy_from_user(buf, p64, sizeof(struct ifreq))) + return -EFAULT; + if (copy_to_user(p32, buf, sizeof(buf))) + return -EFAULT; + p32 += sizeof(buf); + p64 += sizeof(struct ifreq); + } + return ret; + } case IOCTL_NR(DRM_IOCTL_VERSION): { diff -Nru a/arch/ia64/ia32/ia32_signal.c b/arch/ia64/ia32/ia32_signal.c --- a/arch/ia64/ia32/ia32_signal.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/ia32/ia32_signal.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * IA32 Architecture-specific signal handling support. 
* - * Copyright (C) 1999, 2001 Hewlett-Packard Co + * Copyright (C) 1999, 2001-2002 Hewlett-Packard Co * David Mosberger-Tang * Copyright (C) 1999 Arun Sharma * Copyright (C) 2000 VA Linux Co @@ -522,6 +522,7 @@ static int setup_frame_ia32 (int sig, struct k_sigaction *ka, sigset_t *set, struct pt_regs * regs) { + struct exec_domain *ed = current_thread_info()->exec_domain; struct sigframe_ia32 *frame; int err = 0; @@ -530,12 +531,8 @@ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) goto give_sigsegv; - err |= __put_user((current->exec_domain - && current->exec_domain->signal_invmap - && sig < 32 - ? (int)(current->exec_domain->signal_invmap[sig]) - : sig), - &frame->sig); + err |= __put_user((ed && ed->signal_invmap && sig < 32 + ? (int)(ed->signal_invmap[sig]) : sig), &frame->sig); err |= setup_sigcontext_ia32(&frame->sc, &frame->fpstate, regs, set->sig[0]); @@ -590,6 +587,7 @@ setup_rt_frame_ia32 (int sig, struct k_sigaction *ka, siginfo_t *info, sigset_t *set, struct pt_regs * regs) { + struct exec_domain *ed = current_thread_info()->exec_domain; struct rt_sigframe_ia32 *frame; int err = 0; @@ -598,12 +596,8 @@ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) goto give_sigsegv; - err |= __put_user((current->exec_domain - && current->exec_domain->signal_invmap - && sig < 32 - ? current->exec_domain->signal_invmap[sig] - : sig), - &frame->sig); + err |= __put_user((ed && ed->signal_invmap + && sig < 32 ? ed->signal_invmap[sig] : sig), &frame->sig); err |= __put_user((long)&frame->info, &frame->pinfo); err |= __put_user((long)&frame->uc, &frame->puc); err |= copy_siginfo_to_user32(&frame->info, info); diff -Nru a/arch/ia64/ia32/ia32_support.c b/arch/ia64/ia32/ia32_support.c --- a/arch/ia64/ia32/ia32_support.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/ia32/ia32_support.c Tue Mar 12 13:58:14 2002 @@ -3,7 +3,7 @@ * * Copyright (C) 1999 Arun Sharma * Copyright (C) 2000 Asit K. Mallick - * Copyright (C) 2001 Hewlett-Packard Co + * Copyright (C) 2001-2002 Hewlett-Packard Co * David Mosberger-Tang * * 06/16/00 A. Mallick added csd/ssd/tssd for ia32 thread context @@ -153,10 +153,12 @@ /* We never change the TSS and LDT descriptors, so we can share them across all CPUs. */ ldt_size = PAGE_ALIGN(IA32_LDT_ENTRIES*IA32_LDT_ENTRY_SIZE); for (nr = 0; nr < NR_CPUS; ++nr) { - ia32_gdt[_TSS(nr)] = IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235, - 0xb, 0, 3, 1, 1, 1, 0); - ia32_gdt[_LDT(nr)] = IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1, - 0x2, 0, 3, 1, 1, 1, 0); + ia32_gdt[_TSS(nr) >> IA32_SEGSEL_INDEX_SHIFT] + = IA32_SEG_DESCRIPTOR(IA32_TSS_OFFSET, 235, + 0xb, 0, 3, 1, 1, 1, 0); + ia32_gdt[_LDT(nr) >> IA32_SEGSEL_INDEX_SHIFT] + = IA32_SEG_DESCRIPTOR(IA32_LDT_OFFSET, ldt_size - 1, + 0x2, 0, 3, 1, 1, 1, 0); } } @@ -172,6 +174,10 @@ siginfo.si_signo = SIGTRAP; siginfo.si_errno = int_num; /* XXX is it OK to abuse si_errno like this? */ + siginfo.si_flags = 0; + siginfo.si_isr = 0; + siginfo.si_addr = 0; + siginfo.si_imm = 0; siginfo.si_code = TRAP_BRKPT; force_sig_info(SIGTRAP, &siginfo, current); } diff -Nru a/arch/ia64/ia32/ia32_traps.c b/arch/ia64/ia32/ia32_traps.c --- a/arch/ia64/ia32/ia32_traps.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/ia32/ia32_traps.c Tue Mar 12 13:58:15 2002 @@ -2,7 +2,7 @@ * IA-32 exception handlers * * Copyright (C) 2000 Asit K. Mallick - * Copyright (C) 2001 Hewlett-Packard Co + * Copyright (C) 2001-2002 Hewlett-Packard Co * David Mosberger-Tang * * 06/16/00 A. 
Mallick added siginfo for most cases (close to IA32) @@ -40,7 +40,11 @@ { struct siginfo siginfo; + /* initialize these fields to avoid leaking kernel bits to user space: */ siginfo.si_errno = 0; + siginfo.si_flags = 0; + siginfo.si_isr = 0; + siginfo.si_imm = 0; switch ((isr >> 16) & 0xff) { case 1: case 2: @@ -103,6 +107,8 @@ * and it will suffer the consequences since we won't be able to * fully reproduce the context of the exception */ + siginfo.si_isr = isr; + siginfo.si_flags = __ISR_VALID; switch(((~fcr) & (fsr & 0x3f)) | (fsr & 0x240)) { case 0x000: default: diff -Nru a/arch/ia64/ia32/sys_ia32.c b/arch/ia64/ia32/sys_ia32.c --- a/arch/ia64/ia32/sys_ia32.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/ia32/sys_ia32.c Tue Mar 12 13:58:14 2002 @@ -6,7 +6,7 @@ * Copyright (C) 1999 Arun Sharma * Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz) * Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu) - * Copyright (C) 2000-2001 Hewlett-Packard Co + * Copyright (C) 2000-2002 Hewlett-Packard Co * David Mosberger-Tang * * These routines maintain argument size conversion between 32bit and 64bit @@ -74,6 +74,9 @@ #define PAGE_START(addr) ((addr) & PAGE_MASK) #define PAGE_OFF(addr) ((addr) & ~PAGE_MASK) +#define high2lowuid(uid) ((uid) > 65535 ? 65534 : (uid)) +#define high2lowgid(gid) ((gid) > 65535 ? 65534 : (gid)) + extern asmlinkage long sys_execve (char *, char **, char **, struct pt_regs *); extern asmlinkage long sys_mprotect (unsigned long, size_t, unsigned long); extern asmlinkage long sys_munmap (unsigned long, size_t); @@ -82,6 +85,7 @@ /* forward declaration: */ asmlinkage long sys32_mprotect (unsigned int, unsigned int, int); +asmlinkage unsigned long sys_brk(unsigned long); /* * Anything that modifies or inspects ia32 user virtual memory must hold this semaphore @@ -400,7 +404,7 @@ return -EINVAL; } if (!(prot & PROT_WRITE) && sys_mprotect(pstart, pend - pstart, prot) < 0) - return EINVAL; + return -EINVAL; } return start; } @@ -2578,6 +2582,7 @@ default: return -EINVAL; } + return -EINVAL; } /* @@ -3780,6 +3785,19 @@ ret = sys_personality(personality); if (ret == PER_LINUX32) ret = PER_LINUX; + return ret; +} + +asmlinkage unsigned long +sys32_brk (unsigned int brk) +{ + unsigned long ret, obrk; + struct mm_struct *mm = current->mm; + + obrk = mm->brk; + ret = sys_brk(brk); + if (ret < obrk) + clear_user((void *) ret, PAGE_ALIGN(ret) - ret); return ret; } diff -Nru a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c --- a/arch/ia64/kernel/acpi.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/acpi.c Tue Mar 12 13:58:15 2002 @@ -5,11 +5,11 @@ * 'IA-64 Extensions to ACPI Specification' Revision 0.6 * * Copyright (C) 1999 VA Linux Systems - * Copyright (C) 1999,2000 Walt Drummond - * Copyright (C) 2000 Hewlett-Packard Co. - * Copyright (C) 2000 David Mosberger-Tang + * Copyright (C) 1999, 2000 Walt Drummond + * Copyright (C) 2000, 2002 Hewlett-Packard Co. + * David Mosberger-Tang * Copyright (C) 2000 Intel Corp. - * Copyright (C) 2000,2001 J.I. Lee + * Copyright (C) 2000, 2001 J.I. Lee * ACPI based kernel configuration manager. * ACPI 2.0 & IA64 ext 0.71 */ @@ -44,6 +44,8 @@ int __initdata available_cpus; int __initdata total_cpus; +int __initdata pcat_compat; + void (*pm_idle) (void); void (*pm_power_off) (void); @@ -293,6 +295,16 @@ } else printk("Lapic address set to default 0x%lx\n", ipi_base_addr); + /* + * The PCAT_COMPAT flag indicates that the system has a dual-8259 compatible + * setup. 
+ */ +#ifdef CONFIG_ITANIUM + pcat_compat = 1; /* fw on some Itanium systems is broken... */ +#else + pcat_compat = (madt->flags & MADT_PCAT_COMPAT); +#endif + p = (char *) (madt + 1); end = p + (madt->header.length - sizeof(acpi_madt_t)); @@ -319,17 +331,7 @@ case ACPI20_ENTRY_IO_SAPIC: iosapic = (acpi_entry_iosapic_t *) p; if (iosapic_init) - /* - * The PCAT_COMPAT flag indicates that the system has a - * dual-8259 compatible setup. - */ - iosapic_init(iosapic->address, iosapic->irq_base, -#ifdef CONFIG_ITANIUM - 1 /* fw on some Itanium systems is broken... */ -#else - (madt->flags & MADT_PCAT_COMPAT) -#endif - ); + iosapic_init(iosapic->address, iosapic->irq_base, pcat_compat); break; case ACPI20_ENTRY_PLATFORM_INT_SOURCE: @@ -401,7 +403,7 @@ # ifdef CONFIG_ACPI acpi_xsdt_t *xsdt; acpi_desc_table_hdr_t *hdrp; - acpi_madt_t *madt; + acpi_madt_t *madt = NULL; int tables, i; if (strncmp(rsdp20->signature, ACPI_RSDP_SIG, ACPI_RSDP_SIG_LEN)) { diff -Nru a/arch/ia64/kernel/brl_emu.c b/arch/ia64/kernel/brl_emu.c --- a/arch/ia64/kernel/brl_emu.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/brl_emu.c Tue Mar 12 13:58:15 2002 @@ -2,6 +2,9 @@ * Emulation of the "brl" instruction for IA64 processors that * don't support it in hardware. * Author: Stephan Zeisset, Intel Corp. + * + * 02/22/02 D. Mosberger Clear si_flgs, si_isr, and si_imm to avoid + * leaking kernel bits. */ #include @@ -195,6 +198,9 @@ printk("Woah! Unimplemented Instruction Address Trap!\n"); siginfo.si_signo = SIGILL; siginfo.si_errno = 0; + siginfo.si_flags = 0; + siginfo.si_isr = 0; + siginfo.si_imm = 0; siginfo.si_code = ILL_BADIADDR; force_sig_info(SIGILL, &siginfo, current); } else if (ia64_psr(regs)->tb) { @@ -205,6 +211,10 @@ siginfo.si_signo = SIGTRAP; siginfo.si_errno = 0; siginfo.si_code = TRAP_BRANCH; + siginfo.si_flags = 0; + siginfo.si_isr = 0; + siginfo.si_addr = 0; + siginfo.si_imm = 0; force_sig_info(SIGTRAP, &siginfo, current); } else if (ia64_psr(regs)->ss) { /* @@ -214,6 +224,10 @@ siginfo.si_signo = SIGTRAP; siginfo.si_errno = 0; siginfo.si_code = TRAP_TRACE; + siginfo.si_flags = 0; + siginfo.si_isr = 0; + siginfo.si_addr = 0; + siginfo.si_imm = 0; force_sig_info(SIGTRAP, &siginfo, current); } return rv; diff -Nru a/arch/ia64/kernel/efivars.c b/arch/ia64/kernel/efivars.c --- a/arch/ia64/kernel/efivars.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/efivars.c Tue Mar 12 13:58:15 2002 @@ -29,6 +29,11 @@ * * Changelog: * + * 12 Feb 2002 - Matt Domsch + * use list_for_each_safe when deleting vars. + * remove ifdef CONFIG_SMP around include + * v0.04 release to linux-ia64@linuxia64.org + * * 20 April 2001 - Matt Domsch * Moved vars from /proc/efi to /proc/efi/vars, and made * efi.c own the /proc/efi directory. @@ -56,18 +61,16 @@ #include /* for capable() */ #include #include +#include #include #include -#ifdef CONFIG_SMP -#include -#endif MODULE_AUTHOR("Matt Domsch "); MODULE_DESCRIPTION("/proc interface to EFI Variables"); MODULE_LICENSE("GPL"); -#define EFIVARS_VERSION "0.03 2001-Apr-20" +#define EFIVARS_VERSION "0.04 2002-Feb-12" static int efivar_read(char *page, char **start, off_t off, @@ -265,7 +268,7 @@ { unsigned long strsize1, strsize2; int found=0; - struct list_head *pos; + struct list_head *pos, *n; unsigned long size = sizeof(efi_variable_t); efi_status_t status; efivar_entry_t *efivar = data, *search_efivar = NULL; @@ -297,7 +300,7 @@ This allows any properly formatted data structure to be written to any of the files in /proc/efi/vars and it will work. 
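The ia32_support.c, ia32_traps.c, and brl_emu.c hunks above share one theme: the siginfo being delivered lives on the kernel stack, so any field left uninitialized would be copied out to user space containing stale kernel data. The patch assigns the ia64-specific fields (si_flags, si_isr, si_addr, si_imm) one by one; the same defensive idea, as a rough kernel-context sketch with a made-up wrapper name:

    static void deliver_breakpoint_trap(void)
    {
        struct siginfo si;

        memset(&si, 0, sizeof(si));     /* no stale kernel-stack bytes reach user space */
        si.si_signo = SIGTRAP;
        si.si_code  = TRAP_BRKPT;
        force_sig_info(SIGTRAP, &si, current);
    }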
*/ - list_for_each(pos, &efivar_list) { + list_for_each_safe(pos, n, &efivar_list) { search_efivar = efivar_entry(pos); strsize1 = utf8_strsize(search_efivar->var.VariableName, 1024); strsize2 = utf8_strsize(var_data->VariableName, 1024); @@ -413,12 +416,12 @@ static void __exit efivars_exit(void) { - struct list_head *pos; + struct list_head *pos, *n; efivar_entry_t *efivar; spin_lock(&efivars_lock); - list_for_each(pos, &efivar_list) { + list_for_each_safe(pos, n, &efivar_list) { efivar = efivar_entry(pos); remove_proc_entry(efivar->entry->name, efi_vars_dir); list_del(&efivar->list); diff -Nru a/arch/ia64/kernel/entry.S b/arch/ia64/kernel/entry.S --- a/arch/ia64/kernel/entry.S Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/entry.S Tue Mar 12 13:58:15 2002 @@ -3,7 +3,7 @@ * * Kernel entry points. * - * Copyright (C) 1998-2001 Hewlett-Packard Co + * Copyright (C) 1998-2002 Hewlett-Packard Co * David Mosberger-Tang * Copyright (C) 1999 VA Linux Systems * Copyright (C) 1999 Walt Drummond @@ -30,14 +30,15 @@ #include +#include #include #include #include #include +#include #include +#include #include -#include -#include #include "minstate.h" @@ -115,7 +116,7 @@ mov loc1=r16 // save ar.pfs across do_fork .body mov out1=in1 - mov out3=0 + mov out3=16 // stacksize (compensates for 16-byte scratch area) adds out2=IA64_SWITCH_STACK_SIZE+16,sp // out2 = ®s mov out0=in0 // out0 = clone_flags br.call.sptk.many rp=do_fork @@ -128,6 +129,9 @@ /* * prev_task <- ia64_switch_to(struct task_struct *next) + * With Ingo's new scheduler, interrupts are disabled when this routine gets + * called. The code starting at .map relies on this. The rest of the code + * doesn't care about the interrupt masking status. */ GLOBAL_ENTRY(ia64_switch_to) .prologue @@ -158,10 +162,8 @@ (p6) srlz.d ld8 sp=[r21] // load kernel stack pointer of new task mov IA64_KR(CURRENT)=r20 // update "current" application register - mov r8=r13 // return pointer to previously running task mov r13=in0 // set "current" pointer ;; -(p6) ssm psr.i // renable psr.i AFTER the ic bit is serialized DO_LOAD_SWITCH_STACK #ifdef CONFIG_SMP @@ -170,7 +172,7 @@ br.ret.sptk.many rp // boogie on out in new context .map: - rsm psr.i | psr.ic + rsm psr.ic // interrupts (psr.i) are already disabled here movl r25=PAGE_KERNEL ;; srlz.d @@ -433,7 +435,7 @@ .body mov loc2=b6 ;; -#error br.call.sptk.many rp=syscall_trace + br.call.sptk.many rp=syscall_trace .ret3: mov rp=loc0 mov ar.pfs=loc1 mov b6=loc2 @@ -454,7 +456,7 @@ GLOBAL_ENTRY(ia64_trace_syscall) PT_REGS_UNWIND_INFO(0) -#error br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch syscall args + br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch syscall args .ret6: br.call.sptk.many rp=b6 // do the syscall strace_check_retval: cmp.lt p6,p0=r8,r0 // syscall failed? 
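The efivars.c loops above delete entries while walking the list, which is exactly what list_for_each_safe() is for: it caches the next pointer before the loop body runs, so list_del() on the current entry cannot derail the iteration. A minimal sketch with a hypothetical entry type:

    #include <linux/list.h>
    #include <linux/slab.h>

    struct foo_entry {
        struct list_head list;
        /* payload... */
    };

    static void free_all(struct list_head *head)
    {
        struct list_head *pos, *n;

        list_for_each_safe(pos, n, head) {      /* n holds pos->next across the body */
            struct foo_entry *e = list_entry(pos, struct foo_entry, list);

            list_del(pos);                      /* unlink before freeing */
            kfree(e);
        }
    }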
@@ -467,7 +469,7 @@ .mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8 .mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10 ia64_strace_leave_kernel: -#error br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch return value + br.call.sptk.many rp=invoke_syscall_trace // give parent a chance to catch return value .rety: br.cond.sptk ia64_leave_kernel strace_error: @@ -491,12 +493,12 @@ */ br.call.sptk.many rp=ia64_invoke_schedule_tail .ret8: - adds r2=IA64_TASK_PTRACE_OFFSET,r13 + adds r2=TI_FLAGS+IA64_TASK_SIZE,r13 ;; - ld8 r2=[r2] + ld4 r2=[r2] ;; mov r8=0 - tbit.nz p6,p0=r2,PT_TRACESYS_BIT + tbit.nz p6,p0=r2,TIF_SYSCALL_TRACE (p6) br.cond.spnt strace_check_retval ;; // added stop bits to prevent r8 dependency END(ia64_ret_from_clone) @@ -516,50 +518,29 @@ // fall through GLOBAL_ENTRY(ia64_leave_kernel) PT_REGS_UNWIND_INFO(0) - lfetch.fault [sp] - movl r14=.restart - ;; - mov.ret.sptk rp=r14,.restart -.restart: - adds r17=IA64_TASK_NEED_RESCHED_OFFSET,r13 - adds r18=IA64_TASK_SIGPENDING_OFFSET,r13 -#ifdef CONFIG_PERFMON - adds r19=IA64_TASK_PFM_MUST_BLOCK_OFFSET,r13 -#endif - ;; -#ifdef CONFIG_PERFMON -(pUser) ld8 r19=[r19] // load current->thread.pfm_must_block -#endif -#error (pUser) ld8 r17=[r17] // load current->need_resched -#error (pUser) ld4 r18=[r18] // load current->sigpending + // work.need_resched etc. mustn't get changed by this CPU before it returns to userspace: +(pUser) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUser +(pUser) rsm psr.i ;; -#ifdef CONFIG_PERFMON -(pUser) cmp.ne.unc p9,p0=r19,r0 // current->thread.pfm_must_block != 0? -#endif -#error (pUser) cmp.ne.unc p7,p0=r17,r0 // current->need_resched != 0? -#errror (pUser) cmp.ne.unc p8,p0=r18,r0 // current->sigpending != 0? +(pUser) adds r17=TI_FLAGS+IA64_TASK_SIZE,r13 ;; +.work_processed: +(p6) ld4 r18=[r17] // load current_thread_info()->flags adds r2=PT(R8)+16,r12 adds r3=PT(R9)+16,r12 -#ifdef CONFIG_PERFMON -(p9) br.call.spnt.many b7=pfm_block_on_overflow -#endif -#if __GNUC__ < 3 -(p7) br.call.spnt.many b7=invoke_schedule -#else -(p7) br.call.spnt.many b7=schedule -#endif -(p8) br.call.spnt.many b7=handle_signal_delivery // check & deliver pending signals ;; // start restoring the state saved on the kernel stack (struct pt_regs): ld8.fill r8=[r2],16 ld8.fill r9=[r3],16 +(p6) and r19=TIF_WORK_MASK,r18 // any work other than TIF_SYSCALL_TRACE? ;; ld8.fill r10=[r2],16 ld8.fill r11=[r3],16 +(p6) cmp4.ne.unc p6,p0=r19, r0 // any special work pending? ;; ld8.fill r16=[r2],16 ld8.fill r17=[r3],16 +(p6) br.cond.spnt .work_pending ;; ld8.fill r18=[r2],16 ld8.fill r19=[r3],16 @@ -582,7 +563,7 @@ ld8.fill r30=[r2],16 ld8.fill r31=[r3],16 ;; - rsm psr.i | psr.ic // initiate turning off of interrupts & interruption collection + rsm psr.i | psr.ic // initiate turning off of interrupt and interruption collection invala // invalidate ALAT ;; ld8 r1=[r2],16 // ar.ccv @@ -601,7 +582,7 @@ mov ar.fpsr=r13 mov b0=r14 ;; - srlz.i // ensure interrupts & interruption collection are off + srlz.i // ensure interruption collection is off mov b7=r15 ;; bsw.0 // switch back to bank 0 @@ -729,6 +710,25 @@ mov ar.unat=rARUNAT mov pr=rARPR,-1 rfi + +.work_pending: + tbit.z p6,p0=r18,TIF_NEED_RESCHED // current_thread_info()->need_resched==0? 
+(p6) br.cond.sptk.few .notify +#if __GNUC__ < 3 + br.call.spnt.many rp=invoke_schedule +#else + br.call.spnt.many rp=schedule +#endif +.ret9: cmp.eq p6,p0=r0,r0 // p6 <- 1 + rsm psr.i + ;; + adds r17=TI_FLAGS+IA64_TASK_SIZE,r13 + br.cond.sptk.many .work_processed // re-check + +.notify: + br.call.spnt.many rp=notify_resume_user +.ret10: cmp.ne p6,p0=r0,r0 // p6 <- 0 + br.cond.sptk.many .work_processed // don't re-check END(ia64_leave_kernel) ENTRY(handle_syscall_error) @@ -802,7 +802,7 @@ * be set up by the caller. We declare 8 input registers so the system call * args get preserved, in case we need to restart a system call. */ -ENTRY(handle_signal_delivery) +ENTRY(notify_resume_user) .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8) alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart! mov r9=ar.unat @@ -816,17 +816,17 @@ .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!) st8 [sp]=r9,-16 // allocate space for ar.unat and save it .body -#error br.call.sptk.many rp=ia64_do_signal + br.call.sptk.many rp=do_notify_resume_user .ret15: .restore sp adds sp=16,sp // pop scratch stack space ;; - ld8 r9=[sp] // load new unat from sw->caller_unat + ld8 r9=[sp] // load new unat from sigscratch->scratch_unat mov rp=loc0 ;; mov ar.unat=r9 mov ar.pfs=loc1 br.ret.sptk.many rp -END(handle_signal_delivery) +END(do_notify_resume_user) GLOBAL_ENTRY(sys_rt_sigsuspend) .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8) @@ -1033,9 +1033,9 @@ data8 sys_syslog data8 sys_setitimer data8 sys_getitimer - data8 ia64_oldstat // 1120 - data8 ia64_oldlstat - data8 ia64_oldfstat + data8 ia64_ni_syscall // 1120 /* was: ia64_oldstat */ + data8 ia64_ni_syscall /* was: ia64_oldlstat */ + data8 ia64_ni_syscall /* was: ia64_oldfstat */ data8 sys_vhangup data8 sys_lchown data8 sys_vm86 // 1125 @@ -1130,19 +1130,23 @@ data8 sys_getdents64 data8 sys_getunwind // 1215 data8 sys_readahead + data8 sys_setxattr + data8 sys_lsetxattr + data8 sys_fsetxattr + data8 sys_getxattr // 1220 + data8 sys_lgetxattr + data8 sys_fgetxattr + data8 sys_listxattr + data8 sys_llistxattr + data8 sys_flistxattr // 1225 + data8 sys_removexattr + data8 sys_lremovexattr + data8 sys_fremovexattr +#if 0 data8 sys_tkill +#else data8 ia64_ni_syscall - data8 ia64_ni_syscall - data8 ia64_ni_syscall // 1220 - data8 ia64_ni_syscall - data8 ia64_ni_syscall - data8 ia64_ni_syscall - data8 ia64_ni_syscall - data8 ia64_ni_syscall // 1225 - data8 ia64_ni_syscall - data8 ia64_ni_syscall - data8 ia64_ni_syscall - data8 ia64_ni_syscall +#endif data8 ia64_ni_syscall // 1230 data8 ia64_ni_syscall data8 ia64_ni_syscall diff -Nru a/arch/ia64/kernel/gate.S b/arch/ia64/kernel/gate.S --- a/arch/ia64/kernel/gate.S Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/gate.S Tue Mar 12 13:58:15 2002 @@ -90,7 +90,7 @@ (p8) br.cond.spnt setup_rbs // yup -> (clobbers r14, r15, and r16) back_from_setup_rbs: - .save ar.pfs, r8 + .spillreg ar.pfs, r8 alloc r8=ar.pfs,0,0,3,0 // get CFM0, EC0, and CPL0 into r8 ld8 out0=[base0],16 // load arg0 (signum) adds base1=(ARG1_OFF-(RBS_BASE_OFF+SIGCONTEXT_OFF)),base1 diff -Nru a/arch/ia64/kernel/head.S b/arch/ia64/kernel/head.S --- a/arch/ia64/kernel/head.S Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/head.S Tue Mar 12 13:58:15 2002 @@ -127,23 +127,21 @@ #ifdef CONFIG_SMP /* * Find the init_task for the currently booting CPU. At poweron, and in - * UP mode, cpucount is 0. + * UP mode, task_for_booting_cpu is NULL. 
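In C terms, the reworked exit path of ia64_leave_kernel above behaves roughly like the loop below: interrupts are disabled before current_thread_info()->flags is sampled, so no new work can slip in between the check and the return to user mode; rescheduling re-checks the flags, while signal notification falls through to the register restore. The wrapper name is invented and the notification call is elided; the assembly above is the authoritative version.

    static void leave_kernel_sketch(void)
    {
        unsigned long flags;

        for (;;) {
            local_irq_disable();                /* flags must not change once sampled */
            flags = current_thread_info()->flags;
            if (!(flags & TIF_WORK_MASK))
                break;                          /* nothing pending */
            if (flags & (1UL << TIF_NEED_RESCHED)) {
                schedule();                     /* may re-enable interrupts; loop re-checks */
            } else {
                /* pending notification: the asm calls notify_resume_user(),
                   which hands off to do_notify_resume_user() */
                break;
            }
        }
        /* restore the pt_regs saved on the kernel stack and return to user mode */
    }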
*/ - movl r3=cpucount + movl r3=task_for_booting_cpu ;; - ld4 r3=[r3] // r3 <- smp_processor_id() - movl r2=init_tasks + ld8 r3=[r3] + movl r2=init_thread_union ;; - shladd r2=r3,3,r2 + cmp.eq isBP,isAP=r3,r0 ;; - ld8 r2=[r2] +(isAP) mov r2=r3 #else - mov r3=0 - movl r2=init_task_union - ;; + movl r2=init_thread_union + cmp.eq isBP,isAP=r0,r0 #endif - cmp4.ne isAP,isBP=r3,r0 - ;; // RAW on r2 + ;; extr r3=r2,0,61 // r3 == phys addr of task struct mov r16=KERNEL_TR_PAGE_NUM ;; @@ -180,10 +178,12 @@ .rodata alive_msg: stringz "I'm alive and well\n" +alive_msg_end: .previous alloc r2=ar.pfs,0,0,2,0 movl out0=alive_msg + movl out1=alive_msg_end-alive_msg-1 ;; br.call.sptk.many rp=early_printk 1: // force new bundle diff -Nru a/arch/ia64/kernel/ia64_ksyms.c b/arch/ia64/kernel/ia64_ksyms.c --- a/arch/ia64/kernel/ia64_ksyms.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/ia64_ksyms.c Tue Mar 12 13:58:15 2002 @@ -24,6 +24,7 @@ EXPORT_SYMBOL(strrchr); EXPORT_SYMBOL(strstr); EXPORT_SYMBOL(strtok); +EXPORT_SYMBOL(strpbrk); #include EXPORT_SYMBOL(isa_irq_to_vector_map); diff -Nru a/arch/ia64/kernel/init_task.c b/arch/ia64/kernel/init_task.c --- a/arch/ia64/kernel/init_task.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/init_task.c Tue Mar 12 13:58:15 2002 @@ -2,8 +2,8 @@ * This is where we statically allocate and initialize the initial * task. * - * Copyright (C) 1999 Hewlett-Packard Co - * Copyright (C) 1999 David Mosberger-Tang + * Copyright (C) 1999, 2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include @@ -22,10 +22,20 @@ /* * Initial task structure. * - * We need to make sure that this is page aligned due to the way - * process stacks are handled. This is done by having a special - * "init_task" linker map entry.. + * We need to make sure that this is properly aligned due to the way process stacks are + * handled. This is done by having a special ".data.init_task" section... */ -union task_union init_task_union - __attribute__((section("init_task"))) = - { INIT_TASK(init_task_union.task) }; +#define init_thread_info init_thread_union.s.thread_info + +union init_thread { + struct { + struct task_struct task; + struct thread_info thread_info; + } s; + unsigned long stack[KERNEL_STACK_SIZE/sizeof (unsigned long)]; +} init_thread_union __attribute__((section(".data.init_task"))) = {{ + task: INIT_TASK(init_thread_union.s.task), + thread_info: INIT_THREAD_INFO(init_thread_union.s.thread_info) +}}; + +asm (".global init_task; init_task = init_thread_union"); diff -Nru a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c --- a/arch/ia64/kernel/iosapic.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/iosapic.c Tue Mar 12 13:58:15 2002 @@ -3,8 +3,9 @@ * * Copyright (C) 1999 Intel Corp. * Copyright (C) 1999 Asit Mallick - * Copyright (C) 1999-2000 Hewlett-Packard Co. - * Copyright (C) 1999-2000 David Mosberger-Tang + * Copyright (C) 2000-2002 J.I. Lee + * Copyright (C) 1999-2000, 2002 Hewlett-Packard Co. + * David Mosberger-Tang * Copyright (C) 1999 VA Linux Systems * Copyright (C) 1999,2000 Walt Drummond * @@ -15,6 +16,12 @@ * PCI to vector mapping, shared PCI interrupts. * 00/10/27 D. Mosberger Document things a bit more to make them more understandable. * Clean up much of the old IOSAPIC cruft. + * 01/07/27 J.I. Lee PCI irq routing, Platform/Legacy interrupts and fixes for + * ACPI S5(SoftOff) support. + * 02/01/23 J.I. Lee iosapic pgm fixes for PCI irq routing from _PRT + * 02/01/07 E. 
Focht Redirectable interrupt vectors in + * iosapic_set_affinity(), initializations for + * /proc/irq/#/smp_affinity */ /* * Here is what the interrupt logic between a PCI device and the CPU looks like: @@ -63,6 +70,7 @@ #undef DEBUG_IRQ_ROUTING +#undef OVERRIDE_DEBUG static spinlock_t iosapic_lock = SPIN_LOCK_UNLOCKED; @@ -88,7 +96,7 @@ * Translate IOSAPIC irq number to the corresponding IA-64 interrupt vector. If no * entry exists, return -1. */ -static int +int iosapic_irq_to_vector (int irq) { int vector; @@ -121,6 +129,7 @@ u32 low32, high32; char *addr; int pin; + char redir; pin = iosapic_irq[vector].pin; if (pin < 0) @@ -131,6 +140,11 @@ trigger = iosapic_irq[vector].trigger; dmode = iosapic_irq[vector].dmode; + redir = (dmode == IOSAPIC_LOWEST_PRIORITY) ? 1 : 0; +#ifdef CONFIG_SMP + set_irq_affinity_info(vector, (int)(dest & 0xffff), redir); +#endif + low32 = ((pol << IOSAPIC_POLARITY_SHIFT) | (trigger << IOSAPIC_TRIGGER_SHIFT) | (dmode << IOSAPIC_DELIVERY_SHIFT) | @@ -211,6 +225,7 @@ u32 high32, low32; int dest, pin; char *addr; + int redir = (irq & (1<<31)) ? 1 : 0; mask &= (1UL << smp_num_cpus) - 1; @@ -225,6 +240,8 @@ if (pin < 0) return; /* not an IOSAPIC interrupt */ + set_irq_affinity_info(irq,dest,redir); + /* dest contains both id and eid */ high32 = dest << IOSAPIC_DEST_SHIFT; @@ -234,9 +251,13 @@ writel(IOSAPIC_RTE_LOW(pin), addr + IOSAPIC_REG_SELECT); low32 = readl(addr + IOSAPIC_WINDOW); - /* change delivery mode to fixed */ low32 &= ~(7 << IOSAPIC_DELIVERY_SHIFT); - low32 |= (IOSAPIC_FIXED << IOSAPIC_DELIVERY_SHIFT); + if (redir) + /* change delivery mode to lowest priority */ + low32 |= (IOSAPIC_LOWEST_PRIORITY << IOSAPIC_DELIVERY_SHIFT); + else + /* change delivery mode to fixed */ + low32 |= (IOSAPIC_FIXED << IOSAPIC_DELIVERY_SHIFT); writel(IOSAPIC_RTE_HIGH(pin), addr + IOSAPIC_REG_SELECT); writel(high32, addr + IOSAPIC_WINDOW); @@ -343,29 +364,64 @@ } /* - * ACPI can describe IOSAPIC interrupts via static tables and namespace - * methods. This provides an interface to register those interrupts and - * program the IOSAPIC RTE. 
+ * if the given vector is already owned by other, + * assign a new vector for the other and make the vector available */ -int -iosapic_register_irq (u32 global_vector, unsigned long polarity, unsigned long - edge_triggered, u32 base_irq, char *iosapic_address) +static void +iosapic_reassign_vector (int vector) +{ + int new_vector; + + if (iosapic_irq[vector].pin >= 0 || iosapic_irq[vector].addr + || iosapic_irq[vector].base_irq || iosapic_irq[vector].dmode + || iosapic_irq[vector].polarity || iosapic_irq[vector].trigger) + { + new_vector = ia64_alloc_irq(); + printk("Reassigning Vector 0x%x to 0x%x\n", vector, new_vector); + memcpy (&iosapic_irq[new_vector], &iosapic_irq[vector], + sizeof(struct iosapic_irq)); + memset (&iosapic_irq[vector], 0, sizeof(struct iosapic_irq)); + iosapic_irq[vector].pin = -1; + } +} + +static void +register_irq (u32 global_vector, int vector, int pin, unsigned char delivery, + unsigned long polarity, unsigned long edge_triggered, + u32 base_irq, char *iosapic_address) { irq_desc_t *idesc; struct hw_interrupt_type *irq_type; - int vector; - vector = iosapic_irq_to_vector(global_vector); - if (vector < 0) - vector = ia64_alloc_irq(); - - /* fill in information from this vector's IOSAPIC */ - iosapic_irq[vector].addr = iosapic_address; - iosapic_irq[vector].base_irq = base_irq; - iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq; + iosapic_irq[vector].pin = pin; iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW; - iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY; + iosapic_irq[vector].dmode = delivery; + /* + * In override, it does not provide addr/base_irq. global_vector is enough to + * locate iosapic addr, base_irq and pin by examining base_irq and max_pin of + * registered iosapics (tbd) + */ +#ifndef OVERRIDE_DEBUG + if (iosapic_address) { + iosapic_irq[vector].addr = iosapic_address; + iosapic_irq[vector].base_irq = base_irq; + } +#else + if (iosapic_address) { + if (iosapic_irq[vector].addr && (iosapic_irq[vector].addr != iosapic_address)) + printk("WARN: register_irq: diff IOSAPIC ADDRESS for gv %x, v %x\n", + global_vector, vector); + iosapic_irq[vector].addr = iosapic_address; + if (iosapic_irq[vector].base_irq && (iosapic_irq[vector].base_irq != base_irq)) { + printk("WARN: register_irq: diff BASE IRQ %x for gv %x, v %x\n", + base_irq, global_vector, vector); + } + iosapic_irq[vector].base_irq = base_irq; + } else if (!iosapic_irq[vector].addr) + printk("WARN: register_irq: invalid override for gv %x, v %x\n", + global_vector, vector); +#endif if (edge_triggered) { iosapic_irq[vector].trigger = IOSAPIC_EDGE; irq_type = &irq_type_iosapic_edge; @@ -377,12 +433,32 @@ idesc = irq_desc(vector); if (idesc->handler != irq_type) { if (idesc->handler != &no_irq_type) - printk("iosapic_register_irq(): changing vector 0x%02x from" + printk("register_irq(): changing vector 0x%02x from " "%s to %s\n", vector, idesc->handler->typename, irq_type->typename); idesc->handler = irq_type; } +} + +/* + * ACPI can describe IOSAPIC interrupts via static tables and namespace + * methods. This provides an interface to register those interrupts and + * program the IOSAPIC RTE. 
+ */ +int +iosapic_register_irq (u32 global_vector, unsigned long polarity, unsigned long + edge_triggered, u32 base_irq, char *iosapic_address) +{ + int vector; - printk("IOSAPIC %x(%s,%s) -> Vector %x\n", global_vector, + vector = iosapic_irq_to_vector(global_vector); + if (vector < 0) + vector = ia64_alloc_irq(); + + register_irq (global_vector, vector, global_vector - base_irq, + IOSAPIC_LOWEST_PRIORITY, polarity, edge_triggered, + base_irq, iosapic_address); + + printk("IOSAPIC 0x%x(%s,%s) -> Vector 0x%x\n", global_vector, (polarity ? "high" : "low"), (edge_triggered ? "edge" : "level"), vector); /* program the IOSAPIC routing table */ @@ -395,51 +471,40 @@ * Note that the irq_base and IOSAPIC address must be set in iosapic_init(). */ int -iosapic_register_platform_irq (u32 int_type, u32 global_vector, u32 iosapic_vector, - u16 eid, u16 id, unsigned long polarity, +iosapic_register_platform_irq (u32 int_type, u32 global_vector, + u32 iosapic_vector, u16 eid, u16 id, unsigned long polarity, unsigned long edge_triggered, u32 base_irq, char *iosapic_address) { - struct hw_interrupt_type *irq_type; - irq_desc_t *idesc; + unsigned char delivery; int vector; switch (int_type) { - case ACPI20_ENTRY_PIS_CPEI: + case ACPI20_ENTRY_PIS_PMI: + vector = iosapic_vector; + /* + * since PMI vector is alloc'd by FW(ACPI) not by kernel, + * we need to make sure the vector is available + */ + iosapic_reassign_vector(vector); + delivery = IOSAPIC_PMI; + break; + case ACPI20_ENTRY_PIS_CPEI: vector = IA64_PCE_VECTOR; - iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY; + delivery = IOSAPIC_LOWEST_PRIORITY; break; - case ACPI20_ENTRY_PIS_INIT: + case ACPI20_ENTRY_PIS_INIT: vector = ia64_alloc_irq(); - iosapic_irq[vector].dmode = IOSAPIC_INIT; + delivery = IOSAPIC_INIT; break; - default: + default: printk("iosapic_register_platform_irq(): invalid int type\n"); return -1; } - /* fill in information from this vector's IOSAPIC */ - iosapic_irq[vector].addr = iosapic_address; - iosapic_irq[vector].base_irq = base_irq; - iosapic_irq[vector].pin = global_vector - iosapic_irq[vector].base_irq; - iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW; - - if (edge_triggered) { - iosapic_irq[vector].trigger = IOSAPIC_EDGE; - irq_type = &irq_type_iosapic_edge; - } else { - iosapic_irq[vector].trigger = IOSAPIC_LEVEL; - irq_type = &irq_type_iosapic_level; - } - - idesc = irq_desc(vector); - if (idesc->handler != irq_type) { - if (idesc->handler != &no_irq_type) - printk("iosapic_register_platform_irq(): changing vector 0x%02x from" - "%s to %s\n", vector, idesc->handler->typename, irq_type->typename); - idesc->handler = irq_type; - } + register_irq(global_vector, vector, global_vector - base_irq, delivery, polarity, + edge_triggered, base_irq, iosapic_address); - printk("PLATFORM int %x: IOSAPIC %x(%s,%s) -> Vector %x CPU %.02u:%.02u\n", + printk("PLATFORM int 0x%x: IOSAPIC 0x%x(%s,%s) -> Vector 0x%x CPU %.02u:%.02u\n", int_type, global_vector, (polarity ? "high" : "low"), (edge_triggered ? "edge" : "level"), vector, eid, id); @@ -450,15 +515,18 @@ /* - * ACPI calls this when it finds an entry for a legacy ISA interrupt. Note that the - * irq_base and IOSAPIC address must be set in iosapic_init(). + * ACPI calls this when it finds an entry for a legacy ISA interrupt. + * Note that the irq_base and IOSAPIC address must be set in iosapic_init(). 
*/ void iosapic_register_legacy_irq (unsigned long irq, unsigned long pin, unsigned long polarity, unsigned long edge_triggered) { - unsigned int vector = isa_irq_to_vector(irq); + int vector = isa_irq_to_vector(irq); + + register_irq(irq, vector, (int)pin, IOSAPIC_LOWEST_PRIORITY, polarity, edge_triggered, + 0, NULL); /* ignored for override */ #ifdef DEBUG_IRQ_ROUTING printk("ISA: IRQ %u -> IOSAPIC irq 0x%02x (%s, %s) -> vector %02x\n", @@ -467,18 +535,14 @@ vector); #endif - iosapic_irq[vector].pin = pin; - iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY; - iosapic_irq[vector].polarity = polarity ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW; - iosapic_irq[vector].trigger = edge_triggered ? IOSAPIC_EDGE : IOSAPIC_LEVEL; + /* program the IOSAPIC routing table */ + set_rte(vector, (ia64_get_lid() >> 16) & 0xffff); } void __init iosapic_init (unsigned long phys_addr, unsigned int base_irq, int pcat_compat) { - struct hw_interrupt_type *irq_type; - int i, irq, max_pin, vector; - irq_desc_t *idesc; + int i, irq, max_pin, vector, pin; unsigned int ver; char *addr; static int first_time = 1; @@ -496,7 +560,6 @@ } addr = ioremap(phys_addr, 0); - ver = iosapic_version(addr); max_pin = (ver >> 16) & 0xff; @@ -511,27 +574,18 @@ */ for (irq = 0; irq < 16; ++irq) { vector = isa_irq_to_vector(irq); - iosapic_irq[vector].addr = addr; - iosapic_irq[vector].base_irq = 0; - if (iosapic_irq[vector].pin == -1) - iosapic_irq[vector].pin = irq; - iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY; - iosapic_irq[vector].trigger = IOSAPIC_EDGE; - iosapic_irq[vector].polarity = IOSAPIC_POL_HIGH; + if ((pin = iosapic_irq[vector].pin) == -1) + pin = irq; + + register_irq(irq, vector, pin, + /* IOSAPIC_POL_HIGH, IOSAPIC_EDGE */ + IOSAPIC_LOWEST_PRIORITY, 1, 1, base_irq, addr); + #ifdef DEBUG_IRQ_ROUTING printk("ISA: IRQ %u -> IOSAPIC irq 0x%02x (high, edge) -> vector 0x%02x\n", irq, iosapic_irq[vector].base_irq + iosapic_irq[vector].pin, vector); #endif - irq_type = &irq_type_iosapic_edge; - idesc = irq_desc(vector); - if (idesc->handler != irq_type) { - if (idesc->handler != &no_irq_type) - printk("iosapic_init: changing vector 0x%02x from %s to " - "%s\n", irq, idesc->handler->typename, - irq_type->typename); - idesc->handler = irq_type; - } /* program the IOSAPIC routing table: */ set_rte(vector, (ia64_get_lid() >> 16) & 0xffff); @@ -540,7 +594,7 @@ for (i = 0; i < pci_irq.num_routes; i++) { irq = pci_irq.route[i].irq; - if ((unsigned) (irq - base_irq) > max_pin) + if ((irq < (int)base_irq) || (irq > (int)(base_irq + max_pin))) /* the interrupt route is for another controller... 
*/ continue; @@ -553,29 +607,18 @@ vector = ia64_alloc_irq(); } - iosapic_irq[vector].addr = addr; - iosapic_irq[vector].base_irq = base_irq; - iosapic_irq[vector].pin = (irq - base_irq); - iosapic_irq[vector].dmode = IOSAPIC_LOWEST_PRIORITY; - iosapic_irq[vector].trigger = IOSAPIC_LEVEL; - iosapic_irq[vector].polarity = IOSAPIC_POL_LOW; + register_irq(irq, vector, irq - base_irq, + /* IOSAPIC_POL_LOW, IOSAPIC_LEVEL */ + IOSAPIC_LOWEST_PRIORITY, 0, 0, base_irq, addr); # ifdef DEBUG_IRQ_ROUTING printk("PCI: (B%d,I%d,P%d) -> IOSAPIC irq 0x%02x -> vector 0x%02x\n", pci_irq.route[i].bus, pci_irq.route[i].pci_id>>16, pci_irq.route[i].pin, iosapic_irq[vector].base_irq + iosapic_irq[vector].pin, vector); # endif - irq_type = &irq_type_iosapic_level; - idesc = irq_desc(vector); - if (idesc->handler != irq_type){ - if (idesc->handler != &no_irq_type) - printk("iosapic_init: changing vector 0x%02x from %s to %s\n", - vector, idesc->handler->typename, irq_type->typename); - idesc->handler = irq_type; - } - /* program the IOSAPIC routing table: */ - set_rte(vector, (ia64_get_lid() >> 16) & 0xffff); + /* program the IOSAPIC routing table: */ + set_rte(vector, (ia64_get_lid() >> 16) & 0xffff); } } @@ -585,6 +628,8 @@ struct pci_dev *dev; unsigned char pin; int vector; + struct hw_interrupt_type *irq_type; + irq_desc_t *idesc; if (phase != 1) return; @@ -611,19 +656,28 @@ if (vector >= 0) printk(KERN_WARNING "PCI: using PPB(B%d,I%d,P%d) to get vector %02x\n", - bridge->bus->number, PCI_SLOT(bridge->devfn), + dev->bus->number, PCI_SLOT(dev->devfn), pin, vector); else printk(KERN_WARNING - "PCI: Couldn't map irq for (B%d,I%d,P%d)o\n", - bridge->bus->number, PCI_SLOT(bridge->devfn), - pin); + "PCI: Couldn't map irq for (B%d,I%d,P%d)\n", + dev->bus->number, PCI_SLOT(dev->devfn), pin); } if (vector >= 0) { printk("PCI->APIC IRQ transform: (B%d,I%d,P%d) -> 0x%02x\n", dev->bus->number, PCI_SLOT(dev->devfn), pin, vector); dev->irq = vector; + irq_type = &irq_type_iosapic_level; + idesc = irq_desc(vector); + if (idesc->handler != irq_type){ + if (idesc->handler != &no_irq_type) + printk("iosapic_pci_fixup: changing vector 0x%02x from " + "%s to %s\n", vector, + idesc->handler->typename, + irq_type->typename); + idesc->handler = irq_type; + } #ifdef CONFIG_SMP /* * For platforms that do not support interrupt redirect @@ -638,7 +692,16 @@ cpu_index++; if (cpu_index >= smp_num_cpus) cpu_index = 0; + } else { + /* + * Direct the interrupt vector to the current cpu, + * platform redirection will distribute them. + */ + set_rte(vector, (ia64_get_lid() >> 16) & 0xffff); } +#else + /* direct the interrupt vector to the running cpu id */ + set_rte(vector, (ia64_get_lid() >> 16) & 0xffff); #endif } } diff -Nru a/arch/ia64/kernel/irq.c b/arch/ia64/kernel/irq.c --- a/arch/ia64/kernel/irq.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/irq.c Tue Mar 12 13:58:15 2002 @@ -161,7 +161,7 @@ for (action=action->next; action; action = action->next) seq_printf(p, ", %s", action->name); - seq_putc('\n'); + seq_putc(p, '\n'); } seq_puts(p, "NMI: "); for (j = 0; j < smp_num_cpus; j++) @@ -287,10 +287,11 @@ * already executing in one.. */ if (!irqs_running()) - if (local_bh_count() || !spin_is_locked(&global_bh_lock)) + if (really_local_bh_count() || !spin_is_locked(&global_bh_lock)) break; /* Duh, we have to loop. Release the lock to avoid deadlocks */ + smp_mb__before_clear_bit(); /* need barrier before releasing lock... 
*/ clear_bit(0,&global_irq_lock); for (;;) { @@ -305,7 +306,7 @@ continue; if (global_irq_lock) continue; - if (!local_bh_count() && spin_is_locked(&global_bh_lock)) + if (!really_local_bh_count() && spin_is_locked(&global_bh_lock)) continue; if (!test_and_set_bit(0,&global_irq_lock)) break; @@ -378,14 +379,14 @@ __save_flags(flags); if (flags & IA64_PSR_I) { __cli(); - if (!local_irq_count()) + if (!really_local_irq_count()) get_irqlock(); } #else __save_flags(flags); if (flags & (1 << EFLAGS_IF_SHIFT)) { __cli(); - if (!local_irq_count()) + if (!really_local_irq_count()) get_irqlock(); } #endif @@ -393,7 +394,7 @@ void __global_sti(void) { - if (!local_irq_count()) + if (!really_local_irq_count()) release_irqlock(smp_processor_id()); __sti(); } @@ -422,7 +423,7 @@ retval = 2 + local_enabled; /* check for global flags if we're not in an interrupt */ - if (!local_irq_count()) { + if (!really_local_irq_count()) { if (local_enabled) retval = 1; if (global_irq_holder == cpu) @@ -529,7 +530,7 @@ disable_irq_nosync(irq); #ifdef CONFIG_SMP - if (!local_irq_count()) { + if (!really_local_irq_count()) { do { barrier(); } while (irq_desc(irq)->status & IRQ_INPROGRESS); @@ -1009,6 +1010,11 @@ rand_initialize_irq(irq); } + if (new->flags & SA_PERCPU_IRQ) { + desc->status |= IRQ_PER_CPU; + desc->handler = &irq_type_ia64_lsapic; + } + /* * The following block of code has to be executed atomically */ @@ -1089,13 +1095,25 @@ static struct proc_dir_entry * smp_affinity_entry [NR_IRQS]; static unsigned long irq_affinity [NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL }; +static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 }; + +void set_irq_affinity_info(int irq, int hwid, int redir) +{ + unsigned long mask = 1UL<= 0 && irq < NR_IRQS) { + irq_affinity[irq] = mask; + irq_redir[irq] = (char) (redir & 0xff); + } +} static int irq_affinity_read_proc (char *page, char **start, off_t off, int count, int *eof, void *data) { - if (count < HEX_DIGITS+1) + if (count < HEX_DIGITS+3) return -EINVAL; - return sprintf (page, "%08lx\n", irq_affinity[(long)data]); + return sprintf (page, "%s%08lx\n", irq_redir[(long)data] ? 
"r " : "", + irq_affinity[(long)data]); } static int irq_affinity_write_proc (struct file *file, const char *buffer, @@ -1103,11 +1121,20 @@ { int irq = (long) data, full_count = count, err; unsigned long new_value; + const char *buf = buffer; + int redir; if (!irq_desc(irq)->handler->set_affinity) return -EIO; - err = parse_hex_value(buffer, count, &new_value); + if (buf[0] == 'r' || buf[0] == 'R') { + ++buf; + while (*buf == ' ') ++buf; + redir = 1; + } else + redir = 0; + + err = parse_hex_value(buf, count, &new_value); /* * Do not allow disabling IRQs completely - it's a too easy @@ -1117,8 +1144,7 @@ if (!(new_value & cpu_online_map)) return -EINVAL; - irq_affinity[irq] = new_value; - irq_desc(irq)->handler->set_affinity(irq, new_value); + irq_desc(irq)->handler->set_affinity(irq | (redir?(1<<31):0), new_value); return full_count; } diff -Nru a/arch/ia64/kernel/ivt.S b/arch/ia64/kernel/ivt.S --- a/arch/ia64/kernel/ivt.S Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/ivt.S Tue Mar 12 13:58:15 2002 @@ -43,6 +43,7 @@ #include #include #include +#include #include #if 1 @@ -275,6 +276,7 @@ mov r16=cr.ifa // get address that caused the TLB miss movl r17=PAGE_KERNEL mov r21=cr.ipsr + movl r19=(((1 << IA64_MAX_PHYS_BITS) - 1) & ~0xfff) mov r31=pr ;; #ifdef CONFIG_DISABLE_VHPT @@ -289,12 +291,12 @@ (p8) br.cond.dptk itlb_fault #endif extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl + and r19=r19,r16 // clear ed, reserved bits, and PTE control bits shr.u r18=r16,57 // move address bit 61 to bit 4 - dep r19=0,r16,IA64_MAX_PHYS_BITS,(64-IA64_MAX_PHYS_BITS) // clear ed & reserved bits ;; andcm r18=0x10,r18 // bit 4=~address-bit(61) cmp.ne p8,p0=r0,r23 // psr.cpl != 0? - dep r19=r17,r19,0,12 // insert PTE control bits into r19 + or r19=r17,r19 // insert PTE control bits into r19 ;; or r19=r19,r18 // set bit 4 (uncached) if the access was to region 6 (p8) br.cond.spnt page_fault @@ -312,6 +314,7 @@ mov r16=cr.ifa // get address that caused the TLB miss movl r17=PAGE_KERNEL mov r20=cr.isr + movl r19=(((1 << IA64_MAX_PHYS_BITS) - 1) & ~0xfff) mov r21=cr.ipsr mov r31=pr ;; @@ -328,15 +331,15 @@ #endif extr.u r23=r21,IA64_PSR_CPL0_BIT,2 // extract psr.cpl tbit.nz p6,p7=r20,IA64_ISR_SP_BIT // is speculation bit on? + and r19=r19,r16 // clear ed, reserved bits, and PTE control bits shr.u r18=r16,57 // move address bit 61 to bit 4 - dep r19=0,r16,IA64_MAX_PHYS_BITS,(64-IA64_MAX_PHYS_BITS) // clear ed & reserved bits ;; andcm r18=0x10,r18 // bit 4=~address-bit(61) cmp.ne p8,p0=r0,r23 (p8) br.cond.spnt page_fault dep r21=-1,r21,IA64_PSR_ED_BIT,1 - dep r19=r17,r19,0,12 // insert PTE control bits into r19 + or r19=r19,r17 // insert PTE control bits into r19 ;; or r19=r19,r18 // set bit 4 (uncached) if the access was to region 6 (p6) mov cr.ipsr=r21 @@ -654,16 +657,16 @@ ld8 r16=[r16] // load address of syscall entry point mov rp=r15 // set the real return addr ;; - ld8 r2=[r2] // r2 = current->ptrace mov b6=r16 // arrange things so we skip over break instruction when returning: adds r16=16,sp // get pointer to cr_ipsr adds r17=24,sp // get pointer to cr_iip + add r2=TI_FLAGS+IA64_TASK_SIZE,r13 ;; ld8 r18=[r16] // fetch cr_ipsr - tbit.z p8,p0=r2,PT_TRACESYS_BIT // (current->ptrace & PF_TRACESYS) == 0? 
+ ld4 r2=[r2] // r2 = current_thread_info()->flags ;; ld8 r19=[r17] // fetch cr_iip extr.u r20=r18,41,2 // extract ei field @@ -676,6 +679,7 @@ ;; (p6) st8 [r17]=r19 // store new cr.iip if cr.isr.ei wrapped around dep r18=r20,r18,41,2 // insert new ei into cr.isr + tbit.z p8,p0=r2,TIF_SYSCALL_TRACE ;; st8 [r16]=r18 // store new value for cr.isr @@ -855,16 +859,16 @@ ld4 out5=[r14],8 // r13 == ebp ;; ld4 out3=[r14],8 // r14 == esi - adds r2=IA64_TASK_PTRACE_OFFSET,r13 // r2 = ¤t->ptrace + adds r2=TI_FLAGS+IA64_TASK_SIZE,r13 ;; ld4 out4=[r14] // r15 == edi movl r16=ia32_syscall_table ;; (p6) shladd r16=r8,3,r16 // force ni_syscall if not valid syscall number - ld8 r2=[r2] // r2 = current->ptrace + ld4 r2=[r2] // r2 = current_thread_info()->flags ;; ld8 r16=[r16] - tbit.z p8,p0=r2,PT_TRACESYS_BIT // (current->ptrace & PT_TRACESYS) == 0? + tbit.z p8,p0=r2,TIF_SYSCALL_TRACE ;; mov b6=r16 movl r15=ia32_ret_from_syscall diff -Nru a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c --- a/arch/ia64/kernel/mca.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/mca.c Tue Mar 12 13:58:15 2002 @@ -3,6 +3,9 @@ * Purpose: Generic MCA handling layer * * Updated for latest kernel + * Copyright (C) 2002 Intel + * Copyright (C) Jenna Hall (jenna.s.hall@intel.com) + * * Copyright (C) 2001 Intel * Copyright (C) Fred Lewis (frederick.v.lewis@intel.com) * @@ -12,6 +15,11 @@ * Copyright (C) 1999 Silicon Graphics, Inc. * Copyright (C) Vijay Chander(vijay@engr.sgi.com) * + * 02/01/04 J. Hall Aligned MCA stack to 16 bytes, added platform vs. CPU + * error flag, set SAL default return values, changed + * error record structure to linked list, added init call + * to sal_get_state_info_size(). + * * 01/01/03 F. Lewis Added setup of CMCI and CPEI IRQs, logging of corrected * platform errors, completed code for logging of * corrected & uncorrected machine check errors, and @@ -27,6 +35,7 @@ #include #include #include +#include #include #include @@ -50,18 +59,22 @@ ia64_mca_sal_to_os_state_t ia64_sal_to_os_handoff_state; ia64_mca_os_to_sal_state_t ia64_os_to_sal_handoff_state; u64 ia64_mca_proc_state_dump[512]; -u64 ia64_mca_stack[1024]; +u64 ia64_mca_stack[1024] __attribute__((aligned(16))); u64 ia64_mca_stackframe[32]; u64 ia64_mca_bspstore[1024]; -u64 ia64_init_stack[INIT_TASK_SIZE] __attribute__((aligned(16))); +u64 ia64_init_stack[KERNEL_STACK_SIZE] __attribute__((aligned(16))); +u64 ia64_mca_sal_data_area[1356]; +u64 ia64_mca_min_state_save_info; +u64 ia64_tlb_functional; +u64 ia64_os_mca_recovery_successful; static void ia64_mca_wakeup_ipi_wait(void); static void ia64_mca_wakeup(int cpu); static void ia64_mca_wakeup_all(void); static void ia64_log_init(int); -extern void ia64_monarch_init_handler (void); -extern void ia64_slave_init_handler (void); -extern struct hw_interrupt_type irq_type_iosapic_level; +extern void ia64_monarch_init_handler (void); +extern void ia64_slave_init_handler (void); +extern struct hw_interrupt_type irq_type_iosapic_level; static struct irqaction cmci_irqaction = { handler: ia64_mca_cmc_int_handler, @@ -95,25 +108,31 @@ * memory. 
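With the irq.c changes above, a write to /proc/irq/IRQ#/smp_affinity may start with an optional "r" (or "R") before the hex CPU mask; that requests redirectable (lowest-priority) delivery, which iosapic_set_affinity() receives encoded in bit 31 of its irq argument and turns into IOSAPIC_LOWEST_PRIORITY instead of IOSAPIC_FIXED delivery in the RTE. A small user-space example of the new syntax (the IRQ number is made up):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/irq/31/smp_affinity", "w");

        if (!f)
            return 1;
        fprintf(f, "r 3\n");    /* "r" = redirectable delivery, 0x3 = CPUs 0 and 1 */
        return fclose(f) ? 1 : 0;
    }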
* * Inputs : sal_info_type (Type of error record MCA/CMC/CPE/INIT) - * Outputs : None + * Outputs : platform error status */ -void +int ia64_mca_log_sal_error_record(int sal_info_type) { + int platform_err = 0; + /* Get the MCA error record */ if (!ia64_log_get(sal_info_type, (prfunc_t)printk)) - return; // no record retrieved + return platform_err; // no record retrieved - /* Log the error record */ - ia64_log_print(sal_info_type, (prfunc_t)printk); + /* TODO: + * 1. analyze error logs to determine recoverability + * 2. perform error recovery procedures, if applicable + * 3. set ia64_os_mca_recovery_successful flag, if applicable + */ - /* Clear the CMC SAL logs now that they have been logged */ + platform_err = ia64_log_print(sal_info_type, (prfunc_t)printk); ia64_sal_clear_state_info(sal_info_type); + + return platform_err; } /* - * hack for now, add platform dependent handlers - * here + * platform dependent error handling */ #ifndef PLATFORM_MCA_HANDLERS void @@ -275,8 +294,8 @@ cmcv_reg_t cmcv; cmcv.cmcv_regval = 0; - cmcv.cmcv_mask = 0; /* Unmask/enable interrupt */ - cmcv.cmcv_vector = IA64_CMC_VECTOR; + cmcv.cmcv_mask = 0; /* Unmask/enable interrupt */ + cmcv.cmcv_vector = IA64_CMC_VECTOR; ia64_set_cmcv(cmcv.cmcv_regval); IA64_MCA_DEBUG("ia64_mca_platform_init: CPU %d corrected " @@ -374,6 +393,9 @@ IA64_MCA_DEBUG("ia64_mca_init: begin\n"); + /* initialize recovery success indicator */ + ia64_os_mca_recovery_successful = 0; + /* Clear the Rendez checkin flag for all cpus */ for(i = 0 ; i < NR_CPUS; i++) ia64_mc_info.imi_rendez_checkin[i] = IA64_MCA_RENDEZ_CHECKIN_NOTDONE; @@ -459,7 +481,7 @@ /* * Configure the CMCI vector and handler. Interrupts for CMC are - * per-processor, so AP CMC interrupts are setup in smp_callin() (smp.c). + * per-processor, so AP CMC interrupts are setup in smp_callin() (smpboot.c). */ register_percpu_irq(IA64_CMC_VECTOR, &cmci_irqaction); ia64_mca_cmc_vector_setup(); /* Setup vector on BSP & enable */ @@ -498,6 +520,9 @@ ia64_log_init(SAL_INFO_TYPE_CMC); ia64_log_init(SAL_INFO_TYPE_CPE); + /* Zero the min state save info */ + ia64_mca_min_state_save_info = 0; + #if defined(MCA_TEST) mca_test(); #endif /* #if defined(MCA_TEST) */ @@ -576,7 +601,7 @@ int cpu; /* Clear the Rendez checkin flag for all cpus */ - for(cpu = 0 ; cpu < smp_num_cpus; cpu++) + for(cpu = 0; cpu < smp_num_cpus; cpu++) if (ia64_mc_info.imi_rendez_checkin[cpu] == IA64_MCA_RENDEZ_CHECKIN_DONE) ia64_mca_wakeup(cpu); @@ -668,6 +693,13 @@ /* Cold Boot for uncorrectable MCA */ ia64_os_to_sal_handoff_state.imots_os_status = IA64_MCA_COLD_BOOT; + + /* Default = tell SAL to return to same context */ + ia64_os_to_sal_handoff_state.imots_context = IA64_MCA_SAME_CONTEXT; + + /* Register pointer to new min state values */ + /* NOTE: need to do something with this during recovery phase */ + ia64_os_to_sal_handoff_state.imots_new_min_state = &ia64_mca_min_state_save_info; } /* @@ -678,10 +710,10 @@ * This is the place where the core of OS MCA handling is done. * Right now the logs are extracted and displayed in a well-defined * format. This handler code is supposed to be run only on the - * monarch processor. Once the monarch is done with MCA handling + * monarch processor. Once the monarch is done with MCA handling * further MCA logging is enabled by clearing logs. * Monarch also has the duty of sending wakeup-IPIs to pull the - * slave processors out of rendezvous spinloop. + * slave processors out of rendezvous spinloop. 
* * Inputs : None * Outputs : None @@ -689,20 +721,16 @@ void ia64_mca_ucmc_handler(void) { -#if 0 /* stubbed out @FVL */ - /* - * Attempting to log a DBE error Causes "reserved register/field panic" - * in printk. - */ + int platform_err = 0; /* Get the MCA error record and log it */ - ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA); -#endif /* stubbed out @FVL */ + platform_err = ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA); /* * Do Platform-specific mca error handling if required. */ - mca_handler_platform() ; + if (platform_err) + mca_handler_platform(); /* * Wakeup all the processors which are spinning in the rendezvous @@ -749,13 +777,16 @@ { spinlock_t isl_lock; int isl_index; - ia64_err_rec_t isl_log[IA64_MAX_LOGS]; /* need space to store header + error log */ + ia64_err_rec_t *isl_log[IA64_MAX_LOGS]; /* need space to store header + error log */ } ia64_state_log_t; static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES]; -/* Note: Some of these macros assume IA64_MAX_LOGS is always 2. Should be */ -/* fixed. @FVL */ +#define IA64_LOG_ALLOCATE(it, size) \ + {ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)] = \ + (ia64_err_rec_t *)alloc_bootmem(size); \ + ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)] = \ + (ia64_err_rec_t *)alloc_bootmem(size);} #define IA64_LOG_LOCK_INIT(it) spin_lock_init(&ia64_state_log[it].isl_lock) #define IA64_LOG_LOCK(it) spin_lock_irqsave(&ia64_state_log[it].isl_lock, s) #define IA64_LOG_UNLOCK(it) spin_unlock_irqrestore(&ia64_state_log[it].isl_lock,s) @@ -765,13 +796,13 @@ ia64_state_log[it].isl_index = 1 - ia64_state_log[it].isl_index #define IA64_LOG_INDEX_DEC(it) \ ia64_state_log[it].isl_index = 1 - ia64_state_log[it].isl_index -#define IA64_LOG_NEXT_BUFFER(it) (void *)(&(ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)])) -#define IA64_LOG_CURR_BUFFER(it) (void *)(&(ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)])) +#define IA64_LOG_NEXT_BUFFER(it) (void *)((ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)])) +#define IA64_LOG_CURR_BUFFER(it) (void *)((ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)])) /* * C portion of the OS INIT handler * - * Called from ia64__init_handler + * Called from ia64_monarch_init_handler * * Inputs: pointer to pt_regs where processor info was saved. 
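The net effect of the ia64_mca_ucmc_handler() changes above: the SAL error record is logged first, and the platform-specific handler only runs when the record actually contained platform (as opposed to CPU) error sections. A rough sketch of the resulting flow, using the names introduced by the hunks above (the wrapper name is invented):

    static void ucmc_flow_sketch(void)
    {
        int platform_err = ia64_mca_log_sal_error_record(SAL_INFO_TYPE_MCA);

        if (platform_err)
            mca_handler_platform();     /* platform error sections were present */

        /* then wake the slave CPUs out of the rendezvous spinloop and
           hand control back to SAL */
    }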
* @@ -885,10 +916,18 @@ void ia64_log_init(int sal_info_type) { - IA64_LOG_LOCK_INIT(sal_info_type); + u64 max_size = 0; + IA64_LOG_NEXT_INDEX(sal_info_type) = 0; - memset(IA64_LOG_NEXT_BUFFER(sal_info_type), 0, - sizeof(ia64_err_rec_t) * IA64_MAX_LOGS); + IA64_LOG_LOCK_INIT(sal_info_type); + + // SAL will tell us the maximum size of any error record of this type + max_size = ia64_sal_get_state_info_size(sal_info_type); + + // set up OS data structures to hold error info + IA64_LOG_ALLOCATE(sal_info_type, max_size); + memset(IA64_LOG_CURR_BUFFER(sal_info_type), 0, max_size); + memset(IA64_LOG_NEXT_BUFFER(sal_info_type), 0, max_size); } /* @@ -923,8 +962,7 @@ return total_len; } else { IA64_LOG_UNLOCK(sal_info_type); - prfunc("ia64_log_get: Failed to retrieve SAL error record type %d\n", - sal_info_type); + prfunc("ia64_log_get: No SAL error record available for type %d\n", sal_info_type); return 0; } } @@ -1268,7 +1306,7 @@ } if (mdei->valid.oem_data) { - ia64_log_prt_oem_data((int)mdei->header.len, + platform_mem_dev_err_print((int)mdei->header.len, (int)sizeof(sal_log_mem_dev_err_info_t) - 1, &(mdei->oem_data[0]), prfunc); } @@ -1357,7 +1395,7 @@ prfunc("\n"); if (pbei->valid.oem_data) { - ia64_log_prt_oem_data((int)pbei->header.len, + platform_pci_bus_err_print((int)pbei->header.len, (int)sizeof(sal_log_pci_bus_err_info_t) - 1, &(pbei->oem_data[0]), prfunc); } @@ -1456,7 +1494,7 @@ } } if (pcei->valid.oem_data) { - ia64_log_prt_oem_data((int)pcei->header.len, n_pci_data, + platform_pci_comp_err_print((int)pcei->header.len, n_pci_data, p_oem_data, prfunc); prfunc("\n"); } @@ -1485,7 +1523,7 @@ ia64_log_prt_guid(&psei->guid, prfunc); } if (psei->valid.oem_data) { - ia64_log_prt_oem_data((int)psei->header.len, + platform_plat_specific_err_print((int)psei->header.len, (int)sizeof(sal_log_plat_specific_err_info_t) - 1, &(psei->oem_data[0]), prfunc); } @@ -1519,7 +1557,7 @@ if (hcei->valid.bus_spec_data) prfunc(" Bus Specific Data: %#lx", hcei->bus_spec_data); if (hcei->valid.oem_data) { - ia64_log_prt_oem_data((int)hcei->header.len, + platform_host_ctlr_err_print((int)hcei->header.len, (int)sizeof(sal_log_host_ctlr_err_info_t) - 1, &(hcei->oem_data[0]), prfunc); } @@ -1553,7 +1591,7 @@ if (pbei->valid.bus_spec_data) prfunc(" Bus Specific Data: %#lx", pbei->bus_spec_data); if (pbei->valid.oem_data) { - ia64_log_prt_oem_data((int)pbei->header.len, + platform_plat_bus_err_print((int)pbei->header.len, (int)sizeof(sal_log_plat_bus_err_info_t) - 1, &(pbei->oem_data[0]), prfunc); } @@ -1745,17 +1783,18 @@ * Inputs : lh (Pointer to the sal error record header with format * specified by the SAL spec). * prfunc (fn ptr of log output function to use) - * Outputs : None + * Outputs : platform error status */ -void +int ia64_log_platform_info_print (sal_log_record_header_t *lh, prfunc_t prfunc) { - sal_log_section_hdr_t *slsh; - int n_sects; - int ercd_pos; + sal_log_section_hdr_t *slsh; + int n_sects; + int ercd_pos; + int platform_err = 0; if (!lh) - return; + return platform_err; #ifdef MCA_PRT_XTRA_DATA // for test only @FVL ia64_log_prt_record_header(lh, prfunc); @@ -1765,7 +1804,7 @@ IA64_MCA_DEBUG("ia64_mca_log_print: " "truncated SAL error record. 
len = %d\n", lh->len); - return; + return platform_err; } /* Print record header info */ @@ -1796,35 +1835,43 @@ ia64_log_proc_dev_err_info_print((sal_log_processor_info_t *)slsh, prfunc); } else if (efi_guidcmp(slsh->guid, SAL_PLAT_MEM_DEV_ERR_SECT_GUID) == 0) { + platform_err = 1; prfunc("+Platform Memory Device Error Info Section\n"); ia64_log_mem_dev_err_info_print((sal_log_mem_dev_err_info_t *)slsh, prfunc); } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SEL_DEV_ERR_SECT_GUID) == 0) { + platform_err = 1; prfunc("+Platform SEL Device Error Info Section\n"); ia64_log_sel_dev_err_info_print((sal_log_sel_dev_err_info_t *)slsh, prfunc); } else if (efi_guidcmp(slsh->guid, SAL_PLAT_PCI_BUS_ERR_SECT_GUID) == 0) { + platform_err = 1; prfunc("+Platform PCI Bus Error Info Section\n"); ia64_log_pci_bus_err_info_print((sal_log_pci_bus_err_info_t *)slsh, prfunc); } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SMBIOS_DEV_ERR_SECT_GUID) == 0) { + platform_err = 1; prfunc("+Platform SMBIOS Device Error Info Section\n"); ia64_log_smbios_dev_err_info_print((sal_log_smbios_dev_err_info_t *)slsh, prfunc); } else if (efi_guidcmp(slsh->guid, SAL_PLAT_PCI_COMP_ERR_SECT_GUID) == 0) { + platform_err = 1; prfunc("+Platform PCI Component Error Info Section\n"); ia64_log_pci_comp_err_info_print((sal_log_pci_comp_err_info_t *)slsh, prfunc); } else if (efi_guidcmp(slsh->guid, SAL_PLAT_SPECIFIC_ERR_SECT_GUID) == 0) { + platform_err = 1; prfunc("+Platform Specific Error Info Section\n"); ia64_log_plat_specific_err_info_print((sal_log_plat_specific_err_info_t *) slsh, prfunc); } else if (efi_guidcmp(slsh->guid, SAL_PLAT_HOST_CTLR_ERR_SECT_GUID) == 0) { + platform_err = 1; prfunc("+Platform Host Controller Error Info Section\n"); ia64_log_host_ctlr_err_info_print((sal_log_host_ctlr_err_info_t *)slsh, prfunc); } else if (efi_guidcmp(slsh->guid, SAL_PLAT_BUS_ERR_SECT_GUID) == 0) { + platform_err = 1; prfunc("+Platform Bus Error Info Section\n"); ia64_log_plat_bus_err_info_print((sal_log_plat_bus_err_info_t *)slsh, prfunc); @@ -1838,8 +1885,9 @@ n_sects, lh->len); if (!n_sects) { prfunc("No Platform Error Info Sections found\n"); - return; + return platform_err; } + return platform_err; } /* @@ -1849,15 +1897,17 @@ * * Inputs : info_type (SAL_INFO_TYPE_{MCA,INIT,CMC,CPE}) * prfunc (fn ptr of log output function to use) - * Outputs : None + * Outputs : platform error status */ -void +int ia64_log_print(int sal_info_type, prfunc_t prfunc) { + int platform_err = 0; + switch(sal_info_type) { case SAL_INFO_TYPE_MCA: prfunc("+BEGIN HARDWARE ERROR STATE AT MCA\n"); - ia64_log_platform_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc); + platform_err = ia64_log_platform_info_print(IA64_LOG_CURR_BUFFER(sal_info_type), prfunc); prfunc("+END HARDWARE ERROR STATE AT MCA\n"); break; case SAL_INFO_TYPE_INIT: @@ -1877,4 +1927,5 @@ prfunc("+MCA UNKNOWN ERROR LOG (UNIMPLEMENTED)\n"); break; } + return platform_err; } diff -Nru a/arch/ia64/kernel/mca_asm.S b/arch/ia64/kernel/mca_asm.S --- a/arch/ia64/kernel/mca_asm.S Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/mca_asm.S Tue Mar 12 13:58:15 2002 @@ -7,6 +7,12 @@ // 00/03/29 cfleck Added code to save INIT handoff state in pt_regs format, switch to temp // kstack, switch modes, jump to C INIT handler // +// 02/01/04 J.Hall +// Before entering virtual mode code: +// 1. Check for TLB CPU error +// 2. Restore current thread pointer to kr6 +// 3. 
Move stack ptr 16 bytes to conform to C calling convention +// #include #include @@ -21,10 +27,21 @@ */ #define MINSTATE_PHYS /* Make sure stack access is physical for MINSTATE */ +/* + * Needed for ia64_sal call + */ +#define SAL_GET_STATE_INFO 0x01000001 + +/* + * Needed for return context to SAL + */ +#define IA64_MCA_SAME_CONTEXT 0x0 +#define IA64_MCA_COLD_BOOT -2 + #include "minstate.h" /* - * SAL_TO_OS_MCA_HANDOFF_STATE (SAL 3.0 spec) + * SAL_TO_OS_MCA_HANDOFF_STATE (SAL 3.0 spec) * 1. GR1 = OS GP * 2. GR8 = PAL_PROC physical address * 3. GR9 = SAL_PROC physical address @@ -40,26 +57,34 @@ st8 [_tmp]=r9,0x08;; \ st8 [_tmp]=r10,0x08;; \ st8 [_tmp]=r11,0x08;; \ - st8 [_tmp]=r12,0x08;; + st8 [_tmp]=r12,0x08 /* - * OS_MCA_TO_SAL_HANDOFF_STATE (SAL 3.0 spec) - * 1. GR8 = OS_MCA return status + * OS_MCA_TO_SAL_HANDOFF_STATE (SAL 3.0 spec) + * (p6) is executed if we never entered virtual mode (TLB error) + * (p7) is executed if we entered virtual mode as expected (normal case) + * 1. GR8 = OS_MCA return status * 2. GR9 = SAL GP (physical) - * 3. GR10 = 0/1 returning same/new context - * 4. GR22 = New min state save area pointer - * returns ptr to SAL rtn save loc in _tmp + * 3. GR10 = 0/1 returning same/new context + * 4. GR22 = New min state save area pointer + * returns ptr to SAL rtn save loc in _tmp */ -#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \ - movl _tmp=ia64_os_to_sal_handoff_state;; \ - DATA_VA_TO_PA(_tmp);; \ - ld8 r8=[_tmp],0x08;; \ - ld8 r9=[_tmp],0x08;; \ - ld8 r10=[_tmp],0x08;; \ - ld8 r22=[_tmp],0x08;; \ - movl _tmp=ia64_sal_to_os_handoff_state;; \ - DATA_VA_TO_PA(_tmp);; \ - add _tmp=0x28,_tmp;; // point to SAL rtn save location +#define OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(_tmp) \ +(p6) movl _tmp=ia64_sal_to_os_handoff_state;; \ +(p7) movl _tmp=ia64_os_to_sal_handoff_state;; \ + DATA_VA_TO_PA(_tmp);; \ +(p6) movl r8=IA64_MCA_COLD_BOOT; \ +(p6) movl r10=IA64_MCA_SAME_CONTEXT; \ +(p6) add _tmp=0x18,_tmp;; \ +(p6) ld8 r9=[_tmp],0x10; \ +(p6) movl r22=ia64_mca_min_state_save_info;; \ +(p7) ld8 r8=[_tmp],0x08;; \ +(p7) ld8 r9=[_tmp],0x08;; \ +(p7) ld8 r10=[_tmp],0x08;; \ +(p7) ld8 r22=[_tmp],0x08;; \ + DATA_VA_TO_PA(r22) + // now _tmp is pointing to SAL rtn save location + .global ia64_os_mca_dispatch .global ia64_os_mca_dispatch_end @@ -70,6 +95,9 @@ .global ia64_mca_stackframe .global ia64_mca_bspstore .global ia64_init_stack + .global ia64_mca_sal_data_area + .global ia64_tlb_functional + .global ia64_mca_min_state_save_info .text .align 16 @@ -90,26 +118,34 @@ // for ia64_mca_sal_to_os_state_t has been // defined in include/asm/mca.h SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(r2) + ;; // LOG PROCESSOR STATE INFO FROM HERE ON.. 
- ;; begin_os_mca_dump: br ia64_os_mca_proc_state_dump;; ia64_os_mca_done_dump: // Setup new stack frame for OS_MCA handling - movl r2=ia64_mca_bspstore;; // local bspstore area location in r2 + movl r2=ia64_mca_bspstore;; // local bspstore area location in r2 DATA_VA_TO_PA(r2);; - movl r3=ia64_mca_stackframe;; // save stack frame to memory in r3 + movl r3=ia64_mca_stackframe;; // save stack frame to memory in r3 DATA_VA_TO_PA(r3);; - rse_switch_context(r6,r3,r2);; // RSC management in this new context - movl r12=ia64_mca_stack;; - mov r2=8*1024;; // stack size must be same as c array - add r12=r2,r12;; // stack base @ bottom of array + rse_switch_context(r6,r3,r2);; // RSC management in this new context + movl r12=ia64_mca_stack + mov r2=8*1024;; // stack size must be same as C array + add r12=r2,r12;; // stack base @ bottom of array + adds r12=-16,r12;; // allow 16 bytes of scratch + // (C calling convention) DATA_VA_TO_PA(r12);; - // Enter virtual mode from physical mode + // Check to see if the MCA resulted from a TLB error +begin_tlb_error_check: + br ia64_os_mca_tlb_error_check;; + +done_tlb_error_check: + + // If TLB is functional, enter virtual mode from physical mode VIRTUAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_begin, r4) ia64_os_mca_virtual_begin: @@ -130,25 +166,28 @@ #endif /* #if defined(MCA_TEST) */ // restore the original stack frame here - movl r2=ia64_mca_stackframe // restore stack frame from memory at r2 + movl r2=ia64_mca_stackframe // restore stack frame from memory at r2 ;; DATA_VA_TO_PA(r2) movl r4=IA64_PSR_MC ;; - rse_return_context(r4,r3,r2) // switch from interrupt context for RSE + rse_return_context(r4,r3,r2) // switch from interrupt context for RSE // let us restore all the registers from our PSI structure - mov r8=gp + mov r8=gp ;; begin_os_mca_restore: br ia64_os_mca_proc_state_restore;; ia64_os_mca_done_restore: - ;; + movl r3=ia64_tlb_functional;; + DATA_VA_TO_PA(r3);; + ld8 r3=[r3];; + cmp.eq p6,p7=r0,r3;; + OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(r2);; // branch back to SALE_CHECK - OS_MCA_TO_SAL_HANDOFF_STATE_RESTORE(r2) ld8 r3=[r2];; - mov b0=r3;; // SAL_CHECK return address + mov b0=r3;; // SAL_CHECK return address br b0 ;; ia64_os_mca_dispatch_end: @@ -405,7 +444,7 @@ movl r2=ia64_mca_proc_state_dump // Convert virtual address ;; // of OS state dump area DATA_VA_TO_PA(r2) // to physical address - ;; + restore_GRs: // restore bank-1 GRs 16-31 bsw.1;; add r3=16*8,r2;; // to get to NaT of GR 16-31 @@ -621,6 +660,80 @@ //EndStub////////////////////////////////////////////////////////////////////// +//++ +// Name: +// ia64_os_mca_tlb_error_check() +// +// Stub Description: +// +// This stub checks to see if the MCA resulted from a TLB error +// +//-- + +ia64_os_mca_tlb_error_check: + + // Retrieve sal data structure for uncorrected MCA + + // Make the ia64_sal_get_state_info() call + movl r4=ia64_mca_sal_data_area;; + movl r7=ia64_sal;; + mov r6=r1 // save gp + DATA_VA_TO_PA(r4) // convert to physical address + DATA_VA_TO_PA(r7);; // convert to physical address + ld8 r7=[r7] // get addr of pdesc from ia64_sal + movl r3=SAL_GET_STATE_INFO;; + DATA_VA_TO_PA(r7);; // convert to physical address + ld8 r8=[r7],8;; // get pdesc function pointer + DATA_VA_TO_PA(r8) // convert to physical address + ld8 r1=[r7];; // set new (ia64_sal) gp + DATA_VA_TO_PA(r1) // convert to physical address + mov b6=r8 + + alloc r5=ar.pfs,8,0,8,0;; // allocate stack frame for SAL call + mov out0=r3 // which SAL proc to call + mov out1=r0 // error type == MCA + mov out2=r0 // null arg + 
mov out3=r4 // data copy area + mov out4=r0 // null arg + mov out5=r0 // null arg + mov out6=r0 // null arg + mov out7=r0;; // null arg + + br.call.sptk.few b0=b6;; + + mov r1=r6 // restore gp + mov ar.pfs=r5;; // restore ar.pfs + + movl r6=ia64_tlb_functional;; + DATA_VA_TO_PA(r6) // needed later + + cmp.eq p6,p7=r0,r8;; // check SAL call return address +(p7) st8 [r6]=r0 // clear tlb_functional flag +(p7) br tlb_failure // error; return to SAL + + // examine processor error log for type of error + add r4=40+24,r4;; // parse past record header (length=40) + // and section header (length=24) + ld4 r4=[r4] // get valid field of processor log + mov r5=0xf00;; + and r5=r4,r5;; // read bits 8-11 of valid field + // to determine if we have a TLB error + movl r3=0x1 + cmp.eq p6,p7=r0,r5;; + // if no TLB failure, set tlb_functional flag +(p6) st8 [r6]=r3 + // else clear flag +(p7) st8 [r6]=r0 + + // if no TLB failure, continue with normal virtual mode logging +(p6) br done_tlb_error_check + // else no point in entering virtual mode for logging +tlb_failure: + br ia64_os_mca_virtual_end + +//EndStub////////////////////////////////////////////////////////////////////// + + // ok, the issue here is that we need to save state information so // it can be useable by the kernel debugger and show regs routines. // In order to do this, our best bet is save the current state (plus @@ -633,7 +746,7 @@ // This has been defined for registration purposes with SAL // as a part of ia64_mca_init. // -// When we get here, the follow registers have been +// When we get here, the following registers have been // set by the SAL for our use // // 1. GR1 = OS INIT GP @@ -649,42 +762,10 @@ GLOBAL_ENTRY(ia64_monarch_init_handler) -#if defined(CONFIG_SMP) && defined(SAL_MPINIT_WORKAROUND) - // - // work around SAL bug that sends all processors to monarch entry - // - mov r17=cr.lid - // XXX fix me: this is wrong: hard_smp_processor_id() is a pair of lid/eid - movl r18=ia64_cpu_to_sapicid - ;; - dep r18=0,r18,61,3 // convert to physical address - ;; - shr.u r17=r17,16 - ld4 r18=[r18] // get the BSP ID - ;; - dep r17=0,r17,16,48 - ;; - cmp4.ne p6,p0=r17,r18 // Am I the BSP ? -(p6) br.cond.spnt slave_init_spin_me - ;; -#endif - -// -// ok, the first thing we do is stash the information -// the SAL passed to os -// -_tmp = r2 - movl _tmp=ia64_sal_to_os_handoff_state - ;; - dep _tmp=0,_tmp, 61, 3 // get physical address + // stash the information the SAL passed to os + SAL_TO_OS_MCA_HANDOFF_STATE_SAVE(r2) ;; - st8 [_tmp]=r1,0x08;; - st8 [_tmp]=r8,0x08;; - st8 [_tmp]=r9,0x08;; - st8 [_tmp]=r10,0x08;; - st8 [_tmp]=r11,0x08;; - st8 [_tmp]=r12,0x08;; // now we want to save information so we can dump registers SAVE_MIN_WITH_COVER @@ -695,12 +776,10 @@ ;; SAVE_REST -// ok, enough should be saved at this point to be dangerous, and supply +// ok, enough should be saved at this point to be dangerous, and supply // information for a dump // We need to switch to Virtual mode before hitting the C functions. 
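Stepping back to the ia64_os_mca_tlb_error_check stub above: the test it applies to the SAL error record reduces to a mask check on the processor section's valid word. A minimal C restatement follows; only the 40-byte record header and 24-byte section header offsets and the 0xf00 mask are taken from the assembly, everything else is assumed for illustration.

/* Illustrative only: the valid-field test performed by the TLB check stub. */
#include <stdint.h>
#include <string.h>

static int
mca_hit_tlb(const uint8_t *sal_log)
{
	uint32_t valid;

	/* skip the record header (40 bytes) and section header (24 bytes) */
	memcpy(&valid, sal_log + 40 + 24, sizeof(valid));

	/* bits 8-11 of the valid word carry the TLB check information */
	return (valid & 0xf00) != 0;	/* non-zero: stay in physical mode */
}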
-// -// -// + movl r2=IA64_PSR_IT|IA64_PSR_IC|IA64_PSR_DT|IA64_PSR_RT|IA64_PSR_DFH|IA64_PSR_BN mov r3=psr // get the current psr, minimum enabled at this point ;; @@ -708,8 +787,8 @@ ;; movl r3=IVirtual_Switch ;; - mov cr.iip=r3 // short return to set the appropriate bits - mov cr.ipsr=r2 // need to do an rfi to set appropriate bits + mov cr.iip=r3 // short return to set the appropriate bits + mov cr.ipsr=r2 // need to do an rfi to set appropriate bits ;; rfi ;; @@ -717,7 +796,7 @@ // // We should now be running virtual // - // Lets call the C handler to get the rest of the state info + // Let's call the C handler to get the rest of the state info // alloc r14=ar.pfs,0,0,1,0 // now it's safe (must be first in insn group!) ;; // diff -Nru a/arch/ia64/kernel/minstate.h b/arch/ia64/kernel/minstate.h --- a/arch/ia64/kernel/minstate.h Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/minstate.h Tue Mar 12 13:58:15 2002 @@ -92,7 +92,6 @@ * * Assumed state upon entry: * psr.ic: off - * psr.dt: off * r31: contains saved predicates (pr) * * Upon exit, the state is as follows: @@ -186,7 +185,6 @@ * * Assumed state upon entry: * psr.ic: on - * psr.dt: on * r2: points to &pt_regs.r16 * r3: points to &pt_regs.r17 */ diff -Nru a/arch/ia64/kernel/palinfo.c b/arch/ia64/kernel/palinfo.c --- a/arch/ia64/kernel/palinfo.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/kernel/palinfo.c Tue Mar 12 13:58:14 2002 @@ -724,7 +724,7 @@ status = ia64_pal_tr_read(j, i, tr_buffer, &tr_valid); if (status != 0) { - printk(__FUNCTION__ " pal call failed on tr[%d:%d]=%ld\n", i, j, status); + printk("palinfo: pal call failed on tr[%d:%d]=%ld\n", i, j, status); continue; } @@ -842,9 +842,8 @@ palinfo_smp_call(void *info) { palinfo_smp_data_t *data = (palinfo_smp_data_t *)info; - /* printk(__FUNCTION__" called on CPU %d\n", smp_processor_id());*/ if (data == NULL) { - printk(KERN_ERR __FUNCTION__" data pointer is NULL\n"); + printk("%s palinfo: data pointer is NULL\n", KERN_ERR); data->ret = 0; /* no output */ return; } @@ -868,11 +867,10 @@ ptr.page = page; ptr.ret = 0; /* just in case */ - /*printk(__FUNCTION__" calling CPU %d from CPU %d for function %d\n", f->req_cpu,smp_processor_id(), f->func_id);*/ /* will send IPI to other CPU and wait for completion of remote call */ if ((ret=smp_call_function_single(f->req_cpu, palinfo_smp_call, &ptr, 0, 1))) { - printk(__FUNCTION__" remote CPU call from %d to %d on function %d: error %d\n", smp_processor_id(), f->req_cpu, f->func_id, ret); + printk("palinfo: remote CPU call from %d to %d on function %d: error %d\n", smp_processor_id(), f->req_cpu, f->func_id, ret); return 0; } return ptr.ret; @@ -881,7 +879,7 @@ static int palinfo_handle_smp(pal_func_cpu_u_t *f, char *page) { - printk(__FUNCTION__" should not be called with non SMP kernel\n"); + printk("palinfo: should not be called with non SMP kernel\n"); return 0; } #endif /* CONFIG_SMP */ diff -Nru a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c --- a/arch/ia64/kernel/perfmon.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/perfmon.c Tue Mar 12 13:58:15 2002 @@ -1,13 +1,16 @@ /* - * This file contains the code to configure and read/write the ia64 performance - * monitoring stuff. + * This file implements the perfmon subsystem which is used + * to program the IA-64 Performance Monitoring Unit (PMU). * * Originaly Written by Ganesh Venkitachalam, IBM Corp. - * Modifications by David Mosberger-Tang, Hewlett-Packard Co. - * Modifications by Stephane Eranian, Hewlett-Packard Co. 
* Copyright (C) 1999 Ganesh Venkitachalam - * Copyright (C) 1999 David Mosberger-Tang - * Copyright (C) 2000-2001 Stephane Eranian + * + * Modifications by Stephane Eranian, Hewlett-Packard Co. + * Modifications by David Mosberger-Tang, Hewlett-Packard Co. + * + * Copyright (C) 1999-2002 Hewlett Packard Co + * Stephane Eranian + * David Mosberger-Tang */ #include @@ -22,151 +25,137 @@ #include #include -#include #include -#include #include #include #include -#include #include #include #include -#include #include #include /* for ia64_get_itc() */ #ifdef CONFIG_PERFMON -#define PFM_VERSION "0.3" -#define PFM_SMPL_HDR_VERSION 1 - -#define PMU_FIRST_COUNTER 4 /* first generic counter */ - -#define PFM_WRITE_PMCS 0xa0 -#define PFM_WRITE_PMDS 0xa1 -#define PFM_READ_PMDS 0xa2 -#define PFM_STOP 0xa3 -#define PFM_START 0xa4 -#define PFM_ENABLE 0xa5 /* unfreeze only */ -#define PFM_DISABLE 0xa6 /* freeze only */ -#define PFM_RESTART 0xcf -#define PFM_CREATE_CONTEXT 0xa7 -#define PFM_DESTROY_CONTEXT 0xa8 /* - * Those 2 are just meant for debugging. I considered using sysctl() for - * that but it is a little bit too pervasive. This solution is at least - * self-contained. + * For PMU which rely on the debug registers for some features, you must + * you must enable the following flag to activate the support for + * accessing the registers via the perfmonctl() interface. */ -#define PFM_DEBUG_ON 0xe0 -#define PFM_DEBUG_OFF 0xe1 - -#define PFM_DEBUG_BASE PFM_DEBUG_ON - +#ifdef CONFIG_ITANIUM +#define PFM_PMU_USES_DBR 1 +#endif /* - * perfmon API flags + * perfmon context states */ -#define PFM_FL_INHERIT_NONE 0x00 /* never inherit a context across fork (default) */ -#define PFM_FL_INHERIT_ONCE 0x01 /* clone pfm_context only once across fork() */ -#define PFM_FL_INHERIT_ALL 0x02 /* always clone pfm_context across fork() */ -#define PFM_FL_SMPL_OVFL_NOBLOCK 0x04 /* do not block on sampling buffer overflow */ -#define PFM_FL_SYSTEM_WIDE 0x08 /* create a system wide context */ -#define PFM_FL_EXCL_INTR 0x10 /* exclude interrupt from system wide monitoring */ +#define PFM_CTX_DISABLED 0 +#define PFM_CTX_ENABLED 1 /* - * PMC API flags + * Reset register flags */ -#define PFM_REGFL_OVFL_NOTIFY 1 /* send notification on overflow */ +#define PFM_RELOAD_LONG_RESET 1 +#define PFM_RELOAD_SHORT_RESET 2 /* - * Private flags and masks + * Misc macros and definitions */ +#define PMU_FIRST_COUNTER 4 + +#define PFM_IS_DISABLED() pmu_conf.pfm_is_disabled + +#define PMC_OVFL_NOTIFY(ctx, i) ((ctx)->ctx_soft_pmds[i].flags & PFM_REGFL_OVFL_NOTIFY) #define PFM_FL_INHERIT_MASK (PFM_FL_INHERIT_NONE|PFM_FL_INHERIT_ONCE|PFM_FL_INHERIT_ALL) -#ifdef CONFIG_SMP -#define cpu_is_online(i) (cpu_online_map & (1UL << i)) -#else -#define cpu_is_online(i) 1 -#endif +#define PMC_IS_IMPL(i) (i>6] & (1UL<< (i) %64)) +#define PMD_IS_IMPL(i) (i>6)] & (1UL<<(i) % 64)) + +#define PMD_IS_COUNTING(i) (i >=0 && i < 256 && pmu_conf.counter_pmds[i>>6] & (1UL <<(i) % 64)) +#define PMC_IS_COUNTING(i) PMD_IS_COUNTING(i) + +#define IBR_IS_IMPL(k) (kpmc_es == PMU_BTB_EVENT) -#define PMC_IS_IMPL(i) (i < pmu_conf.num_pmcs && pmu_conf.impl_regs[i>>6] & (1<< (i&~(64-1)))) -#define PMD_IS_IMPL(i) (i < pmu_conf.num_pmds && pmu_conf.impl_regs[4+(i>>6)] & (1<< (i&~(64-1)))) -#define PMD_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters)) -#define PMC_IS_COUNTER(i) (i>=PMU_FIRST_COUNTER && i < (PMU_FIRST_COUNTER+pmu_conf.max_counters)) +#define LSHIFT(x) (1UL<<(x)) +#define PMM(x) LSHIFT(x) +#define PMC_IS_MONITOR(c) 
((pmu_conf.monitor_pmcs[0] & PMM((c))) != 0) -/* This is the Itanium-specific PMC layout for counter config */ +#define CTX_IS_ENABLED(c) ((c)->ctx_flags.state == PFM_CTX_ENABLED) +#define CTX_OVFL_NOBLOCK(c) ((c)->ctx_fl_block == 0) +#define CTX_INHERIT_MODE(c) ((c)->ctx_fl_inherit) +#define CTX_HAS_SMPL(c) ((c)->ctx_psb != NULL) +#define CTX_USED_PMD(ctx,n) (ctx)->ctx_used_pmds[(n)>>6] |= 1UL<< ((n) % 64) + +#define CTX_USED_IBR(ctx,n) (ctx)->ctx_used_ibrs[(n)>>6] |= 1UL<< ((n) % 64) +#define CTX_USED_DBR(ctx,n) (ctx)->ctx_used_dbrs[(n)>>6] |= 1UL<< ((n) % 64) +#define CTX_USES_DBREGS(ctx) (((pfm_context_t *)(ctx))->ctx_fl_using_dbreg==1) + +#define LOCK_CTX(ctx) spin_lock(&(ctx)->ctx_lock) +#define UNLOCK_CTX(ctx) spin_unlock(&(ctx)->ctx_lock) + +#define SET_PMU_OWNER(t) do { pmu_owners[smp_processor_id()].owner = (t); } while(0) +#define PMU_OWNER() pmu_owners[smp_processor_id()].owner + +#define LOCK_PFS() spin_lock(&pfm_sessions.pfs_lock) +#define UNLOCK_PFS() spin_unlock(&pfm_sessions.pfs_lock) + +#define PFM_REG_RETFLAG_SET(flags, val) do { flags &= ~PFM_REG_RETFL_MASK; flags |= (val); } while(0) + +/* + * debugging + */ +#define DBprintk(a) \ + do { \ + if (pfm_debug_mode >0) { printk("%s.%d: CPU%d ", __FUNCTION__, __LINE__, smp_processor_id()); printk a; } \ + } while (0) + + +/* + * These are some helpful architected PMC and IBR/DBR register layouts + */ typedef struct { unsigned long pmc_plm:4; /* privilege level mask */ unsigned long pmc_ev:1; /* external visibility */ unsigned long pmc_oi:1; /* overflow interrupt */ unsigned long pmc_pm:1; /* privileged monitor */ unsigned long pmc_ig1:1; /* reserved */ - unsigned long pmc_es:7; /* event select */ - unsigned long pmc_ig2:1; /* reserved */ - unsigned long pmc_umask:4; /* unit mask */ - unsigned long pmc_thres:3; /* threshold */ - unsigned long pmc_ig3:1; /* reserved (missing from table on p6-17) */ - unsigned long pmc_ism:2; /* instruction set mask */ - unsigned long pmc_ig4:38; /* reserved */ -} pmc_counter_reg_t; - -/* test for EAR/BTB configuration */ -#define PMU_DEAR_EVENT 0x67 -#define PMU_IEAR_EVENT 0x23 -#define PMU_BTB_EVENT 0x11 - -#define PMC_IS_DEAR(a) (((pmc_counter_reg_t *)(a))->pmc_es == PMU_DEAR_EVENT) -#define PMC_IS_IEAR(a) (((pmc_counter_reg_t *)(a))->pmc_es == PMU_IEAR_EVENT) -#define PMC_IS_BTB(a) (((pmc_counter_reg_t *)(a))->pmc_es == PMU_BTB_EVENT) - -/* - * This header is at the beginning of the sampling buffer returned to the user. - * It is exported as Read-Only at this point. It is directly followed with the - * first record. - */ -typedef struct { - int hdr_version; /* could be used to differentiate formats */ - int hdr_reserved; - unsigned long hdr_entry_size; /* size of one entry in bytes */ - unsigned long hdr_count; /* how many valid entries */ - unsigned long hdr_pmds; /* which pmds are recorded */ -} perfmon_smpl_hdr_t; - -/* - * Header entry in the buffer as a header as follows. - * The header is directly followed with the PMDS to saved in increasing index order: - * PMD4, PMD5, .... How many PMDs are present is determined by the tool which must - * keep track of it when generating the final trace file. 
- */ -typedef struct { - int pid; /* identification of process */ - int cpu; /* which cpu was used */ - unsigned long rate; /* initial value of this counter */ - unsigned long stamp; /* timestamp */ - unsigned long ip; /* where did the overflow interrupt happened */ - unsigned long regs; /* which registers overflowed (up to 64)*/ -} perfmon_smpl_entry_t; + unsigned long pmc_es:8; /* event select */ + unsigned long pmc_ig2:48; /* reserved */ +} pfm_monitor_t; /* * There is one such data structure per perfmon context. It is used to describe the - * sampling buffer. It is to be shared among siblings whereas the pfm_context isn't. + * sampling buffer. It is to be shared among siblings whereas the pfm_context + * is not. * Therefore we maintain a refcnt which is incremented on fork(). - * This buffer is private to the kernel only the actual sampling buffer including its - * header are exposed to the user. This construct allows us to export the buffer read-write, - * if needed, without worrying about security problems. - */ -typedef struct { - atomic_t psb_refcnt; /* how many users for the buffer */ - int reserved; + * This buffer is private to the kernel only the actual sampling buffer + * including its header are exposed to the user. This construct allows us to + * export the buffer read-write, if needed, without worrying about security + * problems. + */ +typedef struct _pfm_smpl_buffer_desc { + spinlock_t psb_lock; /* protection lock */ + unsigned long psb_refcnt; /* how many users for the buffer */ + int psb_flags; /* bitvector of flags */ + void *psb_addr; /* points to location of first entry */ unsigned long psb_entries; /* maximum number of entries */ unsigned long psb_size; /* aligned size of buffer */ - unsigned long psb_index; /* next free entry slot */ + unsigned long psb_index; /* next free entry slot XXX: must use the one in buffer */ unsigned long psb_entry_size; /* size of each entry including entry header */ perfmon_smpl_hdr_t *psb_hdr; /* points to sampling buffer header */ + + struct _pfm_smpl_buffer_desc *psb_next; /* next psb, used for rvfreeing of psb_hdr */ + } pfm_smpl_buffer_desc_t; +#define LOCK_PSB(p) spin_lock(&(p)->psb_lock) +#define UNLOCK_PSB(p) spin_unlock(&(p)->psb_lock) + +#define PFM_PSB_VMA 0x1 /* a VMA is describing the buffer */ /* * This structure is initialized at boot time and contains @@ -180,126 +169,187 @@ unsigned long num_pmcs ; /* highest PMC implemented (may have holes) */ unsigned long num_pmds; /* highest PMD implemented (may have holes) */ unsigned long impl_regs[16]; /* buffer used to hold implememted PMC/PMD mask */ + unsigned long num_ibrs; /* number of instruction debug registers */ + unsigned long num_dbrs; /* number of data debug registers */ + unsigned long monitor_pmcs[4]; /* which pmc are controlling monitors */ + unsigned long counter_pmds[4]; /* which pmd are used as counters */ } pmu_config_t; -#define PERFMON_IS_DISABLED() pmu_conf.pfm_is_disabled - +/* + * 64-bit software counter structure + */ typedef struct { - __u64 val; /* virtual 64bit counter value */ - __u64 ival; /* initial value from user */ - __u64 smpl_rval; /* reset value on sampling overflow */ - __u64 ovfl_rval; /* reset value on overflow */ - int flags; /* notify/do not notify */ + u64 val; /* virtual 64bit counter value */ + u64 ival; /* initial value from user */ + u64 long_reset; /* reset value on sampling overflow */ + u64 short_reset;/* reset value on overflow */ + u64 reset_pmds[4]; /* which other pmds to reset when this counter overflows */ + int flags; /* 
notify/do not notify */ } pfm_counter_t; -#define PMD_OVFL_NOTIFY(ctx, i) ((ctx)->ctx_pmds[i].flags & PFM_REGFL_OVFL_NOTIFY) /* - * perfmon context. One per process, is cloned on fork() depending on inheritance flags + * perfmon context. One per process, is cloned on fork() depending on + * inheritance flags */ typedef struct { - unsigned int inherit:2; /* inherit mode */ - unsigned int noblock:1; /* block/don't block on overflow with notification */ - unsigned int system:1; /* do system wide monitoring */ - unsigned int frozen:1; /* pmu must be kept frozen on ctxsw in */ - unsigned int exclintr:1;/* exlcude interrupts from system wide monitoring */ - unsigned int reserved:26; + unsigned int state:1; /* 0=disabled, 1=enabled */ + unsigned int inherit:2; /* inherit mode */ + unsigned int block:1; /* when 1, task will blocked on user notifications */ + unsigned int system:1; /* do system wide monitoring */ + unsigned int frozen:1; /* pmu must be kept frozen on ctxsw in */ + unsigned int protected:1; /* allow access to creator of context only */ + unsigned int using_dbreg:1; /* using range restrictions (debug registers) */ + unsigned int reserved:24; } pfm_context_flags_t; +/* + * perfmon context: encapsulates all the state of a monitoring session + * XXX: probably need to change layout + */ typedef struct pfm_context { + pfm_smpl_buffer_desc_t *ctx_psb; /* sampling buffer, if any */ + unsigned long ctx_smpl_vaddr; /* user level virtual address of smpl buffer */ - pfm_smpl_buffer_desc_t *ctx_smpl_buf; /* sampling buffer descriptor, if any */ - unsigned long ctx_dear_counter; /* which PMD holds D-EAR */ - unsigned long ctx_iear_counter; /* which PMD holds I-EAR */ - unsigned long ctx_btb_counter; /* which PMD holds BTB */ - - spinlock_t ctx_notify_lock; + spinlock_t ctx_lock; pfm_context_flags_t ctx_flags; /* block/noblock */ - int ctx_notify_sig; /* XXX: SIGPROF or other */ - struct task_struct *ctx_notify_task; /* who to notify on overflow */ - struct task_struct *ctx_creator; /* pid of creator (debug) */ - unsigned long ctx_ovfl_regs; /* which registers just overflowed (notification) */ - unsigned long ctx_smpl_regs; /* which registers to record on overflow */ + struct task_struct *ctx_notify_task; /* who to notify on overflow */ + struct task_struct *ctx_owner; /* pid of creator (debug) */ - struct semaphore ctx_restart_sem; /* use for blocking notification mode */ + unsigned long ctx_ovfl_regs[4]; /* which registers overflowed (notification) */ + unsigned long ctx_smpl_regs[4]; /* which registers to record on overflow */ - unsigned long ctx_used_pmds[4]; /* bitmask of used PMD (speedup ctxsw) */ - unsigned long ctx_used_pmcs[4]; /* bitmask of used PMC (speedup ctxsw) */ + struct semaphore ctx_restart_sem; /* use for blocking notification mode */ - pfm_counter_t ctx_pmds[IA64_NUM_PMD_COUNTERS]; /* XXX: size should be dynamic */ + unsigned long ctx_used_pmds[4]; /* bitmask of used PMD (speedup ctxsw) */ + unsigned long ctx_saved_pmcs[4]; /* bitmask of PMC to save on ctxsw */ + unsigned long ctx_reload_pmcs[4]; /* bitmask of PMC to reload on ctxsw (SMP) */ -} pfm_context_t; + unsigned long ctx_used_ibrs[4]; /* bitmask of used IBR (speedup ctxsw) */ + unsigned long ctx_used_dbrs[4]; /* bitmask of used DBR (speedup ctxsw) */ -#define CTX_USED_PMD(ctx,n) (ctx)->ctx_used_pmds[(n)>>6] |= 1<< ((n) % 64) -#define CTX_USED_PMC(ctx,n) (ctx)->ctx_used_pmcs[(n)>>6] |= 1<< ((n) % 64) + pfm_counter_t ctx_soft_pmds[IA64_NUM_PMD_REGS]; /* XXX: size should be dynamic */ -#define ctx_fl_inherit 
ctx_flags.inherit -#define ctx_fl_noblock ctx_flags.noblock -#define ctx_fl_system ctx_flags.system -#define ctx_fl_frozen ctx_flags.frozen -#define ctx_fl_exclintr ctx_flags.exclintr + u64 ctx_saved_psr; /* copy of psr used for lazy ctxsw */ + unsigned long ctx_saved_cpus_allowed; /* copy of the task cpus_allowed (system wide) */ + unsigned long ctx_cpu; /* cpu to which perfmon is applied (system wide) */ -#define CTX_OVFL_NOBLOCK(c) ((c)->ctx_fl_noblock == 1) -#define CTX_INHERIT_MODE(c) ((c)->ctx_fl_inherit) -#define CTX_HAS_SMPL(c) ((c)->ctx_smpl_buf != NULL) + atomic_t ctx_saving_in_progress; /* flag indicating actual save in progress */ + atomic_t ctx_last_cpu; /* CPU id of current or last CPU used */ +} pfm_context_t; -static pmu_config_t pmu_conf; +#define ctx_fl_inherit ctx_flags.inherit +#define ctx_fl_block ctx_flags.block +#define ctx_fl_system ctx_flags.system +#define ctx_fl_frozen ctx_flags.frozen +#define ctx_fl_protected ctx_flags.protected +#define ctx_fl_using_dbreg ctx_flags.using_dbreg -/* for debug only */ -static int pfm_debug=0; /* 0= nodebug, >0= debug output on */ +/* + * global information about all sessions + * mostly used to synchronize between system wide and per-process + */ +typedef struct { + spinlock_t pfs_lock; /* lock the structure */ -#define DBprintk(a) \ - do { \ - if (pfm_debug >0) { printk(__FUNCTION__" %d: ", __LINE__); printk a; } \ - } while (0); + unsigned long pfs_task_sessions; /* number of per task sessions */ + unsigned long pfs_sys_sessions; /* number of per system wide sessions */ + unsigned long pfs_sys_use_dbregs; /* incremented when a system wide session uses debug regs */ + unsigned long pfs_ptrace_use_dbregs; /* incremented when a process uses debug regs */ + struct task_struct *pfs_sys_session[NR_CPUS]; /* point to task owning a system-wide session */ +} pfm_session_t; -static void ia64_reset_pmu(void); +/* + * structure used to pass argument to/from remote CPU + * using IPI to check and possibly save the PMU context on SMP systems. + * + * not used in UP kernels + */ +typedef struct { + struct task_struct *task; /* which task we are interested in */ + int retval; /* return value of the call: 0=you can proceed, 1=need to wait for completion */ +} pfm_smp_ipi_arg_t; /* - * structure used to pass information between the interrupt handler - * and the tasklet. 
+ * perfmon command descriptions */ typedef struct { - pid_t to_pid; /* which process to notify */ - pid_t from_pid; /* which process is source of overflow */ - int sig; /* with which signal */ - unsigned long bitvect; /* which counters have overflowed */ -} notification_info_t; + int (*cmd_func)(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs); + int cmd_flags; + unsigned int cmd_narg; + size_t cmd_argsize; +} pfm_cmd_desc_t; + +#define PFM_CMD_PID 0x1 /* command requires pid argument */ +#define PFM_CMD_ARG_READ 0x2 /* command must read argument(s) */ +#define PFM_CMD_ARG_WRITE 0x4 /* command must write argument(s) */ +#define PFM_CMD_CTX 0x8 /* command needs a perfmon context */ +#define PFM_CMD_NOCHK 0x10 /* command does not need to check task's state */ + +#define PFM_CMD_IDX(cmd) (cmd) + +#define PFM_CMD_IS_VALID(cmd) ((PFM_CMD_IDX(cmd) >= 0) && (PFM_CMD_IDX(cmd) < PFM_CMD_COUNT) \ + && pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_func != NULL) + +#define PFM_CMD_USE_PID(cmd) ((pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_flags & PFM_CMD_PID) != 0) +#define PFM_CMD_READ_ARG(cmd) ((pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_flags & PFM_CMD_ARG_READ) != 0) +#define PFM_CMD_WRITE_ARG(cmd) ((pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_flags & PFM_CMD_ARG_WRITE) != 0) +#define PFM_CMD_USE_CTX(cmd) ((pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_flags & PFM_CMD_CTX) != 0) +#define PFM_CMD_CHK(cmd) ((pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_flags & PFM_CMD_NOCHK) == 0) + +#define PFM_CMD_ARG_MANY -1 /* cannot be zero */ +#define PFM_CMD_NARG(cmd) (pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_narg) +#define PFM_CMD_ARG_SIZE(cmd) (pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_argsize) -typedef struct { - unsigned long pfs_proc_sessions; - unsigned long pfs_sys_session; /* can only be 0/1 */ - unsigned long pfs_dfl_dcr; /* XXX: hack */ - unsigned int pfs_pp; -} pfm_session_t; +/* + * perfmon internal variables + */ +static pmu_config_t pmu_conf; /* PMU configuration */ +static int pfm_debug_mode; /* 0= nodebug, >0= debug output on */ +static pfm_session_t pfm_sessions; /* global sessions information */ +static struct proc_dir_entry *perfmon_dir; /* for debug only */ +static unsigned long pfm_spurious_ovfl_intr_count; /* keep track of spurious ovfl interrupts */ +static unsigned long pfm_ovfl_intr_count; /* keep track of spurious ovfl interrupts */ +static unsigned long pfm_recorded_samples_count; + +static void pfm_vm_close(struct vm_area_struct * area); +static struct vm_operations_struct pfm_vm_ops={ + close: pfm_vm_close +}; -struct { +/* + * keep track of task owning the PMU per CPU. 
+ */ +static struct { struct task_struct *owner; } ____cacheline_aligned pmu_owners[NR_CPUS]; -/* - * helper macros - */ -#define SET_PMU_OWNER(t) do { pmu_owners[smp_processor_id()].owner = (t); } while(0); -#define PMU_OWNER() pmu_owners[smp_processor_id()].owner - -#ifdef CONFIG_SMP -#define PFM_CAN_DO_LAZY() (smp_num_cpus==1 && pfs_info.pfs_sys_session==0) -#else -#define PFM_CAN_DO_LAZY() (pfs_info.pfs_sys_session==0) -#endif +/* + * forward declarations + */ +static void ia64_reset_pmu(struct task_struct *); +static void pfm_fetch_regs(int cpu, struct task_struct *task, pfm_context_t *ctx); static void pfm_lazy_save_regs (struct task_struct *ta); -/* for debug only */ -static struct proc_dir_entry *perfmon_dir; +static inline unsigned long +pfm_read_soft_counter(pfm_context_t *ctx, int i) +{ + return ctx->ctx_soft_pmds[i].val + (ia64_get_pmd(i) & pmu_conf.perf_ovfl_val); +} -/* - * XXX: hack to indicate that a system wide monitoring session is active - */ -static pfm_session_t pfs_info; +static inline void +pfm_write_soft_counter(pfm_context_t *ctx, int i, unsigned long val) +{ + ctx->ctx_soft_pmds[i].val = val & ~pmu_conf.perf_ovfl_val; + /* + * writing to unimplemented part is ignored, so we do not need to + * mask off top part + */ + ia64_set_pmd(i, val); +} /* * finds the number of PM(C|D) registers given @@ -324,10 +374,10 @@ * Generates a unique (per CPU) timestamp */ static inline unsigned long -perfmon_get_stamp(void) +pfm_get_stamp(void) { /* - * XXX: maybe find something more efficient + * XXX: must find something more efficient */ return ia64_get_itc(); } @@ -336,15 +386,16 @@ * This is used when initializing the contents of the area. */ static inline unsigned long -kvirt_to_pa(unsigned long adr) +pfm_kvirt_to_pa(unsigned long adr) { __u64 pa = ia64_tpa(adr); - DBprintk(("kv2pa(%lx-->%lx)\n", adr, pa)); + //DBprintk(("kv2pa(%lx-->%lx)\n", adr, pa)); return pa; } + static void * -rvmalloc(unsigned long size) +pfm_rvmalloc(unsigned long size) { void *mem; unsigned long adr; @@ -352,6 +403,7 @@ size=PAGE_ALIGN(size); mem=vmalloc(size); if (mem) { + //printk("perfmon: CPU%d pfm_rvmalloc(%ld)=%p\n", smp_processor_id(), size, mem); memset(mem, 0, size); /* Clear the ram out, no junk to the user */ adr=(unsigned long) mem; while (size > 0) { @@ -364,37 +416,145 @@ } static void -rvfree(void *mem, unsigned long size) +pfm_rvfree(void *mem, unsigned long size) { unsigned long adr; if (mem) { adr=(unsigned long) mem; - while ((long) size > 0) { - mem_map_unreserve(vmalloc_to_page((void *)adr)); + while ((long) size > 0) { + mem_map_unreserve(vmalloc_to_page((void*)adr)); adr+=PAGE_SIZE; size-=PAGE_SIZE; } vfree(mem); } + return; +} + +/* + * This function gets called from mm/mmap.c:exit_mmap() only when there is a sampling buffer + * attached to the context AND the current task has a mapping for it, i.e., it is the original + * creator of the context. + * + * This function is used to remember the fact that the vma describing the sampling buffer + * has now been removed. It can only be called when no other tasks share the same mm context. + * + */ +static void +pfm_vm_close(struct vm_area_struct *vma) +{ + pfm_smpl_buffer_desc_t *psb = (pfm_smpl_buffer_desc_t *)vma->vm_private_data; + + if (psb == NULL) { + printk("perfmon: psb is null in [%d]\n", current->pid); + return; + } + /* + * Add PSB to list of buffers to free on release_thread() when no more users + * + * This call is safe because, once the count is zero it cannot be modified anymore.
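A brief aside on the pfm_read_soft_counter()/pfm_write_soft_counter() helpers introduced above: they split a 64-bit software value around pmu_conf.perf_ovfl_val, keeping the upper bits in ctx_soft_pmds and the low bits in the hardware PMD. The self-contained sketch below restates that split; the 47-bit width and the plain variables standing in for the context and the PMD are assumptions made purely for illustration.

/* Illustrative only: 64-bit virtualization of a narrower hardware counter. */
#include <stdint.h>

#define PMD_MASK  ((1ULL << 47) - 1)	/* assumed hardware counter width */

static uint64_t soft_val;	/* upper bits, kept in software   */
static uint64_t hw_pmd;		/* stands in for the hardware PMD */

static uint64_t read_soft_counter(void)
{
	return soft_val + (hw_pmd & PMD_MASK);
}

static void write_soft_counter(uint64_t val)
{
	soft_val = val & ~PMD_MASK;	/* software keeps the high part      */
	hw_pmd   = val & PMD_MASK;	/* hardware drops unimplemented bits */
}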
+ * The fact that there are no more users of the mm context does not mean that the sampling + * buffer is no longer being used outside of this task. In fact, it can still + * be accessed from within the kernel by another task (such as the monitored task). + * + * Therefore, we only move the psb into the list of buffers to free when we know + * nobody else is using it. + * The linked list is independent of the perfmon context, because in the case of + * multi-threaded processes, the last thread may not have been involved with + * monitoring; however, it will be the one removing the vma and it should therefore + * also remove the sampling buffer. This buffer cannot be removed until the vma + * is removed. + * + * This function cannot remove the buffer from here, because exit_mmap() must first + * complete. Given that there is no other vma related callback in the generic code, + * we have created our own with the linked list of sampling buffers to free, which + * is part of the thread structure. In release_thread() we check if the list is + * empty. If not, we call into perfmon to free the buffer and psb. That is the only + * way to ensure a safe deallocation of the sampling buffer which works when + * the buffer is shared between distinct processes or with multi-threaded programs. + * + * We need to lock the psb because the refcnt test and flag manipulation must + * look like an atomic operation vis-a-vis pfm_context_exit() + */ + LOCK_PSB(psb); + + if (psb->psb_refcnt == 0) { + + psb->psb_next = current->thread.pfm_smpl_buf_list; + current->thread.pfm_smpl_buf_list = psb; + + DBprintk(("psb for [%d] smpl @%p size %ld inserted into list\n", + current->pid, psb->psb_hdr, psb->psb_size)); + } + DBprintk(("psb vma flag cleared for [%d] smpl @%p size %ld inserted into list\n", + current->pid, psb->psb_hdr, psb->psb_size)); + + /* + * indicate to pfm_context_exit() that the vma has been removed. + */ + psb->psb_flags &= ~PFM_PSB_VMA; + + UNLOCK_PSB(psb); +} + +/* + * This function is called from pfm_destroy_context() and also from pfm_inherit() + * to explicitly remove the sampling buffer mapping from the user level address space.
+ */ +static int +pfm_remove_smpl_mapping(struct task_struct *task) +{ + pfm_context_t *ctx = task->thread.pfm_context; + pfm_smpl_buffer_desc_t *psb; + int r; + + /* + * some sanity checks first + */ + if (ctx == NULL || task->mm == NULL || ctx->ctx_smpl_vaddr == 0 || ctx->ctx_psb == NULL) { + printk("perfmon: invalid context mm=%p\n", task->mm); + return -1; + } + psb = ctx->ctx_psb; + + down_write(&task->mm->mmap_sem); + + r = do_munmap(task->mm, ctx->ctx_smpl_vaddr, psb->psb_size); + + up_write(&task->mm->mmap_sem); + if (r !=0) { + printk("perfmon: pid %d unable to unmap sampling buffer @0x%lx size=%ld\n", + task->pid, ctx->ctx_smpl_vaddr, psb->psb_size); + } + DBprintk(("[%d] do_unmap(0x%lx, %ld)=%d\n", + task->pid, ctx->ctx_smpl_vaddr, psb->psb_size, r)); + + /* + * make sure we suppress all traces of this buffer + * (important for pfm_inherit) + */ + ctx->ctx_smpl_vaddr = 0; + + return 0; } static pfm_context_t * pfm_context_alloc(void) { - pfm_context_t *pfc; + pfm_context_t *ctx; /* allocate context descriptor */ - pfc = vmalloc(sizeof(*pfc)); - if (pfc) memset(pfc, 0, sizeof(*pfc)); - - return pfc; + ctx = kmalloc(sizeof(pfm_context_t), GFP_KERNEL); + if (ctx) memset(ctx, 0, sizeof(pfm_context_t)); + + return ctx; } static void -pfm_context_free(pfm_context_t *pfc) +pfm_context_free(pfm_context_t *ctx) { - if (pfc) vfree(pfc); + if (ctx) kfree(ctx); } static int @@ -402,11 +562,13 @@ { unsigned long page; + DBprintk(("CPU%d buf=0x%lx addr=0x%lx size=%ld\n", smp_processor_id(), buf, addr, size)); + while (size > 0) { - page = kvirt_to_pa(buf); + page = pfm_kvirt_to_pa(buf); if (remap_page_range(vma, addr, page, PAGE_SIZE, PAGE_SHARED)) return -ENOMEM; - + addr += PAGE_SIZE; buf += PAGE_SIZE; size -= PAGE_SIZE; @@ -426,7 +588,7 @@ for (i=0; i < size; i++, which++) res += hweight64(*which); - DBprintk((" res=%ld\n", res)); + DBprintk(("weight=%ld\n", res)); return res; } @@ -435,15 +597,16 @@ * Allocates the sampling buffer and remaps it into caller's address space */ static int -pfm_smpl_buffer_alloc(pfm_context_t *ctx, unsigned long which_pmds, unsigned long entries, void **user_addr) +pfm_smpl_buffer_alloc(pfm_context_t *ctx, unsigned long *which_pmds, unsigned long entries, + void **user_vaddr) { struct mm_struct *mm = current->mm; - struct vm_area_struct *vma; - unsigned long addr, size, regcount; + struct vm_area_struct *vma = NULL; + unsigned long size, regcount; void *smpl_buf; pfm_smpl_buffer_desc_t *psb; - regcount = pfm_smpl_entry_size(&which_pmds, 1); + regcount = pfm_smpl_entry_size(which_pmds, 1); /* note that regcount might be 0, in this case only the header for each * entry will be recorded. @@ -456,132 +619,206 @@ + entries * (sizeof(perfmon_smpl_entry_t) + regcount*sizeof(u64))); /* * check requested size to avoid Denial-of-service attacks - * XXX: may have to refine this test + * XXX: may have to refine this test + * Check against address space limit. + * + * if ((mm->total_vm << PAGE_SHIFT) + len> current->rlim[RLIMIT_AS].rlim_cur) + * return -ENOMEM; */ if (size > current->rlim[RLIMIT_MEMLOCK].rlim_cur) return -EAGAIN; - /* find some free area in address space */ - addr = get_unmapped_area(NULL, 0, size, 0, MAP_PRIVATE); - if (!addr) goto no_addr; + /* + * We do the easy to undo allocations first. 
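As a worked example of the sampling buffer size computation earlier in this hunk (size = PAGE_ALIGN(header + entries * (entry header + regcount * sizeof(u64)))), the structure sizes and page size below are placeholders chosen only to make the arithmetic concrete; they are not the real sizeof values.

/* Illustrative only: sampling buffer sizing with assumed structure sizes. */
#include <stdio.h>

#define PAGE_SZ          16384UL                          /* assumed page size */
#define PAGE_ALIGN_UP(x) (((x) + PAGE_SZ - 1) & ~(PAGE_SZ - 1))

int main(void)
{
	unsigned long hdr_size   = 32;    /* assumed sizeof(perfmon_smpl_hdr_t)   */
	unsigned long entry_size = 48;    /* assumed sizeof(perfmon_smpl_entry_t) */
	unsigned long entries    = 4096;  /* requested number of samples          */
	unsigned long regcount   = 2;     /* PMDs recorded per sample             */
	unsigned long size;

	size = PAGE_ALIGN_UP(hdr_size + entries * (entry_size + regcount * 8));
	printf("buffer: %lu bytes (%lu pages)\n", size, size / PAGE_SZ);
	return 0;
}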
+ * + * pfm_rvmalloc() clears the buffer, so there is no leak + */ + smpl_buf = pfm_rvmalloc(size); + if (smpl_buf == NULL) { + DBprintk(("Can't allocate sampling buffer\n")); + return -ENOMEM; + } + + DBprintk(("smpl_buf @%p\n", smpl_buf)); - DBprintk((" entries=%ld aligned size=%ld, unmapped @0x%lx\n", entries, size, addr)); + /* allocate sampling buffer descriptor now */ + psb = kmalloc(sizeof(*psb), GFP_KERNEL); + if (psb == NULL) { + DBprintk(("Can't allocate sampling buffer descriptor\n")); + pfm_rvfree(smpl_buf, size); + return -ENOMEM; + } /* allocate vma */ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL); - if (!vma) goto no_vma; - + if (!vma) { + DBprintk(("Cannot allocate vma\n")); + goto error; + } /* - * initialize the vma for the sampling buffer + * partially initialize the vma for the sampling buffer */ - vma->vm_mm = mm; - vma->vm_start = addr; - vma->vm_end = addr + size; - vma->vm_flags = VM_READ|VM_MAYREAD; - vma->vm_page_prot = PAGE_READONLY; /* XXX may need to change */ - vma->vm_ops = NULL; - vma->vm_pgoff = 0; - vma->vm_file = NULL; - vma->vm_raend = 0; + vma->vm_flags = VM_READ| VM_MAYREAD |VM_RESERVED; + vma->vm_page_prot = PAGE_READONLY; /* XXX may need to change */ + vma->vm_ops = &pfm_vm_ops; /* necessary to get the close() callback */ + vma->vm_pgoff = 0; + vma->vm_file = NULL; + vma->vm_raend = 0; + vma->vm_private_data = psb; /* information needed by the pfm_vm_close() function */ - smpl_buf = rvmalloc(size); - if (smpl_buf == NULL) goto no_buffer; - - DBprintk((" smpl_buf @%p\n", smpl_buf)); - - if (pfm_remap_buffer(vma, (unsigned long)smpl_buf, addr, size)) goto cant_remap; - - /* allocate sampling buffer descriptor now */ - psb = vmalloc(sizeof(*psb)); - if (psb == NULL) goto no_buffer_desc; - - /* start with something clean */ - memset(smpl_buf, 0x0, size); + /* + * Now we have everything we need and we can initialize + * and connect all the data structures + */ psb->psb_hdr = smpl_buf; - psb->psb_addr = (char *)smpl_buf+sizeof(perfmon_smpl_hdr_t); /* first entry */ + psb->psb_addr = ((char *)smpl_buf)+sizeof(perfmon_smpl_hdr_t); /* first entry */ psb->psb_size = size; /* aligned size */ psb->psb_index = 0; psb->psb_entries = entries; + psb->psb_flags = PFM_PSB_VMA; /* remember that there is a vma describing the buffer */ + psb->psb_refcnt = 1; - atomic_set(&psb->psb_refcnt, 1); + spin_lock_init(&psb->psb_lock); + /* + * XXX: will need to do cacheline alignment to avoid false sharing in SMP mode and + * multitask monitoring. + */ psb->psb_entry_size = sizeof(perfmon_smpl_entry_t) + regcount*sizeof(u64); - DBprintk((" psb @%p entry_size=%ld hdr=%p addr=%p\n", (void *)psb,psb->psb_entry_size, (void *)psb->psb_hdr, (void *)psb->psb_addr)); + DBprintk(("psb @%p entry_size=%ld hdr=%p addr=%p\n", + (void *)psb,psb->psb_entry_size, (void *)psb->psb_hdr, + (void *)psb->psb_addr)); - /* initialize some of the fields of header */ - psb->psb_hdr->hdr_version = PFM_SMPL_HDR_VERSION; - psb->psb_hdr->hdr_entry_size = sizeof(perfmon_smpl_entry_t)+regcount*sizeof(u64); - psb->psb_hdr->hdr_pmds = which_pmds; + /* initialize some of the fields of user visible buffer header */ + psb->psb_hdr->hdr_version = PFM_SMPL_VERSION; + psb->psb_hdr->hdr_entry_size = psb->psb_entry_size; + psb->psb_hdr->hdr_pmds[0] = which_pmds[0]; - /* store which PMDS to record */ - ctx->ctx_smpl_regs = which_pmds; + /* + * Let's do the difficult operations next. + * + * now we atomically find some area in the address space and + * remap the buffer in it.
+ */ + down_write(&current->mm->mmap_sem); + + /* find some free area in address space, must have mmap sem held */ + vma->vm_start = get_unmapped_area(NULL, 0, size, 0, MAP_PRIVATE|MAP_ANONYMOUS); + if (vma->vm_start == 0UL) { + DBprintk(("Cannot find unmapped area for size %ld\n", size)); + up_write(&current->mm->mmap_sem); + goto error; + } + vma->vm_end = vma->vm_start + size; + + DBprintk(("entries=%ld aligned size=%ld, unmapped @0x%lx\n", entries, size, vma->vm_start)); + + /* can only be applied to current, need to have the mm semaphore held when called */ + if (pfm_remap_buffer(vma, (unsigned long)smpl_buf, vma->vm_start, size)) { + DBprintk(("Can't remap buffer\n")); + up_write(&current->mm->mmap_sem); + goto error; + } /* - * now insert the vma in the vm list for the process + * now insert the vma in the vm list for the process, must be + * done with mmap lock held */ insert_vm_struct(mm, vma); mm->total_vm += size >> PAGE_SHIFT; + up_write(&current->mm->mmap_sem); + + /* store which PMDS to record */ + ctx->ctx_smpl_regs[0] = which_pmds[0]; + + + /* link to perfmon context */ + ctx->ctx_psb = psb; + /* - * that's the address returned to the user + * keep track of user level virtual address */ - *user_addr = (void *)addr; + ctx->ctx_smpl_vaddr = *(unsigned long *)user_vaddr = vma->vm_start; return 0; - /* outlined error handling */ -no_addr: - DBprintk(("Cannot find unmapped area for size %ld\n", size)); - return -ENOMEM; -no_vma: - DBprintk(("Cannot allocate vma\n")); - return -ENOMEM; -cant_remap: - DBprintk(("Can't remap buffer\n")); - rvfree(smpl_buf, size); -no_buffer: - DBprintk(("Can't allocate sampling buffer\n")); - kmem_cache_free(vm_area_cachep, vma); - return -ENOMEM; -no_buffer_desc: - DBprintk(("Can't allocate sampling buffer descriptor\n")); - kmem_cache_free(vm_area_cachep, vma); - rvfree(smpl_buf, size); +error: + pfm_rvfree(smpl_buf, size); + kfree(psb); return -ENOMEM; } +/* + * XXX: do something better here + */ static int -pfx_is_sane(pfreq_context_t *pfx) +pfm_bad_permissions(struct task_struct *task) +{ + /* stolen from bad_signal() */ + return (current->session != task->session) + && (current->euid ^ task->suid) && (current->euid ^ task->uid) + && (current->uid ^ task->suid) && (current->uid ^ task->uid); +} + + +static int +pfx_is_sane(struct task_struct *task, pfarg_context_t *pfx) { int ctx_flags; + int cpu; /* valid signal */ - //if (pfx->notify_sig < 1 || pfx->notify_sig >= _NSIG) return -EINVAL; - if (pfx->notify_sig !=0 && pfx->notify_sig != SIGPROF) return -EINVAL; /* cannot send to process 1, 0 means do not notify */ - if (pfx->notify_pid < 0 || pfx->notify_pid == 1) return -EINVAL; - - ctx_flags = pfx->flags; + if (pfx->ctx_notify_pid == 1) { + DBprintk(("invalid notify_pid %d\n", pfx->ctx_notify_pid)); + return -EINVAL; + } + ctx_flags = pfx->ctx_flags; if (ctx_flags & PFM_FL_SYSTEM_WIDE) { -#ifdef CONFIG_SMP - if (smp_num_cpus > 1) { - printk("perfmon: system wide monitoring on SMP not yet supported\n"); + DBprintk(("cpu_mask=0x%lx\n", pfx->ctx_cpu_mask)); + /* + * cannot block in this mode + */ + if (ctx_flags & PFM_FL_NOTIFY_BLOCK) { + DBprintk(("cannot use blocking mode when in system wide monitoring\n")); return -EINVAL; } -#endif - if ((ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) == 0) { - printk("perfmon: system wide monitoring cannot use blocking notification mode\n"); + /* + * must only have one bit set in the CPU mask + */ + if
(hweight64(pfx->ctx_cpu_mask) != 1UL) { + DBprintk(("invalid CPU mask specified\n")); + return -EINVAL; + } + /* + * and it must be a valid CPU + */ + cpu = ffs(pfx->ctx_cpu_mask); + if (cpu > smp_num_cpus) { + DBprintk(("CPU%d is not online\n", cpu)); + return -EINVAL; + } + /* + * check for pre-existing pinning, if conflicting reject + */ + if (task->cpus_allowed != ~0UL && (task->cpus_allowed & (1UL<pid, + task->cpus_allowed, cpu)); return -EINVAL; } + + } else { + /* + * must provide a target for the signal in blocking mode even when + * no counter is configured with PFM_FL_REG_OVFL_NOTIFY + */ + if ((ctx_flags & PFM_FL_NOTIFY_BLOCK) && pfx->ctx_notify_pid == 0) return -EINVAL; } /* probably more to add here */ @@ -589,68 +826,97 @@ } static int -pfm_context_create(int flags, perfmon_req_t *req) +pfm_create_context(struct task_struct *task, pfm_context_t *ctx, void *req, int count, + struct pt_regs *regs) { - pfm_context_t *ctx; - struct task_struct *task = NULL; - perfmon_req_t tmp; + pfarg_context_t tmp; void *uaddr = NULL; - int ret; + int ret, cpu = 0; int ctx_flags; - pid_t pid; + pid_t notify_pid; - /* to go away */ - if (flags) { - printk("perfmon: use context flags instead of perfmon() flags. Obsoleted API\n"); - } + /* a context has already been defined */ + if (ctx) return -EBUSY; + + /* + * not yet supported + */ + if (task != current) return -EINVAL; if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT; - ret = pfx_is_sane(&tmp.pfr_ctx); + ret = pfx_is_sane(task, &tmp); if (ret < 0) return ret; - ctx_flags = tmp.pfr_ctx.flags; + ctx_flags = tmp.ctx_flags; + + ret = -EBUSY; + + LOCK_PFS(); if (ctx_flags & PFM_FL_SYSTEM_WIDE) { + + /* at this point, we know there is at least one bit set */ + cpu = ffs(tmp.ctx_cpu_mask) - 1; + + DBprintk(("requesting CPU%d currently on CPU%d\n",cpu, smp_processor_id())); + + if (pfm_sessions.pfs_task_sessions > 0) { + DBprintk(("system wide not possible, task_sessions=%ld\n", pfm_sessions.pfs_task_sessions)); + goto abort; + } + + if (pfm_sessions.pfs_sys_session[cpu]) { + DBprintk(("system wide not possible, conflicting session [%d] on CPU%d\n",pfm_sessions.pfs_sys_session[cpu]->pid, cpu)); + goto abort; + } + pfm_sessions.pfs_sys_session[cpu] = task; /* - * XXX: This is not AT ALL SMP safe + * count the number of system wide sessions */ - if (pfs_info.pfs_proc_sessions > 0) return -EBUSY; - if (pfs_info.pfs_sys_session > 0) return -EBUSY; - - pfs_info.pfs_sys_session = 1; + pfm_sessions.pfs_sys_sessions++; - } else if (pfs_info.pfs_sys_session >0) { + } else if (pfm_sessions.pfs_sys_sessions == 0) { + pfm_sessions.pfs_task_sessions++; + } else { /* no per-process monitoring while there is a system wide session */ - return -EBUSY; - } else - pfs_info.pfs_proc_sessions++; + goto abort; + } + + UNLOCK_PFS(); + + ret = -ENOMEM; ctx = pfm_context_alloc(); if (!ctx) goto error; - /* record the creator (debug only) */ - ctx->ctx_creator = current; + /* record the creator (important for inheritance) */ + ctx->ctx_owner = current; + + notify_pid = tmp.ctx_notify_pid; - pid = tmp.pfr_ctx.notify_pid; + spin_lock_init(&ctx->ctx_lock); - spin_lock_init(&ctx->ctx_notify_lock); + if (notify_pid == current->pid) { - if (pid == current->pid) { ctx->ctx_notify_task = task = current; current->thread.pfm_context = ctx; - atomic_set(¤t->thread.pfm_notifiers_check, 1); + } else if (notify_pid!=0) { + struct task_struct *notify_task; - } else if (pid!=0) { read_lock(&tasklist_lock); - task = find_task_by_pid(pid); - if (task) { + notify_task = 
find_task_by_pid(notify_pid); + + if (notify_task) { + + ret = -EPERM; + /* - * record who to notify - */ - ctx->ctx_notify_task = task; + * check if we can send this task a signal + */ + if (pfm_bad_permissions(notify_task)) goto buffer_error; /* * make visible @@ -669,7 +935,9 @@ * task has been detached from the tasklist otherwise you are * exposed to race conditions. */ - atomic_add(1, &task->thread.pfm_notifiers_check); + atomic_add(1, &ctx->ctx_notify_task->thread.pfm_notifiers_check); + + ctx->ctx_notify_task = notify_task; } read_unlock(&tasklist_lock); } @@ -677,37 +945,48 @@ /* * notification process does not exist */ - if (pid != 0 && task == NULL) { + if (notify_pid != 0 && ctx->ctx_notify_task == NULL) { ret = -EINVAL; goto buffer_error; } - ctx->ctx_notify_sig = SIGPROF; /* siginfo imposes a fixed signal */ - - if (tmp.pfr_ctx.smpl_entries) { - DBprintk((" sampling entries=%ld\n",tmp.pfr_ctx.smpl_entries)); + if (tmp.ctx_smpl_entries) { + DBprintk(("sampling entries=%ld\n",tmp.ctx_smpl_entries)); - ret = pfm_smpl_buffer_alloc(ctx, tmp.pfr_ctx.smpl_regs, - tmp.pfr_ctx.smpl_entries, &uaddr); + ret = pfm_smpl_buffer_alloc(ctx, tmp.ctx_smpl_regs, + tmp.ctx_smpl_entries, &uaddr); if (ret<0) goto buffer_error; - tmp.pfr_ctx.smpl_vaddr = uaddr; + tmp.ctx_smpl_vaddr = uaddr; } /* initialization of context's flags */ - ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK; - ctx->ctx_fl_noblock = (ctx_flags & PFM_FL_SMPL_OVFL_NOBLOCK) ? 1 : 0; - ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEM_WIDE) ? 1: 0; - ctx->ctx_fl_exclintr = (ctx_flags & PFM_FL_EXCL_INTR) ? 1: 0; - ctx->ctx_fl_frozen = 0; + ctx->ctx_fl_inherit = ctx_flags & PFM_FL_INHERIT_MASK; + ctx->ctx_fl_block = (ctx_flags & PFM_FL_NOTIFY_BLOCK) ? 1 : 0; + ctx->ctx_fl_system = (ctx_flags & PFM_FL_SYSTEM_WIDE) ? 1: 0; + ctx->ctx_fl_frozen = 0; + /* + * setting this flag to 0 here means, that the creator or the task that the + * context is being attached are granted access. Given that a context can only + * be created for the calling process this, in effect only allows the creator + * to access the context. See pfm_protect() for more. + */ + ctx->ctx_fl_protected = 0; + + /* for system wide mode only (only 1 bit set) */ + ctx->ctx_cpu = cpu; + + atomic_set(&ctx->ctx_last_cpu,-1); /* SMP only, means no CPU */ /* * Keep track of the pmds we want to sample * XXX: may be we don't need to save/restore the DEAR/IEAR pmds * but we do need the BTB for sure. This is because of a hardware * buffer of 1 only for non-BTB pmds. 
+ * + * We ignore the unimplemented pmds specified by the user */ - ctx->ctx_used_pmds[0] = tmp.pfr_ctx.smpl_regs; - ctx->ctx_used_pmcs[0] = 1; /* always save/restore PMC[0] */ + ctx->ctx_used_pmds[0] = tmp.ctx_smpl_regs[0] & pmu_conf.impl_regs[4]; + ctx->ctx_saved_pmcs[0] = 1; /* always save/restore PMC[0] */ sema_init(&ctx->ctx_restart_sem, 0); /* init this semaphore to locked */ @@ -717,31 +996,27 @@ goto buffer_error; } - DBprintk((" context=%p, pid=%d notify_sig %d notify_task=%p\n",(void *)ctx, current->pid, ctx->ctx_notify_sig, ctx->ctx_notify_task)); - DBprintk((" context=%p, pid=%d flags=0x%x inherit=%d noblock=%d system=%d\n",(void *)ctx, current->pid, ctx_flags, ctx->ctx_fl_inherit, ctx->ctx_fl_noblock, ctx->ctx_fl_system)); + DBprintk(("context=%p, pid=%d notify_task=%p\n", + (void *)ctx, task->pid, ctx->ctx_notify_task)); + + DBprintk(("context=%p, pid=%d flags=0x%x inherit=%d block=%d system=%d\n", + (void *)ctx, task->pid, ctx_flags, ctx->ctx_fl_inherit, + ctx->ctx_fl_block, ctx->ctx_fl_system)); /* * when no notification is required, we can make this visible at the last moment */ - if (pid == 0) current->thread.pfm_context = ctx; - + if (notify_pid == 0) task->thread.pfm_context = ctx; /* - * by default, we always include interrupts for system wide - * DCR.pp is set by default to zero by kernel in cpu_init() + * pin task to CPU and force reschedule on exit to ensure + * that when back to user level the task runs on the designated + * CPU. */ if (ctx->ctx_fl_system) { - if (ctx->ctx_fl_exclintr == 0) { - unsigned long dcr = ia64_get_dcr(); - - ia64_set_dcr(dcr|IA64_DCR_PP); - /* - * keep track of the kernel default value - */ - pfs_info.pfs_dfl_dcr = dcr; - - DBprintk((" dcr.pp is set\n")); - } - } + ctx->ctx_saved_cpus_allowed = task->cpus_allowed; + set_cpus_allowed(task, 1UL << cpu); + DBprintk(("[%d] rescheduled allowed=0x%lx\n", task->pid,task->cpus_allowed)); + } return 0; @@ -751,225 +1026,492 @@ /* * undo session reservation */ + LOCK_PFS(); + if (ctx_flags & PFM_FL_SYSTEM_WIDE) { - pfs_info.pfs_sys_session = 0; + pfm_sessions.pfs_sys_session[cpu] = NULL; + pfm_sessions.pfs_sys_sessions--; } else { - pfs_info.pfs_proc_sessions--; + pfm_sessions.pfs_task_sessions--; } +abort: + UNLOCK_PFS(); + return ret; } static void -pfm_reset_regs(pfm_context_t *ctx) +pfm_reset_regs(pfm_context_t *ctx, unsigned long *ovfl_regs, int flag) { - unsigned long mask = ctx->ctx_ovfl_regs; - int i, cnum; + unsigned long mask = ovfl_regs[0]; + unsigned long reset_others = 0UL; + unsigned long val; + int i; + + DBprintk(("masks=0x%lx\n", mask)); - DBprintk((" ovfl_regs=0x%lx\n", mask)); /* * now restore reset value on sampling overflowed counters */ - for(i=0, cnum=PMU_FIRST_COUNTER; i < pmu_conf.max_counters; i++, cnum++, mask >>= 1) { + mask >>= PMU_FIRST_COUNTER; + for(i = PMU_FIRST_COUNTER; mask; i++, mask >>= 1) { if (mask & 0x1) { - DBprintk((" reseting PMD[%d]=%lx\n", cnum, ctx->ctx_pmds[i].smpl_rval & pmu_conf.perf_ovfl_val)); + val = flag == PFM_RELOAD_LONG_RESET ? + ctx->ctx_soft_pmds[i].long_reset: + ctx->ctx_soft_pmds[i].short_reset; + + reset_others |= ctx->ctx_soft_pmds[i].reset_pmds[0]; + + DBprintk(("[%d] %s reset soft_pmd[%d]=%lx\n", + current->pid, + flag == PFM_RELOAD_LONG_RESET ? "long" : "short", i, val)); /* upper part is ignored on rval */ - ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval); + pfm_write_soft_counter(ctx, i, val); + } + } - /* - * we must reset BTB index (clears pmd16.full to make - * sure we do not report the same branches twice. 
- * The non-blocking case in handled in update_counters() - */ - if (cnum == ctx->ctx_btb_counter) { - DBprintk(("reseting PMD16\n")); - ia64_set_pmd(16, 0); - } + /* + * Now take care of resetting the other registers + */ + for(i = 0; reset_others; i++, reset_others >>= 1) { + + if ((reset_others & 0x1) == 0) continue; + + val = flag == PFM_RELOAD_LONG_RESET ? + ctx->ctx_soft_pmds[i].long_reset: + ctx->ctx_soft_pmds[i].short_reset; + + if (PMD_IS_COUNTING(i)) { + pfm_write_soft_counter(ctx, i, val); + } else { + ia64_set_pmd(i, val); } + + DBprintk(("[%d] %s reset_others pmd[%d]=%lx\n", + current->pid, + flag == PFM_RELOAD_LONG_RESET ? "long" : "short", i, val)); } /* just in case ! */ - ctx->ctx_ovfl_regs = 0; + ctx->ctx_ovfl_regs[0] = 0UL; } static int -pfm_write_pmcs(struct task_struct *ta, perfmon_req_t *req, int count) +pfm_write_pmcs(struct task_struct *ta, pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs) { struct thread_struct *th = &ta->thread; - pfm_context_t *ctx = th->pfm_context; - perfmon_req_t tmp; - unsigned long cnum; + pfarg_reg_t tmp, *req = (pfarg_reg_t *)arg; + unsigned int cnum; int i; + int ret = 0, reg_retval = 0; + + /* we don't quite support this right now */ + if (ta != current) return -EINVAL; + + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; /* XXX: ctx locking may be required here */ for (i = 0; i < count; i++, req++) { + if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT; - cnum = tmp.pfr_reg.reg_num; + cnum = tmp.reg_num; - /* XXX needs to check validity of the data maybe */ - if (!PMC_IS_IMPL(cnum)) { - DBprintk((" invalid pmc[%ld]\n", cnum)); - return -EINVAL; + /* + * we reject all non implemented PMC as well + * as attempts to modify PMC[0-3] which are used + * as status registers by the PMU + */ + if (!PMC_IS_IMPL(cnum) || cnum < 4) { + DBprintk(("pmc[%u] is unimplemented or invalid\n", cnum)); + ret = -EINVAL; + goto abort_mission; } + /* + * A PMC used to configure monitors must be: + * - system-wide session: privileged monitor + * - per-task : user monitor + * any other configuration is rejected. + */ + if (PMC_IS_MONITOR(cnum)) { + pfm_monitor_t *p = (pfm_monitor_t *)&tmp.reg_value; - if (PMC_IS_COUNTER(cnum)) { + DBprintk(("pmc[%u].pm = %d\n", cnum, p->pmc_pm)); + if (ctx->ctx_fl_system ^ p->pmc_pm) { + //if ((ctx->ctx_fl_system == 1 && p->pmc_pm == 0) + // ||(ctx->ctx_fl_system == 0 && p->pmc_pm == 1)) { + ret = -EINVAL; + goto abort_mission; + } /* - * we keep track of EARS/BTB to speed up sampling later + * enforce generation of overflow interrupt. Necessary on all + * CPUs which do not implement 64-bit hardware counters. 
*/ - if (PMC_IS_DEAR(&tmp.pfr_reg.reg_value)) { - ctx->ctx_dear_counter = cnum; - } else if (PMC_IS_IEAR(&tmp.pfr_reg.reg_value)) { - ctx->ctx_iear_counter = cnum; - } else if (PMC_IS_BTB(&tmp.pfr_reg.reg_value)) { - ctx->ctx_btb_counter = cnum; + p->pmc_oi = 1; + } + + if (PMC_IS_COUNTING(cnum)) { + if (tmp.reg_flags & PFM_REGFL_OVFL_NOTIFY) { + /* + * must have a target for the signal + */ + if (ctx->ctx_notify_task == NULL) { + ret = -EINVAL; + goto abort_mission; + } + + ctx->ctx_soft_pmds[cnum].flags |= PFM_REGFL_OVFL_NOTIFY; } -#if 0 - if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY) - ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY; -#endif + /* + * copy reset vector + */ + ctx->ctx_soft_pmds[cnum].reset_pmds[0] = tmp.reg_reset_pmds[0]; + ctx->ctx_soft_pmds[cnum].reset_pmds[1] = tmp.reg_reset_pmds[1]; + ctx->ctx_soft_pmds[cnum].reset_pmds[2] = tmp.reg_reset_pmds[2]; + ctx->ctx_soft_pmds[cnum].reset_pmds[3] = tmp.reg_reset_pmds[3]; + + /* + * needed in case the user does not initialize the equivalent + * PMD. Clearing is done in reset_pmu() so there is no possible + * leak here. + */ + CTX_USED_PMD(ctx, cnum); } - /* keep track of what we use */ - CTX_USED_PMC(ctx, cnum); - ia64_set_pmc(cnum, tmp.pfr_reg.reg_value); +abort_mission: + if (ret == -EINVAL) reg_retval = PFM_REG_RETFL_EINVAL; - DBprintk((" setting PMC[%ld]=0x%lx flags=0x%x used_pmcs=0%lx\n", cnum, tmp.pfr_reg.reg_value, ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags, ctx->ctx_used_pmcs[0])); + PFM_REG_RETFLAG_SET(tmp.reg_flags, reg_retval); - } - /* - * we have to set this here event hough we haven't necessarily started monitoring - * because we may be context switched out - */ - if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID; + /* + * update register return value, abort all if problem during copy. + */ + if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT; - return 0; + /* + * if there was something wrong on this register, don't touch + * the hardware at all and abort write request for others. + * + * On error, the user mut sequentially scan the table and the first + * entry which has a return flag set is the one that caused the error. + */ + if (ret != 0) { + DBprintk(("[%d] pmc[%u]=0x%lx error %d\n", + ta->pid, cnum, tmp.reg_value, reg_retval)); + break; + } + + /* + * We can proceed with this register! 
+ */ + + /* + * keep copy the pmc, used for register reload + */ + th->pmc[cnum] = tmp.reg_value; + + ia64_set_pmc(cnum, tmp.reg_value); + + DBprintk(("[%d] pmc[%u]=0x%lx flags=0x%x save_pmcs=0%lx reload_pmcs=0x%lx\n", + ta->pid, cnum, tmp.reg_value, + ctx->ctx_soft_pmds[cnum].flags, + ctx->ctx_saved_pmcs[0], ctx->ctx_reload_pmcs[0])); + + } + return ret; } static int -pfm_write_pmds(struct task_struct *ta, perfmon_req_t *req, int count) +pfm_write_pmds(struct task_struct *ta, pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs) { - struct thread_struct *th = &ta->thread; - pfm_context_t *ctx = th->pfm_context; - perfmon_req_t tmp; - unsigned long cnum; + pfarg_reg_t tmp, *req = (pfarg_reg_t *)arg; + unsigned int cnum; int i; + int ret = 0, reg_retval = 0; + + /* we don't quite support this right now */ + if (ta != current) return -EINVAL; + + /* + * Cannot do anything before PMU is enabled + */ + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; + /* XXX: ctx locking may be required here */ for (i = 0; i < count; i++, req++) { - int k; if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT; - cnum = tmp.pfr_reg.reg_num; - - k = cnum - PMU_FIRST_COUNTER; + cnum = tmp.reg_num; - if (!PMD_IS_IMPL(cnum)) return -EINVAL; + if (!PMD_IS_IMPL(cnum)) { + ret = -EINVAL; + goto abort_mission; + } /* update virtualized (64bits) counter */ - if (PMD_IS_COUNTER(cnum)) { - ctx->ctx_pmds[k].ival = tmp.pfr_reg.reg_value; - ctx->ctx_pmds[k].val = tmp.pfr_reg.reg_value & ~pmu_conf.perf_ovfl_val; - ctx->ctx_pmds[k].smpl_rval = tmp.pfr_reg.reg_smpl_reset; - ctx->ctx_pmds[k].ovfl_rval = tmp.pfr_reg.reg_ovfl_reset; + if (PMD_IS_COUNTING(cnum)) { + ctx->ctx_soft_pmds[cnum].ival = tmp.reg_value; + ctx->ctx_soft_pmds[cnum].val = tmp.reg_value & ~pmu_conf.perf_ovfl_val; + ctx->ctx_soft_pmds[cnum].long_reset = tmp.reg_long_reset; + ctx->ctx_soft_pmds[cnum].short_reset = tmp.reg_short_reset; + + } +abort_mission: + if (ret == -EINVAL) reg_retval = PFM_REG_RETFL_EINVAL; - if (tmp.pfr_reg.reg_flags & PFM_REGFL_OVFL_NOTIFY) - ctx->ctx_pmds[cnum - PMU_FIRST_COUNTER].flags |= PFM_REGFL_OVFL_NOTIFY; + PFM_REG_RETFLAG_SET(tmp.reg_flags, reg_retval); + + if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT; + + /* + * if there was something wrong on this register, don't touch + * the hardware at all and abort write request for others. + * + * On error, the user mut sequentially scan the table and the first + * entry which has a return flag set is the one that caused the error. + */ + if (ret != 0) { + DBprintk(("[%d] pmc[%u]=0x%lx error %d\n", + ta->pid, cnum, tmp.reg_value, reg_retval)); + break; } + /* keep track of what we use */ CTX_USED_PMD(ctx, cnum); /* writes to unimplemented part is ignored, so this is safe */ - ia64_set_pmd(cnum, tmp.pfr_reg.reg_value); + ia64_set_pmd(cnum, tmp.reg_value); /* to go away */ ia64_srlz_d(); - DBprintk((" setting PMD[%ld]: ovfl_notify=%d pmd.val=0x%lx pmd.ovfl_rval=0x%lx pmd.smpl_rval=0x%lx pmd=%lx used_pmds=0%lx\n", - cnum, - PMD_OVFL_NOTIFY(ctx, cnum - PMU_FIRST_COUNTER), - ctx->ctx_pmds[k].val, - ctx->ctx_pmds[k].ovfl_rval, - ctx->ctx_pmds[k].smpl_rval, - ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val, - ctx->ctx_used_pmds[0])); + DBprintk(("[%d] pmd[%u]: soft_pmd=0x%lx short_reset=0x%lx " + "long_reset=0x%lx hw_pmd=%lx notify=%c used_pmds=0x%lx reset_pmds=0x%lx\n", + ta->pid, cnum, + ctx->ctx_soft_pmds[cnum].val, + ctx->ctx_soft_pmds[cnum].short_reset, + ctx->ctx_soft_pmds[cnum].long_reset, + ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val, + PMC_OVFL_NOTIFY(ctx, cnum) ? 
'Y':'N', + ctx->ctx_used_pmds[0], + ctx->ctx_soft_pmds[cnum].reset_pmds[0])); } - /* - * we have to set this here event hough we haven't necessarily started monitoring - * because we may be context switched out - */ - if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID; - - return 0; + return ret; } static int -pfm_read_pmds(struct task_struct *ta, perfmon_req_t *req, int count) +pfm_read_pmds(struct task_struct *ta, pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs) { struct thread_struct *th = &ta->thread; - pfm_context_t *ctx = th->pfm_context; unsigned long val=0; - perfmon_req_t tmp; + pfarg_reg_t tmp, *req = (pfarg_reg_t *)arg; int i; + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; + /* * XXX: MUST MAKE SURE WE DON"T HAVE ANY PENDING OVERFLOW BEFORE READING - * This is required when the monitoring has been stoppped by user of kernel. - * If ity is still going on, then that's fine because we a re not gauranteed - * to return an accurate value in this case + * This is required when the monitoring has been stoppped by user or kernel. + * If it is still going on, then that's fine because we a re not guaranteed + * to return an accurate value in this case. */ /* XXX: ctx locking may be required here */ + DBprintk(("ctx_last_cpu=%d for [%d]\n", atomic_read(&ctx->ctx_last_cpu), ta->pid)); + for (i = 0; i < count; i++, req++) { - unsigned long reg_val = ~0, ctx_val = ~0; + unsigned long reg_val = ~0UL, ctx_val = ~0UL; if (copy_from_user(&tmp, req, sizeof(tmp))) return -EFAULT; - if (!PMD_IS_IMPL(tmp.pfr_reg.reg_num)) return -EINVAL; + if (!PMD_IS_IMPL(tmp.reg_num)) goto abort_mission; - if (PMD_IS_COUNTER(tmp.pfr_reg.reg_num)) { - if (ta == current){ - val = ia64_get_pmd(tmp.pfr_reg.reg_num); - } else { - val = reg_val = th->pmd[tmp.pfr_reg.reg_num]; + /* + * If the task is not the current one, then we check if the + * PMU state is still in the local live register due to lazy ctxsw. + * If true, then we read directly from the registers. + */ + if (atomic_read(&ctx->ctx_last_cpu) == smp_processor_id()){ + ia64_srlz_d(); + val = reg_val = ia64_get_pmd(tmp.reg_num); + DBprintk(("reading pmd[%u]=0x%lx from hw\n", tmp.reg_num, val)); + } else { +#ifdef CONFIG_SMP + int cpu; + /* + * for SMP system, the context may still be live on another + * CPU so we need to fetch it before proceeding with the read + * This call we only be made once for the whole loop because + * of ctx_last_cpu becoming == -1. + * + * We cannot reuse ctx_last_cpu as it may change before we get to the + * actual IPI call. In this case, we will do the call for nothing but + * there is no way around it. The receiving side will simply do nothing. + */ + cpu = atomic_read(&ctx->ctx_last_cpu); + if (cpu != -1) { + DBprintk(("must fetch on CPU%d for [%d]\n", cpu, ta->pid)); + pfm_fetch_regs(cpu, ta, ctx); } - val &= pmu_conf.perf_ovfl_val; +#endif + /* context has been saved */ + val = reg_val = th->pmd[tmp.reg_num]; + } + if (PMD_IS_COUNTING(tmp.reg_num)) { /* - * lower part of .val may not be zero, so we must be an addition because of - * residual count (see update_counters). 
+ * XXX: need to check for overflow */ - val += ctx_val = ctx->ctx_pmds[tmp.pfr_reg.reg_num - PMU_FIRST_COUNTER].val; + + val &= pmu_conf.perf_ovfl_val; + val += ctx_val = ctx->ctx_soft_pmds[tmp.reg_num].val; } else { - /* for now */ - if (ta != current) return -EINVAL; - ia64_srlz_d(); - val = ia64_get_pmd(tmp.pfr_reg.reg_num); + val = reg_val = ia64_get_pmd(tmp.reg_num); } - tmp.pfr_reg.reg_value = val; + PFM_REG_RETFLAG_SET(tmp.reg_flags, 0); + tmp.reg_value = val; - DBprintk((" reading PMD[%ld]=0x%lx reg=0x%lx ctx_val=0x%lx pmc=0x%lx\n", - tmp.pfr_reg.reg_num, val, reg_val, ctx_val, ia64_get_pmc(tmp.pfr_reg.reg_num))); + DBprintk(("read pmd[%u] soft_pmd=0x%lx reg=0x%lx pmc=0x%lx\n", + tmp.reg_num, ctx_val, reg_val, + ia64_get_pmc(tmp.reg_num))); if (copy_to_user(req, &tmp, sizeof(tmp))) return -EFAULT; } return 0; +abort_mission: + PFM_REG_RETFLAG_SET(tmp.reg_flags, PFM_REG_RETFL_EINVAL); + /* + * XXX: if this fails, we stick we the original failure, flag not updated! + */ + copy_to_user(req, &tmp, sizeof(tmp)); + return -EINVAL; + +} + +#ifdef PFM_PMU_USES_DBR +/* + * Only call this function when a process it trying to + * write the debug registers (reading is always allowed) + */ +int +pfm_use_debug_registers(struct task_struct *task) +{ + pfm_context_t *ctx = task->thread.pfm_context; + int ret = 0; + + DBprintk(("called for [%d]\n", task->pid)); + + /* + * do it only once + */ + if (task->thread.flags & IA64_THREAD_DBG_VALID) return 0; + + /* + * Even on SMP, we do not need to use an atomic here because + * the only way in is via ptrace() and this is possible only when the + * process is stopped. Even in the case where the ctxsw out is not totally + * completed by the time we come here, there is no way the 'stopped' process + * could be in the middle of fiddling with the pfm_write_ibr_dbr() routine. + * So this is always safe. + */ + if (ctx && ctx->ctx_fl_using_dbreg == 1) return -1; + + /* + * XXX: not pretty + */ + LOCK_PFS(); + + /* + * We only allow the use of debug registers when there is no system + * wide monitoring + * XXX: we could relax this by + */ + if (pfm_sessions.pfs_sys_use_dbregs> 0) + ret = -1; + else + pfm_sessions.pfs_ptrace_use_dbregs++; + + DBprintk(("ptrace_use_dbregs=%lu sys_use_dbregs=%lu by [%d] ret = %d\n", + pfm_sessions.pfs_ptrace_use_dbregs, + pfm_sessions.pfs_sys_use_dbregs, + task->pid, ret)); + + UNLOCK_PFS(); + + return ret; +} + +/* + * This function is called for every task that exits with the + * IA64_THREAD_DBG_VALID set. This indicates a task which was + * able to use the debug registers for debugging purposes via + * ptrace(). Therefore we know it was not using them for + * perfmormance monitoring, so we only decrement the number + * of "ptraced" debug register users to keep the count up to date + */ +int +pfm_release_debug_registers(struct task_struct *task) +{ + int ret; + + LOCK_PFS(); + if (pfm_sessions.pfs_ptrace_use_dbregs == 0) { + printk("perfmon: invalid release for [%d] ptrace_use_dbregs=0\n", task->pid); + ret = -1; + } else { + pfm_sessions.pfs_ptrace_use_dbregs--; + ret = 0; + } + UNLOCK_PFS(); + + return ret; +} +#else /* PFM_PMU_USES_DBR is true */ +/* + * in case, the PMU does not use the debug registers, these two functions are nops. + * The first function is called from arch/ia64/kernel/ptrace.c. + * The second function is called from arch/ia64/kernel/process.c. 
+ */ +int +pfm_use_debug_registers(struct task_struct *task) +{ + return 0; +} +int +pfm_release_debug_registers(struct task_struct *task) +{ + return 0; } +#endif /* PFM_PMU_USES_DBR */ static int -pfm_do_restart(struct task_struct *task) +pfm_restart(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) { - struct thread_struct *th = &task->thread; - pfm_context_t *ctx = th->pfm_context; void *sem = &ctx->ctx_restart_sem; + /* + * Cannot do anything before PMU is enabled + */ + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; + + + if (ctx->ctx_fl_frozen==0) { + printk("task %d without pmu_frozen set\n", task->pid); + return -EINVAL; + } + if (task == current) { - DBprintk((" restarting self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen)); + DBprintk(("restarting self %d frozen=%d \n", current->pid, ctx->ctx_fl_frozen)); + + pfm_reset_regs(ctx, ctx->ctx_ovfl_regs, PFM_RELOAD_LONG_RESET); - pfm_reset_regs(ctx); + ctx->ctx_ovfl_regs[0] = 0UL; /* * We ignore block/don't block because we never block @@ -978,26 +1520,37 @@ ctx->ctx_fl_frozen = 0; if (CTX_HAS_SMPL(ctx)) { - ctx->ctx_smpl_buf->psb_hdr->hdr_count = 0; - ctx->ctx_smpl_buf->psb_index = 0; + ctx->ctx_psb->psb_hdr->hdr_count = 0; + ctx->ctx_psb->psb_index = 0; } - /* pfm_reset_smpl_buffers(ctx,th->pfm_ovfl_regs);*/ - /* simply unfreeze */ ia64_set_pmc(0, 0); ia64_srlz_d(); return 0; - } + } + /* restart on another task */ - /* check if blocking */ + /* + * if blocking, then post the semaphore. + * if non-blocking, then we ensure that the task will go into + * pfm_overflow_must_block() before returning to user mode. + * We cannot explicitely reset another task, it MUST always + * be done by the task itself. This works for system wide because + * the tool that is controlling the session is doing "self-monitoring". + * + * XXX: what if the task never goes back to user? + * + */ if (CTX_OVFL_NOBLOCK(ctx) == 0) { - DBprintk((" unblocking %d \n", task->pid)); + DBprintk(("unblocking %d \n", task->pid)); up(sem); - return 0; + } else { + task->thread.pfm_ovfl_block_reset = 1; + set_tsk_thread_flag(current, TIF_NOTIFY_RESUME); } - +#if 0 /* * in case of non blocking mode, then it's just a matter of * of reseting the sampling buffer (if any) index. 
The PMU @@ -1008,314 +1561,686 @@ * must reset the header count first */ if (CTX_HAS_SMPL(ctx)) { - DBprintk((" resetting sampling indexes for %d \n", task->pid)); - ctx->ctx_smpl_buf->psb_hdr->hdr_count = 0; - ctx->ctx_smpl_buf->psb_index = 0; + DBprintk(("resetting sampling indexes for %d \n", task->pid)); + ctx->ctx_psb->psb_hdr->hdr_count = 0; + ctx->ctx_psb->psb_index = 0; + } +#endif + return 0; +} + +static int +pfm_destroy_context(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) +{ + /* we don't quite support this right now */ + if (task != current) return -EINVAL; + + if (ctx->ctx_fl_system) { + ia64_psr(regs)->pp = 0; + __asm__ __volatile__ ("rsm psr.pp;;"::: "memory"); + } else { + ia64_psr(regs)->up = 0; + __asm__ __volatile__ ("rum psr.up;;"::: "memory"); + + task->thread.flags &= ~IA64_THREAD_PM_VALID; } + SET_PMU_OWNER(NULL); + + /* freeze PMU */ + ia64_set_pmc(0, 1); + ia64_srlz_d(); + + /* restore security level */ + ia64_psr(regs)->sp = 1; + + /* + * remove sampling buffer mapping, if any + */ + if (ctx->ctx_smpl_vaddr) pfm_remove_smpl_mapping(task); + + /* now free context and related state */ + pfm_context_exit(task); + return 0; } /* - * system-wide mode: propagate activation/desactivation throughout the tasklist - * - * XXX: does not work for SMP, of course + * does nothing at the moment */ -static void -pfm_process_tasklist(int cmd) +static int +pfm_unprotect_context(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) { - struct task_struct *p; - struct pt_regs *regs; + return 0; +} - for_each_task(p) { - regs = (struct pt_regs *)((unsigned long)p + IA64_STK_OFFSET); - regs--; - ia64_psr(regs)->pp = cmd; - } +static int +pfm_protect_context(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) +{ + DBprintk(("context from [%d] is protected\n", task->pid)); + /* + * from now on, only the creator of the context has access to it + */ + ctx->ctx_fl_protected = 1; + + /* + * reinforce secure monitoring: cannot toggle psr.up + */ + ia64_psr(regs)->sp = 1; + + return 0; } static int -do_perfmonctl (struct task_struct *task, int cmd, int flags, perfmon_req_t *req, int count, struct pt_regs *regs) +pfm_debug(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) { - perfmon_req_t tmp; - struct thread_struct *th = &task->thread; - pfm_context_t *ctx = th->pfm_context; + unsigned int mode = *(unsigned int *)arg; - memset(&tmp, 0, sizeof(tmp)); + pfm_debug_mode = mode == 0 ? 0 : 1; - if (ctx == NULL && cmd != PFM_CREATE_CONTEXT && cmd < PFM_DEBUG_BASE) { - DBprintk((" PFM_WRITE_PMCS: no context for task %d\n", task->pid)); - return -EINVAL; + printk("perfmon debugging %s\n", pfm_debug_mode ? 
"on" : "off"); + + return 0; +} + +#ifdef PFM_PMU_USES_DBR + +typedef struct { + unsigned long ibr_mask:56; + unsigned long ibr_plm:4; + unsigned long ibr_ig:3; + unsigned long ibr_x:1; +} ibr_mask_reg_t; + +typedef struct { + unsigned long dbr_mask:56; + unsigned long dbr_plm:4; + unsigned long dbr_ig:2; + unsigned long dbr_w:1; + unsigned long dbr_r:1; +} dbr_mask_reg_t; + +typedef union { + unsigned long val; + ibr_mask_reg_t ibr; + dbr_mask_reg_t dbr; +} dbreg_t; + + +static int +pfm_write_ibr_dbr(int mode, struct task_struct *task, void *arg, int count, struct pt_regs *regs) +{ + struct thread_struct *thread = &task->thread; + pfm_context_t *ctx = task->thread.pfm_context; + pfarg_dbreg_t tmp, *req = (pfarg_dbreg_t *)arg; + dbreg_t dbreg; + unsigned int rnum; + int first_time; + int i, ret = 0; + + /* + * for range restriction: psr.db must be cleared or the + * the PMU will ignore the debug registers. + * + * XXX: may need more in system wide mode, + * no task can have this bit set? + */ + if (ia64_psr(regs)->db == 1) return -EINVAL; + + + first_time = ctx->ctx_fl_using_dbreg == 0; + + /* + * check for debug registers in system wide mode + * + */ + LOCK_PFS(); + if (ctx->ctx_fl_system && first_time) { + if (pfm_sessions.pfs_ptrace_use_dbregs) + ret = -EBUSY; + else + pfm_sessions.pfs_sys_use_dbregs++; } + UNLOCK_PFS(); + + if (ret != 0) return ret; + + if (ctx->ctx_fl_system) { + /* we mark ourselves as owner of the debug registers */ + ctx->ctx_fl_using_dbreg = 1; + } else { + if (ctx->ctx_fl_using_dbreg == 0) { + ret= -EBUSY; + if ((thread->flags & IA64_THREAD_DBG_VALID) != 0) { + DBprintk(("debug registers already in use for [%d]\n", task->pid)); + goto abort_mission; + } + /* we mark ourselves as owner of the debug registers */ + ctx->ctx_fl_using_dbreg = 1; - switch (cmd) { - case PFM_CREATE_CONTEXT: - /* a context has already been defined */ - if (ctx) return -EBUSY; + /* + * Given debug registers cannot be used for both debugging + * and performance monitoring at the same time, we reuse + * the storage area to save and restore the registers on ctxsw. 
+ */ + memset(task->thread.dbr, 0, sizeof(task->thread.dbr)); + memset(task->thread.ibr, 0, sizeof(task->thread.ibr)); /* - * cannot directly create a context in another process + * clear hardware registers to make sure we don't leak + * information and pick up stale state */ - if (task != current) return -EINVAL; + for (i=0; i < pmu_conf.num_ibrs; i++) { + ia64_set_ibr(i, 0UL); + } + for (i=0; i < pmu_conf.num_dbrs; i++) { + ia64_set_dbr(i, 0UL); + } + } + } - if (req == NULL || count != 1) return -EINVAL; + ret = -EFAULT; - if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT; + /* + * Now install the values into the registers + */ + for (i = 0; i < count; i++, req++) { - return pfm_context_create(flags, req); + + if (copy_from_user(&tmp, req, sizeof(tmp))) goto abort_mission; + + rnum = tmp.dbreg_num; + dbreg.val = tmp.dbreg_value; + + ret = -EINVAL; - case PFM_WRITE_PMCS: - /* we don't quite support this right now */ - if (task != current) return -EINVAL; + if ((mode == 0 && !IBR_IS_IMPL(rnum)) || ((mode == 1) && !DBR_IS_IMPL(rnum))) { + DBprintk(("invalid register %u val=0x%lx mode=%d i=%d count=%d\n", + rnum, dbreg.val, mode, i, count)); - if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT; + goto abort_mission; + } - return pfm_write_pmcs(task, req, count); + /* + * make sure we do not install enabled breakpoint + */ + if (rnum & 0x1) { + if (mode == 0) + dbreg.ibr.ibr_x = 0; + else + dbreg.dbr.dbr_r = dbreg.dbr.dbr_w = 0; + } - case PFM_WRITE_PMDS: - /* we don't quite support this right now */ - if (task != current) return -EINVAL; + /* + * clear return flags and copy back to user + * + * XXX: fix once EAGAIN is implemented + */ + ret = -EFAULT; - if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT; + PFM_REG_RETFLAG_SET(tmp.dbreg_flags, 0); - return pfm_write_pmds(task, req, count); + if (copy_to_user(req, &tmp, sizeof(tmp))) goto abort_mission; - case PFM_START: - /* we don't quite support this right now */ - if (task != current) return -EINVAL; + /* + * Debug registers, just like PMC, can only be modified + * by a kernel call. Moreover, perfmon() access to those + * registers are centralized in this routine. The hardware + * does not modify the value of these registers, therefore, + * if we save them as they are written, we can avoid having + * to save them on context switch out. This is made possible + * by the fact that when perfmon uses debug registers, ptrace() + * won't be able to modify them concurrently. + */ + if (mode == 0) { + CTX_USED_IBR(ctx, rnum); - if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER()); + ia64_set_ibr(rnum, dbreg.val); - SET_PMU_OWNER(current); + thread->ibr[rnum] = dbreg.val; - /* will start monitoring right after rfi */ - ia64_psr(regs)->up = 1; - ia64_psr(regs)->pp = 1; + DBprintk(("write ibr%u=0x%lx used_ibrs=0x%lx\n", rnum, dbreg.val, ctx->ctx_used_ibrs[0])); + } else { + CTX_USED_DBR(ctx, rnum); - if (ctx->ctx_fl_system) { - pfm_process_tasklist(1); - pfs_info.pfs_pp = 1; - } + ia64_set_dbr(rnum, dbreg.val); - /* - * mark the state as valid. 
- * this will trigger save/restore at context switch - */ - if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID; + thread->dbr[rnum] = dbreg.val; - ia64_set_pmc(0, 0); - ia64_srlz_d(); + DBprintk(("write dbr%u=0x%lx used_dbrs=0x%lx\n", rnum, dbreg.val, ctx->ctx_used_dbrs[0])); + } + } - break; + return 0; + +abort_mission: + /* + * in case it was our first attempt, we undo the global modifications + */ + if (first_time) { + LOCK_PFS(); + if (ctx->ctx_fl_system) { + pfm_sessions.pfs_sys_use_dbregs--; + } + UNLOCK_PFS(); + ctx->ctx_fl_using_dbreg = 0; + } + /* + * install error return flag + */ + if (ret != -EFAULT) { + /* + * XXX: for now we can only come here on EINVAL + */ + PFM_REG_RETFLAG_SET(tmp.dbreg_flags, PFM_REG_RETFL_EINVAL); + copy_to_user(req, &tmp, sizeof(tmp)); + } + return ret; +} - case PFM_ENABLE: - /* we don't quite support this right now */ - if (task != current) return -EINVAL; +static int +pfm_write_ibrs(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) +{ + /* we don't quite support this right now */ + if (task != current) return -EINVAL; - if (PMU_OWNER() && PMU_OWNER() != current && PFM_CAN_DO_LAZY()) pfm_lazy_save_regs(PMU_OWNER()); + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; - /* reset all registers to stable quiet state */ - ia64_reset_pmu(); + return pfm_write_ibr_dbr(0, task, arg, count, regs); +} - /* make sure nothing starts */ - ia64_psr(regs)->up = 0; - ia64_psr(regs)->pp = 0; +static int +pfm_write_dbrs(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) +{ + /* we don't quite support this right now */ + if (task != current) return -EINVAL; - /* do it on the live register as well */ - __asm__ __volatile__ ("rsm psr.pp|psr.pp;;"::: "memory"); + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; - SET_PMU_OWNER(current); + return pfm_write_ibr_dbr(1, task, arg, count, regs); +} - /* - * mark the state as valid. - * this will trigger save/restore at context switch - */ - if (ctx->ctx_fl_system==0) th->flags |= IA64_THREAD_PM_VALID; +#endif /* PFM_PMU_USES_DBR */ - /* simply unfreeze */ - ia64_set_pmc(0, 0); - ia64_srlz_d(); - break; +static int +pfm_get_features(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs) +{ + pfarg_features_t tmp; - case PFM_DISABLE: - /* we don't quite support this right now */ - if (task != current) return -EINVAL; + memset(&tmp, 0, sizeof(tmp)); - /* simply freeze */ - ia64_set_pmc(0, 1); - ia64_srlz_d(); - /* - * XXX: cannot really toggle IA64_THREAD_PM_VALID - * but context is still considered valid, so any - * read request would return something valid. Same - * thing when this task terminates (pfm_flush_regs()). 
- */ - break; + tmp.ft_version = PFM_VERSION; + tmp.ft_smpl_version = PFM_SMPL_VERSION; - case PFM_READ_PMDS: - if (!access_ok(VERIFY_READ, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT; - if (!access_ok(VERIFY_WRITE, req, sizeof(struct perfmon_req_t)*count)) return -EFAULT; - - return pfm_read_pmds(task, req, count); - - case PFM_STOP: - /* we don't quite support this right now */ - if (task != current) return -EINVAL; - - /* simply stop monitors, not PMU */ - ia64_psr(regs)->up = 0; - ia64_psr(regs)->pp = 0; - - if (ctx->ctx_fl_system) { - pfm_process_tasklist(0); - pfs_info.pfs_pp = 0; - } + if (copy_to_user(arg, &tmp, sizeof(tmp))) return -EFAULT; - break; + return 0; +} - case PFM_RESTART: /* temporary, will most likely end up as a PFM_ENABLE */ +static int +pfm_start(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) +{ + /* we don't quite support this right now */ + if (task != current) return -EINVAL; - if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system==0) { - printk(" PFM_RESTART not monitoring\n"); - return -EINVAL; - } - if (CTX_OVFL_NOBLOCK(ctx) == 0 && ctx->ctx_fl_frozen==0) { - printk("task %d without pmu_frozen set\n", task->pid); - return -EINVAL; - } + /* + * Cannot do anything before PMU is enabled + */ + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; - return pfm_do_restart(task); /* we only look at first entry */ + DBprintk(("[%d] fl_system=%d owner=%p current=%p\n", + current->pid, + ctx->ctx_fl_system, PMU_OWNER(), + current)); - case PFM_DESTROY_CONTEXT: - /* we don't quite support this right now */ - if (task != current) return -EINVAL; - - /* first stop monitors */ - ia64_psr(regs)->up = 0; - ia64_psr(regs)->pp = 0; + if (PMU_OWNER() != task) { + printk("perfmon: pfm_start task [%d] not pmu owner\n", task->pid); + return -EINVAL; + } - /* then freeze PMU */ - ia64_set_pmc(0, 1); - ia64_srlz_d(); + if (ctx->ctx_fl_system) { + + /* enable dcr pp */ + ia64_set_dcr(ia64_get_dcr()|IA64_DCR_PP); + + local_cpu_data->pfm_dcr_pp = 1; + ia64_psr(regs)->pp = 1; + __asm__ __volatile__ ("ssm psr.pp;;"::: "memory"); - /* don't save/restore on context switch */ - if (ctx->ctx_fl_system ==0) task->thread.flags &= ~IA64_THREAD_PM_VALID; + } else { + if ((task->thread.flags & IA64_THREAD_PM_VALID) == 0) { + printk("perfmon: pfm_start task flag not set for [%d]\n", task->pid); + return -EINVAL; + } + ia64_psr(regs)->up = 1; + __asm__ __volatile__ ("sum psr.up;;"::: "memory"); + } + ia64_srlz_d(); - SET_PMU_OWNER(NULL); + return 0; +} - /* now free context and related state */ - pfm_context_exit(task); - break; +static int +pfm_enable(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) +{ + /* we don't quite support this right now */ + if (task != current) return -EINVAL; - case PFM_DEBUG_ON: - printk("perfmon debugging on\n"); - pfm_debug = 1; - break; + if (ctx->ctx_fl_system == 0 && PMU_OWNER() && PMU_OWNER() != current) + pfm_lazy_save_regs(PMU_OWNER()); - case PFM_DEBUG_OFF: - printk("perfmon debugging off\n"); - pfm_debug = 0; - break; + /* reset all registers to stable quiet state */ + ia64_reset_pmu(task); - default: - DBprintk((" UNknown command 0x%x\n", cmd)); - return -EINVAL; + /* make sure nothing starts */ + if (ctx->ctx_fl_system) { + ia64_psr(regs)->pp = 0; + ia64_psr(regs)->up = 0; /* just to make sure! 
*/ + + __asm__ __volatile__ ("rsm psr.pp;;"::: "memory"); + +#ifdef CONFIG_SMP + local_cpu_data->pfm_syst_wide = 1; + local_cpu_data->pfm_dcr_pp = 0; +#endif + } else { + /* + * needed in case the task was a passive task during + * a system wide session and now wants to have its own + * session + */ + ia64_psr(regs)->pp = 0; /* just to make sure! */ + ia64_psr(regs)->up = 0; + + __asm__ __volatile__ ("rum psr.up;;"::: "memory"); + /* + * allow user control (user monitors only) + if (task == ctx->ctx_owner) { + */ + { + DBprintk(("clearing psr.sp for [%d]\n", current->pid)); + ia64_psr(regs)->sp = 0; + } + task->thread.flags |= IA64_THREAD_PM_VALID; + } + + SET_PMU_OWNER(task); + + + ctx->ctx_flags.state = PFM_CTX_ENABLED; + atomic_set(&ctx->ctx_last_cpu, smp_processor_id()); + + /* simply unfreeze */ + ia64_set_pmc(0, 0); + ia64_srlz_d(); + + return 0; +} + +static int +pfm_disable(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) +{ + /* we don't quite support this right now */ + if (task != current) return -EINVAL; + + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; + + /* + * stop monitoring, freeze PMU, and save state in context + */ + pfm_flush_regs(task); + + /* + * just to make sure nothing starts again when back in user mode. + * pfm_flush_regs() freezes the PMU anyway. + */ + if (ctx->ctx_fl_system) { + ia64_psr(regs)->pp = 0; + } else { + ia64_psr(regs)->up = 0; + } + + /* + * goes back to default behavior + * no need to change live psr.sp because useless at the kernel level + */ + ia64_psr(regs)->sp = 1; + + DBprintk(("enabling psr.sp for [%d]\n", current->pid)); + + ctx->ctx_flags.state = PFM_CTX_DISABLED; + + return 0; +} + +static int +pfm_stop(struct task_struct *task, pfm_context_t *ctx, void *arg, int count, + struct pt_regs *regs) +{ + /* we don't quite support this right now */ + if (task != current) return -EINVAL; + + /* + * Cannot do anything before PMU is enabled + */ + if (!CTX_IS_ENABLED(ctx)) return -EINVAL; + + DBprintk(("[%d] fl_system=%d owner=%p current=%p\n", + current->pid, + ctx->ctx_fl_system, PMU_OWNER(), + current)); + /* simply stop monitoring but not the PMU */ + if (ctx->ctx_fl_system) { + + __asm__ __volatile__ ("rsm psr.pp;;"::: "memory"); + + /* disable dcr pp */ + ia64_set_dcr(ia64_get_dcr() & ~IA64_DCR_PP); + + local_cpu_data->pfm_dcr_pp = 0; + + ia64_psr(regs)->pp = 0; + + __asm__ __volatile__ ("rsm psr.pp;;"::: "memory"); + + } else { + ia64_psr(regs)->up = 0; + __asm__ __volatile__ ("rum psr.up;;"::: "memory"); } return 0; } /* - * XXX: do something better here + * functions MUST be listed in the increasing order of their index (see permfon.h) */ +static pfm_cmd_desc_t pfm_cmd_tab[]={ +/* 0 */{ NULL, 0, 0, 0}, /* not used */ +/* 1 */{ pfm_write_pmcs, PFM_CMD_PID|PFM_CMD_CTX|PFM_CMD_ARG_READ|PFM_CMD_ARG_WRITE, PFM_CMD_ARG_MANY, sizeof(pfarg_reg_t)}, +/* 2 */{ pfm_write_pmds, PFM_CMD_PID|PFM_CMD_CTX|PFM_CMD_ARG_READ, PFM_CMD_ARG_MANY, sizeof(pfarg_reg_t)}, +/* 3 */{ pfm_read_pmds, PFM_CMD_PID|PFM_CMD_CTX|PFM_CMD_ARG_READ|PFM_CMD_ARG_WRITE, PFM_CMD_ARG_MANY, sizeof(pfarg_reg_t)}, +/* 4 */{ pfm_stop, PFM_CMD_PID|PFM_CMD_CTX, 0, 0}, +/* 5 */{ pfm_start, PFM_CMD_PID|PFM_CMD_CTX, 0, 0}, +/* 6 */{ pfm_enable, PFM_CMD_PID|PFM_CMD_CTX, 0, 0}, +/* 7 */{ pfm_disable, PFM_CMD_PID|PFM_CMD_CTX, 0, 0}, +/* 8 */{ pfm_create_context, PFM_CMD_ARG_READ, 1, sizeof(pfarg_context_t)}, +/* 9 */{ pfm_destroy_context, PFM_CMD_PID|PFM_CMD_CTX, 0, 0}, +/* 10 */{ pfm_restart, PFM_CMD_PID|PFM_CMD_CTX|PFM_CMD_NOCHK, 0, 0}, +/* 11 */{ 
pfm_protect_context, PFM_CMD_PID|PFM_CMD_CTX, 0, 0}, +/* 12 */{ pfm_get_features, PFM_CMD_ARG_WRITE, 0, 0}, +/* 13 */{ pfm_debug, 0, 1, sizeof(unsigned int)}, +/* 14 */{ pfm_unprotect_context, PFM_CMD_PID|PFM_CMD_CTX, 0, 0}, +/* 15 */{ NULL, 0, 0, 0}, /* not used */ +/* 16 */{ NULL, 0, 0, 0}, /* not used */ +/* 17 */{ NULL, 0, 0, 0}, /* not used */ +/* 18 */{ NULL, 0, 0, 0}, /* not used */ +/* 19 */{ NULL, 0, 0, 0}, /* not used */ +/* 20 */{ NULL, 0, 0, 0}, /* not used */ +/* 21 */{ NULL, 0, 0, 0}, /* not used */ +/* 22 */{ NULL, 0, 0, 0}, /* not used */ +/* 23 */{ NULL, 0, 0, 0}, /* not used */ +/* 24 */{ NULL, 0, 0, 0}, /* not used */ +/* 25 */{ NULL, 0, 0, 0}, /* not used */ +/* 26 */{ NULL, 0, 0, 0}, /* not used */ +/* 27 */{ NULL, 0, 0, 0}, /* not used */ +/* 28 */{ NULL, 0, 0, 0}, /* not used */ +/* 29 */{ NULL, 0, 0, 0}, /* not used */ +/* 30 */{ NULL, 0, 0, 0}, /* not used */ +/* 31 */{ NULL, 0, 0, 0}, /* not used */ +#ifdef PFM_PMU_USES_DBR +/* 32 */{ pfm_write_ibrs, PFM_CMD_PID|PFM_CMD_CTX|PFM_CMD_ARG_READ|PFM_CMD_ARG_WRITE, PFM_CMD_ARG_MANY, sizeof(pfarg_dbreg_t)}, +/* 33 */{ pfm_write_dbrs, PFM_CMD_PID|PFM_CMD_CTX|PFM_CMD_ARG_READ|PFM_CMD_ARG_WRITE, PFM_CMD_ARG_MANY, sizeof(pfarg_dbreg_t)} +#endif +}; +#define PFM_CMD_COUNT (sizeof(pfm_cmd_tab)/sizeof(pfm_cmd_desc_t)) + static int -perfmon_bad_permissions(struct task_struct *task) +check_task_state(struct task_struct *task) { - /* stolen from bad_signal() */ - return (current->session != task->session) - && (current->euid ^ task->suid) && (current->euid ^ task->uid) - && (current->uid ^ task->suid) && (current->uid ^ task->uid); + int ret = 0; +#ifdef CONFIG_SMP + /* We must wait until the state has been completely + * saved. There can be situations where the reader arrives before + * after the task is marked as STOPPED but before pfm_save_regs() + * is completed. 
+ */ + for (;;) { + + task_lock(task); + if (1 /*XXX !task_has_cpu(task)*/) break; + task_unlock(task); + + do { + if (task->state != TASK_ZOMBIE && task->state != TASK_STOPPED) return -EBUSY; + barrier(); + cpu_relax(); + } while (0 /*task_has_cpu(task)*/); + } + task_unlock(task); +#else + if (task->state != TASK_ZOMBIE && task->state != TASK_STOPPED) { + DBprintk(("warning [%d] not in stable state %ld\n", task->pid, task->state)); + ret = -EBUSY; + } +#endif + return ret; } asmlinkage int -sys_perfmonctl (int pid, int cmd, int flags, perfmon_req_t *req, int count, long arg6, long arg7, long arg8, long stack) +sys_perfmonctl (pid_t pid, int cmd, void *arg, int count, long arg5, long arg6, long arg7, + long arg8, long stack) { - struct pt_regs *regs = (struct pt_regs *) &stack; - struct task_struct *child = current; - int ret = -ESRCH; + struct pt_regs *regs = (struct pt_regs *)&stack; + struct task_struct *task = current; + pfm_context_t *ctx = task->thread.pfm_context; + size_t sz; + int ret = -ESRCH, narg; - /* sanity check: - * - * ensures that we don't do bad things in case the OS - * does not have enough storage to save/restore PMC/PMD + /* + * reject any call if perfmon was disabled at initialization time */ - if (PERFMON_IS_DISABLED()) return -ENOSYS; + if (PFM_IS_DISABLED()) return -ENOSYS; - /* XXX: pid interface is going away in favor of pfm context */ - if (pid != current->pid) { - read_lock(&tasklist_lock); + DBprintk(("cmd=%d idx=%d valid=%d narg=0x%x\n", cmd, PFM_CMD_IDX(cmd), + PFM_CMD_IS_VALID(cmd), PFM_CMD_NARG(cmd))); - child = find_task_by_pid(pid); + if (PFM_CMD_IS_VALID(cmd) == 0) return -EINVAL; - if (!child) goto abort_call; + /* ingore arguments when command has none */ + narg = PFM_CMD_NARG(cmd); + if ((narg == PFM_CMD_ARG_MANY && count == 0) || (narg > 0 && narg != count)) return -EINVAL; - ret = -EPERM; + sz = PFM_CMD_ARG_SIZE(cmd); - if (perfmon_bad_permissions(child)) goto abort_call; + if (PFM_CMD_READ_ARG(cmd) && !access_ok(VERIFY_READ, arg, sz*count)) return -EFAULT; - /* - * XXX: need to do more checking here + if (PFM_CMD_WRITE_ARG(cmd) && !access_ok(VERIFY_WRITE, arg, sz*count)) return -EFAULT; + + if (PFM_CMD_USE_PID(cmd)) { + /* + * XXX: may need to fine tune this one */ - if (child->state != TASK_ZOMBIE && child->state != TASK_STOPPED) { - DBprintk((" warning process %d not in stable state %ld\n", pid, child->state)); + if (pid < 2) return -EPERM; + + if (pid != current->pid) { + + read_lock(&tasklist_lock); + + task = find_task_by_pid(pid); + + if (!task) goto abort_call; + + ret = -EPERM; + + if (pfm_bad_permissions(task)) goto abort_call; + + if (PFM_CMD_CHK(cmd)) { + ret = check_task_state(task); + if (ret != 0) goto abort_call; + } + ctx = task->thread.pfm_context; } + } + + if (PFM_CMD_USE_CTX(cmd)) { + ret = -EINVAL; + if (ctx == NULL) { + DBprintk(("no context for task %d\n", task->pid)); + goto abort_call; + } + ret = -EPERM; + /* + * we only grant access to the context if: + * - the caller is the creator of the context (ctx_owner) + * OR - the context is attached to the caller AND The context IS NOT + * in protected mode + */ + if (ctx->ctx_owner != current && (ctx->ctx_fl_protected || task != current)) { + DBprintk(("context protected, no access for [%d]\n", task->pid)); + goto abort_call; + } } - ret = do_perfmonctl(child, cmd, flags, req, count, regs); + + ret = (*pfm_cmd_tab[PFM_CMD_IDX(cmd)].cmd_func)(task, ctx, arg, count, regs); abort_call: - if (child != current) read_unlock(&tasklist_lock); + if (task != current) 
read_unlock(&tasklist_lock); return ret; } -#if __GNUC__ >= 3 -void asmlinkage -pfm_block_on_overflow(void) -#else -void asmlinkage -pfm_block_on_overflow(u64 arg0, u64 arg1, u64 arg2, u64 arg3, u64 arg4, u64 arg5, u64 arg6, u64 arg7) -#endif +void +pfm_ovfl_block_reset (void) { struct thread_struct *th = ¤t->thread; pfm_context_t *ctx = current->thread.pfm_context; int ret; /* - * NO matter what notify_pid is, - * we clear overflow, won't notify again + * clear the flag, to make sure we won't get here + * again */ - th->pfm_must_block = 0; + th->pfm_ovfl_block_reset = 0; /* * do some sanity checks first */ if (!ctx) { - printk("perfmon: process %d has no PFM context\n", current->pid); - return; - } - if (ctx->ctx_notify_task == 0) { - printk("perfmon: process %d has no task to notify\n", current->pid); + printk("perfmon: [%d] has no PFM context\n", current->pid); return; } - DBprintk((" current=%d task=%d\n", current->pid, ctx->ctx_notify_task->pid)); + if (CTX_OVFL_NOBLOCK(ctx)) goto non_blocking; - /* should not happen */ - if (CTX_OVFL_NOBLOCK(ctx)) { - printk("perfmon: process %d non-blocking ctx should not be here\n", current->pid); - return; - } - - DBprintk((" CPU%d %d before sleep\n", smp_processor_id(), current->pid)); + DBprintk(("[%d] before sleeping\n", current->pid)); /* * may go through without blocking on SMP systems @@ -1323,12 +2248,14 @@ */ ret = down_interruptible(&ctx->ctx_restart_sem); - DBprintk((" CPU%d %d after sleep ret=%d\n", smp_processor_id(), current->pid, ret)); + DBprintk(("[%d] after sleeping ret=%d\n", current->pid, ret)); /* * in case of interruption of down() we don't restart anything */ if (ret >= 0) { + +non_blocking: /* we reactivate on context switch */ ctx->ctx_fl_frozen = 0; /* @@ -1336,19 +2263,19 @@ * use the local reference */ - pfm_reset_regs(ctx); + pfm_reset_regs(ctx, ctx->ctx_ovfl_regs, PFM_RELOAD_LONG_RESET); + + ctx->ctx_ovfl_regs[0] = 0UL; /* * Unlock sampling buffer and reset index atomically * XXX: not really needed when blocking */ if (CTX_HAS_SMPL(ctx)) { - ctx->ctx_smpl_buf->psb_hdr->hdr_count = 0; - ctx->ctx_smpl_buf->psb_index = 0; + ctx->ctx_psb->psb_hdr->hdr_count = 0; + ctx->ctx_psb->psb_index = 0; } - DBprintk((" CPU%d %d unfreeze PMU\n", smp_processor_id(), current->pid)); - ia64_set_pmc(0, 0); ia64_srlz_d(); @@ -1357,23 +2284,111 @@ } /* + * This function will record an entry in the sampling if it is not full already. + * Return: + * 0 : buffer is not full (did not BECOME full: still space or was already full) + * 1 : buffer is full (recorded the last entry) + */ +static int +pfm_record_sample(struct task_struct *task, pfm_context_t *ctx, unsigned long ovfl_mask, struct pt_regs *regs) +{ + pfm_smpl_buffer_desc_t *psb = ctx->ctx_psb; + unsigned long *e, m, idx; + perfmon_smpl_entry_t *h; + int j; + + +pfm_recorded_samples_count++; + idx = ia64_fetch_and_add(1, &psb->psb_index); + DBprintk(("recording index=%ld entries=%ld\n", idx-1, psb->psb_entries)); + + /* + * XXX: there is a small chance that we could run out on index before resetting + * but index is unsigned long, so it will take some time..... + * We use > instead of == because fetch_and_add() is off by one (see below) + * + * This case can happen in non-blocking mode or with multiple processes. + * For non-blocking, we need to reload and continue. 
+ */ + if (idx > psb->psb_entries) return 0; + + /* first entry is really entry 0, not 1 caused by fetch_and_add */ + idx--; + + h = (perfmon_smpl_entry_t *)(((char *)psb->psb_addr) + idx*(psb->psb_entry_size)); + + /* + * initialize entry header + */ + h->pid = task->pid; + h->cpu = smp_processor_id(); + h->rate = 0; /* XXX: add the sampling rate used here */ + h->ip = regs ? regs->cr_iip : 0x0; /* where did the fault happened */ + h->regs = ovfl_mask; /* which registers overflowed */ + + /* guaranteed to monotonically increase on each cpu */ + h->stamp = pfm_get_stamp(); + h->period = 0UL; /* not yet used */ + + /* position for first pmd */ + e = (unsigned long *)(h+1); + + /* + * selectively store PMDs in increasing index number + */ + m = ctx->ctx_smpl_regs[0]; + for (j=0; m; m >>=1, j++) { + + if ((m & 0x1) == 0) continue; + + if (PMD_IS_COUNTING(j)) { + *e = pfm_read_soft_counter(ctx, j); + /* check if this pmd overflowed as well */ + *e += ovfl_mask & (1UL<psb_hdr->hdr_count); + + DBprintk(("index=%ld entries=%ld hdr_count=%ld\n", + idx, psb->psb_entries, psb->psb_hdr->hdr_count)); + /* + * sampling buffer full ? + */ + if (idx == (psb->psb_entries-1)) { + DBprintk(("sampling buffer full\n")); + /* + * XXX: must reset buffer in blocking mode and lost notified + */ + return 1; + } + return 0; +} + +/* * main overflow processing routine. * it can be called from the interrupt path or explicitely during the context switch code * Return: * new value of pmc[0]. if 0x0 then unfreeze, else keep frozen */ -unsigned long -update_counters (struct task_struct *task, u64 pmc0, struct pt_regs *regs) +static unsigned long +pfm_overflow_handler(struct task_struct *task, u64 pmc0, struct pt_regs *regs) { - unsigned long mask, i, cnum; - struct thread_struct *th; + unsigned long mask; + struct thread_struct *t; pfm_context_t *ctx; - unsigned long bv = 0; + unsigned long old_val; + unsigned long ovfl_notify = 0UL, ovfl_pmds = 0UL; + int i; int my_cpu = smp_processor_id(); - int ret = 1, buffer_is_full = 0; - int ovfl_has_long_recovery, can_notify, need_reset_pmd16=0; + int ret = 1; struct siginfo si; - /* * It is never safe to access the task for which the overflow interrupt is destinated * using the current variable as the interrupt may occur in the middle of a context switch @@ -1388,233 +2403,151 @@ */ if (task == NULL) { - DBprintk((" owners[%d]=NULL\n", my_cpu)); + DBprintk(("owners[%d]=NULL\n", my_cpu)); return 0x1; } - th = &task->thread; - ctx = th->pfm_context; + t = &task->thread; + ctx = task->thread.pfm_context; + + if (!ctx) { + printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", + task->pid); + return 0; + } /* * XXX: debug test * Don't think this could happen given upfront tests */ - if ((th->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) { - printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", task->pid); + if ((t->flags & IA64_THREAD_PM_VALID) == 0 && ctx->ctx_fl_system == 0) { + printk("perfmon: Spurious overflow interrupt: process %d not using perfmon\n", + task->pid); return 0x1; } - if (!ctx) { - printk("perfmon: Spurious overflow interrupt: process %d has no PFM context\n", task->pid); - return 0; - } - /* * sanity test. 
Should never happen */ - if ((pmc0 & 0x1 )== 0) { - printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", task->pid, pmc0); + if ((pmc0 & 0x1) == 0) { + printk("perfmon: pid %d pmc0=0x%lx assumption error for freeze bit\n", + task->pid, pmc0); return 0x0; } mask = pmc0 >> PMU_FIRST_COUNTER; - DBprintk(("pmc0=0x%lx pid=%d owner=%d iip=0x%lx, ctx is in %s mode used_pmds=0x%lx used_pmcs=0x%lx\n", - pmc0, task->pid, PMU_OWNER()->pid, regs->cr_iip, - CTX_OVFL_NOBLOCK(ctx) ? "NO-BLOCK" : "BLOCK", - ctx->ctx_used_pmds[0], - ctx->ctx_used_pmcs[0])); + DBprintk(("pmc0=0x%lx pid=%d iip=0x%lx, %s" + " mode used_pmds=0x%lx save_pmcs=0x%lx reload_pmcs=0x%lx\n", + pmc0, task->pid, (regs ? regs->cr_iip : 0), + CTX_OVFL_NOBLOCK(ctx) ? "nonblocking" : "blocking", + ctx->ctx_used_pmds[0], + ctx->ctx_saved_pmcs[0], + ctx->ctx_reload_pmcs[0])); /* - * XXX: need to record sample only when an EAR/BTB has overflowed + * First we update the virtual counters */ - if (CTX_HAS_SMPL(ctx)) { - pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf; - unsigned long *e, m, idx=0; - perfmon_smpl_entry_t *h; - int j; - - idx = ia64_fetch_and_add(1, &psb->psb_index); - DBprintk((" recording index=%ld entries=%ld\n", idx, psb->psb_entries)); - - /* - * XXX: there is a small chance that we could run out on index before resetting - * but index is unsigned long, so it will take some time..... - * We use > instead of == because fetch_and_add() is off by one (see below) - * - * This case can happen in non-blocking mode or with multiple processes. - * For non-blocking, we need to reload and continue. - */ - if (idx > psb->psb_entries) { - buffer_is_full = 1; - goto reload_pmds; - } + for (i = PMU_FIRST_COUNTER; mask ; i++, mask >>= 1) { - /* first entry is really entry 0, not 1 caused by fetch_and_add */ - idx--; + /* skip pmd which did not overflow */ + if ((mask & 0x1) == 0) continue; - h = (perfmon_smpl_entry_t *)(((char *)psb->psb_addr) + idx*(psb->psb_entry_size)); + DBprintk(("PMD[%d] overflowed hw_pmd=0x%lx soft_pmd=0x%lx\n", + i, ia64_get_pmd(i), ctx->ctx_soft_pmds[i].val)); - h->pid = task->pid; - h->cpu = my_cpu; - h->rate = 0; - h->ip = regs ? regs->cr_iip : 0x0; /* where did the fault happened */ - h->regs = mask; /* which registers overflowed */ + /* + * Because we sometimes (EARS/BTB) reset to a specific value, we cannot simply use + * val to count the number of times we overflowed. Otherwise we would loose the + * current value in the PMD (which can be >0). So to make sure we don't loose + * the residual counts we set val to contain full 64bits value of the counter. 
+ */ + old_val = ctx->ctx_soft_pmds[i].val; + ctx->ctx_soft_pmds[i].val = 1 + pmu_conf.perf_ovfl_val + pfm_read_soft_counter(ctx, i); - /* guaranteed to monotonically increase on each cpu */ - h->stamp = perfmon_get_stamp(); - e = (unsigned long *)(h+1); + DBprintk(("soft_pmd[%d].val=0x%lx old_val=0x%lx pmd=0x%lx\n", + i, ctx->ctx_soft_pmds[i].val, old_val, + ia64_get_pmd(i) & pmu_conf.perf_ovfl_val)); /* - * selectively store PMDs in increasing index number - */ - for (j=0, m = ctx->ctx_smpl_regs; m; m >>=1, j++) { - if (m & 0x1) { - if (PMD_IS_COUNTER(j)) - *e = ctx->ctx_pmds[j-PMU_FIRST_COUNTER].val - + (ia64_get_pmd(j) & pmu_conf.perf_ovfl_val); - else { - *e = ia64_get_pmd(j); /* slow */ - } - DBprintk((" e=%p pmd%d =0x%lx\n", (void *)e, j, *e)); - e++; - } - } - /* - * make the new entry visible to user, needs to be atomic + * now that we have extracted the hardware counter, we can clear it to ensure + * that a subsequent PFM_READ_PMDS will not include it again. */ - ia64_fetch_and_add(1, &psb->psb_hdr->hdr_count); + ia64_set_pmd(i, 0UL); - DBprintk((" index=%ld entries=%ld hdr_count=%ld\n", idx, psb->psb_entries, psb->psb_hdr->hdr_count)); - /* - * sampling buffer full ? + /* + * check for overflow condition */ - if (idx == (psb->psb_entries-1)) { - /* - * will cause notification, cannot be 0 - */ - bv = mask << PMU_FIRST_COUNTER; + if (old_val > ctx->ctx_soft_pmds[i].val) { - buffer_is_full = 1; + ovfl_pmds |= 1UL << i; - DBprintk((" sampling buffer full must notify bv=0x%lx\n", bv)); + DBprintk(("soft_pmd[%d] overflowed flags=0x%x, ovfl=0x%lx\n", i, ctx->ctx_soft_pmds[i].flags, ovfl_pmds)); - /* - * we do not reload here, when context is blocking - */ - if (!CTX_OVFL_NOBLOCK(ctx)) goto no_reload; - - /* - * here, we have a full buffer but we are in non-blocking mode - * so we need to reload overflowed PMDs with sampling reset values - * and restart right away. - */ + if (PMC_OVFL_NOTIFY(ctx, i)) { + ovfl_notify |= 1UL << i; + } } - /* FALL THROUGH */ } -reload_pmds: - - /* - * in the case of a non-blocking context, we reload - * with the ovfl_rval when no user notification is taking place (short recovery) - * otherwise when the buffer is full which requires user interaction) then we use - * smpl_rval which is the long_recovery path (disturbance introduce by user execution). - * - * XXX: implies that when buffer is full then there is always notification. - */ - ovfl_has_long_recovery = CTX_OVFL_NOBLOCK(ctx) && buffer_is_full; /* - * XXX: CTX_HAS_SMPL() should really be something like CTX_HAS_SMPL() and is activated,i.e., - * one of the PMC is configured for EAR/BTB. + * check for sampling buffer * - * When sampling, we can only notify when the sampling buffer is full. + * if present, record sample. We propagate notification ONLY when buffer + * becomes full. */ - can_notify = CTX_HAS_SMPL(ctx) == 0 && ctx->ctx_notify_task; - - DBprintk((" ovfl_has_long_recovery=%d can_notify=%d\n", ovfl_has_long_recovery, can_notify)); - - for (i = 0, cnum = PMU_FIRST_COUNTER; mask ; cnum++, i++, mask >>= 1) { - - if ((mask & 0x1) == 0) continue; - - DBprintk((" PMD[%ld] overflowed pmd=0x%lx pmod.val=0x%lx\n", cnum, ia64_get_pmd(cnum), ctx->ctx_pmds[i].val)); - - /* - * Because we sometimes (EARS/BTB) reset to a specific value, we cannot simply use - * val to count the number of times we overflowed. Otherwise we would loose the current value - * in the PMD (which can be >0). So to make sure we don't loose - * the residual counts we set val to contain full 64bits value of the counter. 
- * - * XXX: is this needed for EARS/BTB ? - */ - ctx->ctx_pmds[i].val += 1 + pmu_conf.perf_ovfl_val - + (ia64_get_pmd(cnum) & pmu_conf.perf_ovfl_val); /* slow */ - - DBprintk((" pmod[%ld].val=0x%lx pmd=0x%lx\n", i, ctx->ctx_pmds[i].val, ia64_get_pmd(cnum)&pmu_conf.perf_ovfl_val)); - - if (can_notify && PMD_OVFL_NOTIFY(ctx, i)) { - DBprintk((" CPU%d should notify task %p with signal %d\n", my_cpu, ctx->ctx_notify_task, ctx->ctx_notify_sig)); - bv |= 1 << i; - } else { - DBprintk((" CPU%d PMD[%ld] overflow, no notification\n", my_cpu, cnum)); + if(CTX_HAS_SMPL(ctx)) { + ret = pfm_record_sample(task, ctx, ovfl_pmds, regs); + if (ret == 1) { /* - * In case no notification is requested, we reload the reset value right away - * otherwise we wait until the notify_pid process has been called and has - * has finished processing data. Check out pfm_overflow_notify() + * Sampling buffer became full + * If no notication was requested, then we reset buffer index + * and reset registers (done below) and resume. + * If notification requested, then defer reset until pfm_restart() */ - - /* writes to upper part are ignored, so this is safe */ - if (ovfl_has_long_recovery) { - DBprintk((" CPU%d PMD[%ld] reload with smpl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval)); - ia64_set_pmd(cnum, ctx->ctx_pmds[i].smpl_rval); - } else { - DBprintk((" CPU%d PMD[%ld] reload with ovfl_val=%lx\n", my_cpu, cnum,ctx->ctx_pmds[i].smpl_rval)); - ia64_set_pmd(cnum, ctx->ctx_pmds[i].ovfl_rval); + if (ovfl_notify == 0UL) { + ctx->ctx_psb->psb_hdr->hdr_count = 0UL; + ctx->ctx_psb->psb_index = 0UL; } + } else { + /* + * sample recorded in buffer, no need to notify user + */ + ovfl_notify = 0UL; } - if (cnum == ctx->ctx_btb_counter) need_reset_pmd16=1; - } - /* - * In case of BTB overflow we need to reset the BTB index. - */ - if (need_reset_pmd16) { - DBprintk(("reset PMD16\n")); - ia64_set_pmd(16, 0); } -no_reload: - /* - * some counters overflowed, but they did not require - * user notification, so after having reloaded them above - * we simply restart + * No overflow requiring a user level notification */ - if (!bv) return 0x0; + if (ovfl_notify == 0UL) { + pfm_reset_regs(ctx, &ovfl_pmds, PFM_RELOAD_SHORT_RESET); + return 0x0; + } - ctx->ctx_ovfl_regs = bv; /* keep track of what to reset when unblocking */ - /* - * Now we know that: - * - we have some counters which overflowed (contains in bv) - * - someone has asked to be notified on overflow. + /* + * keep track of what to reset when unblocking */ + ctx->ctx_ovfl_regs[0] = ovfl_pmds; - /* - * If the notification task is still present, then notify_task is non - * null. It is clean by that task if it ever exits before we do. + * we have come to this point because there was an overflow and that notification + * was requested. The notify_task may have disappeared, in which case notify_task + * is NULL. */ - if (ctx->ctx_notify_task) { si.si_errno = 0; si.si_addr = NULL; si.si_pid = task->pid; /* who is sending */ - si.si_signo = ctx->ctx_notify_sig; /* is SIGPROF */ - si.si_code = PROF_OVFL; /* goes to user */ - si.si_pfm_ovfl = bv; - - + si.si_signo = SIGPROF; + si.si_code = PROF_OVFL; /* indicates a perfmon SIGPROF signal */ + /* + * Shift the bitvector such that the user sees bit 4 for PMD4 and so on. + * We only use smpl_ovfl[0] for now. It should be fine for quite a while + * until we have more than 61 PMD available. 
+ */ + si.si_pfm_ovfl[0] = ovfl_notify; /* * when the target of the signal is not ourself, we have to be more @@ -1626,15 +2559,29 @@ if (ctx->ctx_notify_task != current) { /* * grab the notification lock for this task + * This guarantees that the sequence: test + send_signal + * is atomic with regards to the ctx_notify_task field. + * + * We need a spinlock and not just an atomic variable for this. + * */ - spin_lock(&ctx->ctx_notify_lock); + spin_lock(&ctx->ctx_lock); /* * now notify_task cannot be modified until we're done * if NULL, they it got modified while we were in the handler */ if (ctx->ctx_notify_task == NULL) { - spin_unlock(&ctx->ctx_notify_lock); + + spin_unlock(&ctx->ctx_lock); + + /* + * If we've lost the notified task, then we will run + * to completion wbut keep the PMU frozen. Results + * will be incorrect anyway. We do not kill task + * to leave it possible to attach perfmon context + * to already running task. + */ goto lost_notify; } /* @@ -1648,20 +2595,23 @@ * necessarily go to the signal handler (if any) when it goes back to * user mode. */ - DBprintk((" %d sending %d notification to %d\n", task->pid, si.si_signo, ctx->ctx_notify_task->pid)); + DBprintk(("[%d] sending notification to [%d]\n", + task->pid, ctx->ctx_notify_task->pid)); /* * this call is safe in an interrupt handler, so does read_lock() on tasklist_lock */ - ret = send_sig_info(ctx->ctx_notify_sig, &si, ctx->ctx_notify_task); - if (ret != 0) printk(" send_sig_info(process %d, SIGPROF)=%d\n", ctx->ctx_notify_task->pid, ret); + ret = send_sig_info(SIGPROF, &si, ctx->ctx_notify_task); + if (ret != 0) + printk("send_sig_info(process %d, SIGPROF)=%d\n", + ctx->ctx_notify_task->pid, ret); /* * now undo the protections in order */ if (ctx->ctx_notify_task != current) { read_unlock(&tasklist_lock); - spin_unlock(&ctx->ctx_notify_lock); + spin_unlock(&ctx->ctx_lock); } /* @@ -1678,35 +2628,41 @@ * before, changing it to NULL will still maintain this invariant. * Of course, when it is equal to current it cannot change at this point. */ - if (!CTX_OVFL_NOBLOCK(ctx) && ctx->ctx_notify_task != current) { - th->pfm_must_block = 1; /* will cause blocking */ + DBprintk(("block=%d notify [%d] current [%d]\n", + ctx->ctx_fl_block, + ctx->ctx_notify_task ? ctx->ctx_notify_task->pid: -1, + current->pid )); + + if (!CTX_OVFL_NOBLOCK(ctx) && ctx->ctx_notify_task != task) { + t->pfm_ovfl_block_reset = 1; /* will cause blocking */ } } else { -lost_notify: - DBprintk((" notification task has disappeared !\n")); +lost_notify: /* XXX: more to do here, to convert to non-blocking (reset values) */ + + DBprintk(("notification task has disappeared !\n")); /* - * for a non-blocking context, we make sure we do not fall into the pfm_overflow_notify() - * trap. Also in the case of a blocking context with lost notify process, then we do not - * want to block either (even though it is interruptible). In this case, the PMU will be kept - * frozen and the process will run to completion without monitoring enabled. + * for a non-blocking context, we make sure we do not fall into the + * pfm_overflow_notify() trap. Also in the case of a blocking context with lost + * notify process, then we do not want to block either (even though it is + * interruptible). In this case, the PMU will be kept frozen and the process will + * run to completion without monitoring enabled. * * Of course, we cannot loose notify process when self-monitoring. */ - th->pfm_must_block = 0; + t->pfm_ovfl_block_reset = 0; } /* - * if we block, we keep the PMU frozen. 
If non-blocking we restart. - * in the case of non-blocking were the notify process is lost, we also - * restart. + * If notification was successful, then we rely on the pfm_restart() + * call to unfreeze and reset (in both blocking or non-blocking mode). + * + * If notification failed, then we will keep the PMU frozen and run + * the task to completion */ - if (!CTX_OVFL_NOBLOCK(ctx)) - ctx->ctx_fl_frozen = 1; - else - ctx->ctx_fl_frozen = 0; + ctx->ctx_fl_frozen = 1; - DBprintk((" reload pmc0=0x%x must_block=%ld\n", - ctx->ctx_fl_frozen ? 0x1 : 0x0, th->pfm_must_block)); + DBprintk(("reload pmc0=0x%x must_block=%ld\n", + ctx->ctx_fl_frozen ? 0x1 : 0x0, t->pfm_ovfl_block_reset)); return ctx->ctx_fl_frozen ? 0x1 : 0x0; } @@ -1715,29 +2671,40 @@ perfmon_interrupt (int irq, void *arg, struct pt_regs *regs) { u64 pmc0; - struct task_struct *ta; + struct task_struct *task; - pmc0 = ia64_get_pmc(0); /* slow */ + pfm_ovfl_intr_count++; + + /* + * srlz.d done before arriving here + * + * This is slow + */ + pmc0 = ia64_get_pmc(0); /* * if we have some pending bits set * assumes : if any PM[0].bit[63-1] is set, then PMC[0].fr = 1 */ - if ((pmc0 & ~0x1) && (ta=PMU_OWNER())) { + if ((pmc0 & ~0x1UL)!=0UL && (task=PMU_OWNER())!= NULL) { - /* assumes, PMC[0].fr = 1 at this point */ - pmc0 = update_counters(ta, pmc0, regs); - - /* - * if pmu_frozen = 0 - * pmc0 = 0 and we resume monitoring right away - * else - * pmc0 = 0x1 frozen but all pending bits are cleared + /* + * assumes, PMC[0].fr = 1 at this point + * + * XXX: change protype to pass &pmc0 */ - ia64_set_pmc(0, pmc0); - ia64_srlz_d(); + pmc0 = pfm_overflow_handler(task, pmc0, regs); + + /* we never explicitely freeze PMU here */ + if (pmc0 == 0) { + ia64_set_pmc(0, 0); + ia64_srlz_d(); + } } else { - printk("perfmon: Spurious PMU overflow interrupt: pmc0=0x%lx owner=%p\n", pmc0, (void *)PMU_OWNER()); + pfm_spurious_ovfl_intr_count++; + + DBprintk(("perfmon: Spurious PMU overflow interrupt on CPU%d: pmc0=0x%lx owner=%p\n", + smp_processor_id(), pmc0, (void *)PMU_OWNER())); } } @@ -1745,14 +2712,37 @@ static int perfmon_proc_info(char *page) { +#ifdef CONFIG_SMP +#define cpu_is_online(i) (cpu_online_map & (1UL << i)) +#else +#define cpu_is_online(i) 1 +#endif char *p = page; u64 pmc0 = ia64_get_pmc(0); int i; - p += sprintf(p, "CPU%d.pmc[0]=%lx\nPerfmon debug: %s\n", smp_processor_id(), pmc0, pfm_debug ? "On" : "Off"); - p += sprintf(p, "proc_sessions=%lu sys_sessions=%lu\n", - pfs_info.pfs_proc_sessions, - pfs_info.pfs_sys_session); + p += sprintf(p, "perfmon enabled: %s\n", pmu_conf.pfm_is_disabled ? "No": "Yes"); + + p += sprintf(p, "monitors_pmcs0]=0x%lx\n", pmu_conf.monitor_pmcs[0]); + p += sprintf(p, "counter_pmcds[0]=0x%lx\n", pmu_conf.counter_pmds[0]); + p += sprintf(p, "overflow interrupts=%lu\n", pfm_ovfl_intr_count); + p += sprintf(p, "spurious overflow interrupts=%lu\n", pfm_spurious_ovfl_intr_count); + p += sprintf(p, "recorded samples=%lu\n", pfm_recorded_samples_count); + + p += sprintf(p, "CPU%d.pmc[0]=%lx\nPerfmon debug: %s\n", + smp_processor_id(), pmc0, pfm_debug ? 
"On" : "Off"); + + p += sprintf(p, "CPU%d cpu_data.pfm_syst_wide=%d cpu_data.dcr_pp=%d\n", + smp_processor_id(), local_cpu_data->pfm_syst_wide, local_cpu_data->pfm_dcr_pp); + + LOCK_PFS(); + p += sprintf(p, "proc_sessions=%lu\nsys_sessions=%lu\nsys_use_dbregs=%lu\nptrace_use_dbregs=%lu\n", + pfm_sessions.pfs_task_sessions, + pfm_sessions.pfs_sys_sessions, + pfm_sessions.pfs_sys_use_dbregs, + pfm_sessions.pfs_ptrace_use_dbregs); + + UNLOCK_PFS(); for(i=0; i < NR_CPUS; i++) { if (cpu_is_online(i)) { @@ -1761,10 +2751,11 @@ pmu_owners[i].owner ? pmu_owners[i].owner->pid: -1); } } + return p - page; } -/* for debug only */ +/* /proc interface, for debug only */ static int perfmon_read_entry(char *page, char **start, off_t off, int count, int *eof, void *data) { @@ -1781,153 +2772,87 @@ return len; } -static struct irqaction perfmon_irqaction = { - handler: perfmon_interrupt, - flags: SA_INTERRUPT, - name: "perfmon" -}; - -void __init -perfmon_init (void) +void +pfm_syst_wide_update_task(struct task_struct *task, int mode) { - pal_perf_mon_info_u_t pm_info; - s64 status; + struct pt_regs *regs = (struct pt_regs *)((unsigned long) task + IA64_STK_OFFSET); - register_percpu_irq(IA64_PERFMON_VECTOR, &perfmon_irqaction); + regs--; - ia64_set_pmv(IA64_PERFMON_VECTOR); - ia64_srlz_d(); - - pmu_conf.pfm_is_disabled = 1; - - printk("perfmon: version %s (sampling format v%d)\n", PFM_VERSION, PFM_SMPL_HDR_VERSION); - printk("perfmon: Interrupt vectored to %u\n", IA64_PERFMON_VECTOR); + /* + * propagate the value of the dcr_pp bit to the psr + */ + ia64_psr(regs)->pp = mode ? local_cpu_data->pfm_dcr_pp : 0; +} - if ((status=ia64_pal_perf_mon_info(pmu_conf.impl_regs, &pm_info)) != 0) { - printk("perfmon: PAL call failed (%ld)\n", status); - return; - } - pmu_conf.perf_ovfl_val = (1L << pm_info.pal_perf_mon_info_s.width) - 1; - pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic; - pmu_conf.num_pmcs = find_num_pm_regs(pmu_conf.impl_regs); - pmu_conf.num_pmds = find_num_pm_regs(&pmu_conf.impl_regs[4]); +void +pfm_save_regs (struct task_struct *task) +{ + pfm_context_t *ctx; + u64 psr; - printk("perfmon: %d bits counters (max value 0x%lx)\n", pm_info.pal_perf_mon_info_s.width, pmu_conf.perf_ovfl_val); - printk("perfmon: %ld PMC/PMD pairs, %ld PMCs, %ld PMDs\n", pmu_conf.max_counters, pmu_conf.num_pmcs, pmu_conf.num_pmds); + ctx = task->thread.pfm_context; - /* sanity check */ - if (pmu_conf.num_pmds >= IA64_NUM_PMD_REGS || pmu_conf.num_pmcs >= IA64_NUM_PMC_REGS) { - printk(KERN_ERR "perfmon: ERROR not enough PMC/PMD storage in kernel, perfmon is DISABLED\n"); - return; /* no need to continue anyway */ - } - /* we are all set */ - pmu_conf.pfm_is_disabled = 0; /* - * Insert the tasklet in the list. - * It is still disabled at this point, so it won't run - printk(__FUNCTION__" tasklet is %p state=%d, count=%d\n", &perfmon_tasklet, perfmon_tasklet.state, perfmon_tasklet.count); + * save current PSR: needed because we modify it */ + __asm__ __volatile__ ("mov %0=psr;;": "=r"(psr) :: "memory"); /* - * for now here for debug purposes + * stop monitoring: + * This is the last instruction which can generate an overflow + * + * We do not need to set psr.sp because, it is irrelevant in kernel. 
+ * It will be restored from ipsr when going back to user level */ - perfmon_dir = create_proc_read_entry ("perfmon", 0, 0, perfmon_read_entry, NULL); -} + __asm__ __volatile__ ("rum psr.up;;"::: "memory"); + + ctx->ctx_saved_psr = psr; + + //ctx->ctx_last_cpu = smp_processor_id(); -void -perfmon_init_percpu (void) -{ - ia64_set_pmv(IA64_PERFMON_VECTOR); - ia64_srlz_d(); } -void -pfm_save_regs (struct task_struct *ta) +static void +pfm_lazy_save_regs (struct task_struct *task) { - struct task_struct *owner; pfm_context_t *ctx; struct thread_struct *t; - u64 pmc0, psr; unsigned long mask; int i; - t = &ta->thread; - ctx = ta->thread.pfm_context; + DBprintk(("on [%d] by [%d]\n", task->pid, current->pid)); - /* - * We must make sure that we don't loose any potential overflow - * interrupt while saving PMU context. In this code, external - * interrupts are always enabled. - */ + t = &task->thread; + ctx = task->thread.pfm_context; - /* - * save current PSR: needed because we modify it +#ifdef CONFIG_SMP + /* + * announce we are saving this PMU state + * This will cause other CPU, to wait until we're done + * before using the context.h + * + * must be an atomic operation */ - __asm__ __volatile__ ("mov %0=psr;;": "=r"(psr) :: "memory"); + atomic_set(&ctx->ctx_saving_in_progress, 1); - /* - * stop monitoring: - * This is the only way to stop monitoring without destroying overflow - * information in PMC[0]. - * This is the last instruction which can cause overflow when monitoring - * in kernel. - * By now, we could still have an overflow interrupt in-flight. - */ - __asm__ __volatile__ ("rsm psr.up|psr.pp;;"::: "memory"); + /* + * if owner is NULL, it means that the other CPU won the race + * and the IPI has caused the context to be saved in pfm_handle_fectch_regs() + * instead of here. We have nothing to do + * + * note that this is safe, because the other CPU NEVER modifies saving_in_progress. + */ + if (PMU_OWNER() == NULL) goto do_nothing; +#endif /* - * Mark the PMU as not owned - * This will cause the interrupt handler to do nothing in case an overflow - * interrupt was in-flight - * This also guarantees that pmc0 will contain the final state - * It virtually gives us full control over overflow processing from that point - * on. - * It must be an atomic operation. + * do not own the PMU */ - owner = PMU_OWNER(); SET_PMU_OWNER(NULL); - /* - * read current overflow status: - * - * we are guaranteed to read the final stable state - */ ia64_srlz_d(); - pmc0 = ia64_get_pmc(0); /* slow */ - - /* - * freeze PMU: - * - * This destroys the overflow information. This is required to make sure - * next process does not start with monitoring on if not requested - */ - ia64_set_pmc(0, 1); - - /* - * Check for overflow bits and proceed manually if needed - * - * It is safe to call the interrupt handler now because it does - * not try to block the task right away. Instead it will set a - * flag and let the task proceed. The blocking will only occur - * next time the task exits from the kernel. - */ - if (pmc0 & ~0x1) { - update_counters(owner, pmc0, NULL); - /* we will save the updated version of pmc0 */ - } - /* - * restore PSR for context switch to save - */ - __asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(psr): "memory"); - - /* - * we do not save registers if we can do lazy - */ - if (PFM_CAN_DO_LAZY()) { - SET_PMU_OWNER(owner); - return; - } /* * XXX needs further optimization. 
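A minimal sketch of the announce-then-save ordering used by the lazy save path above, written with C11 atomics instead of the kernel's atomic_t; owner_is_null() and save_registers() are hypothetical stand-ins for PMU_OWNER() and the register copy loops:

        #include <stdatomic.h>
        #include <stdbool.h>

        static void lazy_save(atomic_int *saving_in_progress,
                              bool (*owner_is_null)(void),
                              void (*save_registers)(void))
        {
                /* announce the save first, so a remote CPU will wait for us */
                atomic_store(saving_in_progress, 1);

                /* if ownership was already taken away, the state was saved elsewhere */
                if (!owner_is_null())
                        save_registers();

                /* declare we are done, releasing any remote waiter */
                atomic_store(saving_in_progress, 0);
        }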
@@ -1937,30 +2862,73 @@ for (i=0; mask; i++, mask>>=1) { if (mask & 0x1) t->pmd[i] =ia64_get_pmd(i); } - - /* skip PMC[0], we handle it separately */ - mask = ctx->ctx_used_pmcs[0]>>1; - for (i=1; mask; i++, mask>>=1) { + /* + * XXX: simplify to pmc0 only + */ + mask = ctx->ctx_saved_pmcs[0]; + for (i=0; mask; i++, mask>>=1) { if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i); } + + /* not owned by this CPU */ + atomic_set(&ctx->ctx_last_cpu, -1); + +do_nothing: /* - * Throughout this code we could have gotten an overflow interrupt. It is transformed - * into a spurious interrupt as soon as we give up pmu ownership. + * declare we are done saving this context + * + * must be an atomic operation */ + atomic_set(&ctx->ctx_saving_in_progress,0); + } -static void -pfm_lazy_save_regs (struct task_struct *ta) +#ifdef CONFIG_SMP +/* + * Handles request coming from other CPUs + */ +static void +pfm_handle_fetch_regs(void *info) { - pfm_context_t *ctx; + pfm_smp_ipi_arg_t *arg = info; struct thread_struct *t; + pfm_context_t *ctx; unsigned long mask; int i; - DBprintk((" on [%d] by [%d]\n", ta->pid, current->pid)); + ctx = arg->task->thread.pfm_context; + t = &arg->task->thread; + + DBprintk(("task=%d owner=%d saving=%d\n", + arg->task->pid, + PMU_OWNER() ? PMU_OWNER()->pid: -1, + atomic_read(&ctx->ctx_saving_in_progress))); + + /* must wait if saving was interrupted */ + if (atomic_read(&ctx->ctx_saving_in_progress)) { + arg->retval = 1; + return; + } + + /* can proceed, done with context */ + if (PMU_OWNER() != arg->task) { + arg->retval = 0; + return; + } + + DBprintk(("saving state for [%d] save_pmcs=0x%lx all_pmcs=0x%lx used_pmds=0x%lx\n", + arg->task->pid, + ctx->ctx_saved_pmcs[0], + ctx->ctx_reload_pmcs[0], + ctx->ctx_used_pmds[0])); + + /* + * XXX: will be replaced with pure assembly call + */ + SET_PMU_OWNER(NULL); + + ia64_srlz_d(); - t = &ta->thread; - ctx = ta->thread.pfm_context; /* * XXX needs further optimization. * Also must take holes into account @@ -1970,84 +2938,338 @@ if (mask & 0x1) t->pmd[i] =ia64_get_pmd(i); } - /* skip PMC[0], we handle it separately */ - mask = ctx->ctx_used_pmcs[0]>>1; - for (i=1; mask; i++, mask>>=1) { + mask = ctx->ctx_saved_pmcs[0]; + for (i=0; mask; i++, mask>>=1) { if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i); } - SET_PMU_OWNER(NULL); + /* not owned by this CPU */ + atomic_set(&ctx->ctx_last_cpu, -1); + + /* can proceed */ + arg->retval = 0; } +/* + * Function call to fetch PMU state from another CPU identified by 'cpu'. + * If the context is being saved on the remote CPU, then we busy wait until + * the saving is done and then we return. In this case, non IPI is sent. + * Otherwise, we send an IPI to the remote CPU, potentially interrupting + * pfm_lazy_save_regs() over there. + * + * If the retval==1, then it means that we interrupted remote save and that we must + * wait until the saving is over before proceeding. + * Otherwise, we did the saving on the remote CPU, and it was done by the time we got there. + * in either case, we can proceed. 
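The register save loops in this area all follow the same bitmask walk; a generic sketch of the idiom (not patch code), with read_reg() standing in for ia64_get_pmd()/ia64_get_pmc():

        /* copy only the registers whose bit is set in 'mask' into 'dst' */
        static void save_masked_regs(unsigned long mask, unsigned long *dst,
                                     unsigned long (*read_reg)(int))
        {
                int i;

                for (i = 0; mask; i++, mask >>= 1) {
                        if (mask & 0x1)
                                dst[i] = read_reg(i);
                }
        }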
+ */ +static void +pfm_fetch_regs(int cpu, struct task_struct *task, pfm_context_t *ctx) +{ + pfm_smp_ipi_arg_t arg; + int ret; + + arg.task = task; + arg.retval = -1; + + if (atomic_read(&ctx->ctx_saving_in_progress)) { + DBprintk(("no IPI, must wait for [%d] to be saved on [%d]\n", task->pid, cpu)); + + /* busy wait */ + while (atomic_read(&ctx->ctx_saving_in_progress)); + return; + } + DBprintk(("calling CPU %d from CPU %d\n", cpu, smp_processor_id())); + + if (cpu == -1) { + printk("refusing to use -1 for [%d]\n", task->pid); + return; + } + + /* will send IPI to other CPU and wait for completion of remote call */ + if ((ret=smp_call_function_single(cpu, pfm_handle_fetch_regs, &arg, 0, 1))) { + printk("perfmon: remote CPU call from %d to %d error %d\n", smp_processor_id(), cpu, ret); + return; + } + /* + * we must wait until saving is over on the other CPU + * This is the case, where we interrupted the saving which started just at the time we sent the + * IPI. + */ + if (arg.retval == 1) { + DBprintk(("must wait for [%d] to be saved on [%d]\n", task->pid, cpu)); + while (atomic_read(&ctx->ctx_saving_in_progress)); + DBprintk(("done saving for [%d] on [%d]\n", task->pid, cpu)); + } +} +#endif /* CONFIG_SMP */ + void -pfm_load_regs (struct task_struct *ta) +pfm_load_regs (struct task_struct *task) { - struct thread_struct *t = &ta->thread; - pfm_context_t *ctx = ta->thread.pfm_context; + struct thread_struct *t; + pfm_context_t *ctx; struct task_struct *owner; unsigned long mask; - int i; + u64 psr; + int i, cpu; owner = PMU_OWNER(); - if (owner == ta) goto skip_restore; + ctx = task->thread.pfm_context; + + /* + * if we were the last user, then nothing to do except restore psr + */ + if (owner == task) { + if (atomic_read(&ctx->ctx_last_cpu) != smp_processor_id()) + DBprintk(("invalid last_cpu=%d for [%d]\n", + atomic_read(&ctx->ctx_last_cpu), task->pid)); + + psr = ctx->ctx_saved_psr; + __asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(psr): "memory"); + + return; + } + DBprintk(("load_regs: must reload for [%d] owner=%d\n", + task->pid, owner ? owner->pid : -1 )); + /* + * someone else is still using the PMU, first push it out and + * then we'll be able to install our stuff ! + */ if (owner) pfm_lazy_save_regs(owner); - SET_PMU_OWNER(ta); +#ifdef CONFIG_SMP + /* + * check if context on another CPU (-1 means saved) + * We MUST use the variable, as last_cpu may change behind our + * back. If it changes to -1 (not on a CPU anymore), then in cpu + * we have the last CPU the context was on. We may be sending the + * IPI for nothing, but we have no way of verifying this. 
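A condensed sketch of the remote-fetch decision logic described above, using C11 atomics and a hypothetical run_on_remote_cpu() in place of smp_call_function_single(); a true return here stands for the retval==1 case, i.e. a save was already in progress on the remote CPU:

        #include <stdatomic.h>
        #include <stdbool.h>

        static void fetch_remote_state(atomic_int *saving_in_progress,
                                       bool (*run_on_remote_cpu)(void))
        {
                /* the remote CPU is already saving: send no IPI, just wait for it */
                if (atomic_load(saving_in_progress)) {
                        while (atomic_load(saving_in_progress))
                                ;       /* busy wait */
                        return;
                }

                /*
                 * otherwise ask the remote CPU to save; a true return means we
                 * interrupted a save that had just started, so wait for it to end
                 */
                if (run_on_remote_cpu()) {
                        while (atomic_load(saving_in_progress))
                                ;       /* busy wait */
                }
        }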
+ */ + cpu = atomic_read(&ctx->ctx_last_cpu); + if (cpu != -1) { + pfm_fetch_regs(cpu, task, ctx); + } +#endif + t = &task->thread; + /* + * XXX: will be replaced by assembly routine + * We clear all unused PMDs to avoid leaking information + */ mask = ctx->ctx_used_pmds[0]; for (i=0; mask; i++, mask>>=1) { - if (mask & 0x1) ia64_set_pmd(i, t->pmd[i]); + if (mask & 0x1) + ia64_set_pmd(i, t->pmd[i]); + else + ia64_set_pmd(i, 0UL); } + /* XXX: will need to clear all unused pmd, for security */ - /* skip PMC[0] to avoid side effects */ - mask = ctx->ctx_used_pmcs[0]>>1; + /* + * skip pmc[0] to avoid side-effects, + * all PMCs are systematically reloaded, unsued get default value + * to avoid picking up stale configuration + */ + mask = ctx->ctx_reload_pmcs[0]>>1; for (i=1; mask; i++, mask>>=1) { if (mask & 0x1) ia64_set_pmc(i, t->pmc[i]); } -skip_restore: + /* - * unfreeze only when possible + * restore debug registers when used for range restrictions. + * We must restore the unused registers to avoid picking up + * stale information. + */ + mask = ctx->ctx_used_ibrs[0]; + for (i=0; mask; i++, mask>>=1) { + if (mask & 0x1) + ia64_set_ibr(i, t->ibr[i]); + else + ia64_set_ibr(i, 0UL); + } + + mask = ctx->ctx_used_dbrs[0]; + for (i=0; mask; i++, mask>>=1) { + if (mask & 0x1) + ia64_set_dbr(i, t->dbr[i]); + else + ia64_set_dbr(i, 0UL); + } + + if (t->pmc[0] & ~0x1) { + ia64_srlz_d(); + pfm_overflow_handler(task, t->pmc[0], NULL); + } + + /* + * fl_frozen==1 when we are in blocking mode waiting for restart */ if (ctx->ctx_fl_frozen == 0) { ia64_set_pmc(0, 0); ia64_srlz_d(); - /* place where we potentially (kernel level) start monitoring again */ } + atomic_set(&ctx->ctx_last_cpu, smp_processor_id()); + + SET_PMU_OWNER(task); + + /* + * restore the psr we changed in pfm_save_regs() + */ + psr = ctx->ctx_saved_psr; + __asm__ __volatile__ ("mov psr.l=%0;; srlz.i;;"::"r"(psr): "memory"); + +} + +static void +pfm_model_specific_reset_pmu(struct task_struct *task) +{ + int i; + +#ifdef CONFIG_ITANIUM + /* opcode matcher set to all 1s */ + ia64_set_pmc(8,~0UL); + ia64_set_pmc(9,~0UL); + + /* I-EAR config cleared, plm=0 */ + ia64_set_pmc(10,0UL); + + /* D-EAR config cleared, PMC[11].pt must be 1 */ + ia64_set_pmc(11,1UL << 28); + + /* BTB config. plm=0 */ + ia64_set_pmc(12,0UL); + + /* Instruction address range, PMC[13].ta must be 1 */ + ia64_set_pmc(13,1UL); + + /* + * Clear all PMDs + * + * XXX: may be good enough to rely on the impl_regs to generalize + * this. 
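The reload loops above share one pattern: registers marked used are restored from the saved thread state, and everything else below the highest used bit is wiped so the incoming task cannot pick up stale values. A generic sketch (not patch code; write_reg() stands in for ia64_set_pmd()/ia64_set_ibr()/ia64_set_dbr()):

        static void reload_or_clear(unsigned long used_mask,
                                    const unsigned long *saved,
                                    void (*write_reg)(int, unsigned long))
        {
                int i;

                for (i = 0; used_mask; i++, used_mask >>= 1) {
                        if (used_mask & 0x1)
                                write_reg(i, saved[i]);  /* restore used register */
                        else
                                write_reg(i, 0UL);       /* clear to avoid leaking state */
                }
        }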
+ */ + for(i = 0; i< 18 ; i++) { + ia64_set_pmd(i,0UL); + } +#endif } +/* + * XXX: this routine is not very portable for PMCs + * XXX: make this routine able to work with non current context + */ +static void +ia64_reset_pmu(struct task_struct *task) +{ + pfm_context_t *ctx = task->thread.pfm_context; + struct thread_struct *t = &task->thread; + unsigned long mask; + int i; + + if (task != current) { + printk("perfmon: invalid task in ia64_reset_pmu()\n"); + return; + } + + /* PMU is frozen, no pending overflow bits */ + ia64_set_pmc(0,1); + + /* + * Let's first do the architected initializations + */ + + /* clear counters */ + ia64_set_pmd(4,0UL); + ia64_set_pmd(5,0UL); + ia64_set_pmd(6,0UL); + ia64_set_pmd(7,0UL); + + /* clear overflow status bits */ + ia64_set_pmc(1,0UL); + ia64_set_pmc(2,0UL); + ia64_set_pmc(3,0UL); + + /* clear counting monitor configuration */ + ia64_set_pmc(4,0UL); + ia64_set_pmc(5,0UL); + ia64_set_pmc(6,0UL); + ia64_set_pmc(7,0UL); + + /* + * Now let's do the CPU model specific initializations + */ + pfm_model_specific_reset_pmu(task); + + /* + * On context switched restore, we must restore ALL pmc even + * when they are not actively used by the task. In UP, the incoming process + * may otherwise pick up left over PMC state from the previous process. + * As opposed to PMD, stale PMC can cause harm to the incoming + * process because they may change what is being measured. + * Therefore, we must systematically reinstall the entire + * PMC state. In SMP, the same thing is possible on the + * same CPU but also on between 2 CPUs. + * + * There is unfortunately no easy way to avoid this problem + * on either UP or SMP. This definitively slows down the + * pfm_load_regs(). + */ + + /* + * We must include all the PMC in this mask to make sure we don't + * see any side effect of the stale state, such as opcode matching + * or range restrictions, for instance. + */ + ctx->ctx_reload_pmcs[0] = pmu_conf.impl_regs[0]; + + /* + * make sure we pick up whatever values were installed + * for the CPU model specific reset. We also include + * the architected PMC (pmc4-pmc7) + * + * This step is required in order to restore the correct values in PMC when + * the task is switched out and back in just after the PFM_ENABLE. + */ + mask = pmu_conf.impl_regs[0]; + for (i=0; mask; i++, mask>>=1) { + if (mask & 0x1) t->pmc[i] = ia64_get_pmc(i); + } + + /* + * useful in case of re-enable after disable + */ + ctx->ctx_used_pmds[0] = 0UL; + ctx->ctx_used_ibrs[0] = 0UL; + ctx->ctx_used_dbrs[0] = 0UL; + + ia64_srlz_d(); +} /* * This function is called when a thread exits (from exit_thread()). * This is a simplified pfm_save_regs() that simply flushes the current * register state into the save area taking into account any pending - * overflow. This time no notification is sent because the taks is dying + * overflow. This time no notification is sent because the task is dying * anyway. The inline processing of overflows avoids loosing some counts. * The PMU is frozen on exit from this call and is to never be reenabled * again for this task. 
+ * */ void -pfm_flush_regs (struct task_struct *ta) +pfm_flush_regs (struct task_struct *task) { pfm_context_t *ctx; - u64 pmc0, psr, mask; - int i,j; + u64 pmc0; + unsigned long mask, mask2, val; + int i; - if (ta == NULL) { - panic(__FUNCTION__" task is NULL\n"); - } - ctx = ta->thread.pfm_context; - if (ctx == NULL) { - panic(__FUNCTION__" no PFM ctx is NULL\n"); - } - /* - * We must make sure that we don't loose any potential overflow - * interrupt while saving PMU context. In this code, external - * interrupts are always enabled. - */ + ctx = task->thread.pfm_context; - /* - * save current PSR: needed because we modify it + if (ctx == NULL) return; + + /* + * that's it if context already disabled */ - __asm__ __volatile__ ("mov %0=psr;;": "=r"(psr) :: "memory"); + if (ctx->ctx_flags.state == PFM_CTX_DISABLED) return; /* * stop monitoring: @@ -2057,7 +3279,23 @@ * in kernel. * By now, we could still have an overflow interrupt in-flight. */ - __asm__ __volatile__ ("rsm psr.up;;"::: "memory"); + if (ctx->ctx_fl_system) { + /* disable dcr pp */ + ia64_set_dcr(ia64_get_dcr() & ~IA64_DCR_PP); + + local_cpu_data->pfm_syst_wide = 0; + local_cpu_data->pfm_dcr_pp = 0; + + + __asm__ __volatile__ ("rsm psr.pp;;"::: "memory"); + + } else { + + __asm__ __volatile__ ("rum psr.up;;"::: "memory"); + + /* no more save/restore on ctxsw */ + current->thread.flags &= ~IA64_THREAD_PM_VALID; + } /* * Mark the PMU as not owned @@ -2088,85 +3326,68 @@ ia64_srlz_d(); /* - * restore PSR for context switch to save + * We don't need to restore psr, because we are on our way out anyway */ - __asm__ __volatile__ ("mov psr.l=%0;;srlz.i;"::"r"(psr): "memory"); /* * This loop flushes the PMD into the PFM context. - * IT also processes overflow inline. + * It also processes overflow inline. * * IMPORTANT: No notification is sent at this point as the process is dying. * The implicit notification will come from a SIGCHILD or a return from a * waitpid(). 
* - * XXX: must take holes into account */ - mask = pmc0 >> PMU_FIRST_COUNTER; - for (i=0,j=PMU_FIRST_COUNTER; i< pmu_conf.max_counters; i++,j++) { - /* collect latest results */ - ctx->ctx_pmds[i].val += ia64_get_pmd(j) & pmu_conf.perf_ovfl_val; - - /* - * now everything is in ctx_pmds[] and we need - * to clear the saved context from save_regs() such that - * pfm_read_pmds() gets the correct value - */ - ta->thread.pmd[j] = 0; - - /* take care of overflow inline */ - if (mask & 0x1) { - ctx->ctx_pmds[i].val += 1 + pmu_conf.perf_ovfl_val; - DBprintk((" PMD[%d] overflowed pmd=0x%lx pmds.val=0x%lx\n", - j, ia64_get_pmd(j), ctx->ctx_pmds[i].val)); - } - mask >>=1; - } -} + if (atomic_read(&ctx->ctx_last_cpu) != smp_processor_id()) + printk("perfmon: [%d] last_cpu=%d\n", task->pid, atomic_read(&ctx->ctx_last_cpu)); -/* - * XXX: this routine is not very portable for PMCs - * XXX: make this routine able to work with non current context - */ -static void -ia64_reset_pmu(void) -{ - int i; + mask = pmc0 >> PMU_FIRST_COUNTER; + mask2 = ctx->ctx_used_pmds[0] >> PMU_FIRST_COUNTER; - /* PMU is frozen, no pending overflow bits */ - ia64_set_pmc(0,1); + for (i = PMU_FIRST_COUNTER; mask2; i++, mask>>=1, mask2>>=1) { - /* extra overflow bits + counter configs cleared */ - for(i=1; i< PMU_FIRST_COUNTER + pmu_conf.max_counters ; i++) { - ia64_set_pmc(i,0); - } + /* skip non used pmds */ + if ((mask2 & 0x1) == 0) continue; - /* opcode matcher set to all 1s */ - ia64_set_pmc(8,~0); - ia64_set_pmc(9,~0); + val = ia64_get_pmd(i); - /* I-EAR config cleared, plm=0 */ - ia64_set_pmc(10,0); + if (PMD_IS_COUNTING(i)) { - /* D-EAR config cleared, PMC[11].pt must be 1 */ - ia64_set_pmc(11,1 << 28); + DBprintk(("[%d] pmd[%d] soft_pmd=0x%lx hw_pmd=0x%lx\n", task->pid, i, ctx->ctx_soft_pmds[i].val, val & pmu_conf.perf_ovfl_val)); - /* BTB config. 
plm=0 */ - ia64_set_pmc(12,0); + /* collect latest results */ + ctx->ctx_soft_pmds[i].val += val & pmu_conf.perf_ovfl_val; - /* Instruction address range, PMC[13].ta must be 1 */ - ia64_set_pmc(13,1); + /* + * now everything is in ctx_soft_pmds[] and we need + * to clear the saved context from save_regs() such that + * pfm_read_pmds() gets the correct value + */ + task->thread.pmd[i] = 0; - /* clears all PMD registers */ - for(i=0;i< pmu_conf.num_pmds; i++) { - if (PMD_IS_IMPL(i)) ia64_set_pmd(i,0); + /* take care of overflow inline */ + if (mask & 0x1) { + ctx->ctx_soft_pmds[i].val += 1 + pmu_conf.perf_ovfl_val; + DBprintk(("[%d] pmd[%d] overflowed soft_pmd=0x%lx\n", + task->pid, i, ctx->ctx_soft_pmds[i].val)); + } + } else { + DBprintk(("[%d] pmd[%d] hw_pmd=0x%lx\n", task->pid, i, val)); + /* not a counter, just save value as is */ + task->thread.pmd[i] = val; + } } - ia64_srlz_d(); + /* + * indicates that context has been saved + */ + atomic_set(&ctx->ctx_last_cpu, -1); + } + /* - * task is the newly created task + * task is the newly created task, pt_regs for new child */ int pfm_inherit(struct task_struct *task, struct pt_regs *regs) @@ -2174,25 +3395,29 @@ pfm_context_t *ctx = current->thread.pfm_context; pfm_context_t *nctx; struct thread_struct *th = &task->thread; - int i, cnum; + unsigned long m; + int i; /* - * bypass completely for system wide + * make sure child cannot mess up the monitoring session */ - if (pfs_info.pfs_sys_session) { - DBprintk((" enabling psr.pp for %d\n", task->pid)); - ia64_psr(regs)->pp = pfs_info.pfs_pp; - return 0; - } + ia64_psr(regs)->sp = 1; + DBprintk(("enabling psr.sp for [%d]\n", task->pid)); + + /* + * remove any sampling buffer mapping from child user + * address space. Must be done for all cases of inheritance. + */ + if (ctx->ctx_smpl_vaddr) pfm_remove_smpl_mapping(task); /* * takes care of easiest case first */ if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_NONE) { - DBprintk((" removing PFM context for %d\n", task->pid)); + DBprintk(("removing PFM context for [%d]\n", task->pid)); task->thread.pfm_context = NULL; - task->thread.pfm_must_block = 0; - atomic_set(&task->thread.pfm_notifiers_check, 0); + task->thread.pfm_ovfl_block_reset = 0; + /* copy_thread() clears IA64_THREAD_PM_VALID */ return 0; } @@ -2202,45 +3427,81 @@ /* copy content */ *nctx = *ctx; + if (CTX_INHERIT_MODE(ctx) == PFM_FL_INHERIT_ONCE) { nctx->ctx_fl_inherit = PFM_FL_INHERIT_NONE; - atomic_set(&task->thread.pfm_notifiers_check, 0); - DBprintk((" downgrading to INHERIT_NONE for %d\n", task->pid)); - pfs_info.pfs_proc_sessions++; + atomic_set(&nctx->ctx_last_cpu, -1); + + /* + * task is not yet visible in the tasklist, so we do + * not need to lock the newly created context. + * However, we must grab the tasklist_lock to ensure + * that the ctx_owner or ctx_notify_task do not disappear + * while we increment their check counters. 
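As a small worked example of the counter folding done in the flush loop above (a sketch, not patch code): perf_ovfl_val is the mask of implemented hardware bits, so the software counter absorbs the hardware bits and an undetected wrap accounts for one full hardware period.

        /* fold the current hardware value (and a possible wrap) into the 64-bit count */
        static unsigned long fold_counter(unsigned long soft_val, unsigned long hw_val,
                                          unsigned long ovfl_val, int overflowed)
        {
                soft_val += hw_val & ovfl_val;          /* keep implemented bits only */

                if (overflowed)
                        soft_val += 1 + ovfl_val;       /* one full hardware period */

                return soft_val;
        }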
+ */ + read_lock(&tasklist_lock); + + if (nctx->ctx_notify_task) + atomic_inc(&nctx->ctx_notify_task->thread.pfm_notifiers_check); + + if (nctx->ctx_owner) + atomic_inc(&nctx->ctx_owner->thread.pfm_owners_check); + + read_unlock(&tasklist_lock); + + DBprintk(("downgrading to INHERIT_NONE for [%d]\n", task->pid)); + + LOCK_PFS(); + pfm_sessions.pfs_task_sessions++; + UNLOCK_PFS(); } /* initialize counters in new context */ - for(i=0, cnum= PMU_FIRST_COUNTER; i < pmu_conf.max_counters; cnum++, i++) { - nctx->ctx_pmds[i].val = nctx->ctx_pmds[i].ival & ~pmu_conf.perf_ovfl_val; - th->pmd[cnum] = nctx->ctx_pmds[i].ival & pmu_conf.perf_ovfl_val; + m = pmu_conf.counter_pmds[0] >> PMU_FIRST_COUNTER; + for(i = PMU_FIRST_COUNTER ; m ; m>>=1, i++) { + if (m & 0x1) { + nctx->ctx_soft_pmds[i].val = nctx->ctx_soft_pmds[i].ival & ~pmu_conf.perf_ovfl_val; + th->pmd[i] = nctx->ctx_soft_pmds[i].ival & pmu_conf.perf_ovfl_val; + } } /* clear BTB index register */ th->pmd[16] = 0; /* if sampling then increment number of users of buffer */ - if (nctx->ctx_smpl_buf) { - atomic_inc(&nctx->ctx_smpl_buf->psb_refcnt); + if (nctx->ctx_psb) { + + /* + * XXX: nopt very pretty! + */ + LOCK_PSB(nctx->ctx_psb); + nctx->ctx_psb->psb_refcnt++; + UNLOCK_PSB(nctx->ctx_psb); + /* + * remove any pointer to sampling buffer mapping + */ + nctx->ctx_smpl_vaddr = 0; } nctx->ctx_fl_frozen = 0; - nctx->ctx_ovfl_regs = 0; + nctx->ctx_ovfl_regs[0] = 0UL; + sema_init(&nctx->ctx_restart_sem, 0); /* reset this semaphore to locked */ /* clear pending notification */ - th->pfm_must_block = 0; + th->pfm_ovfl_block_reset = 0; /* link with new task */ - th->pfm_context = nctx; + th->pfm_context = nctx; - DBprintk((" nctx=%p for process %d\n", (void *)nctx, task->pid)); + DBprintk(("nctx=%p for process [%d]\n", (void *)nctx, task->pid)); /* * the copy_thread routine automatically clears * IA64_THREAD_PM_VALID, so we need to reenable it, if it was used by the caller */ if (current->thread.flags & IA64_THREAD_PM_VALID) { - DBprintk((" setting PM_VALID for %d\n", task->pid)); + DBprintk(("setting PM_VALID for [%d]\n", task->pid)); th->flags |= IA64_THREAD_PM_VALID; } @@ -2248,100 +3509,248 @@ } /* - * called from release_thread(), at this point this task is not in the - * tasklist anymore + * + * We cannot touch any of the PMU registers at this point as we may + * not be running on the same CPU the task was last run on. Therefore + * it is assumed that the PMU has been stopped appropriately in + * pfm_flush_regs() called from exit_thread(). + * + * The function is called in the context of the parent via a release_thread() + * and wait4(). The task is not in the tasklist anymore. 
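The counter initialization in pfm_inherit() above splits the 64-bit initial value into the part the hardware counter can hold and the software-maintained remainder; a sketch of that split with illustrative names only:

        struct seeded_counter {
                unsigned long soft;     /* software-maintained upper part */
                unsigned long hw;       /* value to program into the PMD */
        };

        static struct seeded_counter seed_counter(unsigned long ival,
                                                  unsigned long ovfl_val)
        {
                struct seeded_counter c;

                c.soft = ival & ~ovfl_val;
                c.hw   = ival &  ovfl_val;
                return c;
        }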
*/ void pfm_context_exit(struct task_struct *task) { pfm_context_t *ctx = task->thread.pfm_context; - if (!ctx) { - DBprintk((" invalid context for %d\n", task->pid)); - return; - } + /* + * check sampling buffer + */ + if (ctx->ctx_psb) { + pfm_smpl_buffer_desc_t *psb = ctx->ctx_psb; + + LOCK_PSB(psb); + + DBprintk(("sampling buffer from [%d] @%p size %ld vma_flag=0x%x\n", + task->pid, + psb->psb_hdr, psb->psb_size, psb->psb_flags)); - /* check is we have a sampling buffer attached */ - if (ctx->ctx_smpl_buf) { - pfm_smpl_buffer_desc_t *psb = ctx->ctx_smpl_buf; - - /* if only user left, then remove */ - DBprintk((" [%d] [%d] psb->refcnt=%d\n", current->pid, task->pid, psb->psb_refcnt.counter)); - - if (atomic_dec_and_test(&psb->psb_refcnt) ) { - rvfree(psb->psb_hdr, psb->psb_size); - vfree(psb); - DBprintk((" [%d] cleaning [%d] sampling buffer\n", current->pid, task->pid )); - } - } - DBprintk((" [%d] cleaning [%d] pfm_context @%p\n", current->pid, task->pid, (void *)ctx)); - - /* - * To avoid getting the notified task scan the entire process list - * when it exits because it would have pfm_notifiers_check set, we - * decrease it by 1 to inform the task, that one less task is going - * to send it notification. each new notifer increases this field by - * 1 in pfm_context_create(). Of course, there is race condition between - * decreasing the value and the notified task exiting. The danger comes - * from the fact that we have a direct pointer to its task structure - * thereby bypassing the tasklist. We must make sure that if we have - * notify_task!= NULL, the target task is still somewhat present. It may - * already be detached from the tasklist but that's okay. Note that it is - * okay if we 'miss the deadline' and the task scans the list for nothing, - * it will affect performance but not correctness. The correctness is ensured - * by using the notify_lock whic prevents the notify_task from changing on us. - * Once holdhing this lock, if we see notify_task!= NULL, then it will stay like + /* + * in the case where we are the last user, we may be able to free + * the buffer + */ + psb->psb_refcnt--; + + if (psb->psb_refcnt == 0) { + + /* + * The flag is cleared in pfm_vm_close(). which gets + * called from do_exit() via exit_mm(). + * By the time we come here, the task has no more mm context. + * + * We can only free the psb and buffer here after the vm area + * describing the buffer has been removed. This normally happens + * as part of do_exit() but the entire mm context is ONLY removed + * once its reference counts goes to zero. This is typically + * the case except for multi-threaded (several tasks) processes. + * + * See pfm_vm_close() and pfm_cleanup_smpl_buf() for more details. + */ + if ((psb->psb_flags & PFM_PSB_VMA) == 0) { + + DBprintk(("cleaning sampling buffer from [%d] @%p size %ld\n", + task->pid, + psb->psb_hdr, psb->psb_size)); + + /* + * free the buffer and psb + */ + pfm_rvfree(psb->psb_hdr, psb->psb_size); + kfree(psb); + psb = NULL; + } + } + /* psb may have been deleted */ + if (psb) UNLOCK_PSB(psb); + } + + DBprintk(("cleaning [%d] pfm_context @%p notify_task=%p check=%d mm=%p\n", + task->pid, ctx, + ctx->ctx_notify_task, + atomic_read(&task->thread.pfm_notifiers_check), task->mm)); + + /* + * To avoid getting the notified task or owner task scan the entire process + * list when they exit, we decrement notifiers_check and owners_check respectively. + * + * Of course, there is race condition between decreasing the value and the + * task exiting. 
The danger comes from the fact that, in both cases, we have a + * direct pointer to a task structure thereby bypassing the tasklist. + * We must make sure that, if we have task!= NULL, the target task is still + * present and is identical to the initial task specified + * during pfm_create_context(). It may already be detached from the tasklist but + * that's okay. Note that it is okay if we miss the deadline and the task scans + * the list for nothing, it will affect performance but not correctness. + * The correctness is ensured by using the ctx_lock which prevents the + * notify_task from changing the fields in our context. + * Once holdhing this lock, if we see task!= NULL, then it will stay like * that until we release the lock. If it is NULL already then we came too late. */ - spin_lock(&ctx->ctx_notify_lock); + LOCK_CTX(ctx); - if (ctx->ctx_notify_task) { - DBprintk((" [%d] [%d] atomic_sub on [%d] notifiers=%u\n", current->pid, task->pid, - ctx->ctx_notify_task->pid, - atomic_read(&ctx->ctx_notify_task->thread.pfm_notifiers_check))); + if (ctx->ctx_notify_task != NULL) { + DBprintk(("[%d], [%d] atomic_sub on [%d] notifiers=%u\n", current->pid, + task->pid, + ctx->ctx_notify_task->pid, + atomic_read(&ctx->ctx_notify_task->thread.pfm_notifiers_check))); - atomic_sub(1, &ctx->ctx_notify_task->thread.pfm_notifiers_check); + atomic_dec(&ctx->ctx_notify_task->thread.pfm_notifiers_check); } - spin_unlock(&ctx->ctx_notify_lock); + if (ctx->ctx_owner != NULL) { + DBprintk(("[%d], [%d] atomic_sub on [%d] owners=%u\n", + current->pid, + task->pid, + ctx->ctx_owner->pid, + atomic_read(&ctx->ctx_owner->thread.pfm_owners_check))); + + atomic_dec(&ctx->ctx_owner->thread.pfm_owners_check); + } + + UNLOCK_CTX(ctx); + + LOCK_PFS(); if (ctx->ctx_fl_system) { - /* - * if included interrupts (true by default), then reset - * to get default value - */ - if (ctx->ctx_fl_exclintr == 0) { - /* - * reload kernel default DCR value - */ - ia64_set_dcr(pfs_info.pfs_dfl_dcr); - DBprintk((" restored dcr to 0x%lx\n", pfs_info.pfs_dfl_dcr)); + + pfm_sessions.pfs_sys_session[ctx->ctx_cpu] = NULL; + pfm_sessions.pfs_sys_sessions--; + DBprintk(("freeing syswide session on CPU%ld\n", ctx->ctx_cpu)); + /* update perfmon debug register counter */ + if (ctx->ctx_fl_using_dbreg) { + if (pfm_sessions.pfs_sys_use_dbregs == 0) { + printk("perfmon: invalid release for [%d] sys_use_dbregs=0\n", task->pid); + } else + pfm_sessions.pfs_sys_use_dbregs--; } - /* - * free system wide session slot - */ - pfs_info.pfs_sys_session = 0; + + /* + * remove any CPU pinning + */ + set_cpus_allowed(task, ctx->ctx_saved_cpus_allowed); } else { - pfs_info.pfs_proc_sessions--; + pfm_sessions.pfs_task_sessions--; } + UNLOCK_PFS(); pfm_context_free(ctx); /* * clean pfm state in thread structure, */ - task->thread.pfm_context = NULL; - task->thread.pfm_must_block = 0; + task->thread.pfm_context = NULL; + task->thread.pfm_ovfl_block_reset = 0; + /* pfm_notifiers is cleaned in pfm_cleanup_notifiers() */ +} + +/* + * function invoked from release_thread when pfm_smpl_buf_list is not NULL + */ +int +pfm_cleanup_smpl_buf(struct task_struct *task) +{ + pfm_smpl_buffer_desc_t *tmp, *psb = task->thread.pfm_smpl_buf_list; + + if (psb == NULL) { + printk("perfmon: psb is null in [%d]\n", current->pid); + return -1; + } + /* + * Walk through the list and free the sampling buffer and psb + */ + while (psb) { + DBprintk(("[%d] freeing smpl @%p size %ld\n", current->pid, psb->psb_hdr, psb->psb_size)); + + pfm_rvfree(psb->psb_hdr, psb->psb_size); + tmp = 
psb->psb_next; + kfree(psb); + psb = tmp; + } + + /* just in case */ + task->thread.pfm_smpl_buf_list = NULL; + + return 0; +} + +/* + * function invoked from release_thread to make sure that the ctx_owner field does not + * point to an unexisting task. + */ +void +pfm_cleanup_owners(struct task_struct *task) +{ + struct task_struct *p; + pfm_context_t *ctx; + + DBprintk(("called by [%d] for [%d]\n", current->pid, task->pid)); + + read_lock(&tasklist_lock); + + for_each_task(p) { + /* + * It is safe to do the 2-step test here, because thread.ctx + * is cleaned up only in release_thread() and at that point + * the task has been detached from the tasklist which is an + * operation which uses the write_lock() on the tasklist_lock + * so it cannot run concurrently to this loop. So we have the + * guarantee that if we find p and it has a perfmon ctx then + * it is going to stay like this for the entire execution of this + * loop. + */ + ctx = p->thread.pfm_context; + + //DBprintk(("[%d] scanning task [%d] ctx=%p\n", task->pid, p->pid, ctx)); + + if (ctx && ctx->ctx_owner == task) { + DBprintk(("trying for owner [%d] in [%d]\n", task->pid, p->pid)); + /* + * the spinlock is required to take care of a race condition + * with the send_sig_info() call. We must make sure that + * either the send_sig_info() completes using a valid task, + * or the notify_task is cleared before the send_sig_info() + * can pick up a stale value. Note that by the time this + * function is executed the 'task' is already detached from the + * tasklist. The problem is that the notifiers have a direct + * pointer to it. It is okay to send a signal to a task in this + * stage, it simply will have no effect. But it is better than sending + * to a completely destroyed task or worse to a new task using the same + * task_struct address. + */ + LOCK_CTX(ctx); + + ctx->ctx_owner = NULL; + + UNLOCK_CTX(ctx); + DBprintk(("done for notifier [%d] in [%d]\n", task->pid, p->pid)); + } + } + read_unlock(&tasklist_lock); } + +/* + * function called from release_thread to make sure that the ctx_notify_task is not pointing + * to an unexisting task + */ void pfm_cleanup_notifiers(struct task_struct *task) { struct task_struct *p; pfm_context_t *ctx; - DBprintk((" [%d] called\n", task->pid)); + DBprintk(("called by [%d] for [%d]\n", current->pid, task->pid)); read_lock(&tasklist_lock); @@ -2358,10 +3767,10 @@ */ ctx = p->thread.pfm_context; - DBprintk((" [%d] scanning task [%d] ctx=%p\n", task->pid, p->pid, ctx)); + //DBprintk(("[%d] scanning task [%d] ctx=%p\n", task->pid, p->pid, ctx)); if (ctx && ctx->ctx_notify_task == task) { - DBprintk((" trying for notifier %d in %d\n", task->pid, p->pid)); + DBprintk(("trying for notifier [%d] in [%d]\n", task->pid, p->pid)); /* * the spinlock is required to take care of a race condition * with the send_sig_info() call. We must make sure that @@ -2375,23 +3784,123 @@ * to a completely destroyed task or worse to a new task using the same * task_struct address. 
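A user-level sketch of that lock discipline (not patch code), with a pthread mutex standing in for LOCK_CTX()/UNLOCK_CTX(): the overflow path tests the target and signals under the same lock the cleanup path uses to clear it, so it can never act on a task that has already been torn down.

        #include <pthread.h>
        #include <signal.h>
        #include <sys/types.h>

        struct ctx {
                pthread_mutex_t lock;
                pid_t notify_pid;       /* 0 once the notified task is gone */
        };

        /* cleanup path: invalidate the target before the task disappears */
        static void clear_notify_target(struct ctx *c)
        {
                pthread_mutex_lock(&c->lock);
                c->notify_pid = 0;
                pthread_mutex_unlock(&c->lock);
        }

        /* overflow path: test and signal atomically w.r.t. clear_notify_target() */
        static int notify_overflow(struct ctx *c)
        {
                int ret = -1;

                pthread_mutex_lock(&c->lock);
                if (c->notify_pid != 0)
                        ret = kill(c->notify_pid, SIGPROF);
                pthread_mutex_unlock(&c->lock);

                return ret;
        }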
*/ - spin_lock(&ctx->ctx_notify_lock); + LOCK_CTX(ctx); ctx->ctx_notify_task = NULL; - spin_unlock(&ctx->ctx_notify_lock); + UNLOCK_CTX(ctx); - DBprintk((" done for notifier %d in %d\n", task->pid, p->pid)); + DBprintk(("done for notifier [%d] in [%d]\n", task->pid, p->pid)); } } read_unlock(&tasklist_lock); +} + +static struct irqaction perfmon_irqaction = { + handler: perfmon_interrupt, + flags: SA_INTERRUPT, + name: "perfmon" +}; + + +/* + * perfmon initialization routine, called from the initcall() table + */ +int __init +perfmon_init (void) +{ + pal_perf_mon_info_u_t pm_info; + s64 status; + + register_percpu_irq(IA64_PERFMON_VECTOR, &perfmon_irqaction); + + ia64_set_pmv(IA64_PERFMON_VECTOR); + ia64_srlz_d(); + + pmu_conf.pfm_is_disabled = 1; + + printk("perfmon: version %u.%u (sampling format v%u.%u) IRQ %u\n", + PFM_VERSION_MAJ, + PFM_VERSION_MIN, + PFM_SMPL_VERSION_MAJ, + PFM_SMPL_VERSION_MIN, + IA64_PERFMON_VECTOR); + + if ((status=ia64_pal_perf_mon_info(pmu_conf.impl_regs, &pm_info)) != 0) { + printk("perfmon: PAL call failed (%ld), perfmon disabled\n", status); + return -1; + } + + pmu_conf.perf_ovfl_val = (1UL << pm_info.pal_perf_mon_info_s.width) - 1; + pmu_conf.max_counters = pm_info.pal_perf_mon_info_s.generic; + pmu_conf.num_pmcs = find_num_pm_regs(pmu_conf.impl_regs); + pmu_conf.num_pmds = find_num_pm_regs(&pmu_conf.impl_regs[4]); + + printk("perfmon: %u bits counters (max value 0x%016lx)\n", + pm_info.pal_perf_mon_info_s.width, pmu_conf.perf_ovfl_val); + + printk("perfmon: %lu PMC/PMD pairs, %lu PMCs, %lu PMDs\n", + pmu_conf.max_counters, pmu_conf.num_pmcs, pmu_conf.num_pmds); + + /* sanity check */ + if (pmu_conf.num_pmds >= IA64_NUM_PMD_REGS || pmu_conf.num_pmcs >= IA64_NUM_PMC_REGS) { + printk(KERN_ERR "perfmon: not enough pmc/pmd, perfmon is DISABLED\n"); + return -1; /* no need to continue anyway */ + } + + if (ia64_pal_debug_info(&pmu_conf.num_ibrs, &pmu_conf.num_dbrs)) { + printk(KERN_WARNING "perfmon: unable to get number of debug registers\n"); + pmu_conf.num_ibrs = pmu_conf.num_dbrs = 0; + } + /* PAL reports the number of pairs */ + pmu_conf.num_ibrs <<=1; + pmu_conf.num_dbrs <<=1; + + /* + * list the pmc registers used to control monitors + * XXX: unfortunately this information is not provided by PAL + * + * We start with the architected minimum and then refine for each CPU model + */ + pmu_conf.monitor_pmcs[0] = PMM(4)|PMM(5)|PMM(6)|PMM(7); + + /* + * architected counters + */ + pmu_conf.counter_pmds[0] |= PMM(4)|PMM(5)|PMM(6)|PMM(7); +#ifdef CONFIG_ITANIUM + pmu_conf.monitor_pmcs[0] |= PMM(10)|PMM(11)|PMM(12); + /* Itanium does not add more counters */ +#endif + /* we are all set */ + pmu_conf.pfm_is_disabled = 0; + + /* + * for now here for debug purposes + */ + perfmon_dir = create_proc_read_entry ("perfmon", 0, 0, perfmon_read_entry, NULL); + + spin_lock_init(&pfm_sessions.pfs_lock); + + return 0; +} + +__initcall(perfmon_init); + +void +perfmon_init_percpu (void) +{ + ia64_set_pmv(IA64_PERFMON_VECTOR); + ia64_srlz_d(); } + #else /* !CONFIG_PERFMON */ asmlinkage int -sys_perfmonctl (int pid, int cmd, int flags, perfmon_req_t *req, int count, long arg6, long arg7, long arg8, long stack) +sys_perfmonctl (int pid, int cmd, void *req, int count, long arg5, long arg6, + long arg7, long arg8, long stack) { return -ENOSYS; } diff -Nru a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c --- a/arch/ia64/kernel/process.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/process.c Tue Mar 12 13:58:15 2002 @@ -1,8 +1,8 @@ /* * Architecture-specific setup. 
* - * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang */ #define __KERNEL_SYSCALLS__ /* see */ #include @@ -12,14 +12,17 @@ #include #include #include +#include #include #include #include #include +#include #include #include #include +#include #include #include #include @@ -28,6 +31,16 @@ #include #include +#ifdef CONFIG_IA64_SGI_SN +#include +#endif + +#ifdef CONFIG_PERFMON +# include +#endif + +#include "sigframe.h" + static void do_show_stack (struct unw_frame_info *info, void *arg) { @@ -46,6 +59,15 @@ } void +show_trace_task (struct task_struct *task) +{ + struct unw_frame_info info; + + unw_init_from_blocked_task(&info, task); + do_show_stack(&info, 0); +} + +void show_stack (struct task_struct *task) { if (!task) @@ -90,8 +112,8 @@ printk("r26 : %016lx r27 : %016lx r28 : %016lx\n", regs->r26, regs->r27, regs->r28); printk("r29 : %016lx r30 : %016lx r31 : %016lx\n", regs->r29, regs->r30, regs->r31); - /* print the stacked registers if cr.ifs is valid: */ - if (regs->cr_ifs & 0x8000000000000000) { + if (user_mode(regs)) { + /* print the stacked registers */ unsigned long val, sof, *bsp, ndirty; int i, is_nat = 0; @@ -103,32 +125,61 @@ printk("r%-3u:%c%016lx%s", 32 + i, is_nat ? '*' : ' ', val, ((i == sof - 1) || (i % 3) == 2) ? "\n" : " "); } - } - if (!user_mode(regs)) + } else show_stack(0); } +void +do_notify_resume_user (sigset_t *oldset, struct sigscratch *scr, long in_syscall) +{ +#ifdef CONFIG_PERFMON + if (current->thread.pfm_ovfl_block_reset) + pfm_ovfl_block_reset(); +#endif + + /* deal with pending signal delivery */ + if (test_thread_flag(TIF_SIGPENDING)) + ia64_do_signal(oldset, scr, in_syscall); +} + +/* + * We use this if we don't have any better idle routine.. + */ +static void +default_idle (void) +{ + /* may want to do PAL_LIGHT_HALT here... 
*/ +} + void __attribute__((noreturn)) cpu_idle (void *unused) { /* endless idle loop with no priority at all */ - init_idle(); - current->nice = 20; - while (1) { #ifdef CONFIG_SMP if (!need_resched()) min_xtp(); #endif - while (!need_resched()) - continue; + + while (!need_resched()) { +#ifdef CONFIG_IA64_SGI_SN + snidle(); +#endif + if (pm_idle) + (*pm_idle)(); + else + default_idle(); + } + +#ifdef CONFIG_IA64_SGI_SN + snidleoff(); +#endif + #ifdef CONFIG_SMP normal_xtp(); #endif schedule(); check_pgt_cache(); - if (pm_idle) - (*pm_idle)(); } } @@ -137,10 +188,14 @@ { if ((task->thread.flags & IA64_THREAD_DBG_VALID) != 0) ia64_save_debug_regs(&task->thread.dbr[0]); + #ifdef CONFIG_PERFMON if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0) pfm_save_regs(task); + + if (local_cpu_data->pfm_syst_wide) pfm_syst_wide_update_task(task, 0); #endif + if (IS_IA32_PROCESS(ia64_task_regs(task))) ia32_save_state(task); } @@ -150,10 +205,14 @@ { if ((task->thread.flags & IA64_THREAD_DBG_VALID) != 0) ia64_load_debug_regs(&task->thread.dbr[0]); + #ifdef CONFIG_PERFMON if ((task->thread.flags & IA64_THREAD_PM_VALID) != 0) pfm_load_regs(task); + + if (local_cpu_data->pfm_syst_wide) pfm_syst_wide_update_task(task, 1); #endif + if (IS_IA32_PROCESS(ia64_task_regs(task))) ia32_load_state(task); } @@ -233,7 +292,7 @@ if (user_mode(child_ptregs)) { if (user_stack_base) { - child_ptregs->r12 = user_stack_base + user_stack_size; + child_ptregs->r12 = user_stack_base + user_stack_size - 16; child_ptregs->ar_bspstore = user_stack_base; child_ptregs->ar_rnat = 0; child_ptregs->loadrs = 0; @@ -286,9 +345,15 @@ if (IS_IA32_PROCESS(ia64_task_regs(current))) ia32_save_state(p); #endif + #ifdef CONFIG_PERFMON - if (p->thread.pfm_context) - retval = pfm_inherit(p, child_ptregs); + /* + * reset notifiers and owner check (may not have a perfmon context) + */ + atomic_set(&p->thread.pfm_notifiers_check, 0); + atomic_set(&p->thread.pfm_owners_check, 0); + + if (current->thread.pfm_context) retval = pfm_inherit(p, child_ptregs); #endif return retval; } @@ -412,6 +477,16 @@ return error; } +void +ia64_set_personality (struct elf64_hdr *elf_ex, int ibcs2_interpreter) +{ + set_personality(PER_LINUX); + if (elf_ex->e_flags & EF_IA_64_LINUX_EXECUTABLE_STACK) + current->thread.flags |= IA64_THREAD_XSTACK; + else + current->thread.flags &= ~IA64_THREAD_XSTACK; +} + pid_t kernel_thread (int (*fn)(void *), void *arg, unsigned long flags) { @@ -443,15 +518,15 @@ #ifdef CONFIG_PERFMON /* - * By the time we get here, the task is detached from the tasklist. This is important - * because it means that no other tasks can ever find it as a notifiied task, therfore - * there is no race condition between this code and let's say a pfm_context_create(). - * Conversely, the pfm_cleanup_notifiers() cannot try to access a task's pfm context if - * this other task is in the middle of its own pfm_context_exit() because it would alreayd - * be out of the task list. Note that this case is very unlikely between a direct child - * and its parents (if it is the notified process) because of the way the exit is notified - * via SIGCHLD. + * by the time we get here, the task is detached from the tasklist. This is important + * because it means that no other tasks can ever find it as a notified task, therfore there + * is no race condition between this code and let's say a pfm_context_create(). 
+ * Conversely, the pfm_cleanup_notifiers() cannot try to access a task's pfm context if this + * other task is in the middle of its own pfm_context_exit() because it would already be out of + * the task list. Note that this case is very unlikely between a direct child and its parents + * (if it is the notified process) because of the way the exit is notified via SIGCHLD. */ + void release_thread (struct task_struct *task) { @@ -460,6 +535,12 @@ if (atomic_read(&task->thread.pfm_notifiers_check) > 0) pfm_cleanup_notifiers(task); + + if (atomic_read(&task->thread.pfm_owners_check) > 0) + pfm_cleanup_owners(task); + + if (task->thread.pfm_smpl_buf_list) + pfm_cleanup_smpl_buf(task); } #endif @@ -475,21 +556,13 @@ ia64_set_fpu_owner(0); #endif #ifdef CONFIG_PERFMON - /* stop monitoring */ - if ((current->thread.flags & IA64_THREAD_PM_VALID) != 0) { - /* - * we cannot rely on switch_to() to save the PMU - * context for the last time. There is a possible race - * condition in SMP mode between the child and the - * parent. by explicitly saving the PMU context here - * we garantee no race. this call we also stop - * monitoring - */ + /* if needed, stop monitoring and flush state to perfmon context */ + if (current->thread.pfm_context) pfm_flush_regs(current); - /* - * make sure that switch_to() will not save context again - */ - current->thread.flags &= ~IA64_THREAD_PM_VALID; + + /* free debug register resources */ + if ((current->thread.flags & IA64_THREAD_DBG_VALID) != 0) { + pfm_release_debug_registers(current); } #endif } @@ -570,4 +643,30 @@ if (pm_power_off) pm_power_off(); machine_halt(); +} + +void __init +init_task_struct_cache (void) +{ +} + +struct task_struct * +dup_task_struct(struct task_struct *orig) +{ + struct task_struct *tsk; + + tsk = __get_free_pages(GFP_KERNEL, KERNEL_STACK_SIZE_ORDER); + if (!tsk) + return NULL; + + memcpy(tsk, orig, sizeof(struct task_struct) + sizeof(struct thread_info)); + tsk->thread_info = (struct thread_info *) ((char *) tsk + IA64_TASK_SIZE); + atomic_set(&tsk->usage, 1); + return tsk; +} + +void +__put_task_struct (struct task_struct *tsk) +{ + free_pages((unsigned long) tsk, KERNEL_STACK_SIZE_ORDER); } diff -Nru a/arch/ia64/kernel/ptrace.c b/arch/ia64/kernel/ptrace.c --- a/arch/ia64/kernel/ptrace.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/ptrace.c Tue Mar 12 13:58:15 2002 @@ -23,6 +23,9 @@ #include #include #include +#ifdef CONFIG_PERFMON +#include +#endif /* * Bits in the PSR that we allow ptrace() to change: @@ -755,11 +758,6 @@ } else { /* access debug registers */ - if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) { - child->thread.flags |= IA64_THREAD_DBG_VALID; - memset(child->thread.dbr, 0, sizeof(child->thread.dbr)); - memset(child->thread.ibr, 0, sizeof(child->thread.ibr)); - } if (addr >= PT_IBR) { regnum = (addr - PT_IBR) >> 3; ptr = &child->thread.ibr[0]; @@ -772,6 +770,31 @@ dprintk("ptrace: rejecting access to register address 0x%lx\n", addr); return -1; } +#ifdef CONFIG_PERFMON + /* + * Check if debug registers are used + * by perfmon. This test must be done once we know that we can + * do the operation, i.e. the arguments are all valid, but before + * we start modifying the state. + * + * Perfmon needs to keep a count of how many processes are + * trying to modify the debug registers for system wide monitoring + * sessions. + * + * We also include read access here, because they may cause + * the PMU-installed debug register state (dbr[], ibr[]) to + * be reset. 
The two arrays are also used by perfmon, but + * we do not use IA64_THREAD_DBG_VALID. The registers are restored + * by the PMU context switch code. + */ + if (pfm_use_debug_registers(child)) return -1; +#endif + + if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) { + child->thread.flags |= IA64_THREAD_DBG_VALID; + memset(child->thread.dbr, 0, sizeof(child->thread.dbr)); + memset(child->thread.ibr, 0, sizeof(child->thread.ibr)); + } ptr += regnum; @@ -789,6 +812,260 @@ return 0; } +static long +ptrace_getregs (struct task_struct *child, struct pt_all_user_regs *ppr) +{ + struct switch_stack *sw; + struct pt_regs *pt; + long ret, retval; + struct unw_frame_info info; + char nat = 0; + int i; + + retval = verify_area(VERIFY_WRITE, ppr, sizeof(struct pt_all_user_regs)); + if (retval != 0) { + return -EIO; + } + + pt = ia64_task_regs(child); + sw = (struct switch_stack *) (child->thread.ksp + 16); + unw_init_from_blocked_task(&info, child); + if (unw_unwind_to_user(&info) < 0) { + return -EIO; + } + + if (((unsigned long) ppr & 0x7) != 0) { + dprintk("ptrace:unaligned register address %p\n", ppr); + return -EIO; + } + + retval = 0; + + /* control regs */ + + retval |= __put_user(pt->cr_iip, &ppr->cr_iip); + retval |= access_uarea(child, PT_CR_IPSR, &ppr->cr_ipsr, 0); + + /* app regs */ + + retval |= __put_user(pt->ar_pfs, &ppr->ar[PT_AUR_PFS]); + retval |= __put_user(pt->ar_rsc, &ppr->ar[PT_AUR_RSC]); + retval |= __put_user(pt->ar_bspstore, &ppr->ar[PT_AUR_BSPSTORE]); + retval |= __put_user(pt->ar_unat, &ppr->ar[PT_AUR_UNAT]); + retval |= __put_user(pt->ar_ccv, &ppr->ar[PT_AUR_CCV]); + retval |= __put_user(pt->ar_fpsr, &ppr->ar[PT_AUR_FPSR]); + + retval |= access_uarea(child, PT_AR_EC, &ppr->ar[PT_AUR_EC], 0); + retval |= access_uarea(child, PT_AR_LC, &ppr->ar[PT_AUR_LC], 0); + retval |= access_uarea(child, PT_AR_RNAT, &ppr->ar[PT_AUR_RNAT], 0); + retval |= access_uarea(child, PT_AR_BSP, &ppr->ar[PT_AUR_BSP], 0); + retval |= access_uarea(child, PT_CFM, &ppr->cfm, 0); + + /* gr1-gr3 */ + + retval |= __copy_to_user(&ppr->gr[1], &pt->r1, sizeof(long) * 3); + + /* gr4-gr7 */ + + for (i = 4; i < 8; i++) { + retval |= unw_access_gr(&info, i, &ppr->gr[i], &nat, 0); + } + + /* gr8-gr11 */ + + retval |= __copy_to_user(&ppr->gr[8], &pt->r8, sizeof(long) * 4); + + /* gr12-gr15 */ + + retval |= __copy_to_user(&ppr->gr[12], &pt->r12, sizeof(long) * 4); + + /* gr16-gr31 */ + + retval |= __copy_to_user(&ppr->gr[16], &pt->r16, sizeof(long) * 16); + + /* b0 */ + + retval |= __put_user(pt->b0, &ppr->br[0]); + + /* b1-b5 */ + + for (i = 1; i < 6; i++) { + retval |= unw_access_br(&info, i, &ppr->br[i], 0); + } + + /* b6-b7 */ + + retval |= __put_user(pt->b6, &ppr->br[6]); + retval |= __put_user(pt->b7, &ppr->br[7]); + + /* fr2-fr5 */ + + for (i = 2; i < 6; i++) { + retval |= access_fr(&info, i, 0, (unsigned long *) &ppr->fr[i], 0); + retval |= access_fr(&info, i, 1, (unsigned long *) &ppr->fr[i] + 1, 0); + } + + /* fr6-fr9 */ + + retval |= __copy_to_user(&ppr->fr[6], &pt->f6, sizeof(struct ia64_fpreg) * 4); + + /* fp scratch regs(10-15) */ + + retval |= __copy_to_user(&ppr->fr[10], &sw->f10, sizeof(struct ia64_fpreg) * 6); + + /* fr16-fr31 */ + + for (i = 16; i < 32; i++) { + retval |= access_fr(&info, i, 0, (unsigned long *) &ppr->fr[i], 0); + retval |= access_fr(&info, i, 1, (unsigned long *) &ppr->fr[i] + 1, 0); + } + + /* fph */ + + ia64_flush_fph(child); + retval |= __copy_to_user(&ppr->fr[32], &child->thread.fph, sizeof(ppr->fr[32]) * 96); + + /* preds */ + + retval |= __put_user(pt->pr, &ppr->pr); + + /* 
nat bits */ + + retval |= access_uarea(child, PT_NAT_BITS, &ppr->nat, 0); + + ret = retval ? -EIO : 0; + return ret; +} + +static long +ptrace_setregs (struct task_struct *child, struct pt_all_user_regs *ppr) +{ + struct switch_stack *sw; + struct pt_regs *pt; + long ret, retval; + struct unw_frame_info info; + char nat = 0; + int i; + + retval = verify_area(VERIFY_READ, ppr, sizeof(struct pt_all_user_regs)); + if (retval != 0) { + return -EIO; + } + + pt = ia64_task_regs(child); + sw = (struct switch_stack *) (child->thread.ksp + 16); + unw_init_from_blocked_task(&info, child); + if (unw_unwind_to_user(&info) < 0) { + return -EIO; + } + + if (((unsigned long) ppr & 0x7) != 0) { + dprintk("ptrace:unaligned register address %p\n", ppr); + return -EIO; + } + + retval = 0; + + /* control regs */ + + retval |= __get_user(pt->cr_iip, &ppr->cr_iip); + retval |= access_uarea(child, PT_CR_IPSR, &ppr->cr_ipsr, 1); + + /* app regs */ + + retval |= __get_user(pt->ar_pfs, &ppr->ar[PT_AUR_PFS]); + retval |= __get_user(pt->ar_rsc, &ppr->ar[PT_AUR_RSC]); + retval |= __get_user(pt->ar_bspstore, &ppr->ar[PT_AUR_BSPSTORE]); + retval |= __get_user(pt->ar_unat, &ppr->ar[PT_AUR_UNAT]); + retval |= __get_user(pt->ar_ccv, &ppr->ar[PT_AUR_CCV]); + retval |= __get_user(pt->ar_fpsr, &ppr->ar[PT_AUR_FPSR]); + + retval |= access_uarea(child, PT_AR_EC, &ppr->ar[PT_AUR_EC], 1); + retval |= access_uarea(child, PT_AR_LC, &ppr->ar[PT_AUR_LC], 1); + retval |= access_uarea(child, PT_AR_RNAT, &ppr->ar[PT_AUR_RNAT], 1); + retval |= access_uarea(child, PT_AR_BSP, &ppr->ar[PT_AUR_BSP], 1); + retval |= access_uarea(child, PT_CFM, &ppr->cfm, 1); + + /* gr1-gr3 */ + + retval |= __copy_from_user(&pt->r1, &ppr->gr[1], sizeof(long) * 3); + + /* gr4-gr7 */ + + for (i = 4; i < 8; i++) { + long ret = unw_get_gr(&info, i, &ppr->gr[i], &nat); + if (ret < 0) { + return ret; + } + retval |= unw_access_gr(&info, i, &ppr->gr[i], &nat, 1); + } + + /* gr8-gr11 */ + + retval |= __copy_from_user(&pt->r8, &ppr->gr[8], sizeof(long) * 4); + + /* gr12-gr15 */ + + retval |= __copy_from_user(&pt->r12, &ppr->gr[12], sizeof(long) * 4); + + /* gr16-gr31 */ + + retval |= __copy_from_user(&pt->r16, &ppr->gr[16], sizeof(long) * 16); + + /* b0 */ + + retval |= __get_user(pt->b0, &ppr->br[0]); + + /* b1-b5 */ + + for (i = 1; i < 6; i++) { + retval |= unw_access_br(&info, i, &ppr->br[i], 1); + } + + /* b6-b7 */ + + retval |= __get_user(pt->b6, &ppr->br[6]); + retval |= __get_user(pt->b7, &ppr->br[7]); + + /* fr2-fr5 */ + + for (i = 2; i < 6; i++) { + retval |= access_fr(&info, i, 0, (unsigned long *) &ppr->fr[i], 1); + retval |= access_fr(&info, i, 1, (unsigned long *) &ppr->fr[i] + 1, 1); + } + + /* fr6-fr9 */ + + retval |= __copy_from_user(&pt->f6, &ppr->fr[6], sizeof(ppr->fr[6]) * 4); + + /* fp scratch regs(10-15) */ + + retval |= __copy_from_user(&sw->f10, &ppr->fr[10], sizeof(ppr->fr[10]) * 6); + + /* fr16-fr31 */ + + for (i = 16; i < 32; i++) { + retval |= access_fr(&info, i, 0, (unsigned long *) &ppr->fr[i], 1); + retval |= access_fr(&info, i, 1, (unsigned long *) &ppr->fr[i] + 1, 1); + } + + /* fph */ + + ia64_sync_fph(child); + retval |= __copy_from_user(&child->thread.fph, &ppr->fr[32], sizeof(ppr->fr[32]) * 96); + + /* preds */ + + retval |= __get_user(pt->pr, &ppr->pr); + + /* nat bits */ + + retval |= access_uarea(child, PT_NAT_BITS, &ppr->nat, 1); + + ret = retval ? -EIO : 0; + return ret; +} + /* * Called by kernel/ptrace.c when detaching.. 
* @@ -916,9 +1193,9 @@ if (data > _NSIG) goto out_tsk; if (request == PTRACE_SYSCALL) - child->ptrace |= PT_TRACESYS; + set_tsk_thread_flag(child, TIF_SYSCALL_TRACE); else - child->ptrace &= ~PT_TRACESYS; + clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE); child->exit_code = data; /* make sure the single step/taken-branch trap bits are not set: */ @@ -959,7 +1236,7 @@ if (data > _NSIG) goto out_tsk; - child->ptrace &= ~PT_TRACESYS; + clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE); if (request == PTRACE_SINGLESTEP) { ia64_psr(pt)->ss = 1; } else { @@ -979,12 +1256,28 @@ ret = ptrace_detach(child, data); goto out_tsk; + case PTRACE_GETREGS: + ret = ptrace_getregs(child, (struct pt_all_user_regs*) data); + goto out_tsk; + + case PTRACE_SETREGS: + ret = ptrace_setregs(child, (struct pt_all_user_regs*) data); + goto out_tsk; + + case PTRACE_SETOPTIONS: + if (data & PTRACE_O_TRACESYSGOOD) + child->ptrace |= PT_TRACESYSGOOD; + else + child->ptrace &= ~PT_TRACESYSGOOD; + ret = 0; + break; + default: ret = -EIO; goto out_tsk; } out_tsk: - free_task_struct(child); + put_task_struct(child); out: unlock_kernel(); return ret; @@ -993,9 +1286,16 @@ void syscall_trace (void) { - if ((current->ptrace & (PT_PTRACED|PT_TRACESYS)) != (PT_PTRACED|PT_TRACESYS)) + if (!test_thread_flag(TIF_SYSCALL_TRACE)) + return; + if (!(current->ptrace & PT_PTRACED)) return; - current->exit_code = SIGTRAP; + /* + * The 0x80 provides a way for the tracing parent to distinguish between a syscall + * stop and SIGTRAP delivery. + */ + current->exit_code = SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD) + ? 0x80 : 0); set_current_state(TASK_STOPPED); notify_parent(current, SIGCHLD); schedule(); diff -Nru a/arch/ia64/kernel/sal.c b/arch/ia64/kernel/sal.c --- a/arch/ia64/kernel/sal.c Tue Mar 12 13:58:16 2002 +++ b/arch/ia64/kernel/sal.c Tue Mar 12 13:58:16 2002 @@ -18,7 +18,8 @@ #include #include -spinlock_t sal_lock = SPIN_LOCK_UNLOCKED; +spinlock_t sal_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; +unsigned long sal_platform_features; static struct { void *addr; /* function entry point */ @@ -76,7 +77,7 @@ return str; } -static void __init +static void __init ia64_sal_handler_init (void *entry_point, void *gpval) { /* fill in the SAL procedure descriptor and point ia64_sal to it: */ @@ -102,7 +103,7 @@ if (strncmp(systab->signature, "SST_", 4) != 0) printk("bad signature in system table!"); - /* + /* * revisions are coded in BCD, so %x does the job for us */ printk("SAL v%x.%02x: oem=%.32s, product=%.32s\n", @@ -152,12 +153,12 @@ case SAL_DESC_PLATFORM_FEATURE: { struct ia64_sal_desc_platform_feature *pf = (void *) p; + sal_platform_features = pf->feature_mask; printk("SAL: Platform features "); - if (pf->feature_mask & (1 << 0)) + if (pf->feature_mask & IA64_SAL_PLATFORM_FEATURE_BUS_LOCK) printk("BusLock "); - - if (pf->feature_mask & (1 << 1)) { + if (pf->feature_mask & IA64_SAL_PLATFORM_FEATURE_IRQ_REDIR_HINT) { printk("IRQ_Redirection "); #ifdef CONFIG_SMP if (no_int_routing) @@ -166,15 +167,17 @@ smp_int_redirect |= SMP_IRQ_REDIRECTION; #endif } - if (pf->feature_mask & (1 << 2)) { + if (pf->feature_mask & IA64_SAL_PLATFORM_FEATURE_IPI_REDIR_HINT) { printk("IPI_Redirection "); #ifdef CONFIG_SMP - if (no_int_routing) + if (no_int_routing) smp_int_redirect &= ~SMP_IPI_REDIRECTION; else smp_int_redirect |= SMP_IPI_REDIRECTION; #endif } + if (pf->feature_mask & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT) + printk("ITC_Drift "); printk("\n"); break; } diff -Nru a/arch/ia64/kernel/salinfo.c b/arch/ia64/kernel/salinfo.c --- /dev/null Wed 
Dec 31 16:00:00 1969 +++ b/arch/ia64/kernel/salinfo.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,105 @@ +/* + * salinfo.c + * + * Creates entries in /proc/sal for various system features. + * + * Copyright (c) 2001 Silicon Graphics, Inc. All rights reserved. + * + * 10/30/2001 jbarnes@sgi.com copied much of Stephane's palinfo + * code to create this file + */ + +#include +#include +#include + +#include + +MODULE_AUTHOR("Jesse Barnes "); +MODULE_DESCRIPTION("/proc interface to IA-64 SAL features"); +MODULE_LICENSE("GPL"); + +static int salinfo_read(char *page, char **start, off_t off, int count, int *eof, void *data); + +typedef struct { + const char *name; /* name of the proc entry */ + unsigned long feature; /* feature bit */ + struct proc_dir_entry *entry; /* registered entry (removal) */ +} salinfo_entry_t; + +/* + * List {name,feature} pairs for every entry in /proc/sal/ + * that this module exports + */ +static salinfo_entry_t salinfo_entries[]={ + { "bus_lock", IA64_SAL_PLATFORM_FEATURE_BUS_LOCK, }, + { "irq_redirection", IA64_SAL_PLATFORM_FEATURE_IRQ_REDIR_HINT, }, + { "ipi_redirection", IA64_SAL_PLATFORM_FEATURE_IPI_REDIR_HINT, }, + { "itc_drift", IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT, }, +}; + +#define NR_SALINFO_ENTRIES (sizeof(salinfo_entries)/sizeof(salinfo_entry_t)) + +/* + * One for each feature and one more for the directory entry... + */ +static struct proc_dir_entry *salinfo_proc_entries[NR_SALINFO_ENTRIES + 1]; + +static int __init +salinfo_init(void) +{ + struct proc_dir_entry *salinfo_dir; /* /proc/sal dir entry */ + struct proc_dir_entry **sdir = salinfo_proc_entries; /* keeps track of every entry */ + int i; + + salinfo_dir = proc_mkdir("sal", NULL); + + for (i=0; i < NR_SALINFO_ENTRIES; i++) { + /* pass the feature bit in question as misc data */ + *sdir++ = create_proc_read_entry (salinfo_entries[i].name, 0, salinfo_dir, + salinfo_read, (void *)salinfo_entries[i].feature); + } + *sdir++ = salinfo_dir; + + return 0; +} + +static void __exit +salinfo_exit(void) +{ + int i = 0; + + for (i = 0; i < NR_SALINFO_ENTRIES ; i++) { + if (salinfo_proc_entries[i]) + remove_proc_entry (salinfo_proc_entries[i]->name, NULL); + } +} + +/* + * 'data' contains an integer that corresponds to the feature we're + * testing + */ +static int +salinfo_read(char *page, char **start, off_t off, int count, int *eof, void *data) +{ + int len = 0; + + MOD_INC_USE_COUNT; + + len = sprintf(page, (sal_platform_features & (unsigned long)data) ? "1\n" : "0\n"); + + if (len <= off+count) *eof = 1; + + *start = page + off; + len -= off; + + if (len>count) len = count; + if (len<0) len = 0; + + MOD_DEC_USE_COUNT; + + return len; +} + +module_init(salinfo_init); +module_exit(salinfo_exit); diff -Nru a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c --- a/arch/ia64/kernel/setup.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/setup.c Tue Mar 12 13:58:15 2002 @@ -3,7 +3,7 @@ * * Copyright (C) 1998-2001 Hewlett-Packard Co * David Mosberger-Tang - * Copyright (C) 1998, 1999, 2001 Stephane Eranian + * Stephane Eranian * Copyright (C) 2000, Rohit Seth * Copyright (C) 1999 VA Linux Systems * Copyright (C) 1999 Walt Drummond @@ -20,6 +20,7 @@ #include #include +#include #include #include #include @@ -27,7 +28,7 @@ #include #include #include -#include +#include #include #include @@ -147,6 +148,10 @@ } +/* + * Find a place to put the bootmap and return its starting address in bootmap_start. + * This address must be page-aligned. 
+ */ static int find_bootmap_location (unsigned long start, unsigned long end, void *arg) { @@ -165,7 +170,7 @@ for (i = 0; i < num_rsvd_regions; i++) { range_start = MAX(start, free_start); - range_end = MIN(end, rsvd_region[i].start); + range_end = MIN(end, rsvd_region[i].start & PAGE_MASK); if (range_end <= range_start) continue; /* skip over empty range */ @@ -177,7 +182,7 @@ /* nothing more available in this segment */ if (range_end == end) return 0; - free_start = rsvd_region[i].end; + free_start = PAGE_ALIGN(rsvd_region[i].end); } return 0; } @@ -306,6 +311,10 @@ /* process SAL system table: */ ia64_sal_init(efi.sal_systab); +#ifdef CONFIG_IA64_GENERIC + machvec_init(acpi_get_sysname()); +#endif + /* * Set `iobase' to the appropriate address in region 6 * (uncached access range) @@ -332,10 +341,6 @@ cpu_init(); /* initialize the bootstrap CPU */ -#ifdef CONFIG_IA64_GENERIC - machvec_init(acpi_get_sysname()); -#endif - if (efi.acpi20) { /* Parse the ACPI 2.0 tables */ acpi20_parse(efi.acpi20); @@ -371,17 +376,14 @@ { #ifdef CONFIG_SMP # define lpj c->loops_per_jiffy +# define cpunum c->cpu #else # define lpj loops_per_jiffy +# define cpunum 0 #endif char family[32], features[128], *cp; struct cpuinfo_ia64 *c = v; - unsigned long mask, cpu = c - cpu_data(0); - -#ifdef CONFIG_SMP - if (!(cpu_online_map & (1 << cpu))) - return 0; -#endif + unsigned long mask; mask = c->features; @@ -403,7 +405,7 @@ sprintf(cp, " 0x%lx", mask); seq_printf(m, - "processor : %lu\n" + "processor : %d\n" "vendor : %s\n" "arch : IA-64\n" "family : %s\n" @@ -416,7 +418,7 @@ "cpu MHz : %lu.%06lu\n" "itc MHz : %lu.%06lu\n" "BogoMIPS : %lu.%02lu\n\n", - cpu, c->vendor, family, c->model, c->revision, c->archrev, + cpunum, c->vendor, family, c->model, c->revision, c->archrev, features, c->ppn, c->number, c->proc_freq / 1000000, c->proc_freq % 1000000, c->itc_freq / 1000000, c->itc_freq % 1000000, @@ -427,6 +429,10 @@ static void * c_start (struct seq_file *m, loff_t *pos) { +#ifdef CONFIG_SMP + while (*pos < NR_CPUS && !(cpu_online_map & (1 << *pos))) + ++*pos; +#endif return *pos < NR_CPUS ? 
cpu_data(*pos) : NULL; } @@ -483,6 +489,9 @@ cpuid.bits[i] = ia64_get_cpuid(i); memcpy(c->vendor, cpuid.field.vendor, 16); +#ifdef CONFIG_SMP + c->cpu = smp_processor_id(); +#endif c->ppn = cpuid.field.ppn; c->number = cpuid.field.number; c->revision = cpuid.field.revision; @@ -534,7 +543,7 @@ = alloc_bootmem_pages_node(NODE_DATA(numa_node_id()), sizeof(struct cpuinfo_ia64)); for (cpu = 1; cpu < NR_CPUS; ++cpu) - memcpy(my_cpu_data->cpu_data[cpu]->cpu_data_ptrs, + memcpy(my_cpu_data->cpu_data[cpu]->cpu_data, my_cpu_data->cpu_data, sizeof(my_cpu_data->cpu_data)); } else { order = get_order(sizeof(struct cpuinfo_ia64)); @@ -577,6 +586,8 @@ atomic_inc(&init_mm.mm_count); current->active_mm = &init_mm; + if (current->mm) + BUG(); ia64_mmu_init(my_cpu_data); @@ -616,4 +627,6 @@ num_phys_stacked = 96; } local_cpu_data->phys_stacked_size_p8 = num_phys_stacked*8 + 8; + + platform_cpu_init(); } diff -Nru a/arch/ia64/kernel/sigframe.h b/arch/ia64/kernel/sigframe.h --- a/arch/ia64/kernel/sigframe.h Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/sigframe.h Tue Mar 12 13:58:15 2002 @@ -21,3 +21,5 @@ struct siginfo info; struct sigcontext sc; }; + +extern long ia64_do_signal (sigset_t *, struct sigscratch *, long); diff -Nru a/arch/ia64/kernel/signal.c b/arch/ia64/kernel/signal.c --- a/arch/ia64/kernel/signal.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/kernel/signal.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * Architecture-specific signal handling support. * - * Copyright (C) 1999-2001 Hewlett-Packard Co + * Copyright (C) 1999-2002 Hewlett-Packard Co * David Mosberger-Tang * * Derived from i386 and Alpha versions. @@ -17,6 +17,8 @@ #include #include #include +#include +#include #include #include @@ -39,8 +41,6 @@ # define GET_SIGSET(k,u) __get_user((k)->sig[0], &(u)->sig[0]) #endif -extern long ia64_do_signal (sigset_t *, struct sigscratch *, long); /* forward decl */ - long ia64_rt_sigsuspend (sigset_t *uset, size_t sigsetsize, struct sigscratch *scr) { @@ -160,6 +160,7 @@ err |= __put_user((short)from->si_code, &to->si_code); switch (from->si_code >> 16) { case __SI_FAULT >> 16: + err |= __put_user(from->si_flags, &to->si_flags); err |= __put_user(from->si_isr, &to->si_isr); case __SI_POLL >> 16: err |= __put_user(from->si_addr, &to->si_addr); @@ -172,7 +173,12 @@ case __SI_PROF >> 16: err |= __put_user(from->si_uid, &to->si_uid); err |= __put_user(from->si_pid, &to->si_pid); - err |= __put_user(from->si_pfm_ovfl, &to->si_pfm_ovfl); + if (from->si_code == PROF_OVFL) { + err |= __put_user(from->si_pfm_ovfl[0], &to->si_pfm_ovfl[0]); + err |= __put_user(from->si_pfm_ovfl[1], &to->si_pfm_ovfl[1]); + err |= __put_user(from->si_pfm_ovfl[2], &to->si_pfm_ovfl[2]); + err |= __put_user(from->si_pfm_ovfl[3], &to->si_pfm_ovfl[3]); + } break; default: err |= __put_user(from->si_uid, &to->si_uid); @@ -239,7 +245,7 @@ * could be corrupted. */ retval = (long) &ia64_leave_kernel; - if (current->ptrace & PT_TRACESYS) + if (test_thread_flag(TIF_SYSCALL_TRACE)) /* * strace expects to be notified after sigreturn returns even though the * context to which we return may not be in the middle of a syscall. 
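The net effect of the TIF_SYSCALL_TRACE and PT_TRACESYSGOOD changes above is visible from user space: once a tracer enables PTRACE_O_TRACESYSGOOD, syscall stops are reported as SIGTRAP | 0x80 rather than a plain SIGTRAP, and the new PTRACE_GETREGS/PTRACE_SETREGS requests transfer a whole struct pt_all_user_regs in one call. Below is a minimal tracer sketch, assuming <sys/ptrace.h> (or <asm/ptrace.h>) exposes PTRACE_SETOPTIONS and PTRACE_O_TRACESYSGOOD; the fallback constant values are assumptions for older headers, not part of this patch.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

#ifndef PTRACE_SETOPTIONS
# define PTRACE_SETOPTIONS	0x4200		/* assumed value; check your headers */
#endif
#ifndef PTRACE_O_TRACESYSGOOD
# define PTRACE_O_TRACESYSGOOD	0x00000001
#endif

int
main (int argc, char **argv)
{
	pid_t pid;
	int status;

	if (argc < 2)
		return 1;

	pid = fork();
	if (pid == 0) {
		/* child: ask to be traced, then exec the command named on our command line */
		ptrace(PTRACE_TRACEME, 0, 0, 0);
		execvp(argv[1], argv + 1);
		_exit(127);
	}

	waitpid(pid, &status, 0);			/* initial SIGTRAP stop after exec */
	ptrace(PTRACE_SETOPTIONS, pid, 0, (void *) PTRACE_O_TRACESYSGOOD);

	for (;;) {
		ptrace(PTRACE_SYSCALL, pid, 0, 0);	/* resume until the next syscall boundary */
		waitpid(pid, &status, 0);
		if (WIFEXITED(status) || WIFSIGNALED(status))
			break;
		if (WIFSTOPPED(status) && WSTOPSIG(status) == (SIGTRAP | 0x80))
			printf("syscall stop\n");	/* the 0x80 comes from syscall_trace() */
		else if (WIFSTOPPED(status))
			printf("signal stop: %d\n", WSTOPSIG(status));
			/* a real tracer would forward this signal when resuming */
	}
	return 0;
}

Run as, e.g., "./tracer /bin/ls": each system call made by the child shows up as two syscall stops (entry and exit), and a PTRACE_GETREGS call with a struct pt_all_user_regs buffer could be added at either stop to dump the full register frame.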
diff -Nru a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c --- a/arch/ia64/kernel/smp.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/smp.c Tue Mar 12 13:58:15 2002 @@ -29,6 +29,7 @@ #include #include #include +#include #include #include @@ -38,7 +39,6 @@ #include #include #include - #include #include #include @@ -51,14 +51,19 @@ #include #include -/* The 'big kernel lock' */ -spinlock_t kernel_flag __cacheline_aligned_in_smp = SPIN_LOCK_UNLOCKED; +/* + * The Big Kernel Lock. It's not supposed to be used for performance critical stuff + * anymore. But we still need to align it because certain workloads are still affected by + * it. For example, llseek() and various other filesystem related routines still use the + * BKL. + */ +spinlock_t kernel_flag __cacheline_aligned = SPIN_LOCK_UNLOCKED; /* * Structure and data for smp_call_function(). This is designed to minimise static memory * requirements. It also looks cleaner. */ -static spinlock_t call_lock = SPIN_LOCK_UNLOCKED; +static spinlock_t call_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; struct call_data_struct { void (*func) (void *info); @@ -70,8 +75,12 @@ static volatile struct call_data_struct *call_data; +static spinlock_t migration_lock = SPIN_LOCK_UNLOCKED; +static task_t *migrating_task; + #define IPI_CALL_FUNC 0 #define IPI_CPU_STOP 1 +#define IPI_MIGRATE_TASK 2 static void stop_this_cpu (void) @@ -98,51 +107,60 @@ mb(); /* Order interrupt and bit testing. */ while ((ops = xchg(pending_ipis, 0)) != 0) { - mb(); /* Order bit clearing and data access. */ - do { - unsigned long which; - - which = ffz(~ops); - ops &= ~(1 << which); - - switch (which) { - case IPI_CALL_FUNC: - { - struct call_data_struct *data; - void (*func)(void *info); - void *info; - int wait; - - /* release the 'pointer lock' */ - data = (struct call_data_struct *) call_data; - func = data->func; - info = data->info; - wait = data->wait; - - mb(); - atomic_inc(&data->started); - - /* At this point the structure may be gone unless wait is true. */ - (*func)(info); - - /* Notify the sending CPU that the task is done. */ - mb(); - if (wait) - atomic_inc(&data->finished); + mb(); /* Order bit clearing and data access. */ + do { + unsigned long which; + + which = ffz(~ops); + ops &= ~(1 << which); + + switch (which) { + case IPI_CALL_FUNC: + { + struct call_data_struct *data; + void (*func)(void *info); + void *info; + int wait; + + /* release the 'pointer lock' */ + data = (struct call_data_struct *) call_data; + func = data->func; + info = data->info; + wait = data->wait; + + mb(); + atomic_inc(&data->started); + /* + * At this point the structure may be gone unless + * wait is true. + */ + (*func)(info); + + /* Notify the sending CPU that the task is done. */ + mb(); + if (wait) + atomic_inc(&data->finished); + } + break; + + case IPI_MIGRATE_TASK: + { + task_t *p = migrating_task; + spin_unlock(&migration_lock); + sched_task_migrated(p); + } + break; + + case IPI_CPU_STOP: + stop_this_cpu(); + break; + + default: + printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n", this_cpu, which); + break; } - break; - - case IPI_CPU_STOP: - stop_this_cpu(); - break; - - default: - printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n", this_cpu, which); - break; - } /* Switch */ - } while (ops); - - mb(); /* Order data access and bit testing. */ + } while (ops); + mb(); /* Order data access and bit testing. */ } } @@ -185,10 +203,25 @@ platform_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0); } +/* + * This function sends a reschedule IPI to all (other) CPUs. 
This should only be used if + * some 'global' task became runnable, such as a RT task, that must be handled now. The + * first CPU that manages to grab the task will run it. + */ +void +smp_send_reschedule_all (void) +{ + int i; + + for (i = 0; i < smp_num_cpus; i++) + if (i != smp_processor_id()) + smp_send_reschedule(i); +} + void smp_flush_tlb_all (void) { - smp_call_function ((void (*)(void *))__flush_tlb_all,0,1,1); + smp_call_function((void (*)(void *))__flush_tlb_all, 0, 1, 1); __flush_tlb_all(); } @@ -315,6 +348,15 @@ { send_IPI_allbutself(IPI_CPU_STOP); smp_num_cpus = 1; +} + +void +smp_migrate_task (int cpu, task_t *p) +{ + /* The target CPU will unlock the migration spinlock: */ + spin_lock(&migration_lock); + migrating_task = p; + send_IPI_single(cpu, IPI_MIGRATE_TASK); } int __init diff -Nru a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c --- a/arch/ia64/kernel/smpboot.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/smpboot.c Tue Mar 12 13:58:15 2002 @@ -70,6 +70,7 @@ extern void start_ap(void); int cpucount; +task_t *task_for_booting_cpu; /* Setup configured maximum number of CPUs to activate */ static int max_cpus = -1; @@ -378,7 +379,7 @@ smp_callin(); Dprintk("CPU %d is set to go.\n", smp_processor_id()); while (!atomic_read(&smp_commenced)) - ; + cpu_relax(); Dprintk("CPU %d is starting idle.\n", smp_processor_id()); return cpu_idle(); @@ -416,13 +417,13 @@ if (!idle) panic("No idle process for CPU %d", cpu); - idle->processor = cpu; + init_idle(idle, cpu); + ia64_cpu_to_sapicid[cpu] = sapicid; - idle->cpus_runnable = 1 << cpu; /* we schedule the first task manually */ - del_from_runqueue(idle); unhash_process(idle); - init_tasks[cpu] = idle; + + task_for_booting_cpu = idle; Dprintk("Sending wakeup vector %u to AP 0x%x/0x%x.\n", ap_wakeup_vector, cpu, sapicid); @@ -451,6 +452,17 @@ } } +unsigned long cache_decay_ticks; /* # of ticks an idle task is considered cache-hot */ + +static void +smp_tune_scheduling (void) +{ + cache_decay_ticks = 10; /* XXX base this on PAL info and cache-bandwidth estimate */ + + printk("task migration cache decay timeout: %ld msecs.\n", + (cache_decay_ticks + 1) * 1000 / HZ); +} + /* * Cycle through the APs sending Wakeup IPIs to boot each. */ @@ -470,8 +482,8 @@ smp_setup_percpu_timer(); /* - * We have the boot CPU online for sure. - */ + * We have the boot CPU online for sure. + */ set_bit(0, &cpu_online_map); set_bit(0, &cpu_callin_map); @@ -480,9 +492,9 @@ printk("Boot processor id 0x%x/0x%x\n", 0, boot_cpu_id); - global_irq_holder = 0; - current->processor = 0; - init_idle(); + global_irq_holder = NO_PROC_ID; + current_thread_info()->cpu = 0; + smp_tune_scheduling(); /* * If SMP should be disabled, then really disable it! @@ -493,7 +505,7 @@ smp_num_cpus = 1; goto smp_done; } - if (max_cpus != -1) + if (max_cpus != -1) printk (KERN_INFO "Limiting CPUs to %d\n", max_cpus); if (smp_boot_data.cpu_count > 1) { diff -Nru a/arch/ia64/kernel/sys_ia64.c b/arch/ia64/kernel/sys_ia64.c --- a/arch/ia64/kernel/sys_ia64.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/sys_ia64.c Tue Mar 12 13:58:15 2002 @@ -2,8 +2,8 @@ * This file contains various system calls that have different calling * conventions on different platforms. 
* - * Copyright (C) 1999-2000 Hewlett-Packard Co - * Copyright (C) 1999-2000 David Mosberger-Tang + * Copyright (C) 1999-2000, 2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include #include @@ -201,15 +201,13 @@ if (len == 0) goto out; - /* don't permit mappings into unmapped space or the virtual page table of a region: */ + /* + * Don't permit mappings into unmapped space, the virtual page table of a region, + * or across a region boundary. Note: RGN_MAP_LIMIT is equal to 2^n-PAGE_SIZE + * (for some integer n <= 61) and len > 0. + */ roff = rgn_offset(addr); - if ((len | roff | (roff + len)) >= RGN_MAP_LIMIT) { - addr = -EINVAL; - goto out; - } - - /* don't permit mappings that would cross a region boundary: */ - if (rgn_index(addr) != rgn_index(addr + len)) { + if ((len > RGN_MAP_LIMIT) || (roff > (RGN_MAP_LIMIT - len))) { addr = -EINVAL; goto out; } @@ -275,74 +273,6 @@ regs->r8 = 0; /* ensure large addresses are not mistaken as failures... */ return addr; } - -#if 1 -/* - * This is here for a while to keep compatibillity with the old stat() - * call - it will be removed later once everybody migrates to the new - * kernel stat structure that matches the glibc one - Jes - */ - -static int -cp_ia64_old_stat (struct kstat *stat, struct ia64_oldstat *statbuf) -{ - struct ia64_oldstat tmp; - unsigned int blocks, indirect; - - memset(&tmp, 0, sizeof(tmp)); - tmp.st_dev = stat->dev; - tmp.st_ino = stat->ino; - tmp.st_mode = stat->mode; - tmp.st_nlink = stat->nlink; - SET_STAT_UID(tmp, stat->uid); - SET_STAT_GID(tmp, stat->gid); - tmp.st_rdev = stat->rdev; - tmp.st_size = stat->size; - tmp.st_atime = stat->atime; - tmp.st_mtime = stat->mtime; - tmp.st_ctime = stat->ctime; - tmp.st_blocks = stat->i_blocks; - tmp.st_blksize = stat->i_blksize; - return copy_to_user(statbuf,&tmp,sizeof(tmp)) ? -EFAULT : 0; -} - -asmlinkage long -ia64_oldstat (char *filename, struct ia64_oldstat *statbuf) -{ - struct kstat stat; - int error = vfs_stat(filename, &stat); - - if (!error) - error = cp_ia64_old_stat(&stat, statbuf); - - return error; -} - -asmlinkage long -ia64_oldlstat (char *filename, struct ia64_oldstat *statbuf) -{ - struct kstat stat; - int error = vfs_lstat(filename, &stat); - - if (!error) - error = cp_ia64_old_stat(&stat, statbuf); - - return error; -} - -asmlinkage long -ia64_oldfstat (unsigned int fd, struct ia64_oldstat *statbuf) -{ - struct kstat stat; - int error = vfs_fstat(fd, &stat); - - if (!error) - error = cp_ia64_old_stat(&stat, statbuf); - - return error; -} - -#endif #ifndef CONFIG_PCI diff -Nru a/arch/ia64/kernel/traps.c b/arch/ia64/kernel/traps.c --- a/arch/ia64/kernel/traps.c Tue Mar 12 13:58:16 2002 +++ b/arch/ia64/kernel/traps.c Tue Mar 12 13:58:16 2002 @@ -1,7 +1,7 @@ /* * Architecture-specific trap handling. 
* - * Copyright (C) 1998-2001 Hewlett-Packard Co + * Copyright (C) 1998-2002 Hewlett-Packard Co * David Mosberger-Tang * * 05/12/00 grao : added isr in siginfo for SIGFPE @@ -32,6 +32,7 @@ #include #include #include +#include #include /* For unblank_screen() */ #include @@ -133,6 +134,8 @@ /* SIGILL, SIGFPE, SIGSEGV, and SIGBUS want these field initialized: */ siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri); siginfo.si_imm = break_num; + siginfo.si_flags = 0; /* clear __ISR_VALID */ + siginfo.si_isr = 0; switch (break_num) { case 0: /* unknown error */ @@ -352,6 +355,8 @@ siginfo.si_code = FPE_FLTDIV; } siginfo.si_isr = isr; + siginfo.si_flags = __ISR_VALID; + siginfo.si_imm = 0; force_sig_info(SIGFPE, &siginfo, current); } } else { @@ -372,6 +377,8 @@ siginfo.si_code = FPE_FLTRES; } siginfo.si_isr = isr; + siginfo.si_flags = __ISR_VALID; + siginfo.si_imm = 0; force_sig_info(SIGFPE, &siginfo, current); } } @@ -490,6 +497,8 @@ siginfo.si_errno = 0; siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri); siginfo.si_imm = vector; + siginfo.si_flags = __ISR_VALID; + siginfo.si_isr = isr; force_sig_info(SIGILL, &siginfo, current); return; } @@ -517,6 +526,10 @@ } siginfo.si_signo = SIGTRAP; siginfo.si_errno = 0; + siginfo.si_flags = 0; + siginfo.si_isr = 0; + siginfo.si_addr = 0; + siginfo.si_imm = 0; force_sig_info(SIGTRAP, &siginfo, current); return; @@ -528,6 +541,9 @@ siginfo.si_errno = 0; siginfo.si_code = FPE_FLTINV; siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri); + siginfo.si_flags = __ISR_VALID; + siginfo.si_isr = isr; + siginfo.si_imm = 0; force_sig_info(SIGFPE, &siginfo, current); } return; @@ -537,6 +553,9 @@ siginfo.si_signo = SIGILL; siginfo.si_code = ILL_BADIADDR; siginfo.si_errno = 0; + siginfo.si_flags = 0; + siginfo.si_isr = 0; + siginfo.si_imm = 0; siginfo.si_addr = (void *) (regs->cr_iip + ia64_psr(regs)->ri); force_sig_info(SIGILL, &siginfo, current); return; diff -Nru a/arch/ia64/kernel/unaligned.c b/arch/ia64/kernel/unaligned.c --- a/arch/ia64/kernel/unaligned.c Tue Mar 12 13:58:16 2002 +++ b/arch/ia64/kernel/unaligned.c Tue Mar 12 13:58:16 2002 @@ -1,9 +1,9 @@ /* * Architecture-specific unaligned trap handling. * - * Copyright (C) 1999-2001 Hewlett-Packard Co - * Copyright (C) 1999-2000 Stephane Eranian - * Copyright (C) 2001 David Mosberger-Tang + * Copyright (C) 1999-2002 Hewlett-Packard Co + * Stephane Eranian + * David Mosberger-Tang * * 2001/10/11 Fix unaligned access to rotating registers in s/w pipelined loops. * 2001/08/13 Correct size of extended floats (float_fsz) from 16 to 10 bytes. @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -23,7 +24,7 @@ #undef DEBUG_UNALIGNED_TRAP #ifdef DEBUG_UNALIGNED_TRAP -# define DPRINT(a...) do { printk("%s.%u: ", __FUNCTION__, __LINE__); printk (a); } while (0) +# define DPRINT(a...) do { printk("%s %u: ", __FUNCTION__, __LINE__); printk (a); } while (0) # define DDUMP(str,vp,len) dump(str, vp, len) static void @@ -650,7 +651,7 @@ * just in case. 
*/ if (ld.x6_op == 1 || ld.x6_op == 3) { - printk(KERN_ERR __FUNCTION__": register update on speculative load, error\n"); + printk("%s %s: register update on speculative load, error\n", KERN_ERR, __FUNCTION__); die_if_kernel("unaligned reference on specualtive load with register update\n", regs, 30); } @@ -1080,8 +1081,8 @@ * For this reason we keep this sanity check */ if (ld.x6_op == 1 || ld.x6_op == 3) - printk(KERN_ERR __FUNCTION__": register update on speculative load pair, " - "error\n"); + printk("%s %s: register update on speculative load pair, " + "error\n",KERN_ERR, __FUNCTION__); setreg(ld.r3, ifa, 0, regs); } @@ -1488,6 +1489,9 @@ si.si_errno = 0; si.si_code = BUS_ADRALN; si.si_addr = (void *) ifa; + si.si_flags = 0; + si.si_isr = 0; + si.si_imm = 0; force_sig_info(SIGBUS, &si, current); goto done; } diff -Nru a/arch/ia64/kernel/unwind.c b/arch/ia64/kernel/unwind.c --- a/arch/ia64/kernel/unwind.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/unwind.c Tue Mar 12 13:58:15 2002 @@ -1,6 +1,6 @@ /* - * Copyright (C) 1999-2001 Hewlett-Packard Co - * Copyright (C) 1999-2001 David Mosberger-Tang + * Copyright (C) 1999-2002 Hewlett-Packard Co + * David Mosberger-Tang */ /* * This file implements call frame unwind support for the Linux @@ -72,6 +72,8 @@ #define alloc_reg_state() kmalloc(sizeof(struct unw_state_record), GFP_ATOMIC) #define free_reg_state(usr) kfree(usr) +#define alloc_labeled_state() kmalloc(sizeof(struct unw_labeled_state), GFP_ATOMIC) +#define free_labeled_state(usr) kfree(usr) typedef unsigned long unw_word; typedef unsigned char unw_hash_index_t; @@ -521,7 +523,7 @@ } -/* Unwind decoder routines */ +/* Routines to manipulate the state stack. */ static inline void push (struct unw_state_record *sr) @@ -534,24 +536,60 @@ return; } memcpy(rs, &sr->curr, sizeof(*rs)); - rs->next = sr->stack; - sr->stack = rs; + sr->curr.next = rs; } static void pop (struct unw_state_record *sr) { - struct unw_reg_state *rs; + struct unw_reg_state *rs = sr->curr.next; - if (!sr->stack) { - printk ("unwind: stack underflow!\n"); + if (!rs) { + printk("unwind: stack underflow!\n"); return; } - rs = sr->stack; - sr->stack = rs->next; + memcpy(&sr->curr, rs, sizeof(*rs)); free_reg_state(rs); } +/* Make a copy of the state stack. Non-recursive to avoid stack overflows. */ +static struct unw_reg_state * +dup_state_stack (struct unw_reg_state *rs) +{ + struct unw_reg_state *copy, *prev = NULL, *first = NULL; + + while (rs) { + copy = alloc_reg_state(); + if (!copy) { + printk ("unwind.dup_state_stack: out of memory\n"); + return NULL; + } + memcpy(copy, rs, sizeof(*copy)); + if (first) + prev->next = copy; + else + first = copy; + rs = rs->next; + prev = copy; + } + return first; +} + +/* Free all stacked register states (but not RS itself). 
*/ +static void +free_state_stack (struct unw_reg_state *rs) +{ + struct unw_reg_state *p, *next; + + for (p = rs->next; p != NULL; p = next) { + next = p->next; + free_reg_state(p); + } + rs->next = NULL; +} + +/* Unwind decoder routines */ + static enum unw_register_index __attribute__((const)) decode_abreg (unsigned char abreg, int memory) { @@ -689,7 +727,7 @@ sr->first_region = 0; /* check if we're done: */ - if (body && sr->when_target < sr->region_start + sr->region_len) { + if (sr->when_target < sr->region_start + sr->region_len) { sr->done = 1; return; } @@ -902,31 +940,36 @@ static inline void desc_copy_state (unw_word label, struct unw_state_record *sr) { - struct unw_reg_state *rs; + struct unw_labeled_state *ls; - for (rs = sr->reg_state_list; rs; rs = rs->next) { - if (rs->label == label) { - memcpy (&sr->curr, rs, sizeof(sr->curr)); + for (ls = sr->labeled_states; ls; ls = ls->next) { + if (ls->label == label) { + free_state_stack(&sr->curr); + memcpy(&sr->curr, &ls->saved_state, sizeof(sr->curr)); + sr->curr.next = dup_state_stack(ls->saved_state.next); return; } } - printk("unwind: failed to find state labelled 0x%lx\n", label); + printk("unwind: failed to find state labeled 0x%lx\n", label); } static inline void desc_label_state (unw_word label, struct unw_state_record *sr) { - struct unw_reg_state *rs; + struct unw_labeled_state *ls; - rs = alloc_reg_state(); - if (!rs) { - printk("unwind: cannot stack!\n"); + ls = alloc_labeled_state(); + if (!ls) { + printk("unwind.desc_label_state(): out of memory\n"); return; } - memcpy(rs, &sr->curr, sizeof(*rs)); - rs->label = label; - rs->next = sr->reg_state_list; - sr->reg_state_list = rs; + ls->label = label; + memcpy(&ls->saved_state, &sr->curr, sizeof(ls->saved_state)); + ls->saved_state.next = dup_state_stack(sr->curr.next); + + /* insert into list of labeled states: */ + ls->next = sr->labeled_states; + sr->labeled_states = ls; } /* @@ -1378,6 +1421,8 @@ else break; } + if (rel_ip < e->start_offset || rel_ip >= e->end_offset) + return NULL; return e; } @@ -1388,9 +1433,9 @@ static inline struct unw_script * build_script (struct unw_frame_info *info) { - struct unw_reg_state *rs, *next; const struct unw_table_entry *e = 0; struct unw_script *script = 0; + struct unw_labeled_state *ls, *next; unsigned long ip = info->ip; struct unw_state_record sr; struct unw_table *table; @@ -1535,15 +1580,15 @@ for (i = UNW_REG_BSP; i < UNW_NUM_REGS; ++i) compile_reg(&sr, i, script); - /* free labelled register states & stack: */ + /* free labeled register states & stack: */ STAT(parse_start = ia64_get_itc()); - for (rs = sr.reg_state_list; rs; rs = next) { - next = rs->next; - free_reg_state(rs); + for (ls = sr.labeled_states; ls; ls = next) { + next = ls->next; + free_state_stack(&ls->saved_state); + free_labeled_state(ls); } - while (sr.stack) - pop(&sr); + free_state_stack(&sr.curr); STAT(unw.stat.script.parse_time += ia64_get_itc() - parse_start); script_finalize(script, &sr); diff -Nru a/arch/ia64/kernel/unwind_i.h b/arch/ia64/kernel/unwind_i.h --- a/arch/ia64/kernel/unwind_i.h Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/kernel/unwind_i.h Tue Mar 12 13:58:15 2002 @@ -1,6 +1,6 @@ /* - * Copyright (C) 2000 Hewlett-Packard Co - * Copyright (C) 2000 David Mosberger-Tang + * Copyright (C) 2000, 2002 Hewlett-Packard Co + * David Mosberger-Tang * * Kernel unwind support. 
*/ @@ -85,6 +85,17 @@ int when; /* when the register gets saved */ }; +struct unw_reg_state { + struct unw_reg_state *next; /* next (outer) element on state stack */ + struct unw_reg_info reg[UNW_NUM_REGS]; /* register save locations */ +}; + +struct unw_labeled_state { + struct unw_labeled_state *next; /* next labeled state (or NULL) */ + unsigned long label; /* label for this state */ + struct unw_reg_state saved_state; +}; + struct unw_state_record { unsigned int first_region : 1; /* is this the first region? */ unsigned int done : 1; /* are we done scanning descriptors? */ @@ -105,11 +116,8 @@ u8 gr_save_loc; /* next general register to use for saving a register */ u8 return_link_reg; /* branch register in which the return link is passed */ - struct unw_reg_state { - struct unw_reg_state *next; - unsigned long label; /* label of this state record */ - struct unw_reg_info reg[UNW_NUM_REGS]; - } curr, *stack, *reg_state_list; + struct unw_labeled_state *labeled_states; /* list of all labeled states */ + struct unw_reg_state curr; /* current state */ }; enum unw_nat_type { @@ -139,7 +147,7 @@ }; /* - * Preserved general static registers (r2-r5) give rise to two script + * Preserved general static registers (r4-r7) give rise to two script * instructions; everything else yields at most one instruction; at * the end of the script, the psp gets popped, accounting for one more * instruction. diff -Nru a/arch/ia64/lib/clear_page.S b/arch/ia64/lib/clear_page.S --- a/arch/ia64/lib/clear_page.S Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/lib/clear_page.S Tue Mar 12 13:58:14 2002 @@ -1,51 +1,77 @@ /* - * - * Optimized function to clear a page of memory. - * - * Inputs: - * in0: address of page - * - * Output: - * none - * - * Copyright (C) 1999-2001 Hewlett-Packard Co - * Copyright (C) 1999 Stephane Eranian - * Copyright (C) 1999-2001 David Mosberger-Tang + * Copyright (C) 1999-2002 Hewlett-Packard Co + * Stephane Eranian + * David Mosberger-Tang + * Copyright (C) 2002 Ken Chen * * 1/06/01 davidm Tuned for Itanium. 
+ * 2/12/02 kchen Tuned for both Itanium and McKinley + * 3/08/02 davidm Some more tweaking */ +#include + #include #include +#ifdef CONFIG_ITANIUM +# define L3_LINE_SIZE 64 // Itanium L3 line size +# define PREFETCH_LINES 9 // magic number +#else +# define L3_LINE_SIZE 128 // McKinley L3 line size +# define PREFETCH_LINES 7 // magic number +#endif + #define saved_lc r2 -#define dst0 in0 +#define dst_fetch r3 #define dst1 r8 #define dst2 r9 #define dst3 r10 -#define dst_fetch r11 +#define dst4 r11 + +#define dst_last r31 GLOBAL_ENTRY(clear_page) .prologue .regstk 1,0,0,0 - mov r16 = PAGE_SIZE/64-1 // -1 = repeat/until - ;; + mov r16 = PAGE_SIZE/L3_LINE_SIZE-1 // main loop count, -1=repeat/until .save ar.lc, saved_lc mov saved_lc = ar.lc + .body - mov ar.lc = r16 - adds dst1 = 16, dst0 - adds dst2 = 32, dst0 - adds dst3 = 48, dst0 - adds dst_fetch = 512, dst0 + mov ar.lc = (PREFETCH_LINES - 1) + mov dst_fetch = in0 + adds dst1 = 16, in0 + adds dst2 = 32, in0 + ;; +.fetch: stf.spill.nta [dst_fetch] = f0, L3_LINE_SIZE + adds dst3 = 48, in0 // executing this multiple times is harmless + br.cloop.sptk.few .fetch + ;; + addl dst_last = (PAGE_SIZE - PREFETCH_LINES*L3_LINE_SIZE), dst_fetch + mov ar.lc = r16 // one L3 line per iteration + adds dst4 = 64, in0 + ;; +#ifdef CONFIG_ITANIUM + // Optimized for Itanium +1: stf.spill.nta [dst1] = f0, 64 + stf.spill.nta [dst2] = f0, 64 + cmp.lt p8,p0=dst_fetch, dst_last + ;; +#else + // Optimized for McKinley +1: stf.spill.nta [dst1] = f0, 64 + stf.spill.nta [dst2] = f0, 64 + stf.spill.nta [dst3] = f0, 64 + stf.spill.nta [dst4] = f0, 128 + cmp.lt p8,p0=dst_fetch, dst_last ;; -1: stf.spill.nta [dst0] = f0, 64 stf.spill.nta [dst1] = f0, 64 stf.spill.nta [dst2] = f0, 64 +#endif stf.spill.nta [dst3] = f0, 64 - - lfetch [dst_fetch], 64 - br.cloop.dptk.few 1b +(p8) stf.spill.nta [dst_fetch] = f0, L3_LINE_SIZE + br.cloop.sptk.few 1b ;; - mov ar.lc = r2 // restore lc + mov ar.lc = saved_lc // restore lc br.ret.sptk.many rp END(clear_page) diff -Nru a/arch/ia64/lib/copy_page.S b/arch/ia64/lib/copy_page.S --- a/arch/ia64/lib/copy_page.S Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/lib/copy_page.S Tue Mar 12 13:58:15 2002 @@ -9,8 +9,8 @@ * no return value * * Copyright (C) 1999, 2001 Hewlett-Packard Co - * Copyright (C) 1999 Stephane Eranian - * Copyright (C) 2001 David Mosberger + * Stephane Eranian + * David Mosberger * * 4/06/01 davidm Tuned to make it perform well both for cached and uncached copies. */ diff -Nru a/arch/ia64/lib/swiotlb.c b/arch/ia64/lib/swiotlb.c --- a/arch/ia64/lib/swiotlb.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/lib/swiotlb.c Tue Mar 12 13:58:15 2002 @@ -27,11 +27,20 @@ #define ALIGN(val, align) ((unsigned long) \ (((unsigned long) (val) + ((align) - 1)) & ~((align) - 1))) -#define SG_ENT_VIRT_ADDRESS(sg) ((sg)->address ? (sg)->address \ - : page_address((sg)->page) + (sg)->offset) +#define OFFSET(val,align) ((unsigned long) \ + ( (val) & ( (align) - 1))) + +#define SG_ENT_VIRT_ADDRESS(sg) (page_address((sg)->page) + (sg)->offset) #define SG_ENT_PHYS_ADDRESS(SG) virt_to_phys(SG_ENT_VIRT_ADDRESS(SG)) /* + * Maximum allowable number of contiguous slabs to map, + * must be a power of 2. What is the appropriate value ? + * The complexity of {map,unmap}_single is linearly dependent on this value. + */ +#define IO_TLB_SEGSIZE 128 + +/* * log of the size of each IO TLB slab. The number of slabs is command line controllable. 
*/ #define IO_TLB_SHIFT 11 @@ -69,10 +78,15 @@ setup_io_tlb_npages (char *str) { io_tlb_nslabs = simple_strtoul(str, NULL, 0) << (PAGE_SHIFT - IO_TLB_SHIFT); + + /* avoid tail segment of size < IO_TLB_SEGSIZE */ + io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); + return 1; } __setup("swiotlb=", setup_io_tlb_npages); + /* * Statically reserve bounce buffer space and initialize bounce buffer data structures for * the software IO TLB used to implement the PCI DMA API. @@ -92,12 +106,12 @@ /* * Allocate and initialize the free list array. This array is used - * to find contiguous free memory regions of size 2^IO_TLB_SHIFT between - * io_tlb_start and io_tlb_end. + * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE + * between io_tlb_start and io_tlb_end. */ io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int)); for (i = 0; i < io_tlb_nslabs; i++) - io_tlb_list[i] = io_tlb_nslabs - i; + io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); io_tlb_index = 0; io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *)); @@ -124,7 +138,7 @@ if (size > (1 << PAGE_SHIFT)) stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT)); else - stride = nslots; + stride = 1; if (!nslots) BUG(); @@ -151,7 +165,8 @@ for (i = index; i < index + nslots; i++) io_tlb_list[i] = 0; - for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--) + for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) + && io_tlb_list[i]; i--) io_tlb_list[i] = ++count; dma_addr = io_tlb_start + (index << IO_TLB_SHIFT); @@ -217,7 +232,8 @@ */ spin_lock_irqsave(&io_tlb_lock, flags); { - int count = ((index + nslots) < io_tlb_nslabs ? io_tlb_list[index + nslots] : 0); + int count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ? + io_tlb_list[index + nslots] : 0); /* * Step 1: return the slots to the free list, merging the slots with * superceeding slots @@ -228,7 +244,8 @@ * Step 2: merge the returned slots with the preceeding slots, if * available (non zero) */ - for (i = index - 1; (i >= 0) && io_tlb_list[i]; i--) + for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && + io_tlb_list[i]; i--) io_tlb_list[i] = ++count; } spin_unlock_irqrestore(&io_tlb_lock, flags); @@ -405,11 +422,9 @@ for (i = 0; i < nelems; i++, sg++) { sg->orig_address = SG_ENT_VIRT_ADDRESS(sg); if ((SG_ENT_PHYS_ADDRESS(sg) & ~hwdev->dma_mask) != 0) { - addr = map_single(hwdev, sg->address, sg->length, direction); - if (sg->address) - sg->address = addr; - else - sg->page = virt_to_page(addr); + addr = map_single(hwdev, sg->orig_address, sg->length, direction); + sg->page = virt_to_page(addr); + sg->offset = (u64) addr & ~PAGE_MASK; } } return nelems; @@ -430,12 +445,10 @@ for (i = 0; i < nelems; i++, sg++) if (sg->orig_address != SG_ENT_VIRT_ADDRESS(sg)) { unmap_single(hwdev, SG_ENT_VIRT_ADDRESS(sg), sg->length, direction); - if (sg->address) - sg->address = sg->orig_address; - else - sg->page = virt_to_page(sg->orig_address); + sg->page = virt_to_page(sg->orig_address); + sg->offset = (u64) sg->orig_address & ~PAGE_MASK; } else if (direction == PCI_DMA_FROMDEVICE) - mark_clean(sg->address, sg->length); + mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->length); } /* diff -Nru a/arch/ia64/mm/extable.c b/arch/ia64/mm/extable.c --- a/arch/ia64/mm/extable.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/mm/extable.c Tue Mar 12 13:58:15 2002 @@ -1,8 +1,8 @@ /* * Kernel exception handling table support. Derived from arch/alpha/mm/extable.c. 
* - * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co - * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang + * Copyright (C) 1998, 1999, 2001-2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include @@ -55,10 +55,12 @@ struct module *mp; /* The kernel is the last "module" -- no need to treat it special. */ - for (mp = module_list; mp ; mp = mp->next) { + for (mp = module_list; mp; mp = mp->next) { if (!mp->ex_table_start) continue; archdata = (struct archdata *) mp->archdata_start; + if (!archdata) + continue; entry = search_one_table(mp->ex_table_start, mp->ex_table_end - 1, addr, (unsigned long) archdata->gp); if (entry) { diff -Nru a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c --- a/arch/ia64/mm/fault.c Tue Mar 12 13:58:16 2002 +++ b/arch/ia64/mm/fault.c Tue Mar 12 13:58:16 2002 @@ -1,7 +1,7 @@ /* * MMU fault handling support. * - * Copyright (C) 1998-2001 Hewlett-Packard Co + * Copyright (C) 1998-2002 Hewlett-Packard Co * David Mosberger-Tang */ #include @@ -96,7 +96,7 @@ * sure we exit gracefully rather than endlessly redo the * fault. */ - switch (handle_mm_fault(mm, vma, address, mask)) { + switch (handle_mm_fault(mm, vma, address, (mask & VM_WRITE) != 0)) { case 1: ++current->min_flt; break; @@ -151,6 +151,8 @@ si.si_errno = 0; si.si_code = code; si.si_addr = (void *) address; + si.si_isr = isr; + si.si_flags = __ISR_VALID; force_sig_info(signal, &si, current); return; } @@ -194,9 +196,7 @@ out_of_memory: up_read(&mm->mmap_sem); if (current->pid == 1) { - current->policy |= SCHED_YIELD; - schedule(); - down_read(&mm->mmap_sem); + yield(); goto survive; } printk("VM: killing process %s\n", current->comm); diff -Nru a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c --- a/arch/ia64/mm/init.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/mm/init.c Tue Mar 12 13:58:15 2002 @@ -1,8 +1,8 @@ /* * Initialize MMU support. * - * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include #include @@ -14,6 +14,7 @@ #include #include +#include #include #include #include @@ -37,10 +38,15 @@ static unsigned long totalram_pages; +static int pgt_cache_water[2] = { 25, 50 }; + int -do_check_pgt_cache (int low, int high) +check_pgt_cache (void) { - int freed = 0; + int low, high, freed = 0; + + low = pgt_cache_water[0]; + high = pgt_cache_water[1]; if (pgtable_cache_size > high) { do { @@ -48,8 +54,6 @@ free_page((unsigned long)pgd_alloc_one_fast(0)), ++freed; if (pmd_quicklist) free_page((unsigned long)pmd_alloc_one_fast(0, 0)), ++freed; - if (pte_quicklist) - free_page((unsigned long)pte_alloc_one_fast(0, 0)), ++freed; } while (pgtable_cache_size > low); } return freed; @@ -243,15 +247,16 @@ pmd = pmd_alloc(&init_mm, pgd, address); if (!pmd) goto out; - pte = pte_alloc(&init_mm, pmd, address); + pte = pte_alloc_map(&init_mm, pmd, address); if (!pte) goto out; if (!pte_none(*pte)) { - pte_ERROR(*pte); + pte_unmap(pte); goto out; } flush_page_to_ram(page); set_pte(pte, mk_pte(page, PAGE_GATE)); + pte_unmap(pte); } out: spin_unlock(&init_mm.page_table_lock); /* no need for flush_tlb */ diff -Nru a/arch/ia64/sn/Makefile b/arch/ia64/sn/Makefile --- a/arch/ia64/sn/Makefile Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,19 +0,0 @@ -# -# ia64/sn/Makefile -# -# Copyright (C) 1999 Silicon Graphics, Inc. -# Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com) -# - -EXTRA_CFLAGS := -DSN -DLANGUAGE_C=1 -D_LANGUAGE_C=1 -I. 
-DBRINGUP \ - -DDIRECT_L1_CONSOLE -DNUMA_BASE -DSIMULATED_KLGRAPH \ - -DNUMA_MIGR_CONTROL -DLITTLE_ENDIAN -DREAL_HARDWARE \ - -DNEW_INTERRUPTS -all: sn.a - -O_TARGET = sn.a -obj-y = sn1/sn1.a - -clean:: - -include $(TOPDIR)/Rules.make diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-bigsur-mp b/arch/ia64/sn/configs/sn1/defconfig-bigsur-mp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-bigsur-mp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,777 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +CONFIG_IA64_DIG=y +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +# CONFIG_NUMA is not set +# CONFIG_IA64_MCA is not set +CONFIG_PM=y +CONFIG_IA64_HAVE_SYNCRONIZED_ITC=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +# CONFIG_PACKET is not set +# CONFIG_NETLINK is not set +# CONFIG_NETFILTER is not set +# CONFIG_FILTER is not set +CONFIG_UNIX=y +CONFIG_INET=y +# CONFIG_IP_MULTICAST is not set +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_INET_ECN is not set +# CONFIG_SYN_COOKIES is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# CONFIG_NET_HW_FLOWCONTROL is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# 
CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +CONFIG_BLK_DEV_IDECD=y +# CONFIG_BLK_DEV_IDETAPE is not set +CONFIG_BLK_DEV_IDEFLOPPY=y +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +CONFIG_BLK_DEV_IDEPCI=y +# CONFIG_IDEPCI_SHARE_IRQ is not set +CONFIG_BLK_DEV_IDEDMA_PCI=y +CONFIG_BLK_DEV_ADMA=y +# CONFIG_BLK_DEV_OFFBOARD is not set +# CONFIG_IDEDMA_PCI_AUTO is not set +CONFIG_BLK_DEV_IDEDMA=y +# CONFIG_IDEDMA_PCI_WIP is not set +# CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set +# CONFIG_BLK_DEV_AEC62XX is not set +# CONFIG_AEC62XX_TUNING is not set +# CONFIG_BLK_DEV_ALI15X3 is not set +# CONFIG_WDC_ALI15X3 is not set +# CONFIG_BLK_DEV_AMD74XX is not set +# CONFIG_AMD74XX_OVERRIDE is not set +# CONFIG_BLK_DEV_CMD64X is not set +# CONFIG_BLK_DEV_CY82C693 is not set +# CONFIG_BLK_DEV_CS5530 is not set +# CONFIG_BLK_DEV_HPT34X is not set +# CONFIG_HPT34X_AUTODMA is not set +# CONFIG_BLK_DEV_HPT366 is not set +# CONFIG_BLK_DEV_PIIX is not set +# CONFIG_PIIX_TUNING is not set +# CONFIG_BLK_DEV_NS87415 is not set +# CONFIG_BLK_DEV_OPTI621 is not set +# CONFIG_BLK_DEV_PDC202XX is not set +# CONFIG_PDC202XX_BURST is not set +# CONFIG_PDC202XX_FORCE is not set +# CONFIG_BLK_DEV_SVWKS is not set +# CONFIG_BLK_DEV_SIS5513 is not set +# CONFIG_BLK_DEV_SLC90E66 is not set +# CONFIG_BLK_DEV_TRM290 is not set +# CONFIG_BLK_DEV_VIA82CXXX is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_IDEDMA_IVB is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is 
not set + +# +# Some SCSI devices (e.g. CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +CONFIG_SCSI_MULTI_LUN=y +CONFIG_SCSI_CONSTANTS=y +CONFIG_SCSI_LOGGING=y + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +# CONFIG_SCSI_QLOGIC_FC is not set +CONFIG_SCSI_QLOGIC_1280=y +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +CONFIG_DUMMY=y +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set + +# +# Ethernet (10 or 100Mbit) +# +CONFIG_NET_ETHERNET=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +CONFIG_NET_PCI=y +# CONFIG_PCNET32 is not set +# CONFIG_ADAPTEC_STARFIRE is not set +# CONFIG_APRICOT is not set +# CONFIG_CS89x0 is not set +# CONFIG_TULIP is not set +# CONFIG_DE4X5 is not set +# CONFIG_DGRS is not set +# CONFIG_DM9102 is not set +CONFIG_EEPRO100=y +# CONFIG_LNE390 is not set +# CONFIG_FEALNX is not set +# CONFIG_NATSEMI is not set +# CONFIG_NE2K_PCI is not set +# CONFIG_NE3210 is not set +# CONFIG_ES3210 is not set +# CONFIG_8139CP is not set +# CONFIG_8139TOO is not set +# CONFIG_8139TOO_PIO is not set +# CONFIG_8139TOO_TUNE_TWISTER is not set +# CONFIG_8139TOO_8129 is not set +# CONFIG_SIS900 is not set +# CONFIG_EPIC100 is not set +# CONFIG_SUNDANCE is not set +# CONFIG_TLAN is not set +# CONFIG_VIA_RHINE is not set +# CONFIG_WINBOND_840 is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# 
CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +CONFIG_SERIAL=y +CONFIG_SERIAL_CONSOLE=y +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +CONFIG_MOUSE=y +CONFIG_PSMOUSE=y +# CONFIG_82C710_MOUSE is not set +# CONFIG_PC110_PAD is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +CONFIG_JOLIET=y +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# 
CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="iso8859-1" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# 
CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +CONFIG_KDB=y +CONFIG_KDB_MODULES=y +# CONFIG_KDB_OFF is not set + +# +# Load all symbols for debugging is required for KDB +# +CONFIG_KALLSYMS=y diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-bigsur-sp b/arch/ia64/sn/configs/sn1/defconfig-bigsur-sp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-bigsur-sp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,772 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +CONFIG_IA64_DIG=y +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +# CONFIG_NUMA is not set +# CONFIG_IA64_MCA is not set +CONFIG_PM=y +CONFIG_IA64_HAVE_SYNCRONIZED_ITC=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_KCORE_ELF=y +# CONFIG_SMP is not set +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +# CONFIG_PACKET is not set +# CONFIG_NETLINK is not set +# CONFIG_NETFILTER is not set +# CONFIG_FILTER is not set +CONFIG_UNIX=y +CONFIG_INET=y +# CONFIG_IP_MULTICAST is not set +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_INET_ECN is not set 
+# CONFIG_SYN_COOKIES is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# CONFIG_NET_HW_FLOWCONTROL is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +CONFIG_BLK_DEV_IDECD=y +# CONFIG_BLK_DEV_IDETAPE is not set +CONFIG_BLK_DEV_IDEFLOPPY=y +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +CONFIG_BLK_DEV_IDEPCI=y +# CONFIG_IDEPCI_SHARE_IRQ is not set +CONFIG_BLK_DEV_IDEDMA_PCI=y +CONFIG_BLK_DEV_ADMA=y +# CONFIG_BLK_DEV_OFFBOARD is not set +# CONFIG_IDEDMA_PCI_AUTO is not set +CONFIG_BLK_DEV_IDEDMA=y +# CONFIG_IDEDMA_PCI_WIP is not set +# CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set +# CONFIG_BLK_DEV_AEC62XX is not set +# CONFIG_AEC62XX_TUNING is not set +# CONFIG_BLK_DEV_ALI15X3 is not set +# CONFIG_WDC_ALI15X3 is not set +# CONFIG_BLK_DEV_AMD74XX is not set +# CONFIG_AMD74XX_OVERRIDE is not set +# CONFIG_BLK_DEV_CMD64X is not set +# CONFIG_BLK_DEV_CY82C693 is not set +# CONFIG_BLK_DEV_CS5530 is not set +# CONFIG_BLK_DEV_HPT34X is not set +# CONFIG_HPT34X_AUTODMA is not set +# CONFIG_BLK_DEV_HPT366 is not set +# CONFIG_BLK_DEV_PIIX is not set +# CONFIG_PIIX_TUNING is not set +# CONFIG_BLK_DEV_NS87415 is not set +# CONFIG_BLK_DEV_OPTI621 is not set +# CONFIG_BLK_DEV_PDC202XX is not set +# 
CONFIG_PDC202XX_BURST is not set +# CONFIG_PDC202XX_FORCE is not set +# CONFIG_BLK_DEV_SVWKS is not set +# CONFIG_BLK_DEV_SIS5513 is not set +# CONFIG_BLK_DEV_SLC90E66 is not set +# CONFIG_BLK_DEV_TRM290 is not set +# CONFIG_BLK_DEV_VIA82CXXX is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_IDEDMA_IVB is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +CONFIG_SCSI_MULTI_LUN=y +CONFIG_SCSI_CONSTANTS=y +CONFIG_SCSI_LOGGING=y + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +# CONFIG_SCSI_QLOGIC_FC is not set +CONFIG_SCSI_QLOGIC_1280=y +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +CONFIG_DUMMY=y +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set + +# +# Ethernet (10 or 100Mbit) +# +CONFIG_NET_ETHERNET=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +CONFIG_NET_PCI=y +# CONFIG_PCNET32 is not set +# CONFIG_ADAPTEC_STARFIRE is not set +# CONFIG_APRICOT is not set +# CONFIG_CS89x0 is not set +# CONFIG_TULIP is not set +# CONFIG_DE4X5 is not set +# CONFIG_DGRS is not set +# CONFIG_DM9102 
is not set +CONFIG_EEPRO100=y +# CONFIG_LNE390 is not set +# CONFIG_FEALNX is not set +# CONFIG_NATSEMI is not set +# CONFIG_NE2K_PCI is not set +# CONFIG_NE3210 is not set +# CONFIG_ES3210 is not set +# CONFIG_8139CP is not set +# CONFIG_8139TOO is not set +# CONFIG_8139TOO_PIO is not set +# CONFIG_8139TOO_TUNE_TWISTER is not set +# CONFIG_8139TOO_8129 is not set +# CONFIG_SIS900 is not set +# CONFIG_EPIC100 is not set +# CONFIG_SUNDANCE is not set +# CONFIG_TLAN is not set +# CONFIG_VIA_RHINE is not set +# CONFIG_WINBOND_840 is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +CONFIG_SERIAL=y +CONFIG_SERIAL_CONSOLE=y +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +CONFIG_MOUSE=y +CONFIG_PSMOUSE=y +# CONFIG_82C710_MOUSE is not set +# CONFIG_PC110_PAD is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +CONFIG_JOLIET=y +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not 
set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="iso8859-1" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia 
device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-dig-mp b/arch/ia64/sn/configs/sn1/defconfig-dig-mp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-dig-mp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,459 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +# CONFIG_EXPERIMENTAL is not set + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +CONFIG_IA64_DIG=y +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +# CONFIG_NUMA is not set +# CONFIG_IA64_MCA is not set +CONFIG_PM=y +CONFIG_IA64_HAVE_SYNCRONIZED_ITC=y +# CONFIG_DEVFS_FS is not set +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +# CONFIG_IA32_SUPPORT is not set +# CONFIG_PERFMON is not set +# CONFIG_IA64_PALINFO is not set +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +# CONFIG_SYSCTL is not set 
+CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +# CONFIG_SCSI is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +# CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not 
set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +# CONFIG_DEVFS_FS is not set +# CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support 
+# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-dig-sp b/arch/ia64/sn/configs/sn1/defconfig-dig-sp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-dig-sp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,459 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +# CONFIG_EXPERIMENTAL is not set + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +CONFIG_IA64_DIG=y +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +# CONFIG_NUMA is not set +# CONFIG_IA64_MCA is not set +CONFIG_PM=y +CONFIG_IA64_HAVE_SYNCRONIZED_ITC=y +# CONFIG_DEVFS_FS is not set +CONFIG_KCORE_ELF=y +# CONFIG_SMP is not set +# CONFIG_IA32_SUPPORT is not set +# CONFIG_PERFMON is not set +# CONFIG_IA64_PALINFO is not set +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +# CONFIG_SYSCTL is not set +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set 
+# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +# CONFIG_SCSI is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +# CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# 
CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +# CONFIG_DEVFS_FS is not set +# CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set 
+# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-generic-mp b/arch/ia64/sn/configs/sn1/defconfig-generic-mp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-generic-mp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,460 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +# CONFIG_EXPERIMENTAL is not set + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +CONFIG_IA64_GENERIC=y +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +# CONFIG_IA32_SUPPORT is not set +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +# CONFIG_SYSCTL is not set +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# 
CONFIG_ISAPNP is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +# CONFIG_SCSI is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +# CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# 
CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +# CONFIG_DEVFS_FS is not set +# CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA 
is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# Simulated drivers +# +# CONFIG_SIMETH is not set +# CONFIG_SIM_SERIAL is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-generic-sp b/arch/ia64/sn/configs/sn1/defconfig-generic-sp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-generic-sp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,460 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +# CONFIG_EXPERIMENTAL is not set + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +CONFIG_IA64_GENERIC=y +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +CONFIG_KCORE_ELF=y +# CONFIG_SMP is not set +# CONFIG_IA32_SUPPORT is not set +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +# CONFIG_SYSCTL is not set +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# 
CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +# CONFIG_SCSI is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +# CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is 
not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +# CONFIG_DEVFS_FS is not set +# CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not 
set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# Simulated drivers +# +# CONFIG_SIMETH is not set +# CONFIG_SIM_SERIAL is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-hp-sp b/arch/ia64/sn/configs/sn1/defconfig-hp-sp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-hp-sp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,334 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +# CONFIG_EXPERIMENTAL is not set + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +CONFIG_IA64_HP_SIM=y +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +CONFIG_KCORE_ELF=y +# CONFIG_SMP is not set +# CONFIG_IA32_SUPPORT is not set +# CONFIG_PERFMON is not set +# CONFIG_IA64_PALINFO is not set +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set + +# +# Networking options +# +# CONFIG_PACKET is not set +# CONFIG_NETLINK is not set +# CONFIG_NETFILTER is not set +# CONFIG_FILTER is not set +CONFIG_UNIX=y +CONFIG_INET=y +# CONFIG_IP_MULTICAST is not set +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_INET_ECN is not set +# CONFIG_SYN_COOKIES is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +CONFIG_SCSI_CONSTANTS=y +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_PPA is not set +# CONFIG_SCSI_IMM is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +# CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 +# CONFIG_PRINTER is not set +# CONFIG_PPDEV is not set + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +CONFIG_EFI_RTC=y +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +# CONFIG_DEVFS_FS is not set +# 
CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +# CONFIG_NFS_FS is not set +# CONFIG_NFS_V3 is not set +# CONFIG_ROOT_NFS is not set +# CONFIG_NFSD is not set +# CONFIG_NFSD_V3 is not set +# CONFIG_SUNRPC is not set +# CONFIG_LOCKD is not set +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Simulated drivers +# +CONFIG_SIMETH=y +CONFIG_SIM_SERIAL=y +CONFIG_SCSI_SIM=y + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-prom-medusa b/arch/ia64/sn/configs/sn1/defconfig-prom-medusa --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-prom-medusa Tue Mar 12 13:58:16 2002 @@ -0,0 +1,529 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +CONFIG_IA64_SGI_SN1=y +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +# CONFIG_DEVFS_DEBUG is not set +CONFIG_SERIAL_SGI_L1_PROTOCOL=y +CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +# CONFIG_IA32_SUPPORT is not set +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# 
CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# 
CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +# CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is 
not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-sn1-mp b/arch/ia64/sn/configs/sn1/defconfig-sn1-mp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-sn1-mp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,736 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +CONFIG_IA64_SGI_SN1=y +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_SERIAL_SGI_L1_PROTOCOL=y +CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +CONFIG_PACKET=y +# CONFIG_PACKET_MMAP is not set +CONFIG_NETLINK=y +CONFIG_RTNETLINK=y +CONFIG_NETLINK_DEV=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_DEBUG=y +CONFIG_FILTER=y +CONFIG_UNIX=y +CONFIG_INET=y +CONFIG_IP_MULTICAST=y +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_IP_MROUTE is not set +# CONFIG_ARPD is not set +# CONFIG_INET_ECN is not set +CONFIG_SYN_COOKIES=y + +# +# IP: Netfilter Configuration +# +# CONFIG_IP_NF_CONNTRACK is not set +# CONFIG_IP_NF_QUEUE is not set +# CONFIG_IP_NF_IPTABLES is not set +# CONFIG_IP_NF_COMPAT_IPCHAINS is not set +# CONFIG_IP_NF_COMPAT_IPFWADM is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not 
set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# CONFIG_NET_HW_FLOWCONTROL is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +CONFIG_BLK_DEV_LOOP=y +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +CONFIG_XSCSI=y + +# +# Alternate SCSI support +# +CONFIG_XSCSI_DKSC=y +# CONFIG_XSCSI_QLFC is not set +# CONFIG_XSCSI_QL is not set +# CONFIG_XSCSI_SBP2 is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +# CONFIG_DUMMY is not set +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set +# CONFIG_ETHERTAP is not set + +# +# Ethernet (10 or 100Mbit) +# +CONFIG_NET_ETHERNET=y +CONFIG_SGI_IOC3_ETH=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +# CONFIG_NET_PCI is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# 
CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +CONFIG_EFI_RTC=y +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +CONFIG_QUOTA=y +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +CONFIG_XFS_SUPPORT=y + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="n" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# 
CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set 
+# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +CONFIG_KDB=y +CONFIG_KDB_MODULES=y +# CONFIG_KDB_OFF is not set + +# +# Load all symbols for debugging is required for KDB +# +CONFIG_KALLSYMS=y diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-sn1-mp-modules b/arch/ia64/sn/configs/sn1/defconfig-sn1-mp-modules --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-sn1-mp-modules Tue Mar 12 13:58:16 2002 @@ -0,0 +1,738 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +CONFIG_MODULES=y +# CONFIG_MODVERSIONS is not set +CONFIG_KMOD=y + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +CONFIG_IA64_SGI_SN1=y +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_SERIAL_SGI_L1_PROTOCOL=y +CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +CONFIG_PACKET=y +# CONFIG_PACKET_MMAP is not set +CONFIG_NETLINK=y +CONFIG_RTNETLINK=y +CONFIG_NETLINK_DEV=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_DEBUG=y +CONFIG_FILTER=y +CONFIG_UNIX=y +CONFIG_INET=y +CONFIG_IP_MULTICAST=y +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_IP_MROUTE is not set +# CONFIG_ARPD is not set +# CONFIG_INET_ECN is not set +CONFIG_SYN_COOKIES=y + +# +# IP: Netfilter Configuration +# +# CONFIG_IP_NF_CONNTRACK is not set +# CONFIG_IP_NF_QUEUE is not set +# CONFIG_IP_NF_IPTABLES is not set +# CONFIG_IP_NF_COMPAT_IPCHAINS is not set +# CONFIG_IP_NF_COMPAT_IPFWADM is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not 
set +# CONFIG_NET_FASTROUTE is not set +# CONFIG_NET_HW_FLOWCONTROL is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +CONFIG_BLK_DEV_LOOP=y +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +CONFIG_XSCSI=y + +# +# Alternate SCSI support +# +CONFIG_XSCSI_DKSC=y +# CONFIG_XSCSI_QLFC is not set +# CONFIG_XSCSI_QL is not set +# CONFIG_XSCSI_SBP2 is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +# CONFIG_DUMMY is not set +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set +# CONFIG_ETHERTAP is not set + +# +# Ethernet (10 or 100Mbit) +# +CONFIG_NET_ETHERNET=y +CONFIG_SGI_IOC3_ETH=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +# CONFIG_NET_PCI is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# 
CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +CONFIG_EFI_RTC=y +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +CONFIG_QUOTA=y +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +CONFIG_XFS_SUPPORT=y + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="n" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# 
CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set 
+# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +CONFIG_KDB=y +CONFIG_KDB_MODULES=y +# CONFIG_KDB_OFF is not set + +# +# Load all symbols for debugging is required for KDB +# +CONFIG_KALLSYMS=y diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-sn1-mp-syn1-0 b/arch/ia64/sn/configs/sn1/defconfig-sn1-mp-syn1-0 --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-sn1-mp-syn1-0 Tue Mar 12 13:58:16 2002 @@ -0,0 +1,736 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +CONFIG_IA64_SGI_SN1=y +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_SERIAL_SGI_L1_PROTOCOL=y +CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +CONFIG_PACKET=y +# CONFIG_PACKET_MMAP is not set +CONFIG_NETLINK=y +CONFIG_RTNETLINK=y +CONFIG_NETLINK_DEV=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_DEBUG=y +CONFIG_FILTER=y +CONFIG_UNIX=y +CONFIG_INET=y +CONFIG_IP_MULTICAST=y +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_IP_MROUTE is not set +# CONFIG_ARPD is not set +# CONFIG_INET_ECN is not set +CONFIG_SYN_COOKIES=y + +# +# IP: Netfilter Configuration +# +# CONFIG_IP_NF_CONNTRACK is not set +# CONFIG_IP_NF_QUEUE is not set +# CONFIG_IP_NF_IPTABLES is not set +# CONFIG_IP_NF_COMPAT_IPCHAINS is not set +# CONFIG_IP_NF_COMPAT_IPFWADM is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# 
CONFIG_NET_HW_FLOWCONTROL is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +CONFIG_BLK_DEV_LOOP=y +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +CONFIG_XSCSI=y + +# +# Alternate SCSI support +# +CONFIG_XSCSI_DKSC=y +# CONFIG_XSCSI_QLFC is not set +# CONFIG_XSCSI_QL is not set +# CONFIG_XSCSI_SBP2 is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +# CONFIG_DUMMY is not set +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set +# CONFIG_ETHERTAP is not set + +# +# Ethernet (10 or 100Mbit) +# +CONFIG_NET_ETHERNET=y +CONFIG_SGI_IOC3_ETH=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +# CONFIG_NET_PCI is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# 
CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +CONFIG_EFI_RTC=y +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +CONFIG_QUOTA=y +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +CONFIG_XFS_SUPPORT=y + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="n" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# 
CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set 
+# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +CONFIG_KDB=y +CONFIG_KDB_MODULES=y +# CONFIG_KDB_OFF is not set + +# +# Load all symbols for debugging is required for KDB +# +CONFIG_KALLSYMS=y diff -Nru a/arch/ia64/sn/configs/sn1/defconfig-sn1-sp b/arch/ia64/sn/configs/sn1/defconfig-sn1-sp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn1/defconfig-sn1-sp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,736 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +CONFIG_IA64_SGI_SN1=y +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_SERIAL_SGI_L1_PROTOCOL=y +CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +# CONFIG_SMP is not set +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +CONFIG_PACKET=y +# CONFIG_PACKET_MMAP is not set +CONFIG_NETLINK=y +CONFIG_RTNETLINK=y +CONFIG_NETLINK_DEV=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_DEBUG=y +CONFIG_FILTER=y +CONFIG_UNIX=y +CONFIG_INET=y +CONFIG_IP_MULTICAST=y +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_IP_MROUTE is not set +# CONFIG_ARPD is not set +# CONFIG_INET_ECN is not set +CONFIG_SYN_COOKIES=y + +# +# IP: Netfilter Configuration +# +# CONFIG_IP_NF_CONNTRACK is not set +# CONFIG_IP_NF_QUEUE is not set +# CONFIG_IP_NF_IPTABLES is not set +# CONFIG_IP_NF_COMPAT_IPCHAINS is not set +# CONFIG_IP_NF_COMPAT_IPFWADM is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# 
CONFIG_NET_HW_FLOWCONTROL is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +CONFIG_BLK_DEV_LOOP=y +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +CONFIG_XSCSI=y + +# +# Alternate SCSI support +# +CONFIG_XSCSI_DKSC=y +# CONFIG_XSCSI_QLFC is not set +# CONFIG_XSCSI_QL is not set +# CONFIG_XSCSI_SBP2 is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +# CONFIG_DUMMY is not set +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set +# CONFIG_ETHERTAP is not set + +# +# Ethernet (10 or 100Mbit) +# +CONFIG_NET_ETHERNET=y +CONFIG_SGI_IOC3_ETH=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +# CONFIG_NET_PCI is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# 
CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +CONFIG_EFI_RTC=y +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +CONFIG_QUOTA=y +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +CONFIG_XFS_SUPPORT=y + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="n" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# 
CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set 
+# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +CONFIG_KDB=y +CONFIG_KDB_MODULES=y +# CONFIG_KDB_OFF is not set + +# +# Load all symbols for debugging is required for KDB +# +CONFIG_KALLSYMS=y diff -Nru a/arch/ia64/sn/configs/sn2/defconfig-dig-numa b/arch/ia64/sn/configs/sn2/defconfig-dig-numa --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn2/defconfig-dig-numa Tue Mar 12 13:58:16 2002 @@ -0,0 +1,460 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +# CONFIG_EXPERIMENTAL is not set + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +CONFIG_IA64_DIG=y +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +CONFIG_NUMA=y +CONFIG_DISCONTIGMEM=y +# CONFIG_IA64_MCA is not set +CONFIG_PM=y +CONFIG_IA64_HAVE_SYNCRONIZED_ITC=y +# CONFIG_DEVFS_FS is not set +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +# CONFIG_IA32_SUPPORT is not set +# CONFIG_PERFMON is not set +# CONFIG_IA64_PALINFO is not set +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +# CONFIG_SYSCTL is not set +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is 
not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +# CONFIG_SCSI is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +# CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set 
+CONFIG_PROC_FS=y +# CONFIG_DEVFS_FS is not set +# CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is 
not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn2/defconfig-sn2-dig-mp b/arch/ia64/sn/configs/sn2/defconfig-sn2-dig-mp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn2/defconfig-sn2-dig-mp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,459 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +# CONFIG_EXPERIMENTAL is not set + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +CONFIG_IA64_DIG=y +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +# CONFIG_NUMA is not set +# CONFIG_IA64_MCA is not set +CONFIG_PM=y +CONFIG_IA64_HAVE_SYNCRONIZED_ITC=y +# CONFIG_DEVFS_FS is not set +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +# CONFIG_IA32_SUPPORT is not set +# CONFIG_PERFMON is not set +# CONFIG_IA64_PALINFO is not set +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +# CONFIG_SYSCTL is not set +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is 
not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +# CONFIG_SCSI is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +# CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +# CONFIG_DEVFS_FS is not set +# CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set 
+CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn2/defconfig-sn2-dig-sp b/arch/ia64/sn/configs/sn2/defconfig-sn2-dig-sp --- /dev/null Wed Dec 31 16:00:00 1969 +++ 
b/arch/ia64/sn/configs/sn2/defconfig-sn2-dig-sp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,459 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +# CONFIG_EXPERIMENTAL is not set + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +CONFIG_ITANIUM=y +# CONFIG_MCKINLEY is not set +# CONFIG_IA64_GENERIC is not set +CONFIG_IA64_DIG=y +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +# CONFIG_IA64_SGI_SN2 is not set +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_BRL_EMU=y +CONFIG_ITANIUM_BSTEP_SPECIFIC=y +CONFIG_IA64_L1_CACHE_SHIFT=6 +# CONFIG_NUMA is not set +# CONFIG_IA64_MCA is not set +CONFIG_PM=y +CONFIG_IA64_HAVE_SYNCRONIZED_ITC=y +# CONFIG_DEVFS_FS is not set +CONFIG_KCORE_ELF=y +# CONFIG_SMP is not set +# CONFIG_IA32_SUPPORT is not set +# CONFIG_PERFMON is not set +# CONFIG_IA64_PALINFO is not set +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +# CONFIG_SYSCTL is not set +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is 
not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +# CONFIG_XSCSI is not set + +# +# SCSI support +# +# CONFIG_SCSI is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +CONFIG_VT=y +CONFIG_VT_CONSOLE=y +# CONFIG_SERIAL is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +# CONFIG_DEVFS_FS is not set +# CONFIG_DEVFS_MOUNT is not set +# CONFIG_DEVFS_DEBUG is not set +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +# CONFIG_XFS_SUPPORT 
is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Console drivers +# +CONFIG_VGA_CONSOLE=y + +# +# Frame-buffer support +# +# CONFIG_FB is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn2/defconfig-sn2-mp b/arch/ia64/sn/configs/sn2/defconfig-sn2-mp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn2/defconfig-sn2-mp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,730 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module 
support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +# CONFIG_ITANIUM is not set +CONFIG_MCKINLEY=y +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +CONFIG_IA64_SGI_SN2=y +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_MCKINLEY_ASTEP_SPECIFIC=y +CONFIG_MCKINLEY_A0_SPECIFIC=y +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_SERIAL_SGI_L1_PROTOCOL=y +CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +CONFIG_PACKET=y +# CONFIG_PACKET_MMAP is not set +CONFIG_NETLINK=y +CONFIG_RTNETLINK=y +CONFIG_NETLINK_DEV=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_DEBUG=y +CONFIG_FILTER=y +CONFIG_UNIX=y +CONFIG_INET=y +CONFIG_IP_MULTICAST=y +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_IP_MROUTE is not set +# CONFIG_ARPD is not set +# CONFIG_INET_ECN is not set +CONFIG_SYN_COOKIES=y + +# +# IP: Netfilter Configuration +# +# CONFIG_IP_NF_CONNTRACK is not set +# CONFIG_IP_NF_QUEUE is not set +# CONFIG_IP_NF_IPTABLES is not set +# CONFIG_IP_NF_COMPAT_IPCHAINS is not set +# CONFIG_IP_NF_COMPAT_IPFWADM is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# CONFIG_NET_HW_FLOWCONTROL is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +CONFIG_BLK_DEV_LOOP=y +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set 
+ +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +CONFIG_XSCSI=y + +# +# Alternate SCSI support +# +CONFIG_XSCSI_DKSC=y +# CONFIG_XSCSI_QLFC is not set +# CONFIG_XSCSI_QL is not set +# CONFIG_XSCSI_SBP2 is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +# CONFIG_DUMMY is not set +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set +# CONFIG_ETHERTAP is not set + +# +# Ethernet (10 or 100Mbit) +# +CONFIG_NET_ETHERNET=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +# CONFIG_NET_PCI is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is 
not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +CONFIG_EFI_RTC=y +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +CONFIG_QUOTA=y +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +CONFIG_XFS_SUPPORT=y + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="n" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not 
set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not 
set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +CONFIG_KALLSYMS=y diff -Nru a/arch/ia64/sn/configs/sn2/defconfig-sn2-mp-modules b/arch/ia64/sn/configs/sn2/defconfig-sn2-mp-modules --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn2/defconfig-sn2-mp-modules Tue Mar 12 13:58:16 2002 @@ -0,0 +1,732 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +CONFIG_MODULES=y +# CONFIG_MODVERSIONS is not set +CONFIG_KMOD=y + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +# CONFIG_ITANIUM is not set +CONFIG_MCKINLEY=y +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +CONFIG_IA64_SGI_SN2=y +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_MCKINLEY_ASTEP_SPECIFIC=y +CONFIG_MCKINLEY_A0_SPECIFIC=y +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +# CONFIG_SERIAL_SGI_L1_PROTOCOL is not set +CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +CONFIG_PACKET=y +# CONFIG_PACKET_MMAP is not set +CONFIG_NETLINK=y +CONFIG_RTNETLINK=y +CONFIG_NETLINK_DEV=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_DEBUG=y +CONFIG_FILTER=y +CONFIG_UNIX=y +CONFIG_INET=y +CONFIG_IP_MULTICAST=y +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_IP_MROUTE is not set +# CONFIG_ARPD is not set +# CONFIG_INET_ECN is not set +CONFIG_SYN_COOKIES=y + +# +# IP: Netfilter Configuration +# +# CONFIG_IP_NF_CONNTRACK is not set +# CONFIG_IP_NF_QUEUE is not set +# CONFIG_IP_NF_IPTABLES is not set +# CONFIG_IP_NF_COMPAT_IPCHAINS is not set +# CONFIG_IP_NF_COMPAT_IPFWADM is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# CONFIG_NET_HW_FLOWCONTROL is not set + 
+# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +CONFIG_BLK_DEV_LOOP=y +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +CONFIG_XSCSI=y + +# +# Alternate SCSI support +# +CONFIG_XSCSI_DKSC=y +# CONFIG_XSCSI_QLFC is not set +# CONFIG_XSCSI_QL is not set +# CONFIG_XSCSI_SBP2 is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +# CONFIG_DUMMY is not set +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set +# CONFIG_ETHERTAP is not set + +# +# Ethernet (10 or 100Mbit) +# +CONFIG_NET_ETHERNET=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +# CONFIG_NET_PCI is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is 
not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +CONFIG_EFI_RTC=y +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +CONFIG_QUOTA=y +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +CONFIG_XFS_SUPPORT=y + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="n" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not 
set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not 
set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +CONFIG_KALLSYMS=y diff -Nru a/arch/ia64/sn/configs/sn2/defconfig-sn2-prom-medusa b/arch/ia64/sn/configs/sn2/defconfig-sn2-prom-medusa --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn2/defconfig-sn2-prom-medusa Tue Mar 12 13:58:16 2002 @@ -0,0 +1,537 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +# CONFIG_ITANIUM is not set +CONFIG_MCKINLEY=y +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +CONFIG_IA64_SGI_SN2=y +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_MCKINLEY_ASTEP_SPECIFIC=y +CONFIG_MCKINLEY_A0_SPECIFIC=y +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_SERIAL_SGI_L1_PROTOCOL=y +CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +CONFIG_SMP=y +# CONFIG_IA32_SUPPORT is not set +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +# CONFIG_NET is not set +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +# CONFIG_BLK_DEV_LOOP is not set +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE 
drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +CONFIG_XSCSI=y + +# +# Alternate SCSI support +# +CONFIG_XSCSI_DKSC=y +# CONFIG_XSCSI_QLFC is not set +# CONFIG_XSCSI_QL is not set +# CONFIG_XSCSI_SBP2 is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# 
Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +# CONFIG_EFI_RTC is not set +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +# CONFIG_QUOTA is not set +# CONFIG_AUTOFS_FS is not set +# CONFIG_AUTOFS4_FS is not set +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +# CONFIG_FAT_FS is not set +# CONFIG_MSDOS_FS is not set +# CONFIG_UMSDOS_FS is not set +# CONFIG_VFAT_FS is not set +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +# CONFIG_ISO9660_FS is not set +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# CONFIG_UFS_FS_WRITE is not set +CONFIG_XFS_SUPPORT=y +# CONFIG_NCPFS_NLS is not set +# CONFIG_SMB_FS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +# CONFIG_NLS is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB 
Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# + +# +# Networking support is needed for USB Networking device support +# + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +# CONFIG_KALLSYMS is not set diff -Nru a/arch/ia64/sn/configs/sn2/defconfig-sn2-sp b/arch/ia64/sn/configs/sn2/defconfig-sn2-sp --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/configs/sn2/defconfig-sn2-sp Tue Mar 12 13:58:16 2002 @@ -0,0 +1,730 @@ +# +# Automatically generated make config: don't edit +# + +# +# Code maturity level options +# +CONFIG_EXPERIMENTAL=y + +# +# Loadable module support +# +# CONFIG_MODULES is not set + +# +# General setup +# +CONFIG_IA64=y +# CONFIG_ISA is not set +# CONFIG_EISA is not set +# CONFIG_MCA is not set +# CONFIG_SBUS is not set +CONFIG_RWSEM_GENERIC_SPINLOCK=y +# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set +CONFIG_ACPI=y +CONFIG_ACPI_EFI=y +CONFIG_ACPI_INTERPRETER=y +CONFIG_ACPI_KERNEL_CONFIG=y +# CONFIG_ITANIUM is not set +CONFIG_MCKINLEY=y +# CONFIG_IA64_GENERIC is not set +# CONFIG_IA64_DIG is not set +# CONFIG_IA64_HP_SIM is not set +# CONFIG_IA64_SGI_SN1 is not set +CONFIG_IA64_SGI_SN2=y +# CONFIG_IA64_PAGE_SIZE_4KB is not set +# CONFIG_IA64_PAGE_SIZE_8KB is not set +CONFIG_IA64_PAGE_SIZE_16KB=y +# CONFIG_IA64_PAGE_SIZE_64KB is not set +CONFIG_IA64_L1_CACHE_SHIFT=7 +CONFIG_MCKINLEY_ASTEP_SPECIFIC=y +CONFIG_MCKINLEY_A0_SPECIFIC=y +CONFIG_IA64_SGI_SN=y +CONFIG_IA64_SGI_SN_DEBUG=y +CONFIG_IA64_SGI_SN_SIM=y +CONFIG_IA64_SGI_AUTOTEST=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_DEBUG=y +CONFIG_SERIAL_SGI_L1_PROTOCOL=y 
+CONFIG_DISCONTIGMEM=y +CONFIG_IA64_MCA=y +CONFIG_NUMA=y +CONFIG_PERCPU_IRQ=y +CONFIG_PCIBA=y +CONFIG_KCORE_ELF=y +# CONFIG_SMP is not set +CONFIG_IA32_SUPPORT=y +CONFIG_PERFMON=y +CONFIG_IA64_PALINFO=y +# CONFIG_EFI_VARS is not set +CONFIG_NET=y +CONFIG_SYSVIPC=y +# CONFIG_BSD_PROCESS_ACCT is not set +CONFIG_SYSCTL=y +CONFIG_BINFMT_ELF=y +# CONFIG_BINFMT_MISC is not set +# CONFIG_ACPI_DEBUG is not set +# CONFIG_ACPI_BUSMGR is not set +# CONFIG_ACPI_SYS is not set +# CONFIG_ACPI_CPU is not set +# CONFIG_ACPI_BUTTON is not set +# CONFIG_ACPI_AC is not set +# CONFIG_ACPI_EC is not set +# CONFIG_ACPI_CMBATT is not set +# CONFIG_ACPI_THERMAL is not set +CONFIG_PCI=y +# CONFIG_PCI_NAMES is not set +# CONFIG_HOTPLUG is not set +# CONFIG_PCMCIA is not set + +# +# Parallel port support +# +# CONFIG_PARPORT is not set + +# +# Networking options +# +CONFIG_PACKET=y +# CONFIG_PACKET_MMAP is not set +CONFIG_NETLINK=y +CONFIG_RTNETLINK=y +CONFIG_NETLINK_DEV=y +CONFIG_NETFILTER=y +CONFIG_NETFILTER_DEBUG=y +CONFIG_FILTER=y +CONFIG_UNIX=y +CONFIG_INET=y +CONFIG_IP_MULTICAST=y +# CONFIG_IP_ADVANCED_ROUTER is not set +# CONFIG_IP_PNP is not set +# CONFIG_NET_IPIP is not set +# CONFIG_NET_IPGRE is not set +# CONFIG_IP_MROUTE is not set +# CONFIG_ARPD is not set +# CONFIG_INET_ECN is not set +CONFIG_SYN_COOKIES=y + +# +# IP: Netfilter Configuration +# +# CONFIG_IP_NF_CONNTRACK is not set +# CONFIG_IP_NF_QUEUE is not set +# CONFIG_IP_NF_IPTABLES is not set +# CONFIG_IP_NF_COMPAT_IPCHAINS is not set +# CONFIG_IP_NF_COMPAT_IPFWADM is not set +# CONFIG_IPV6 is not set +# CONFIG_KHTTPD is not set +# CONFIG_ATM is not set + +# +# +# +# CONFIG_IPX is not set +# CONFIG_ATALK is not set +# CONFIG_DECNET is not set +# CONFIG_BRIDGE is not set +# CONFIG_X25 is not set +# CONFIG_LAPB is not set +# CONFIG_LLC is not set +# CONFIG_NET_DIVERT is not set +# CONFIG_ECONET is not set +# CONFIG_WAN_ROUTER is not set +# CONFIG_NET_FASTROUTE is not set +# CONFIG_NET_HW_FLOWCONTROL is not set + +# +# QoS and/or fair queueing +# +# CONFIG_NET_SCHED is not set + +# +# Memory Technology Devices (MTD) +# +# CONFIG_MTD is not set + +# +# Plug and Play configuration +# +# CONFIG_PNP is not set +# CONFIG_ISAPNP is not set +# CONFIG_PNPBIOS is not set + +# +# Block devices +# +# CONFIG_BLK_DEV_FD is not set +# CONFIG_BLK_DEV_XD is not set +# CONFIG_PARIDE is not set +# CONFIG_BLK_CPQ_DA is not set +# CONFIG_BLK_CPQ_CISS_DA is not set +# CONFIG_BLK_DEV_DAC960 is not set +CONFIG_BLK_DEV_LOOP=y +# CONFIG_BLK_DEV_NBD is not set +# CONFIG_BLK_DEV_RAM is not set +# CONFIG_BLK_DEV_INITRD is not set + +# +# I2O device support +# +# CONFIG_I2O is not set +# CONFIG_I2O_PCI is not set +# CONFIG_I2O_BLOCK is not set +# CONFIG_I2O_LAN is not set +# CONFIG_I2O_SCSI is not set +# CONFIG_I2O_PROC is not set + +# +# Multi-device support (RAID and LVM) +# +# CONFIG_MD is not set +# CONFIG_BLK_DEV_MD is not set +# CONFIG_MD_LINEAR is not set +# CONFIG_MD_RAID0 is not set +# CONFIG_MD_RAID1 is not set +# CONFIG_MD_RAID5 is not set +# CONFIG_MD_MULTIPATH is not set +# CONFIG_BLK_DEV_LVM is not set + +# +# ATA/IDE/MFM/RLL support +# +CONFIG_IDE=y + +# +# IDE, ATA and ATAPI Block devices +# +CONFIG_BLK_DEV_IDE=y + +# +# Please see Documentation/ide.txt for help/info on IDE drives +# +# CONFIG_BLK_DEV_HD_IDE is not set +# CONFIG_BLK_DEV_HD is not set +CONFIG_BLK_DEV_IDEDISK=y +# CONFIG_IDEDISK_MULTI_MODE is not set +# CONFIG_BLK_DEV_IDEDISK_VENDOR is not set +# CONFIG_BLK_DEV_IDEDISK_FUJITSU is not set +# CONFIG_BLK_DEV_IDEDISK_IBM is not set +# 
CONFIG_BLK_DEV_IDEDISK_MAXTOR is not set +# CONFIG_BLK_DEV_IDEDISK_QUANTUM is not set +# CONFIG_BLK_DEV_IDEDISK_SEAGATE is not set +# CONFIG_BLK_DEV_IDEDISK_WD is not set +# CONFIG_BLK_DEV_COMMERIAL is not set +# CONFIG_BLK_DEV_TIVO is not set +# CONFIG_BLK_DEV_IDECS is not set +# CONFIG_BLK_DEV_IDECD is not set +# CONFIG_BLK_DEV_IDETAPE is not set +# CONFIG_BLK_DEV_IDEFLOPPY is not set +# CONFIG_BLK_DEV_IDESCSI is not set + +# +# IDE chipset support/bugfixes +# +# CONFIG_BLK_DEV_CMD640 is not set +# CONFIG_BLK_DEV_CMD640_ENHANCED is not set +# CONFIG_BLK_DEV_ISAPNP is not set +# CONFIG_BLK_DEV_RZ1000 is not set +# CONFIG_BLK_DEV_IDEPCI is not set +# CONFIG_IDE_CHIPSETS is not set +# CONFIG_IDEDMA_AUTO is not set +# CONFIG_DMA_NONPCI is not set +# CONFIG_BLK_DEV_IDE_MODES is not set +# CONFIG_BLK_DEV_ATARAID is not set +# CONFIG_BLK_DEV_ATARAID_PDC is not set +# CONFIG_BLK_DEV_ATARAID_HPT is not set + +# +# Alternate 1394 support +# +# CONFIG_X1394 is not set + +# +# Alternate SCSI support +# +CONFIG_XSCSI=y + +# +# Alternate SCSI support +# +CONFIG_XSCSI_DKSC=y +# CONFIG_XSCSI_QLFC is not set +# CONFIG_XSCSI_QL is not set +# CONFIG_XSCSI_SBP2 is not set + +# +# SCSI support +# +CONFIG_SCSI=y + +# +# SCSI support type (disk, tape, CD-ROM) +# +CONFIG_BLK_DEV_SD=y +CONFIG_SD_EXTRA_DEVS=40 +# CONFIG_CHR_DEV_ST is not set +# CONFIG_CHR_DEV_OSST is not set +# CONFIG_BLK_DEV_SR is not set +# CONFIG_CHR_DEV_SG is not set + +# +# Some SCSI devices (e.g. CD jukebox) support multiple LUNs +# +# CONFIG_SCSI_DEBUG_QUEUES is not set +# CONFIG_SCSI_MULTI_LUN is not set +# CONFIG_SCSI_CONSTANTS is not set +# CONFIG_SCSI_LOGGING is not set + +# +# SCSI low-level drivers +# +# CONFIG_BLK_DEV_3W_XXXX_RAID is not set +# CONFIG_SCSI_7000FASST is not set +# CONFIG_SCSI_ACARD is not set +# CONFIG_SCSI_AHA152X is not set +# CONFIG_SCSI_AHA1542 is not set +# CONFIG_SCSI_AHA1740 is not set +# CONFIG_SCSI_AIC7XXX is not set +# CONFIG_SCSI_AIC7XXX_OLD is not set +# CONFIG_SCSI_DPT_I2O is not set +# CONFIG_SCSI_ADVANSYS is not set +# CONFIG_SCSI_IN2000 is not set +# CONFIG_SCSI_AM53C974 is not set +# CONFIG_SCSI_MEGARAID is not set +# CONFIG_SCSI_BUSLOGIC is not set +# CONFIG_SCSI_CPQFCTS is not set +# CONFIG_SCSI_DMX3191D is not set +# CONFIG_SCSI_DTC3280 is not set +# CONFIG_SCSI_EATA is not set +# CONFIG_SCSI_EATA_DMA is not set +# CONFIG_SCSI_EATA_PIO is not set +# CONFIG_SCSI_FUTURE_DOMAIN is not set +# CONFIG_SCSI_GDTH is not set +# CONFIG_SCSI_GENERIC_NCR5380 is not set +# CONFIG_SCSI_INITIO is not set +# CONFIG_SCSI_INIA100 is not set +# CONFIG_SCSI_NCR53C406A is not set +# CONFIG_SCSI_NCR53C7xx is not set +# CONFIG_SCSI_NCR53C8XX is not set +# CONFIG_SCSI_SYM53C8XX is not set +# CONFIG_SCSI_PAS16 is not set +# CONFIG_SCSI_PCI2000 is not set +# CONFIG_SCSI_PCI2220I is not set +# CONFIG_SCSI_PSI240I is not set +# CONFIG_SCSI_QLOGIC_FAS is not set +# CONFIG_SCSI_QLOGIC_ISP is not set +CONFIG_SCSI_QLOGIC_FC=y +# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set +# CONFIG_SCSI_QLOGIC_1280 is not set +# CONFIG_SCSI_QLOGIC_QLA2100 is not set +# CONFIG_SCSI_SIM710 is not set +# CONFIG_SCSI_SYM53C416 is not set +# CONFIG_SCSI_DC390T is not set +# CONFIG_SCSI_T128 is not set +# CONFIG_SCSI_U14_34F is not set +# CONFIG_SCSI_DEBUG is not set + +# +# Network device support +# +CONFIG_NETDEVICES=y + +# +# ARCnet devices +# +# CONFIG_ARCNET is not set +# CONFIG_DUMMY is not set +# CONFIG_BONDING is not set +# CONFIG_EQUALIZER is not set +# CONFIG_TUN is not set +# CONFIG_ETHERTAP is not set + +# +# Ethernet (10 or 100Mbit) +# 
+CONFIG_NET_ETHERNET=y +# CONFIG_SUNLANCE is not set +# CONFIG_HAPPYMEAL is not set +# CONFIG_SUNBMAC is not set +# CONFIG_SUNQE is not set +# CONFIG_SUNLANCE is not set +# CONFIG_SUNGEM is not set +# CONFIG_NET_VENDOR_3COM is not set +# CONFIG_LANCE is not set +# CONFIG_NET_VENDOR_SMC is not set +# CONFIG_NET_VENDOR_RACAL is not set +# CONFIG_HP100 is not set +# CONFIG_NET_ISA is not set +# CONFIG_NET_PCI is not set +# CONFIG_NET_POCKET is not set + +# +# Ethernet (1000 Mbit) +# +# CONFIG_ACENIC is not set +# CONFIG_DL2K is not set +# CONFIG_MYRI_SBUS is not set +# CONFIG_NS83820 is not set +# CONFIG_HAMACHI is not set +# CONFIG_YELLOWFIN is not set +# CONFIG_SK98LIN is not set +# CONFIG_FDDI is not set +# CONFIG_HIPPI is not set +# CONFIG_PLIP is not set +# CONFIG_PPP is not set +# CONFIG_SLIP is not set + +# +# Wireless LAN (non-hamradio) +# +# CONFIG_NET_RADIO is not set + +# +# Token Ring devices +# +# CONFIG_TR is not set +# CONFIG_NET_FC is not set +# CONFIG_RCPCI is not set +# CONFIG_SHAPER is not set + +# +# Wan interfaces +# +# CONFIG_WAN is not set + +# +# Amateur Radio support +# +# CONFIG_HAMRADIO is not set + +# +# ISDN subsystem +# +# CONFIG_ISDN is not set + +# +# CD-ROM drivers (not for SCSI or IDE/ATAPI drives) +# +# CONFIG_CD_NO_IDESCSI is not set + +# +# Input core support +# +# CONFIG_INPUT is not set +# CONFIG_INPUT_KEYBDEV is not set +# CONFIG_INPUT_MOUSEDEV is not set +# CONFIG_INPUT_JOYDEV is not set +# CONFIG_INPUT_EVDEV is not set + +# +# Character devices +# +# CONFIG_VT is not set +CONFIG_SERIAL=y +# CONFIG_SERIAL_CONSOLE is not set +# CONFIG_SERIAL_EXTENDED is not set +# CONFIG_SERIAL_NONSTANDARD is not set +CONFIG_UNIX98_PTYS=y +CONFIG_UNIX98_PTY_COUNT=256 + +# +# I2C support +# +# CONFIG_I2C is not set + +# +# Mice +# +# CONFIG_BUSMOUSE is not set +# CONFIG_MOUSE is not set + +# +# Joysticks +# +# CONFIG_INPUT_GAMEPORT is not set + +# +# Input core support is needed for gameports +# + +# +# Input core support is needed for joysticks +# +# CONFIG_QIC02_TAPE is not set + +# +# Watchdog Cards +# +# CONFIG_WATCHDOG is not set +# CONFIG_INTEL_RNG is not set +# CONFIG_NVRAM is not set +# CONFIG_RTC is not set +CONFIG_EFI_RTC=y +# CONFIG_DTLK is not set +# CONFIG_R3964 is not set +# CONFIG_APPLICOM is not set + +# +# Ftape, the floppy tape device driver +# +# CONFIG_FTAPE is not set +# CONFIG_AGP is not set +# CONFIG_DRM is not set +# CONFIG_MWAVE is not set + +# +# Multimedia devices +# +# CONFIG_VIDEO_DEV is not set + +# +# File systems +# +CONFIG_QUOTA=y +CONFIG_AUTOFS_FS=y +CONFIG_AUTOFS4_FS=y +# CONFIG_REISERFS_FS is not set +# CONFIG_REISERFS_CHECK is not set +# CONFIG_ADFS_FS is not set +# CONFIG_ADFS_FS_RW is not set +# CONFIG_AFFS_FS is not set +# CONFIG_HFS_FS is not set +# CONFIG_BFS_FS is not set +CONFIG_FAT_FS=y +CONFIG_MSDOS_FS=y +# CONFIG_UMSDOS_FS is not set +CONFIG_VFAT_FS=y +# CONFIG_EFS_FS is not set +# CONFIG_JFFS_FS is not set +# CONFIG_JFFS2_FS is not set +# CONFIG_CRAMFS is not set +CONFIG_TMPFS=y +# CONFIG_RAMFS is not set +CONFIG_ISO9660_FS=y +# CONFIG_JOLIET is not set +# CONFIG_MINIX_FS is not set +# CONFIG_VXFS_FS is not set +# CONFIG_NTFS_FS is not set +# CONFIG_NTFS_RW is not set +# CONFIG_HPFS_FS is not set +CONFIG_PROC_FS=y +CONFIG_DEVFS_FS=y +CONFIG_DEVFS_MOUNT=y +CONFIG_DEVFS_DEBUG=y +CONFIG_DEVPTS_FS=y +# CONFIG_QNX4FS_FS is not set +# CONFIG_QNX4FS_RW is not set +# CONFIG_ROMFS_FS is not set +CONFIG_EXT2_FS=y +# CONFIG_SYSV_FS is not set +# CONFIG_UDF_FS is not set +# CONFIG_UDF_RW is not set +# CONFIG_UFS_FS is not set +# 
CONFIG_UFS_FS_WRITE is not set +CONFIG_XFS_SUPPORT=y + +# +# Network File Systems +# +# CONFIG_CODA_FS is not set +CONFIG_NFS_FS=y +CONFIG_NFS_V3=y +# CONFIG_ROOT_NFS is not set +CONFIG_NFSD=y +CONFIG_NFSD_V3=y +CONFIG_SUNRPC=y +CONFIG_LOCKD=y +CONFIG_LOCKD_V4=y +# CONFIG_SMB_FS is not set +# CONFIG_NCP_FS is not set +# CONFIG_NCPFS_PACKET_SIGNING is not set +# CONFIG_NCPFS_IOCTL_LOCKING is not set +# CONFIG_NCPFS_STRONG is not set +# CONFIG_NCPFS_NFS_NS is not set +# CONFIG_NCPFS_OS2_NS is not set +# CONFIG_NCPFS_SMALLDOS is not set +# CONFIG_NCPFS_NLS is not set +# CONFIG_NCPFS_EXTRAS is not set + +# +# Partition Types +# +# CONFIG_PARTITION_ADVANCED is not set +CONFIG_MSDOS_PARTITION=y +# CONFIG_SMB_NLS is not set +CONFIG_NLS=y + +# +# Native Language Support +# +CONFIG_NLS_DEFAULT="n" +# CONFIG_NLS_CODEPAGE_437 is not set +# CONFIG_NLS_CODEPAGE_737 is not set +# CONFIG_NLS_CODEPAGE_775 is not set +# CONFIG_NLS_CODEPAGE_850 is not set +# CONFIG_NLS_CODEPAGE_852 is not set +# CONFIG_NLS_CODEPAGE_855 is not set +# CONFIG_NLS_CODEPAGE_857 is not set +# CONFIG_NLS_CODEPAGE_860 is not set +# CONFIG_NLS_CODEPAGE_861 is not set +# CONFIG_NLS_CODEPAGE_862 is not set +# CONFIG_NLS_CODEPAGE_863 is not set +# CONFIG_NLS_CODEPAGE_864 is not set +# CONFIG_NLS_CODEPAGE_865 is not set +# CONFIG_NLS_CODEPAGE_866 is not set +# CONFIG_NLS_CODEPAGE_869 is not set +# CONFIG_NLS_CODEPAGE_936 is not set +# CONFIG_NLS_CODEPAGE_950 is not set +# CONFIG_NLS_CODEPAGE_932 is not set +# CONFIG_NLS_CODEPAGE_949 is not set +# CONFIG_NLS_CODEPAGE_874 is not set +# CONFIG_NLS_ISO8859_8 is not set +# CONFIG_NLS_CODEPAGE_1251 is not set +# CONFIG_NLS_ISO8859_1 is not set +# CONFIG_NLS_ISO8859_2 is not set +# CONFIG_NLS_ISO8859_3 is not set +# CONFIG_NLS_ISO8859_4 is not set +# CONFIG_NLS_ISO8859_5 is not set +# CONFIG_NLS_ISO8859_6 is not set +# CONFIG_NLS_ISO8859_7 is not set +# CONFIG_NLS_ISO8859_9 is not set +# CONFIG_NLS_ISO8859_13 is not set +# CONFIG_NLS_ISO8859_14 is not set +# CONFIG_NLS_ISO8859_15 is not set +# CONFIG_NLS_KOI8_R is not set +# CONFIG_NLS_KOI8_U is not set +# CONFIG_NLS_UTF8 is not set + +# +# Sound +# +# CONFIG_SOUND is not set + +# +# USB support +# +# CONFIG_USB is not set + +# +# USB Controllers +# +# CONFIG_USB_UHCI is not set +# CONFIG_USB_UHCI_ALT is not set +# CONFIG_USB_OHCI is not set + +# +# USB Device Class drivers +# +# CONFIG_USB_AUDIO is not set +# CONFIG_USB_BLUETOOTH is not set +# CONFIG_USB_STORAGE is not set +# CONFIG_USB_STORAGE_DEBUG is not set +# CONFIG_USB_STORAGE_DATAFAB is not set +# CONFIG_USB_STORAGE_FREECOM is not set +# CONFIG_USB_STORAGE_ISD200 is not set +# CONFIG_USB_STORAGE_DPCM is not set +# CONFIG_USB_STORAGE_HP8200e is not set +# CONFIG_USB_STORAGE_SDDR09 is not set +# CONFIG_USB_STORAGE_JUMPSHOT is not set +# CONFIG_USB_ACM is not set +# CONFIG_USB_PRINTER is not set + +# +# USB Human Interface Devices (HID) +# + +# +# Input core support is needed for USB HID +# + +# +# USB Imaging devices +# +# CONFIG_USB_DC2XX is not set +# CONFIG_USB_MDC800 is not set +# CONFIG_USB_SCANNER is not set +# CONFIG_USB_MICROTEK is not set +# CONFIG_USB_HPUSBSCSI is not set + +# +# USB Multimedia devices +# + +# +# Video4Linux support is needed for USB Multimedia device support +# + +# +# USB Network adaptors +# +# CONFIG_USB_PEGASUS is not set +# CONFIG_USB_KAWETH is not set +# CONFIG_USB_CATC is not set +# CONFIG_USB_CDCETHER is not set +# CONFIG_USB_USBNET is not set + +# +# USB port drivers +# +# CONFIG_USB_USS720 is not set + +# +# USB Serial Converter support +# +# 
CONFIG_USB_SERIAL is not set +# CONFIG_USB_SERIAL_GENERIC is not set +# CONFIG_USB_SERIAL_BELKIN is not set +# CONFIG_USB_SERIAL_WHITEHEAT is not set +# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set +# CONFIG_USB_SERIAL_EMPEG is not set +# CONFIG_USB_SERIAL_FTDI_SIO is not set +# CONFIG_USB_SERIAL_VISOR is not set +# CONFIG_USB_SERIAL_IR is not set +# CONFIG_USB_SERIAL_EDGEPORT is not set +# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set +# CONFIG_USB_SERIAL_KEYSPAN is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set +# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set +# CONFIG_USB_SERIAL_MCT_U232 is not set +# CONFIG_USB_SERIAL_PL2303 is not set +# CONFIG_USB_SERIAL_CYBERJACK is not set +# CONFIG_USB_SERIAL_XIRCOM is not set +# CONFIG_USB_SERIAL_OMNINET is not set + +# +# USB Miscellaneous drivers +# +# CONFIG_USB_RIO500 is not set + +# +# IEEE 1394 (FireWire) support (EXPERIMENTAL) +# +# CONFIG_IEEE1394 is not set + +# +# Bluetooth support +# +# CONFIG_BLUEZ is not set + +# +# Kernel hacking +# +CONFIG_DEBUG_KERNEL=y +CONFIG_IA64_PRINT_HAZARDS=y +# CONFIG_DISABLE_VHPT is not set +CONFIG_MAGIC_SYSRQ=y +CONFIG_IA64_EARLY_PRINTK=y +# CONFIG_DEBUG_SLAB is not set +# CONFIG_DEBUG_SPINLOCK is not set +# CONFIG_IA64_DEBUG_CMPXCHG is not set +# CONFIG_IA64_DEBUG_IRQ is not set +# CONFIG_KDB is not set +# CONFIG_KDB_MODULES is not set +CONFIG_KALLSYMS=y diff -Nru a/arch/ia64/sn/fakeprom/Makefile b/arch/ia64/sn/fakeprom/Makefile --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/fakeprom/Makefile Tue Mar 12 13:58:15 2002 @@ -0,0 +1,30 @@ +# +# This file is subject to the terms and conditions of the GNU General Public +# License. See the file "COPYING" in the main directory of this archive +# for more details. +# +# Copyright (c) 2000-2001 Silicon Graphics, Inc. All rights reserved. +# + +TOPDIR=../../../.. +HPATH = $(TOPDIR)/include + +LIB = ../../lib/lib.a + +OBJ=fpromasm.o main.o fw-emu.o fpmem.o klgraph_init.o +obj-y=fprom + +fprom: $(OBJ) + $(LD) -static -Tfprom.lds -o fprom $(OBJ) $(LIB) + +.S.o: + $(CC) -D__ASSEMBLY__ $(AFLAGS) $(AFLAGS_KERNEL) -c -o $*.o $< +.c.o: + $(CC) $(CFLAGS) $(CFLAGS_KERNEL) -c -o $*.o $< + +clean: + rm -f *.o fprom + + +include $(TOPDIR)/Rules.make + diff -Nru a/arch/ia64/sn/fakeprom/README b/arch/ia64/sn/fakeprom/README --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/fakeprom/README Tue Mar 12 13:58:15 2002 @@ -0,0 +1,85 @@ +This directory contains the files required to build +the fake PROM image that is currently being used to +boot IA64 kernels running under the SGI Medusa kernel. + +The FPROM currently provides the following functions: + + - PAL emulation for all PAL calls we've made so far. + - SAL emulation for all SAL calls we've made so far. + - EFI emulation for all EFI calls we've made so far. + - builds the "ia64_bootparam" structure that is + passed to the kernel from SAL. This structure + shows the cpu & memory configurations. + - supports medusa boottime options for changing + the number of cpus present + - supports medusa boottime options for changing + the memory configuration. + + + +At some point, this fake PROM will be replaced by the +real PROM. 
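+For reference, the "ia64_bootparam" structure mentioned above is the one
+filled in by sys_fw_init() in fw-emu.c later in this patch. The fields it
+populates look roughly like the sketch below; the field names are taken
+from fw-emu.c, but the layout and types shown here are abridged -- see
+struct ia64_boot_param in the kernel headers for the authoritative
+definition.
+
+	struct ia64_boot_param {
+		__u64 command_line;		/* phys addr of kernel command line */
+		__u64 efi_systab;		/* phys addr of EFI system table */
+		__u64 efi_memmap;		/* phys addr of EFI memory map */
+		__u64 efi_memmap_size;		/* memory map size in bytes */
+		__u64 efi_memdesc_size;		/* size of one memory descriptor */
+		__u32 efi_memdesc_version;	/* descriptor version (0x101 here) */
+		struct {
+			__u16 num_cols, num_rows;	/* console geometry */
+			__u16 orig_x, orig_y;		/* initial cursor position */
+		} console_info;
+		__u64 fpswa;			/* FP software assist entry (0 here) */
+	};
+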
+ + + + +To build a fake PROM, cd to this directory & type: + + make + +This will (or should) build a fake PROM named "fprom". + + + + +Use this fprom image when booting the Medusa simulator. The +control file used to boot Medusa should include the +following lines: + + load fprom + load vmlinux + sr pc 0x100000 + sr g 9

<address of kernel entry>   #(currently 0xe000000000520000) + +NOTE: There is a script "runsim" in this directory that can be used to +simplify setting up an environment for running under Medusa. + + + + +The following parameters may be passed to the fake PROM to +control the PAL/SAL/EFI parameters passed to the kernel: + + GR[8] = # of cpus + GR[9] = address of primary entry point into the kernel + GR[20] = memory configuration for node 0 + GR[21] = memory configuration for node 1 + GR[22] = memory configuration for node 2 + GR[23] = memory configuration for node 3 + + +Registers GR[20] - GR[23] contain information to specify the +amount of memory present on nodes 0-3. + + - if nothing is specified (all registers are 0), the configuration + defaults to 8 MB on node 0. + + - a mem config entry for node N is passed in GR[20+N] + + - a mem config entry consists of 8 hex digits. Each digit gives the + amount of physical memory available on the node starting at + 1GB*<dn>, where dn is the digit number. The amount of memory + is 8MB*2**<d>. (If <d> = 0, the memory size is 0). + + SN1 doesn't support DIMMs this small but small memory systems + boot faster on Medusa. + + + +An example helps a lot. The following specifies that node 0 has +physical memory 0 to 8MB and 1GB to 1GB+32MB, and that node 1 has +64MB starting at address 0 of the node which is 8GB. + + gr[20] = 0x21 # 0 to 8MB, 1GB to 1GB+32MB + gr[21] = 0x4 # 8GB to 8GB+64MB + diff -Nru a/arch/ia64/sn/fakeprom/fpmem.c b/arch/ia64/sn/fakeprom/fpmem.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/fakeprom/fpmem.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,257 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + + + +/* + * FPROM EFI memory descriptor build routines + * + * - Routines to build the EFI memory descriptor map + * - Should also be usable by the SGI SN1 prom to convert + * klconfig to efi_memmap + */ + +#include +#include +#include "fpmem.h" + +/* + * args points to a layout in memory like this + * + * 32 bit 32 bit + * + * numnodes numcpus + * + * 16 bit 16 bit 32 bit + * nasid0 cpuconf membankdesc0 + * nasid1 cpuconf membankdesc1 + * . + * . + * . + * . + * . + */ + +sn_memmap_t *sn_memmap ; +sn_config_t *sn_config ; + +/* + * There is a hole in the node 0 address space.
Dont put it + * in the memory map + */ +#define NODE0_HOLE_SIZE (20*MB) +#define NODE0_HOLE_END (4UL*GB) + +#define MB (1024*1024) +#define GB (1024*MB) +#define KERNEL_SIZE (4*MB) +#define PROMRESERVED_SIZE (1*MB) + +#ifdef CONFIG_IA64_SGI_SN1 +#define PHYS_ADDRESS(_n, _x) (((long)_n<<33L) | (long)_x) +#define MD_BANK_SHFT 30 +#else +#define PHYS_ADDRESS(_n, _x) (((long)_n<<38L) | (long)_x | 0x3000000000UL) +#define MD_BANK_SHFT 34 +#endif + +/* + * For SN, this may not take an arg and gets the numnodes from + * the prom variable or by traversing klcfg or promcfg + */ +int +GetNumNodes(void) +{ + return sn_config->nodes; +} + +int +GetNumCpus(void) +{ + return sn_config->cpus; +} + +/* For SN1, get the index th nasid */ + +int +GetNasid(int index) +{ + return sn_memmap[index].nasid ; +} + +node_memmap_t +GetMemBankInfo(int index) +{ + return sn_memmap[index].node_memmap ; +} + +int +IsCpuPresent(int cnode, int cpu) +{ + return sn_memmap[cnode].cpuconfig & (1<type = type; + md->phys_addr = paddr; + md->virt_addr = 0; + md->num_pages = numbytes >> 12; + md->attribute = EFI_MEMORY_WB; +} + +int +build_efi_memmap(void *md, int mdsize) +{ + int numnodes = GetNumNodes() ; + int cnode,bank ; + int nasid ; + node_memmap_t membank_info ; + int bsize; + int count = 0 ; + long paddr, hole, numbytes; + + + for (cnode=0;cnode + +/* + * Structure of the mem config of the node as a SN1 MI reg + * Medusa supports this reg config. + * + * BankSize nibble to bank size mapping + * + * 1 - 64 MB + * 2 - 128 MB + * 3 - 256 MB + * 4 - 512 MB + * 5 - 1024 MB (1GB) + */ + +#define MBSHIFT 20 + +#ifdef CONFIG_IA64_SGI_SN1 +typedef struct node_memmap_s +{ + unsigned int b0 :1, /* 0 bank 0 present */ + b1 :1, /* 1 bank 1 present */ + r01 :2, /* 2-3 reserved */ + b01size :4, /* 4-7 Size of bank 0 and 1 */ + b2 :1, /* 8 bank 2 present */ + b3 :1, /* 9 bank 3 present */ + r23 :2, /* 10-11 reserved */ + b23size :4, /* 12-15 Size of bank 2 and 3 */ + b4 :1, /* 16 bank 4 present */ + b5 :1, /* 17 bank 5 present */ + r45 :2, /* 18-19 reserved */ + b45size :4, /* 20-23 Size of bank 4 and 5 */ + b6 :1, /* 24 bank 6 present */ + b7 :1, /* 25 bank 7 present */ + r67 :2, /* 26-27 reserved */ + b67size :4; /* 28-31 Size of bank 6 and 7 */ +} node_memmap_t ; + +/* Support the medusa hack for 8M/16M/32M nodes */ +#define SN1_BANK_SIZE_SHIFT (MBSHIFT+6) /* 64 MB */ +#define BankSizeBytes(bsize) ((bsize<6) ? 
(1<<((bsize-1)+SN1_BANK_SIZE_SHIFT)) :\ + (1<<((bsize-9)+MBSHIFT))) +#else +typedef struct node_memmap_s +{ + unsigned int b0size :3, /* 0-2 bank 0 size */ + b0dou :1, /* 3 bank 0 is 2-sided */ + ena0 :1, /* 4 bank 0 enabled */ + r0 :3, /* 5-7 reserved */ + b1size :3, /* 8-10 bank 1 size */ + b1dou :1, /* 11 bank 1 is 2-sided */ + ena1 :1, /* 12 bank 1 enabled */ + r1 :3, /* 13-15 reserved */ + b2size :3, /* 16-18 bank 2 size */ + b2dou :1, /* 19 bank 1 is 2-sided */ + ena2 :1, /* 20 bank 2 enabled */ + r2 :3, /* 21-23 reserved */ + b3size :3, /* 24-26 bank 3 size */ + b3dou :1, /* 27 bank 3 is 2-sided */ + ena3 :1, /* 28 bank 3 enabled */ + r3 :3; /* 29-31 reserved */ +} node_memmap_t ; + +#define SN2_BANK_SIZE_SHIFT (MBSHIFT+6) /* 64 MB */ +#define BankSizeBytes(bsize) (1UL<<((bsize)+SN2_BANK_SIZE_SHIFT)) +#endif + +typedef struct sn_memmap_s +{ + short nasid ; + short cpuconfig; + node_memmap_t node_memmap ; +} sn_memmap_t ; + +typedef struct sn_config_s +{ + int cpus; + int nodes; + sn_memmap_t memmap[1]; /* start of array */ +} sn_config_t; + + + +extern void build_init(unsigned long); +extern int build_efi_memmap(void *, int); +extern int GetNumNodes(void); +extern int GetNumCpus(void); +extern int IsCpuPresent(int, int); +extern int GetNasid(int); diff -Nru a/arch/ia64/sn/fakeprom/fprom.lds b/arch/ia64/sn/fakeprom/fprom.lds --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/fakeprom/fprom.lds Tue Mar 12 13:58:15 2002 @@ -0,0 +1,96 @@ + +OUTPUT_FORMAT("elf64-ia64-little") +OUTPUT_ARCH(ia64) +ENTRY(_start) +SECTIONS +{ + v = 0x0000000000000000 ; /* this symbol is here to make debugging with kdb easier... */ + + . = (0x000000000000000 + 0x100000) ; + + _text = .; + .text : AT(ADDR(.text) - 0x0000000000000000 ) + { + *(__ivt_section) + /* these are not really text pages, but the zero page needs to be in a fixed location: */ + *(__special_page_section) + __start_gate_section = .; + *(__gate_section) + __stop_gate_section = .; + *(.text) + } + + /* Global data */ + _data = .; + + .rodata : AT(ADDR(.rodata) - 0x0000000000000000 ) + { *(.rodata) *(.rodata.*) } + .opd : AT(ADDR(.opd) - 0x0000000000000000 ) + { *(.opd) } + .data : AT(ADDR(.data) - 0x0000000000000000 ) + { *(.data) *(.gnu.linkonce.d*) CONSTRUCTORS } + + __gp = ALIGN (8) + 0x200000; + + .got : AT(ADDR(.got) - 0x0000000000000000 ) + { *(.got.plt) *(.got) } + /* We want the small data sections together, so single-instruction offsets + can access them all, and initialized data all before uninitialized, so + we can shorten the on-disk segment size. */ + .sdata : AT(ADDR(.sdata) - 0x0000000000000000 ) + { *(.sdata) } + _edata = .; + _bss = .; + .sbss : AT(ADDR(.sbss) - 0x0000000000000000 ) + { *(.sbss) *(.scommon) } + .bss : AT(ADDR(.bss) - 0x0000000000000000 ) + { *(.bss) *(COMMON) } + . = ALIGN(64 / 8); + _end = .; + + /* Sections to be discarded */ + /DISCARD/ : { + *(.text.exit) + *(.data.exit) + } + + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* These must appear regardless of . */ + /* Discard them for now since Intel SoftSDV cannot handle them. + .comment 0 : { *(.comment) } + .note 0 : { *(.note) } + */ + /DISCARD/ : { *(.comment) } + /DISCARD/ : { *(.note) } +} diff -Nru a/arch/ia64/sn/fakeprom/fpromasm.S b/arch/ia64/sn/fakeprom/fpromasm.S --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/fakeprom/fpromasm.S Tue Mar 12 13:58:15 2002 @@ -0,0 +1,403 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * (Code copied from or=ther files) + * Copyright (C) 1998-2000 Hewlett-Packard Co + * Copyright (C) 1998-2000 David Mosberger-Tang + * + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + + + +#define __ASSEMBLY__ 1 +#include +#include +#include +#include + +/* + * This file contains additional set up code that is needed to get going on + * Medusa. This code should disappear once real hw is available. + * + * On entry to this routine, the following register values are assumed: + * + * gr[8] - BSP cpu + * pr[9] - kernel entry address + * pr[10] - cpu number on the node + * + * NOTE: + * This FPROM may be loaded/executed at an address different from the + * address that it was linked at. The FPROM is linked to run on node 0 + * at address 0x100000. If the code in loaded into another node, it + * must be loaded at offset 0x100000 of the node. In addition, the + * FPROM does the following things: + * - determine the base address of the node it is loaded on + * - add the node base to _gp. + * - add the node base to all addresses derived from "movl" + * instructions. (I couldnt get GPREL addressing to work) + * (maybe newer versions of the tools will support this) + * - scan the .got section and add the node base to all + * pointers in this section. + * - add the node base to all physical addresses in the + * SAL/PAL/EFI table built by the C code. 
(This is done + * in the C code - not here) + * - add the node base to the TLB entries for vmlinux + */ + +#define KERNEL_BASE 0xe000000000000000 +#define BOOT_PARAM_ADDR 0x40000 + + +/* + * ar.k0 gets set to IOPB_PA value, on 460gx chipset it should + * be 0x00000ffffc000000, but on snia we use the (inverse swizzled) + * IOSPEC_BASE value + */ +#ifdef CONFIG_IA64_SGI_SN1 +#define IOPB_PA 0xc0000FFFFC000000 +#else +#define IOPB_PA 0xc000000fcc000000 +#endif + +#define RR_RID 8 + + + +// ==================================================================================== + .text + .align 16 + .global _start + .proc _start +_start: + +// Setup psr and rse for system init + mov psr.l = r0;; + srlz.d;; + invala + mov ar.rsc = r0;; + loadrs + ;; + +// Isolate node number we are running on. + mov r6 = ip;; +#ifdef CONFIG_IA64_SGI_SN1 + shr r5 = r6,33;; // r5 = node number + shl r6 = r5,33 // r6 = base memory address of node +#else + shr r5 = r6,38 // r5 = node number + dep r6 = 0,r6,0,36 // r6 = base memory address of node + +#endif + + +// Set & relocate gp. + movl r1= __gp;; // Add base memory address + or r1 = r1,r6 // Relocate to boot node + +// Lets figure out who we are & put it in the LID register. +#ifdef CONFIG_IA64_SGI_SN2 +// On SN2, we (currently) pass the cpu number in r10 at boot + and r25=3,r10;; + movl r16=0x8000008110000400 // Allow IPIs + mov r17=-1;; + st8 [r16]=r17 + movl r16=0x8000008110060580;; // SHUB_ID + ld8 r27=[r16];; + extr.u r27=r27,32,11;; + shl r26=r25,28;; // Align local cpu# to lid.eid + shl r27=r27,16;; // Align NASID to lid.id + or r26=r26,r27;; // build the LID +#else +// The BR_PI_SELF_CPU_NUM register gives us a value of 0-3. +// This identifies the cpu on the node. +// Merge the cpu number with the NASID to generate the LID. + movl r24=0x80000a0001000020;; // BR_PI_SELF_CPU_NUM + ld8 r25=[r24] // Fetch PI_SELF + movl r27=0x80000a0001600000;; // Fetch REVID to get local NASID + ld8 r27=[r27];; + extr.u r27=r27,32,8;; + shl r26=r25,16;; // Align local cpu# to lid.eid + shl r27=r27,24;; // Align NASID to lid.id + or r26=r26,r27;; // build the LID +#endif + mov cr.lid=r26 // Now put in in the LID register + + movl r2=FPSR_DEFAULT;; + mov ar.fpsr=r2 + movl sp = bootstacke-16;; + or sp = sp,r6 // Relocate to boot node + +// Save the NASID that we are loaded on. + movl r2=base_nasid;; // Save base_nasid for C code + or r2 = r2,r6;; // Relocate to boot node + st8 [r2]=r5 // Uncond st8 - same on all cpus + +// Save the kernel entry address. It is passed in r9 on one of +// the cpus. + movl r2=bsp_entry_pc + cmp.ne p6,p0=r9,r0;; + or r2 = r2,r6;; // Relocate to boot node +(p6) st8 [r2]=r9 // Uncond st8 - same on all cpus + + +// The following can ONLY be done by 1 cpu. Lets set a lock - the +// cpu that gets it does the initilization. The rest just spin waiting +// til initilization is complete. + movl r22 = initlock;; + or r22 = r22,r6 // Relocate to boot node + mov r23 = 1;; + xchg8 r23 = [r22],r23;; + cmp.eq p6,p0 = 0,r23 +(p6) br.cond.spnt.few init +1: ld4 r23 = [r22];; + cmp.eq p6,p0 = 1,r23 +(p6) br.cond.sptk 1b + br initx + +// Add base address of node memory to each pointer in the .got section. 
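+// (Equivalently, in C: for (p = got; *p != 0; p++) *p |= node_base.
+// The loop below walks the GOT one 8-byte entry at a time, OR-ing r6 --
+// the base physical address of the node we are running on -- into each
+// entry and storing it back; it stops at the first zero entry. Only the
+// cpu that won the "initlock" above does this fixup; storing 2 to the
+// lock word afterwards releases the other cpus spinning on it.)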
+init: movl r16 = _GLOBAL_OFFSET_TABLE_;; + or r16 = r16,r6;; // Relocate to boot node +1: ld8 r17 = [r16];; + cmp.eq p6,p7=0,r17 +(p6) br.cond.sptk.few.clr 2f;; + or r17 = r17,r6;; // Relocate to boot node + st8 [r16] = r17,8 + br 1b +2: + mov r23 = 2;; // All done, release the spinning cpus + st4 [r22] = r23 +initx: + +// +// I/O-port space base address: +// + movl r2 = IOPB_PA;; + mov ar.k0 = r2 + + +// Now call main & pass it the current LID value. + alloc r0=ar.pfs,0,0,2,0 + mov r32=r26 + mov r33=r8;; + br.call.sptk.few rp=fmain + +// Initialize Region Registers +// + mov r10 = r0 + mov r2 = (13<<2) + mov r3 = r0;; +1: cmp4.gtu p6,p7 = 7, r3 + dep r10 = r3, r10, 61, 3 + dep r2 = r3, r2, RR_RID, 4;; +(p7) dep r2 = 0, r2, 0, 1;; +(p6) dep r2 = -1, r2, 0, 1;; + mov rr[r10] = r2 + add r3 = 1, r3;; + srlz.d;; + cmp4.gtu p6,p0 = 8, r3 +(p6) br.cond.sptk.few.clr 1b + +// +// Return value indicates if we are the BSP or AP. +// 1 = BSP, 0 = AP + mov cr.tpr=r0;; + cmp.eq p6,p0=r8,r0 +(p6) br.cond.spnt slave + +// +// Go to kernel C startup routines +// Need to do a "rfi" in order set "it" and "ed" bits in the PSR. +// This is the only way to set them. + + movl r28=BOOT_PARAM_ADDR + movl r2=bsp_entry_pc;; + or r28 = r28,r6;; // Relocate to boot node + or r2 = r2,r6;; // Relocate to boot node + ld8 r2=[r2];; + or r2=r2,r6;; + dep r2=0,r2,61,3;; // convert to phys mode + +// +// Turn on address translation, interrupt collection, psr.ed, protection key. +// Interrupts (PSR.i) are still off here. +// + + movl r3 = ( IA64_PSR_BN | \ + IA64_PSR_AC | \ + IA64_PSR_DB | \ + IA64_PSR_DA | \ + IA64_PSR_IC \ + ) + ;; + mov cr.ipsr = r3 + +// +// Go to kernel C startup routines +// Need to do a "rfi" in order set "it" and "ed" bits in the PSR. +// This is the only way to set them. + + mov r8=r28;; + bsw.1 ;; + mov r28=r8;; + bsw.0 ;; + mov cr.iip = r2 + srlz.d;; + rfi;; + + .endp _start + + + +// Slave processors come here to spin til they get an interrupt. Then they launch themselves to +// the place ap_entry points. No initialization is necessary - the kernel makes no +// assumptions about state on this entry. +// Note: should verify that the interrupt we got was really the ap_wakeup +// interrupt but this should not be an issue on medusa +slave: + nop.i 0x8beef // Medusa - put cpu to sleep til interrupt occurs + mov r8=cr.irr0;; // Check for interrupt pending. + cmp.eq p6,p0=r8,r0 +(p6) br.cond.sptk slave;; + + mov r8=cr.ivr;; // Got one. Must read ivr to accept it + srlz.d;; + mov cr.eoi=r0;; // must write eoi to clear + movl r8=ap_entry;; // now jump to kernel entry + or r8 = r8,r6;; // Relocate to boot node + ld8 r9=[r8],8;; + ld8 r1=[r8] + mov b0=r9;; + br b0 + +// Here is the kernel stack used for the fake PROM + .bss + .align 16384 +bootstack: + .skip 16384 +bootstacke: +initlock: + data4 + + + +////////////////////////////////////////////////////////////////////////////////////////////////////////// +// This code emulates the PAL. Only essential interfaces are emulated. + + + .text + .global pal_emulator + .proc pal_emulator +pal_emulator: + mov r8=-1 + + mov r9=256 + ;; + cmp.gtu p6,p7=r9,r28 /* r28 <= 255? 
*/ +(p6) br.cond.sptk.few static + ;; + mov r9=512 + ;; + cmp.gtu p6,p7=r9,r28 +(p6) br.cond.sptk.few stacked + ;; + +static: cmp.eq p6,p7=6,r28 /* PAL_PTCE_INFO */ +(p7) br.cond.sptk.few 1f + movl r8=0 /* status = 0 */ + movl r9=0x100000000 /* tc.base */ + movl r10=0x0000000200000003 /* count[0], count[1] */ + movl r11=0x1000000000002000 /* stride[0], stride[1] */ + ;; + +1: cmp.eq p6,p7=14,r28 /* PAL_FREQ_RATIOS */ +(p7) br.cond.sptk.few 1f + movl r8=0 /* status = 0 */ + movl r9 =0x100000064 /* proc_ratio (1/100) */ + movl r10=0x100000100 /* bus_ratio<<32 (1/256) */ + movl r11=0x10000000a /* itc_ratio<<32 (1/100) */ + ;; + +1: cmp.eq p6,p7=8,r28 /* PAL_VM_SUMMARY */ +(p7) br.cond.sptk.few 1f + movl r8=0 +#ifdef CONFIG_IA64_SGI_SN1 + movl r9=0x0203083001151059 + movl r10=0x1232 +#else + movl r9=0x0203083001151065 + movl r10=0x183f +#endif + movl r11=0 + ;; + +1: cmp.eq p6,p7=19,r28 /* PAL_RSE_INFO */ +(p7) br.cond.sptk.few 1f + movl r8=0 + movl r9=0x60 + movl r10=0x0 + movl r11=0 + ;; + +1: cmp.eq p6,p7=15,r28 /* PAL_PERF_MON_INFO */ +(p7) br.cond.sptk.few 1f + movl r8=0 + movl r9=0x08122004 + movl r10=0x0 + movl r11=0 + mov r2=ar.lc + mov r3=16;; + mov ar.lc=r3 + mov r3=r29;; +5: st8 [r3]=r0,8 + br.cloop.sptk.few 5b;; + mov ar.lc=r2 + mov r3=r29 + movl r2=0x1fff;; /* PMC regs */ + st8 [r3]=r2 + add r3=32,r3 + movl r2=0x3ffff;; /* PMD regs */ + st8 [r3]=r2 + add r3=32,r3 + movl r2=0xf0;; /* cycle regs */ + st8 [r3]=r2 + add r3=32,r3 + movl r2=0x10;; /* retired regs */ + st8 [r3]=r2 + ;; + +1: cmp.eq p6,p7=19,r28 /* PAL_RSE_INFO */ +(p7) br.cond.sptk.few 1f + movl r8=0 /* status = 0 */ + movl r9=96 /* num phys stacked */ + movl r10=0 /* hints */ + movl r11=0 + ;; + +1: cmp.eq p6,p7=1,r28 /* PAL_CACHE_FLUSH */ +(p7) br.cond.sptk.few 1f + mov r9=ar.lc + movl r8=524288 /* flush 512k million cache lines (16MB) */ + ;; + mov ar.lc=r8 + movl r8=0xe000000000000000 + ;; +.loop: fc r8 + add r8=32,r8 + br.cloop.sptk.few .loop + sync.i + ;; + srlz.i + ;; + mov ar.lc=r9 + mov r8=r0 +1: br.cond.sptk.few rp + +stacked: + br.ret.sptk.few rp + + .endp pal_emulator + diff -Nru a/arch/ia64/sn/fakeprom/fw-emu.c b/arch/ia64/sn/fakeprom/fw-emu.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/fakeprom/fw-emu.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,829 @@ +/* + * PAL & SAL emulation. + * + * Copyright (C) 1998-2000 Hewlett-Packard Co + * Copyright (C) 1998-2000 David Mosberger-Tang + * + * + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. 
+ * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ +#include +#include +#include +#include +#include +#include +#include +#ifdef CONFIG_IA64_SGI_SN2 +#include +#include +#endif +#include +#include "fpmem.h" + +#define zzACPI_1_0 1 /* Include ACPI 1.0 tables */ + +#define OEMID "SGI" +#ifdef CONFIG_IA64_SGI_SN1 +#define PRODUCT "SN1" +#define PROXIMITY_DOMAIN(nasid) (nasid) +#else +#define PRODUCT "SN2" +#define PROXIMITY_DOMAIN(nasid) (((nasid)>>1) & 255) +#endif + +#define MB (1024*1024UL) +#define GB (MB*1024UL) +#define BOOT_PARAM_ADDR 0x40000 +#define MAX(i,j) ((i) > (j) ? (i) : (j)) +#define MIN(i,j) ((i) < (j) ? (i) : (j)) +#define ABS(i) ((i) > 0 ? (i) : -(i)) +#define ALIGN8(p) (((long)(p) +7) & ~7) + +#define FPROM_BUG() do {while (1);} while (0) +#define MAX_SN_NODES 128 +#define MAX_LSAPICS 512 +#define MAX_CPUS 512 +#define MAX_CPUS_NODE 4 +#define CPUS_PER_NODE 4 +#define CPUS_PER_FSB 2 +#define CPUS_PER_FSB_MASK (CPUS_PER_FSB-1) + +#ifdef ACPI_1_0 +#define NUM_EFI_DESCS 3 +#else +#define NUM_EFI_DESCS 2 +#endif + +#define RSDP_CHECKSUM_LENGTH 20 + +typedef union ia64_nasid_va { + struct { +#if defined(CONFIG_IA64_SGI_SN1) + unsigned long off : 33; /* intra-region offset */ + unsigned long nasid : 7; /* NASID */ + unsigned long off2 : 21; /* fill */ + unsigned long reg : 3; /* region number */ +#elif defined(CONFIG_IA64_SGI_SN2) + unsigned long off : 36; /* intra-region offset */ + unsigned long attr : 2; + unsigned long nasid : 11; /* NASID */ + unsigned long off2 : 12; /* fill */ + unsigned long reg : 3; /* region number */ +#endif + } f; + unsigned long l; + void *p; +} ia64_nasid_va; + +typedef struct { + unsigned long pc; + unsigned long gp; +} func_ptr_t; + +#define IS_VIRTUAL_MODE() ({struct ia64_psr psr; asm("mov %0=psr" : "=r"(psr)); psr.dt;}) +#define ADDR_OF(p) (IS_VIRTUAL_MODE() ? ((void*)((long)(p)+PAGE_OFFSET)) : ((void*) (p))) + +#if defined(CONFIG_IA64_SGI_SN1) +#define __fwtab_pa(n,x) ({ia64_nasid_va _v; _v.l = (long) (x); _v.f.nasid = (x) ? (n) : 0; _v.f.reg = 0; _v.l;}) +#elif defined(CONFIG_IA64_SGI_SN2) +#define __fwtab_pa(n,x) ({ia64_nasid_va _v; _v.l = (long) (x); _v.f.nasid = (x) ? (n) : 0; _v.f.reg = 0; _v.f.attr = 3; _v.l;}) +#endif + +/* + * The following variables are passed thru registersfrom the configuration file and + * are set via the _start function. 
+ */ +long base_nasid; +long num_cpus; +long bsp_entry_pc=0; +long num_nodes; +long app_entry_pc; +int bsp_lid; +func_ptr_t ap_entry; + + +extern void pal_emulator(void); +static efi_runtime_services_t *efi_runtime_p; +static char fw_mem[( sizeof(efi_system_table_t) + + sizeof(efi_runtime_services_t) + + NUM_EFI_DESCS*sizeof(efi_config_table_t) + + sizeof(struct ia64_sal_systab) + + sizeof(struct ia64_sal_desc_entry_point) + + sizeof(struct ia64_sal_desc_ap_wakeup) +#ifdef ACPI_1_0 + + sizeof(acpi_rsdp_t) + + sizeof(acpi_rsdt_t) + + sizeof(acpi_sapic_t) + + MAX_LSAPICS*(sizeof(acpi_entry_lsapic_t)) +#endif + + sizeof(acpi20_rsdp_t) + + sizeof(acpi_xsdt_t) + + sizeof(acpi_slit_t) + + MAX_SN_NODES*MAX_SN_NODES+8 + + sizeof(acpi_madt_t) + + 16*MAX_CPUS + + (1+8*MAX_SN_NODES)*(sizeof(efi_memory_desc_t)) + + sizeof(acpi_srat_t) + + MAX_CPUS*sizeof(srat_cpu_affinity_t) + + MAX_SN_NODES*sizeof(srat_memory_affinity_t) + + sizeof(ia64_sal_desc_ptc_t) + + + MAX_SN_NODES*sizeof(ia64_sal_ptc_domain_info_t) + + + MAX_CPUS*sizeof(ia64_sal_ptc_domain_proc_entry_t) + + + 1024)] __attribute__ ((aligned (8))); + + +static efi_status_t +efi_get_time (efi_time_t *tm, efi_time_cap_t *tc) +{ + if (tm) { + memset(tm, 0, sizeof(*tm)); + tm->year = 2000; + tm->month = 2; + tm->day = 13; + tm->hour = 10; + tm->minute = 11; + tm->second = 12; + } + + if (tc) { + tc->resolution = 10; + tc->accuracy = 12; + tc->sets_to_zero = 1; + } + + return EFI_SUCCESS; +} + +static void +efi_reset_system (int reset_type, efi_status_t status, unsigned long data_size, efi_char16_t *data) +{ + while(1); /* Is there a pseudo-op to stop medusa */ +} + +static efi_status_t +efi_success (void) +{ + return EFI_SUCCESS; +} + +static efi_status_t +efi_unimplemented (void) +{ + return EFI_UNSUPPORTED; +} + +#ifdef CONFIG_IA64_SGI_SN2 + +#undef cpu_physical_id +#define cpu_physical_id(cpuid) ((ia64_get_lid() >> 16) & 0xffff) + +void +fprom_send_cpei(void) { + long *p, val; + long physid; + long nasid, slice; + + physid = cpu_physical_id(0); + nasid = cpu_physical_id_to_nasid(physid); + slice = cpu_physical_id_to_slice(physid); + + p = (long*)GLOBAL_MMR_ADDR(nasid, SH_IPI_INT); + val = (1UL<pc = in2; + fp->gp = in3; + } else if (in1 == SAL_VECTOR_OS_MCA || in1 == SAL_VECTOR_OS_INIT) { + } else { + status = -1; + } + ; + } else if (index == SAL_GET_STATE_INFO) { + ; + } else if (index == SAL_GET_STATE_INFO_SIZE) { + ; + } else if (index == SAL_CLEAR_STATE_INFO) { + ; + } else if (index == SAL_MC_RENDEZ) { + ; + } else if (index == SAL_MC_SET_PARAMS) { + ; + } else if (index == SAL_CACHE_FLUSH) { + ; + } else if (index == SAL_CACHE_INIT) { + ; + } else if (index == SAL_UPDATE_PAL) { + ; +#ifdef CONFIG_IA64_SGI_SN2 + } else if (index == SN_SAL_LOG_CE) { +#ifdef ajmtestcpei + fprom_send_cpei(); +#else /* ajmtestcpei */ + ; +#endif /* ajmtestcpei */ +#endif + } else if (index == SN_SAL_PROBE) { + r9 = 0UL; + if (in2 == 4) { + r9 = *(unsigned *)in1; + if (r9 == -1) { + status = 1; + } + } else if (in2 == 2) { + r9 = *(unsigned short *)in1; + if (r9 == -1) { + status = 1; + } + } else if (in2 == 1) { + r9 = *(unsigned char *)in1; + if (r9 == -1) { + status = 1; + } + } else if (in2 == 8) { + r9 = *(unsigned long *)in1; + if (r9 == -1) { + status = 1; + } + } else { + status = 2; + } + } else if (index == SN_SAL_GET_KLCONFIG_ADDR) { + r9 = 0x30000; + } else { + status = -1; + } + + asm volatile ("" :: "r"(r9), "r"(r10), "r"(r11)); + return status; +} + + +/* + * This is here to work around a bug in egcs-1.1.1b that causes the + * compiler to crash 
(seems like a bug in the new alias analysis code. + */ +void * +id (long addr) +{ + return (void *) addr; +} + + +/* + * Fix the addresses in a function pointer by adding base node address + * to pc & gp. + */ +void +fix_function_pointer(void *fp) +{ + func_ptr_t *_fp; + + _fp = fp; + _fp->pc = __fwtab_pa(base_nasid, _fp->pc); + _fp->gp = __fwtab_pa(base_nasid, _fp->gp); +} + +void +fix_virt_function_pointer(void **fptr) +{ + func_ptr_t *fp; + long *p; + + p = (long*)fptr; + fp = *fptr; + fp->pc = fp->pc | PAGE_OFFSET; + fp->gp = fp->gp | PAGE_OFFSET; + *p |= PAGE_OFFSET; +} + + +int +efi_set_virtual_address_map(void) +{ + efi_runtime_services_t *runtime; + + runtime = efi_runtime_p; + fix_virt_function_pointer((void**)&runtime->get_time); + fix_virt_function_pointer((void**)&runtime->set_time); + fix_virt_function_pointer((void**)&runtime->get_wakeup_time); + fix_virt_function_pointer((void**)&runtime->set_wakeup_time); + fix_virt_function_pointer((void**)&runtime->set_virtual_address_map); + fix_virt_function_pointer((void**)&runtime->get_variable); + fix_virt_function_pointer((void**)&runtime->get_next_variable); + fix_virt_function_pointer((void**)&runtime->set_variable); + fix_virt_function_pointer((void**)&runtime->get_next_high_mono_count); + fix_virt_function_pointer((void**)&runtime->reset_system); + return EFI_SUCCESS;; +} + +void +acpi_table_init(acpi_desc_table_hdr_t *p, char *sig, int siglen, int revision, int oem_revision) +{ + memcpy(p->signature, sig, siglen); + memcpy(p->oem_id, OEMID, 6); + memcpy(p->oem_table_id, sig, 4); + memcpy(p->oem_table_id+4, PRODUCT, 4); + p->revision = revision; + p->oem_revision = (revision<<16) + oem_revision; + p->creator_id = 1; + p->creator_revision = 1; +} + +void +acpi_checksum(acpi_desc_table_hdr_t *p, int length) +{ + u8 *cp, *cpe, checksum; + + p->checksum = 0; + p->length = length; + checksum = 0; + for (cp=(u8*)p, cpe=cp+p->length; cpchecksum = -checksum; +} + +void +acpi_checksum_rsdp20(acpi20_rsdp_t *p, int length) +{ + u8 *cp, *cpe, checksum; + + p->checksum = 0; + p->length = length; + checksum = 0; + for (cp=(u8*)p, cpe=cp+RSDP_CHECKSUM_LENGTH; cpchecksum = -checksum; +} + +int +nasid_present(int nasid) +{ + int cnode; + for (cnode=0; cnode= 1024) + arglen = 1023; + memcpy(cmd_line, args, arglen); + } else { + arglen = 0; + } + cmd_line[arglen] = '\0'; + /* + * For now, just bring up bash. + * If you want to execute all the startup scripts, delete the "init=..". + * You can also edit this line to pass other arguments to the kernel. 
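+ * For example (illustrative only -- any standard kernel command line
+ * arguments could be given here), the strcpy() below could be changed to
+ * something like:
+ *
+ *	strcpy(cmd_line, "init=/bin/bash console=ttyS0");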
+ */ + strcpy(cmd_line, "init=/bin/bash"); + + memset(efi_systab, 0, sizeof(efi_systab)); + efi_systab->hdr.signature = EFI_SYSTEM_TABLE_SIGNATURE; + efi_systab->hdr.revision = EFI_SYSTEM_TABLE_REVISION; + efi_systab->hdr.headersize = sizeof(efi_systab->hdr); + efi_systab->fw_vendor = __fwtab_pa(base_nasid, vendor); + efi_systab->fw_revision = 1; + efi_systab->runtime = __fwtab_pa(base_nasid, efi_runtime); + efi_systab->nr_tables = 2; + efi_systab->tables = __fwtab_pa(base_nasid, efi_tables); + memcpy(vendor, "S\0i\0l\0i\0c\0o\0n\0-\0G\0r\0a\0p\0h\0i\0c\0s\0\0", 40); + + efi_runtime->hdr.signature = EFI_RUNTIME_SERVICES_SIGNATURE; + efi_runtime->hdr.revision = EFI_RUNTIME_SERVICES_REVISION; + efi_runtime->hdr.headersize = sizeof(efi_runtime->hdr); + efi_runtime->get_time = __fwtab_pa(base_nasid, &efi_get_time); + efi_runtime->set_time = __fwtab_pa(base_nasid, &efi_unimplemented); + efi_runtime->get_wakeup_time = __fwtab_pa(base_nasid, &efi_unimplemented); + efi_runtime->set_wakeup_time = __fwtab_pa(base_nasid, &efi_unimplemented); + efi_runtime->set_virtual_address_map = __fwtab_pa(base_nasid, &efi_set_virtual_address_map); + efi_runtime->get_variable = __fwtab_pa(base_nasid, &efi_unimplemented); + efi_runtime->get_next_variable = __fwtab_pa(base_nasid, &efi_unimplemented); + efi_runtime->set_variable = __fwtab_pa(base_nasid, &efi_unimplemented); + efi_runtime->get_next_high_mono_count = __fwtab_pa(base_nasid, &efi_unimplemented); + efi_runtime->reset_system = __fwtab_pa(base_nasid, &efi_reset_system); + + efi_tables->guid = SAL_SYSTEM_TABLE_GUID; + efi_tables->table = __fwtab_pa(base_nasid, sal_systab); + efi_tables++; +#ifdef ACPI_1_0 + efi_tables->guid = ACPI_TABLE_GUID; + efi_tables->table = __fwtab_pa(base_nasid, acpi_rsdp); + efi_tables++; +#endif + efi_tables->guid = ACPI_20_TABLE_GUID; + efi_tables->table = __fwtab_pa(base_nasid, acpi20_rsdp); + efi_tables++; + + fix_function_pointer(&efi_unimplemented); + fix_function_pointer(&efi_get_time); + fix_function_pointer(&efi_success); + fix_function_pointer(&efi_reset_system); + fix_function_pointer(&efi_set_virtual_address_map); + +#ifdef ACPI_1_0 + /* fill in the ACPI system table - has a pointer to the ACPI table header */ + memcpy(acpi_rsdp->signature, "RSD PTR ", 8); + acpi_rsdp->rsdt = (struct acpi_rsdt*)__fwtab_pa(base_nasid, acpi_rsdt); + + acpi_table_init(&acpi_rsdt->header, ACPI_RSDT_SIG, ACPI_RSDT_SIG_LEN, 1, 1); + acpi_rsdt->header.length = sizeof(acpi_rsdt_t); + acpi_rsdt->entry_ptrs[0] = __fwtab_pa(base_nasid, acpi_sapic); + + memcpy(acpi_sapic->header.signature, "SPIC ", 4); + acpi_sapic->header.length = sizeof(acpi_sapic_t)+num_cpus*sizeof(acpi_entry_lsapic_t); + + for (cnode=0; cnodetype = ACPI_ENTRY_LOCAL_SAPIC; + acpi_lsapic->length = sizeof(acpi_entry_lsapic_t); + acpi_lsapic->acpi_processor_id = cnode*4+cpu; + acpi_lsapic->flags = LSAPIC_ENABLED|LSAPIC_PRESENT; +#if defined(CONFIG_IA64_SGI_SN1) + acpi_lsapic->eid = cpu; + acpi_lsapic->id = nasid; +#else + acpi_lsapic->eid = nasid&0xffff; + acpi_lsapic->id = (cpu<<4) | (nasid>>16); +#endif + acpi_lsapic++; + } + } +#endif + + + /* fill in the ACPI20 system table - has a pointer to the ACPI table header */ + memcpy(acpi20_rsdp->signature, "RSD PTR ", 8); + acpi20_rsdp->xsdt = (struct acpi_xsdt*)__fwtab_pa(base_nasid, acpi_xsdt); + acpi20_rsdp->revision = 2; + acpi_checksum_rsdp20(acpi20_rsdp, sizeof(acpi20_rsdp_t)); + + /* Set up the XSDT table - contains pointers to the other ACPI tables */ + acpi_table_init(&acpi_xsdt->header, ACPI_XSDT_SIG, ACPI_XSDT_SIG_LEN, 1, 
1); + acpi_xsdt->entry_ptrs[0] = __fwtab_pa(base_nasid, acpi_madt); + acpi_xsdt->entry_ptrs[1] = __fwtab_pa(base_nasid, acpi_slit); + acpi_xsdt->entry_ptrs[2] = __fwtab_pa(base_nasid, acpi_srat); + acpi_checksum(&acpi_xsdt->header, sizeof(acpi_xsdt_t) + 16); + + /* Set up the MADT table */ + acpi_table_init(&acpi_madt->header, ACPI_MADT_SIG, ACPI_MADT_SIG_LEN, 1, 1); + lsapic20 = (acpi20_entry_lsapic_t*) (acpi_madt + 1); + for (cnode=0; cnodetype = ACPI20_ENTRY_LOCAL_SAPIC; + lsapic20->length = sizeof(acpi_entry_lsapic_t); + lsapic20->acpi_processor_id = cnode*4+cpu; + lsapic20->flags = LSAPIC_ENABLED|LSAPIC_PRESENT; +#if defined(CONFIG_IA64_SGI_SN1) + lsapic20->eid = cpu; + lsapic20->id = nasid; +#else + lsapic20->eid = nasid&0xffff; + lsapic20->id = (cpu<<4) | (nasid>>16); +#endif + lsapic20 = (acpi20_entry_lsapic_t*) ((long)lsapic20+sizeof(acpi_entry_lsapic_t)); + } + } + acpi_checksum(&acpi_madt->header, (char*)lsapic20 - (char*)acpi_madt); + + /* Set up the SRAT table */ + acpi_table_init(&acpi_srat->header, ACPI_SRAT_SIG, ACPI_SRAT_SIG_LEN, ACPI_SRAT_REVISION, 1); + ptr = acpi_srat+1; + for (cnode=0; cnodetype = SRAT_MEMORY_STRUCTURE; + srat_memory_affinity->length = sizeof(srat_memory_affinity_t); + srat_memory_affinity->proximity_domain = PROXIMITY_DOMAIN(nasid); + srat_memory_affinity->base_addr_lo = 0; + srat_memory_affinity->length_lo = 0; +#if defined(CONFIG_IA64_SGI_SN1) + srat_memory_affinity->base_addr_hi = nasid<<1; + srat_memory_affinity->length_hi = SN1_NODE_SIZE>>32; +#else + srat_memory_affinity->base_addr_hi = (nasid<<6) | (3<<4); + srat_memory_affinity->length_hi = SN2_NODE_SIZE>>32; +#endif + srat_memory_affinity->memory_type = ACPI_ADDRESS_RANGE_MEMORY; + srat_memory_affinity->flags = SRAT_MEMORY_FLAGS_ENABLED; + } + + for (cnode=0; cnodetype = SRAT_CPU_STRUCTURE; + srat_cpu_affinity->length = sizeof(srat_cpu_affinity_t); + srat_cpu_affinity->proximity_domain = PROXIMITY_DOMAIN(nasid); + srat_cpu_affinity->flags = SRAT_CPU_FLAGS_ENABLED; +#if defined(CONFIG_IA64_SGI_SN1) + srat_cpu_affinity->apic_id = nasid; + srat_cpu_affinity->local_sapic_eid = cpu; +#else + srat_cpu_affinity->local_sapic_eid = nasid&0xffff; + srat_cpu_affinity->apic_id = (cpu<<4) | (nasid>>16); +#endif + } + } + acpi_checksum(&acpi_srat->header, (char*)ptr - (char*)acpi_srat); + + + /* Set up the SLIT table */ + acpi_table_init(&acpi_slit->header, ACPI_SLIT_SIG, ACPI_SLIT_SIG_LEN, ACPI_SLIT_REVISION, 1); + acpi_slit->localities = PROXIMITY_DOMAIN(max_nasid)+1; + cp=acpi_slit->entries; + memset(cp, 255, acpi_slit->localities*acpi_slit->localities); + + for (i=0; i<=max_nasid; i++) + for (j=0; j<=max_nasid; j++) + if (nasid_present(i) && nasid_present(j)) + *(cp+PROXIMITY_DOMAIN(i)*acpi_slit->localities+PROXIMITY_DOMAIN(j)) = 10 + MIN(254, 5*ABS(i-j)); + + cp = acpi_slit->entries + acpi_slit->localities*acpi_slit->localities; + acpi_checksum(&acpi_slit->header, cp - (char*)acpi_slit); + + + /* fill in the SAL system table: */ + memcpy(sal_systab->signature, "SST_", 4); + sal_systab->size = sizeof(*sal_systab); + sal_systab->sal_rev_minor = 1; + sal_systab->sal_rev_major = 0; + sal_systab->entry_count = 3; + + strcpy(sal_systab->oem_id, "SGI"); + strcpy(sal_systab->product_id, "SN1"); + + /* fill in an entry point: */ + sal_ed->type = SAL_DESC_ENTRY_POINT; + sal_ed->pal_proc = __fwtab_pa(base_nasid, pal_desc[0]); + sal_ed->sal_proc = __fwtab_pa(base_nasid, sal_desc[0]); + sal_ed->gp = __fwtab_pa(base_nasid, sal_desc[1]); + + /* kludge the PTC domain info */ + sal_ptc->type = SAL_DESC_PTC; + 
sal_ptc->num_domains = 0; + sal_ptc->domain_info = __fwtab_pa(base_nasid, sal_ptcdi); + cpus_found = 0; + last_domain = -1; + sal_ptcdi--; + for (cnode=0; cnodenum_domains++; + sal_ptcdi++; + sal_ptcdi->proc_count = 0; + sal_ptcdi->proc_list = __fwtab_pa(base_nasid, sal_ptclid); + last_domain = domain; + } + sal_ptcdi->proc_count++; + sal_ptclid->id = nasid; + sal_ptclid->eid = cpu; + sal_ptclid++; + cpus_found++; + } + } + } + + if (cpus_found != num_cpus) + FPROM_BUG(); + + /* Make the AP WAKEUP entry */ + sal_apwake->type = SAL_DESC_AP_WAKEUP; + sal_apwake->mechanism = IA64_SAL_AP_EXTERNAL_INT; + sal_apwake->vector = 18; + + for (checksum=0, cp=(char*)sal_systab; cp < (char *)efi_memmap; ++cp) + checksum += *cp; + sal_systab->checksum = -checksum; + + /* If the checksum is correct, the kernel tries to use the + * table. We dont build enough table & the kernel aborts. + * Note that the PROM hasd thhhe same problem!! + */ +#ifdef DOESNT_WORK + for (checksum=0, cp=(char*)acpi_rsdp, cpe=cp+RSDP_CHECKSUM_LENGTH; cpchecksum = -checksum; +#endif + + md = &efi_memmap[0]; + num_memmd = build_efi_memmap((void *)md, mdsize) ; + + bp = (struct ia64_boot_param*) __fwtab_pa(base_nasid, BOOT_PARAM_ADDR); + bp->efi_systab = __fwtab_pa(base_nasid, &fw_mem); + bp->efi_memmap = __fwtab_pa(base_nasid, efi_memmap); + bp->efi_memmap_size = num_memmd*mdsize; + bp->efi_memdesc_size = mdsize; + bp->efi_memdesc_version = 0x101; + bp->command_line = __fwtab_pa(base_nasid, cmd_line); + bp->console_info.num_cols = 80; + bp->console_info.num_rows = 25; + bp->console_info.orig_x = 0; + bp->console_info.orig_y = 24; + bp->fpswa = 0; + + /* + * Now pick the BSP & store it LID value in + * a global variable. Note if BSP is greater than last cpu, + * pick the last cpu. + */ + for (cnode=0; cnode 0) + continue; + return; + } + } +} diff -Nru a/arch/ia64/sn/fakeprom/klgraph_init.c b/arch/ia64/sn/fakeprom/klgraph_init.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/fakeprom/klgraph_init.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,287 @@ +/* $Id: klgraph_init.c,v 1.2 2001/12/05 16:58:41 jh Exp $ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All Rights Reserved. + */ + + +/* + * This is a temporary file that statically initializes the expected + * initial klgraph information that is normally provided by prom. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define SYNERGY_WIDGET ((char *)0xc0000e0000000000) +#define SYNERGY_SWIZZLE ((char *)0xc0000e0000000400) +#define HUBREG ((char *)0xc0000a0001e00000) +#define WIDGET0 ((char *)0xc0000a0000000000) +#define WIDGET4 ((char *)0xc0000a0000000004) + +#define SYNERGY_WIDGET ((char *)0xc0000e0000000000) +#define SYNERGY_SWIZZLE ((char *)0xc0000e0000000400) +#define HUBREG ((char *)0xc0000a0001e00000) +#define WIDGET0 ((char *)0xc0000a0000000000) + +#define convert(a,b,c) temp = (u64 *)a; *temp = b; temp++; *temp = c +void +klgraph_init(void) +{ + + u64 *temp; + + /* + * Initialize some hub/xbow registers that allows access to + * Xbridge etc. These are normally done in PROM. 
+ */ + + /* Write IOERR clear to clear the CRAZY bit in the status */ +#ifdef CONFIG_IA64_SGI_SN1 + *(volatile uint64_t *)0xc0000a0001c001f8 = (uint64_t)0xffffffff; + + /* set widget control register...setting bedrock widget id to b */ + *(volatile uint64_t *)0xc0000a0001c00020 = (uint64_t)0x801b; + + /* set io outbound widget access...allow all */ + *(volatile uint64_t *)0xc0000a0001c00110 = (uint64_t)0xff01; + + /* set io inbound widget access...allow all */ + *(volatile uint64_t *)0xc0000a0001c00118 = (uint64_t)0xff01; + + /* set io crb timeout to max */ + *(volatile uint64_t *)0xc0000a0001c003c0 = (uint64_t)0xffffff; + *(volatile uint64_t *)0xc0000a0001c003c0 = (uint64_t)0xffffff; + + /* set local block io permission...allow all */ + *(volatile uint64_t *)0xc0000a0001e04010 = (uint64_t)0xfffffffffffffff; + + /* clear any errors */ + /* clear_ii_error(); medusa should have cleared these */ + + /* set default read response buffers in bridge */ + *(volatile u32 *)0xc0000a000f000280L = 0xba98; + *(volatile u32 *)0xc0000a000f000288L = 0xba98; +#elif CONFIG_IA64_SGI_SN2 + *(volatile uint64_t *)0xc000000801c001f8 = (uint64_t)0xffffffff; + + /* set widget control register...setting bedrock widget id to a */ + *(volatile uint64_t *)0xc000000801c00020 = (uint64_t)0x801a; + + /* set io outbound widget access...allow all */ + *(volatile uint64_t *)0xc000000801c00110 = (uint64_t)0xff01; + + /* set io inbound widget access...allow all */ + *(volatile uint64_t *)0xc000000801c00118 = (uint64_t)0xff01; + + /* set io crb timeout to max */ + *(volatile uint64_t *)0xc000000801c003c0 = (uint64_t)0xffffff; + *(volatile uint64_t *)0xc000000801c003c0 = (uint64_t)0xffffff; + + /* set local block io permission...allow all */ +// [LB] *(volatile uint64_t *)0xc000000801e04010 = (uint64_t)0xfffffffffffffff; + + /* clear any errors */ + /* clear_ii_error(); medusa should have cleared these */ + + /* set default read response buffers in bridge */ +// [PI] *(volatile u32 *)0xc00000080f000280L = 0xba98; +// [PI] *(volatile u32 *)0xc00000080f000288L = 0xba98; +#endif /* CONFIG_IA64_SGI_SN1 */ + + /* + * kldir entries initialization - mankato + */ + convert(0x8000000000002000, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002010, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002020, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002030, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002040, 0x434d5f53505f5357, 0x0000000000030000); + convert(0x8000000000002050, 0x0000000000000000, 0x0000000000010000); + convert(0x8000000000002060, 0x0000000000000001, 0x0000000000000000); + convert(0x8000000000002070, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002080, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002090, 0x0000000000000000, 0x0000000000000000); + convert(0x80000000000020a0, 0x0000000000000000, 0x0000000000000000); + convert(0x80000000000020b0, 0x0000000000000000, 0x0000000000000000); + convert(0x80000000000020c0, 0x434d5f53505f5357, 0x0000000000000000); + convert(0x80000000000020d0, 0x0000000000002400, 0x0000000000000400); + convert(0x80000000000020e0, 0x0000000000000001, 0x0000000000000000); + convert(0x80000000000020f0, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002100, 0x434d5f53505f5357, 0x0000000000040000); + convert(0x8000000000002110, 0x0000000000000000, 0xffffffffffffffff); + convert(0x8000000000002120, 0x0000000000000001, 0x0000000000000000); + convert(0x8000000000002130, 0x0000000000000000, 
0x0000000000000000); + convert(0x8000000000002140, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002150, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002160, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002170, 0x0000000000000000, 0x0000000000000000); + convert(0x8000000000002180, 0x434d5f53505f5357, 0x0000000000020000); + convert(0x8000000000002190, 0x0000000000000000, 0x0000000000010000); + convert(0x80000000000021a0, 0x0000000000000001, 0x0000000000000000); + + /* + * klconfig entries initialization - mankato + */ + convert(0x0000000000030000, 0x00000000beedbabe, 0x0000004800000000); + convert(0x0000000000030010, 0x0003007000000018, 0x800002000f820178); + convert(0x0000000000030020, 0x80000a000f024000, 0x800002000f800000); + convert(0x0000000000030030, 0x0300fafa00012580, 0x00000000040f0000); + convert(0x0000000000030040, 0x0000000000000000, 0x0003097000030070); + convert(0x0000000000030050, 0x00030970000303b0, 0x0003181000033f70); + convert(0x0000000000030060, 0x0003d51000037570, 0x0000000000038330); + convert(0x0000000000030070, 0x0203110100030140, 0x0001000000000101); + convert(0x0000000000030080, 0x0900000000000000, 0x000000004e465e67); + convert(0x0000000000030090, 0x0003097000000000, 0x00030b1000030a40); + convert(0x00000000000300a0, 0x00030cb000030be0, 0x000315a0000314d0); + convert(0x00000000000300b0, 0x0003174000031670, 0x0000000000000000); + convert(0x0000000000030100, 0x000000000000001a, 0x3350490000000000); + convert(0x0000000000030110, 0x0000000000000037, 0x0000000000000000); + convert(0x0000000000030140, 0x0002420100030210, 0x0001000000000101); + convert(0x0000000000030150, 0x0100000000000000, 0xffffffffffffffff); + convert(0x0000000000030160, 0x00030d8000000000, 0x0000000000030e50); + convert(0x00000000000301c0, 0x0000000000000000, 0x0000000000030070); + convert(0x00000000000301d0, 0x0000000000000025, 0x424f490000000000); + convert(0x00000000000301e0, 0x000000004b434952, 0x0000000000000000); + convert(0x0000000000030210, 0x00027101000302e0, 0x00010000000e4101); + convert(0x0000000000030220, 0x0200000000000000, 0xffffffffffffffff); + convert(0x0000000000030230, 0x00030f2000000000, 0x0000000000030ff0); + convert(0x0000000000030290, 0x0000000000000000, 0x0000000000030140); + convert(0x00000000000302a0, 0x0000000000000026, 0x7262490000000000); + convert(0x00000000000302b0, 0x00000000006b6369, 0x0000000000000000); + convert(0x00000000000302e0, 0x0002710100000000, 0x00010000000f3101); + convert(0x00000000000302f0, 0x0500000000000000, 0xffffffffffffffff); + convert(0x0000000000030300, 0x000310c000000000, 0x0003126000031190); + convert(0x0000000000030310, 0x0003140000031330, 0x0000000000000000); + convert(0x0000000000030360, 0x0000000000000000, 0x0000000000030140); + convert(0x0000000000030370, 0x0000000000000029, 0x7262490000000000); + convert(0x0000000000030380, 0x00000000006b6369, 0x0000000000000000); + convert(0x0000000000030970, 0x0000000002010102, 0x0000000000000000); + convert(0x0000000000030980, 0x000000004e465e67, 0xffffffff00000000); + /* convert(0x00000000000309a0, 0x0000000000037570, 0x0000000100000000); */ + convert(0x00000000000309a0, 0x0000000000037570, 0xffffffff00000000); + convert(0x00000000000309b0, 0x0000000000030070, 0x0000000000000000); + convert(0x00000000000309c0, 0x000000000003f420, 0x0000000000000000); + convert(0x0000000000030a40, 0x0000000002010125, 0x0000000000000000); + convert(0x0000000000030a50, 0xffffffffffffffff, 0xffffffff00000000); + convert(0x0000000000030a70, 0x0000000000037b78, 
0x0000000000000000); + convert(0x0000000000030b10, 0x0000000002010125, 0x0000000000000000); + convert(0x0000000000030b20, 0xffffffffffffffff, 0xffffffff00000000); + convert(0x0000000000030b40, 0x0000000000037d30, 0x0000000000000001); + convert(0x0000000000030be0, 0x00000000ff010203, 0x0000000000000000); + convert(0x0000000000030bf0, 0xffffffffffffffff, 0xffffffff000000ff); + convert(0x0000000000030c10, 0x0000000000037ee8, 0x0100010000000200); + convert(0x0000000000030cb0, 0x00000000ff310111, 0x0000000000000000); + convert(0x0000000000030cc0, 0xffffffffffffffff, 0x0000000000000000); + convert(0x0000000000030d80, 0x0000000002010104, 0x0000000000000000); + convert(0x0000000000030d90, 0xffffffffffffffff, 0x00000000000000ff); + convert(0x0000000000030db0, 0x0000000000037f18, 0x0000000000000000); + convert(0x0000000000030dc0, 0x0000000000000000, 0x0003007000060000); + convert(0x0000000000030de0, 0x0000000000000000, 0x0003021000050000); + convert(0x0000000000030df0, 0x000302e000050000, 0x0000000000000000); + convert(0x0000000000030e30, 0x0000000000000000, 0x000000000000000a); + convert(0x0000000000030e50, 0x00000000ff00011a, 0x0000000000000000); + convert(0x0000000000030e60, 0xffffffffffffffff, 0x0000000000000000); + convert(0x0000000000030e80, 0x0000000000037fe0, 0x9e6e9e9e9e9e9e9e); + convert(0x0000000000030e90, 0x000000000000bc6e, 0x0000000000000000); + convert(0x0000000000030f20, 0x0000000002010205, 0x00000000d0020000); + convert(0x0000000000030f30, 0xffffffffffffffff, 0x0000000e0000000e); + convert(0x0000000000030f40, 0x000000000000000e, 0x0000000000000000); + convert(0x0000000000030f50, 0x0000000000038010, 0x00000000000007ff); + convert(0x0000000000030f70, 0x0000000000000000, 0x0000000022001077); + convert(0x0000000000030fa0, 0x0000000000000000, 0x000000000003f4a8); + convert(0x0000000000030ff0, 0x0000000000310120, 0x0000000000000000); + convert(0x0000000000031000, 0xffffffffffffffff, 0xffffffff00000002); + convert(0x0000000000031010, 0x000000000000000e, 0x0000000000000000); + convert(0x0000000000031020, 0x0000000000038088, 0x0000000000000000); + convert(0x00000000000310c0, 0x0000000002010205, 0x00000000d0020000); + convert(0x00000000000310d0, 0xffffffffffffffff, 0x0000000f0000000f); + convert(0x00000000000310e0, 0x000000000000000f, 0x0000000000000000); + convert(0x00000000000310f0, 0x00000000000380b8, 0x00000000000007ff); + convert(0x0000000000031120, 0x0000000022001077, 0x00000000000310a9); + convert(0x0000000000031130, 0x00000000580211c1, 0x000000008009104c); + convert(0x0000000000031140, 0x0000000000000000, 0x000000000003f4c0); + convert(0x0000000000031190, 0x0000000000310120, 0x0000000000000000); + convert(0x00000000000311a0, 0xffffffffffffffff, 0xffffffff00000003); + convert(0x00000000000311b0, 0x000000000000000f, 0x0000000000000000); + convert(0x00000000000311c0, 0x0000000000038130, 0x0000000000000000); + convert(0x0000000000031260, 0x0000000000110106, 0x0000000000000000); + convert(0x0000000000031270, 0xffffffffffffffff, 0xffffffff00000004); + convert(0x0000000000031280, 0x000000000000000f, 0x0000000000000000); + convert(0x00000000000312a0, 0x00000000ff110013, 0x0000000000000000); + convert(0x00000000000312b0, 0xffffffffffffffff, 0xffffffff00000000); + convert(0x00000000000312c0, 0x000000000000000f, 0x0000000000000000); + convert(0x00000000000312e0, 0x0000000000110012, 0x0000000000000000); + convert(0x00000000000312f0, 0xffffffffffffffff, 0xffffffff00000000); + convert(0x0000000000031300, 0x000000000000000f, 0x0000000000000000); + convert(0x0000000000031310, 0x0000000000038160, 
0x0000000000000000); + convert(0x0000000000031330, 0x00000000ff310122, 0x0000000000000000); + convert(0x0000000000031340, 0xffffffffffffffff, 0xffffffff00000005); + convert(0x0000000000031350, 0x000000000000000f, 0x0000000000000000); + convert(0x0000000000031360, 0x0000000000038190, 0x0000000000000000); + convert(0x0000000000031400, 0x0000000000310121, 0x0000000000000000); + convert(0x0000000000031400, 0x0000000000310121, 0x0000000000000000); + convert(0x0000000000031410, 0xffffffffffffffff, 0xffffffff00000006); + convert(0x0000000000031420, 0x000000000000000f, 0x0000000000000000); + convert(0x0000000000031430, 0x00000000000381c0, 0x0000000000000000); + convert(0x00000000000314d0, 0x00000000ff010201, 0x0000000000000000); + convert(0x00000000000314e0, 0xffffffffffffffff, 0xffffffff00000000); + convert(0x0000000000031500, 0x00000000000381f0, 0x000030430000ffff); + convert(0x0000000000031510, 0x000000000000ffff, 0x0000000000000000); + convert(0x00000000000315a0, 0x00000020ff000201, 0x0000000000000000); + convert(0x00000000000315b0, 0xffffffffffffffff, 0xffffffff00000001); + convert(0x00000000000315d0, 0x0000000000038240, 0x00003f3f0000ffff); + convert(0x00000000000315e0, 0x000000000000ffff, 0x0000000000000000); + convert(0x0000000000031670, 0x00000000ff010201, 0x0000000000000000); + convert(0x0000000000031680, 0xffffffffffffffff, 0x0000000100000002); + convert(0x00000000000316a0, 0x0000000000038290, 0x000030430000ffff); + convert(0x00000000000316b0, 0x000000000000ffff, 0x0000000000000000); + convert(0x0000000000031740, 0x00000020ff000201, 0x0000000000000000); + convert(0x0000000000031750, 0xffffffffffffffff, 0x0000000500000003); + convert(0x0000000000031770, 0x00000000000382e0, 0x00003f3f0000ffff); + convert(0x0000000000031780, 0x000000000000ffff, 0x0000000000000000); + + /* + * GDA initialization - mankato + */ + convert(0x8000000000002400, 0x0000000258464552, 0x000000000ead0000); + convert(0x8000000000002480, 0xffffffff00010000, 0xffffffffffffffff); + convert(0x8000000000002490, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x80000000000024a0, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x80000000000024b0, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x80000000000024c0, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x80000000000024d0, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x80000000000024e0, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x80000000000024f0, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002500, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002510, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002520, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002530, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002540, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002550, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002560, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002570, 0xffffffffffffffff, 0xffffffffffffffff); + convert(0x8000000000002580, 0x000000000000ffff, 0x0000000000000000); + +} + diff -Nru a/arch/ia64/sn/fakeprom/main.c b/arch/ia64/sn/fakeprom/main.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/fakeprom/main.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,125 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2000-2001 Silicon Graphics, Inc. 
All rights reserved. + */ + + + +#include +#include +#include + +extern void klgraph_init(void); +void bedrock_init(int); +void synergy_init(int, int); +void sys_fw_init (const char *args, int arglen, int bsp); + +volatile int bootmaster=0; /* Used to pick bootmaster */ +volatile int nasidmaster[128]={0}; /* Used to pick node/synergy masters */ +int init_done=0; +extern int bsp_lid; + +#define get_bit(b,p) (((*p)>>(b))&1) + +int +fmain(int lid, int bsp) { + int syn, nasid, cpu; + + /* + * First lets figure out who we are. This is done from the + * LID passed to us. + */ + +#ifdef CONFIG_IA64_SGI_SN1 + nasid = (lid>>24); + syn = (lid>>17)&1; + cpu = (lid>>16)&1; + + /* + * Now pick a synergy master to initialize synergy registers. + */ + if (test_and_set_bit(syn, &nasidmaster[nasid]) == 0) { + synergy_init(nasid, syn); + test_and_set_bit(syn+2, &nasidmaster[nasid]); + } else + while (get_bit(syn+2, &nasidmaster[nasid]) == 0); +#else + nasid = (lid>>16)&0xfff; + cpu = (lid>>28)&3; + syn = 0; +#endif + + /* + * Now pick a nasid master to initialize Bedrock registers. + */ + if (test_and_set_bit(8, &nasidmaster[nasid]) == 0) { + bedrock_init(nasid); + test_and_set_bit(9, &nasidmaster[nasid]); + } else + while (get_bit(9, &nasidmaster[nasid]) == 0); + + + /* + * Now pick a BSP & finish init. + */ + if (test_and_set_bit(0, &bootmaster) == 0) { + sys_fw_init(0, 0, bsp); + test_and_set_bit(1, &bootmaster); + } else + while (get_bit(1, &bootmaster) == 0); + + return (lid == bsp_lid); +} + + +void +bedrock_init(int nasid) +{ + nasid = nasid; /* to quiet gcc */ +#if 0 + /* + * Undef if you need fprom to generate a 1 node klgraph + * information .. only works for 1 node for nasid 0. + */ + klgraph_init(); +#endif +} + + +void +synergy_init(int nasid, int syn) +{ + long *base; + long off; + + /* + * Enable all FSB flashed interrupts. + * ZZZ - I'd really like defines for this...... + */ + base = (long*)0x80000e0000000000LL; /* base of synergy regs */ + for (off = 0x2a0; off < 0x2e0; off+=8) /* offset for VEC_MASK_{0-3}_A/B */ + *(base+off/8) = -1LL; + + /* + * Set the NASID in the FSB_CONFIG register. + */ + base = (long*)0x80000e0000000450LL; + *base = (long)((nasid<<16)|(syn<<9)); +} + + +/* Why isnt there a bcopy/memcpy in lib64.a */ + +void* +memcpy(void * dest, const void *src, size_t count) +{ + char *s, *se, *d; + + for(d=dest, s=(char*)src, se=s+count; s] <-p> | <-k> [] + -p Create PROM control file & links + -k Create LINUX control file & links + -c Control file name [Default: cf] + Path to directory that contains the linux or PROM files. + The directory can be any of the following: + (linux simulations) + worktree + worktree/linux + any directory with vmlinux, vmlinux.sym & fprom files + (prom simulations) + worktree + worktree/stand/arcs/IP37prom/dev + any directory with fw.bin & fw.sim files + + Simulations: + sim [-X ] [-o ] [-M] [] + -c Control file name [Default: cf] + -M Pipe output thru fmtmedusa + -o Output filename (copy of all commands/output) [Default: simout] + -X Specifies number of instructions to execute [Default: 0] + (Used only in auto test mode - not described here) + +Examples: + sim -p # create control file (cf) & links for prom simulations + sim -k # create control file (cf) & links for linux simulations + sim -p -c cfprom # create a prom control file (cfprom) only. No links are made. + + sim # run medusa using previously created links & + # control file (cf). 
+END +exit 1 +} + +# ----------------------- create control file header -------------------- +create_cf_header() { +cat <>$CF +# +# Template for a control file for running linux kernels under medusa. +# You probably want to make mods here but this is a good starting point. +# + +# Preferences +setenv cpu_stepping A +setenv exceptionPrint off +setenv interrupt_messages off +setenv lastPCsize 100000 +setenv low_power_mode on +setenv partialIntelChipSet on +setenv printIntelMessages off +setenv prom_write_action halt +setenv prom_write_messages on +setenv step_quantum 100 +setenv swizzling on +setenv tsconsole on +setenv uart_echo on +symbols on + +# IDE disk params +setenv diskCylinders 611 +setenv bootDrive C +setenv diskHeads 16 +setenv diskPath idedisk +setenv diskPresent 1 +setenv diskSpt 63 + +# Hardware config +setenv coherency_type nasid +setenv cpu_cache_type default +setenv synergy_cache_type syn_cac_64m_8w +setenv l4_uc_snoop off + +# Numalink config +setenv route_enable on +setenv network_type router # Select [xbar|router] +setenv network_warning 0xff + +END +} + + +# ------------------ create control file entries for linux simulations ------------- +create_cf_linux() { +cat <>$CF +# Kernel specific options +setenv calias_size 0 +setenv mca_on_memory_failure off +setenv LOADPC 0x00100000 # FPROM load address/entry point (8 digits!) +setenv symbol_table vmlinux.sym +load fprom +load vmlinux + +# Useful breakpoints to always have set. Add more if desired. +break 0xe000000000505e00 all # dispatch_to_fault_handler +break panic all # stop on panic +break die_if_kernel all # may as well stop + +END +} + +# ------------------ create control file entries for prom simulations --------------- +create_cf_prom() { + SYM2="" + ADDR="0x80000000ff800000" + [ "$EMBEDDED_LINUX" != "0" ] || SYM2="setenv symbol_table2 vmlinux.sym" + [ "$SIZE" = "8MB" ] || ADDR="0x80000000ffc00000" + cat <>$CF +# PROM specific options +setenv mca_on_memory_failure on +setenv LOADPC 0x80000000ffffffb0 +setenv promFile fw.bin +setenv promAddr $ADDR +setenv symbol_table fw.sym +$SYM2 + +# Useful breakpoints to always have set. Add more if desired. +break ivt_gexx all +break ivt_brk all +break PROM_Panic_Spin all +break PROM_Panic all +break PROM_C_Panic all +break fled_die all +break ResetNow all +break zzzbkpt all + +END +} + + +# ------------------ create control file entries for memory configuration ------------- +create_cf_memory() { +cat <>$CF +# CPU/Memory map format: +# setenv nodeN_memory_config 0xBSBSBSBS +# B=banksize (0=unused, 1=64M, 2=128M, .., 5-1G, c=8M, d=16M, e=32M) +# S=bank enable (0=both disable, 3=both enable, 2=bank1 enable, 1=bank0 enable) +# rightmost digits are for bank 0, the lowest address. +# setenv nodeN_nasid +# specifies the NASID for the node. This is used ONLY if booting the kernel. +# On PROM configurations, set to 0 - PROM will change it later. +# setenv nodeN_cpu_config +# Set bit number N to 1 to enable cpu N. Ex., a value of 5 enables cpu 0 & 2. +# +# Repeat the above 3 commands for each node. +# +# For kernel, default to 32MB. Although this is not a valid hardware configuration, +# it runs faster on medusa. For PROM, 64MB is smallest allowed value. 
+ +setenv node0_cpu_config 0x1 # Enable only cpu 0 on the node +END + +if [ $LINUX -eq 1 ] ; then +cat <>$CF +setenv node0_nasid 0 # cnode 0 has NASID 0 +setenv node0_memory_config 0xe1 # 32MB +END +else +cat <>$CF +setenv node0_memory_config 0x31 # 256MB +END +fi +} + +# -------------------- set links to linux files ------------------------- +set_linux_links() { + if [ -d $D/linux/arch ] ; then + D=$D/linux + elif [ -d $D/arch -o -e vmlinux.sym -o -e $D/vmlinux ] ; then + D=$D + else + err "cant determine directory for linux binaries" + fi + rm -rf vmlinux vmlinux.sym fprom + ln -s $D/vmlinux vmlinux + if [ -f $D/vmlinux.sym ] ; then + ln -s $D/vmlinux.sym vmlinux.sym + elif [ -f $D/System.map ] ; then + ln -s $D/System.map vmlinux.sym + fi + if [ -d $D/arch ] ; then + ln -s $D/arch/ia64/sn/fprom/fprom fprom + else + ln -s $D/fprom fprom + fi + echo " .. Created links to linux files" +} + +# -------------------- set links to prom files ------------------------- +set_prom_links() { + if [ -d $D/stand ] ; then + D=$D/stand/arcs/IP37prom/dev + elif [ -d $D/sal ] ; then + D=$D + else + err "cant determine directory for PROM binaries" + fi + SETUP="/tmp/tmp.$$" + rm -r -f $SETUP + sed 's/export/setenv/' < $D/../../../../.setup | sed 's/=/ /' >$SETUP + egrep -q '^ *setenv *PROMSIZE *8MB|^ *export' $SETUP + if [ $? -eq 0 ] ; then + SIZE="8MB" + else + SIZE="4MB" + fi + grep -q '^ *setenv *LAUNCH_VMLINUX' $SETUP + EMBEDDED_LINUX=$? + PRODUCT=`grep '^ *setenv *PRODUCT' $SETUP | cut -d" " -f3` + rm -f fw.bin fw.map fw.sym vmlinux vmlinux.sym fprom $SETUP + SDIR="${PRODUCT}${SIZE}.O" + BIN="${PRODUCT}ip37prom${SIZE}" + ln -s $D/$SDIR/$BIN.bin fw.bin + ln -s $D/$SDIR/$BIN.map fw.map + ln -s $D/$SDIR/$BIN.sym fw.sym + echo " .. Created links to $SIZE prom files" + if [ $EMBEDDED_LINUX -eq 0 ] ; then + ln -s $D/linux/vmlinux vmlinux + ln -s $D/linux/vmlinux.sym vmlinux.sym + if [ -d linux/arch ] ; then + ln -s $D/linux/arch/ia64/sn/fprom/fprom fprom + else + ln -s $D/linux/fprom fprom + fi + echo " .. Created links to embedded linux files in prom tree" + fi +} + +# --------------- start of shell script -------------------------------- +OUT="simout" +FMTMED=0 +STEPCNT=0 +PROM=0 +LINUX=0 +NCF="cf" +while getopts "HMX:c:o:pk" c ; do + case ${c} in + H) help;; + M) FMTMED=1;; + X) STEPCNT=${OPTARG};; + c) NCF=${OPTARG};; + k) PROM=0;LINUX=1;; + p) PROM=1;LINUX=0;; + o) OUT=${OPTARG};; + \?) exit 1;; + esac +done +shift `expr ${OPTIND} - 1` + +# Check if command is for creating control file and/or links to images. +if [ $PROM -eq 1 -o $LINUX -eq 1 ] ; then + CF=$NCF + [ ! -f $CF ] || err "wont overwrite an existing control file ($CF)" + if [ $# -gt 0 ] ; then + D=$1 + [ -d $D ] || err "cannot find directory $D" + [ $PROM -eq 0 ] || set_prom_links + [ $LINUX -eq 0 ] || set_linux_links + fi + create_cf_header + [ $PROM -eq 0 ] || create_cf_prom + [ $LINUX -eq 0 ] || create_cf_linux + [ ! -f ../idedisk ] || ln -s ../idedisk . + create_cf_memory + echo " .. Basic control file created (in $CF). You might want to edit" + echo " this file (at least, look at it)." + exit 0 +fi + +# Verify that the control file exists +CF=${1:-$NCF} +[ -f $CF ] || err "No control file exists. For help, type: $0 -H" + +# Build the .cf files from the user control file. The .cf file is +# identical except that the actual start & load addresses are inserted +# into the file. In addition, the FPROM commands for configuring memory +# and LIDs are generated. 
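+# For reference, the nodeN_memory_config encoding documented above (and consumed
+# by the awk program below, which turns it into FPROM "sm" commands) can be
+# decoded as in the following rough C sketch.  The 64MB<<(code-1) progression for
+# size codes 1-5 is an assumption based on the "1=64M, 2=128M, .., 5-1G" comment;
+# the c/d/e codes and the 0xe1/0x31 sample values come from the comments in this
+# script.
+#
+#	#include <stdio.h>
+#
+#	/* Bank-size code -> megabytes.  Codes 1..5 are assumed to follow the
+#	 * 64M, 128M, .., 1G progression (64MB << (code-1)); c/d/e are the
+#	 * 8/16/32MB values listed above.  Anything else is treated as unused. */
+#	static unsigned long bank_mb(unsigned int code)
+#	{
+#		if (code >= 0x1 && code <= 0x5)
+#			return 64UL << (code - 1);
+#		if (code == 0xc) return 8;
+#		if (code == 0xd) return 16;
+#		if (code == 0xe) return 32;
+#		return 0;
+#	}
+#
+#	/* Decode a 0xBSBSBSBS value: each byte is a <B><S> pair, rightmost byte
+#	 * describing the lowest-addressed banks.  B is the bank-size code, S
+#	 * enables bank 0 (bit 0) and/or bank 1 (bit 1) of the pair. */
+#	static unsigned long node_memory_mb(unsigned int memory_config)
+#	{
+#		unsigned long total = 0;
+#		int pair;
+#
+#		for (pair = 0; pair < 4; pair++) {
+#			unsigned int bs     = (memory_config >> (pair * 8)) & 0xff;
+#			unsigned int size   = bs >> 4;
+#			unsigned int enable = bs & 0xf;
+#
+#			if (enable & 1)		/* bank 0 of the pair */
+#				total += bank_mb(size);
+#			if (enable & 2)		/* bank 1 of the pair */
+#				total += bank_mb(size);
+#		}
+#		return total;
+#	}
+#
+#	int main(void)
+#	{
+#		/* The two values used by the generated control files. */
+#		printf("0xe1 -> %luMB\n", node_memory_mb(0xe1));	/* 32MB  */
+#		printf("0x31 -> %luMB\n", node_memory_mb(0x31));	/* 256MB */
+#		return 0;
+#	}
+#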
+ +rm -f .cf .cf1 .cf2 +awk ' +function strtonum(n) { + if (substr(n,1,2) != "0x") + return int(n) + n = substr(n,3) + r=0 + while (length(n) > 0) { + r = r*16+(index("0123456789abcdef", substr(n,1,1))-1) + n = substr(n,2) + } + return r + } +/^#/ {next} +/^$/ {next} +/^setenv *LOADPC/ {loadpc = $3; next} +/^setenv *node.._cpu_config/ {n=int(substr($2,5,2)); cpuconf[n] = strtonum($3); print; next} +/^setenv *node.._memory_config/ {n=int(substr($2,5,2)); memconf[n] = strtonum($3); print; next} +/^setenv *node.._nasid/ {n=int(substr($2,5,2)); nasid[n] = strtonum($3); print; next} +/^setenv *node._cpu_config/ {n=int(substr($2,5,1)); cpuconf[n] = strtonum($3); print; next} +/^setenv *node._memory_config/ {n=int(substr($2,5,1)); memconf[n] = strtonum($3); print; next} +/^setenv *node._nasid/ {n=int(substr($2,5,1)); nasid[n] = strtonum($3); print; next} + {print} +END { + # Generate the memmap info that starts at the beginning of + # the node the kernel was loaded on. + loadnasid = nasid[0] + cnode = 0 + for (i=0; i<128; i++) { + if (memconf[i] != "") { + printf "sm 0x%x%08x 0x%x%04x%04x\n", + 2*loadnasid, 8*cnodes+8, memconf[i], cpuconf[i], nasid[i] + cnodes++ + cpus += substr("0112122312232334", cpuconf[i]+1,1) + } + } + printf "sm 0x%x00000000 0x%x%08x\n", 2*loadnasid, cnodes, cpus + printf "setenv number_of_nodes %d\n", cnodes + + # Now set the starting PC for each cpu. + cnode = 0 + lowcpu=-1 + for (i=0; i<128; i++) { + if (memconf[i] != "") { + printf "setnode %d\n", cnode + conf = cpuconf[i] + for (j=0; j<4; j++) { + if (conf != int(conf/2)*2) { + printf "setcpu %d\n", j + if (length(loadpc) == 18) + printf "sr pc %s\n", loadpc + else + printf "sr pc 0x%x%s\n", 2*loadnasid, substr(loadpc,3) + if (lowcpu == -1) + lowcpu = j + } + conf = int(conf/2) + } + cnode++ + } + } + printf "setnode 0\n" + printf "setcpu %d\n", lowcpu + } +' <$CF >.cf + +# Now build the .cf1 & .cf2 control files. +CF2_LINES="^sm |^break |^run |^si |^quit |^symbols " +egrep "$CF2_LINES" .cf >.cf2 +egrep -v "$CF2_LINES" .cf >.cf1 +if [ $STEPCNT -ne 0 ] ; then + echo "s $STEPCNT" >>.cf2 + echo "lastpc 1000" >>.cf2 + echo "q" >>.cf2 +fi +if [ -f vmlinux.sym ] ; then + awk '/ _start$/ {print "sr g 9 0x" $3}' < vmlinux.sym >> .cf2 +fi +echo "script-on $OUT" >>.cf2 + +# Now start medusa.... +if [ $FMTMED -ne 0 ] ; then + $MEDUSA -system mpsn1 -c .cf1 -i .cf2 | fmtmedusa +elif [ $STEPCNT -eq 0 ] ; then + $MEDUSA -system mpsn1 -c .cf1 -i .cf2 +else + $MEDUSA -system mpsn1 -c .cf1 -i .cf2 2>&1 +fi diff -Nru a/arch/ia64/sn/fprom/Makefile b/arch/ia64/sn/fprom/Makefile --- a/arch/ia64/sn/fprom/Makefile Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,33 +0,0 @@ -# -# This file is subject to the terms and conditions of the GNU General Public -# License. See the file "COPYING" in the main directory of this archive -# for more details. -# -# Copyright (C) 2000 Silicon Graphics, Inc. -# Copyright (C) Jack Steiner (steiner@sgi.com) -# - -TOPDIR=../../../.. 
-HPATH = $(TOPDIR)/include - -LIB = ../../lib/lib.a - -OBJ=fpromasm.o main.o fw-emu.o fpmem.o -obj-y=fprom - -fprom: $(OBJ) - $(LD) -static -Tfprom.lds -o fprom $(OBJ) $(LIB) - -comma := , - -.S.o: - $(CC) -D__ASSEMBLY__ $(AFLAGS) $(AFLAGS_KERNEL) -c -o $*.o $< -.c.o: - $(CC) $(CFLAGS) -DKBUILD_BASENAME=$(subst $(comma),_,$(subst -,_,$(*F))) $(CFLAGS_KERNEL) -c -o $*.o $< - -clean: - rm -f *.o fprom - - -include $(TOPDIR)/Rules.make - diff -Nru a/arch/ia64/sn/fprom/README b/arch/ia64/sn/fprom/README --- a/arch/ia64/sn/fprom/README Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,85 +0,0 @@ -This directory contains the files required to build -the fake PROM image that is currently being used to -boot IA64 kernels running under the SGI Medusa kernel. - -The FPROM currently provides the following functions: - - - PAL emulation for all PAL calls we've made so far. - - SAL emulation for all SAL calls we've made so far. - - EFI emulation for all EFI calls we've made so far. - - builds the "ia64_bootparam" structure that is - passed to the kernel from SAL. This structure - shows the cpu & memory configurations. - - supports medusa boottime options for changing - the number of cpus present - - supports medusa boottime options for changing - the memory configuration. - - - -At some point, this fake PROM will be replaced by the -real PROM. - - - - -To build a fake PROM, cd to this directory & type: - - make - -This will (or should) build a fake PROM named "fprom". - - - - -Use this fprom image when booting the Medusa simulator. The -control file used to boot Medusa should include the -following lines: - - load fprom - load vmlinux - sr pc 0x100000 - sr g 9
#(currently 0xe000000000520000) - -NOTE: There is a script "runsim" in this directory that can be used to -simplify setting up an environment for running under Medusa. - - - - -The following parameters may be passed to the fake PROM to -control the PAL/SAL/EFI parameters passed to the kernel: - - GR[8] = # of cpus - GR[9] = address of primary entry point into the kernel - GR[20] = memory configuration for node 0 - GR[21] = memory configuration for node 1 - GR[22] = memory configuration for node 2 - GR[23] = memory configuration for node 3 - - -Registers GR[20] - GR[23] contain information to specify the -amount of memory present on nodes 0-3. - - - if nothing is specified (all registers are 0), the configuration - defaults to 8 MB on node 0. - - - a mem config entry for node N is passed in GR[20+N] - - - a mem config entry consists of 8 hex digits. Each digit gives the - amount of physical memory available on the node starting at - 1GB*, where dn is the digit number. The amount of memory - is 8MB*2**. (If = 0, the memory size is 0). - - SN1 doesnt support dimms this small but small memory systems - boot faster on Medusa. - - - -An example helps a lot. The following specifies that node 0 has -physical memory 0 to 8MB and 1GB to 1GB+32MB, and that node 1 has -64MB starting at address 0 of the node which is 8GB. - - gr[20] = 0x21 # 0 to 8MB, 1GB to 1GB+32MB - gr[21] = 0x4 # 8GB to 8GB+64MB - diff -Nru a/arch/ia64/sn/fprom/fpmem.c b/arch/ia64/sn/fprom/fpmem.c --- a/arch/ia64/sn/fprom/fpmem.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,200 +0,0 @@ -/* - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) - */ - - -/* - * FPROM EFI memory descriptor build routines - * - * - Routines to build the EFI memory descriptor map - * - Should also be usable by the SGI SN1 prom to convert - * klconfig to efi_memmap - */ - -#include -#include "fpmem.h" - -/* - * args points to a layout in memory like this - * - * 32 bit 32 bit - * - * numnodes numcpus - * - * 16 bit 16 bit 32 bit - * nasid0 cpuconf membankdesc0 - * nasid1 cpuconf membankdesc1 - * . - * . - * . - * . - * . - */ - -sn_memmap_t *sn_memmap ; -sn_config_t *sn_config ; - -/* - * There is a hole in the node 0 address space. 
Dont put it - * in the memory map - */ -#define NODE0_HOLE_SIZE (20*MB) -#define NODE0_HOLE_END (4UL*GB) - -#define MB (1024*1024) -#define GB (1024*MB) -#define KERNEL_SIZE (4*MB) -#define PROMRESERVED_SIZE (1*MB) -#define MD_BANK_SHFT 30 - -#define TO_NODE(_n, _x) (((long)_n<<33L) | (long)_x) - -/* - * For SN, this may not take an arg and gets the numnodes from - * the prom variable or by traversing klcfg or promcfg - */ -int -GetNumNodes(void) -{ - return sn_config->nodes; -} - -int -GetNumCpus(void) -{ - return sn_config->cpus; -} - -/* For SN1, get the index th nasid */ - -int -GetNasid(int index) -{ - return sn_memmap[index].nasid ; -} - -node_memmap_t -GetMemBankInfo(int index) -{ - return sn_memmap[index].node_memmap ; -} - -int -IsCpuPresent(int cnode, int cpu) -{ - return sn_memmap[cnode].cpuconfig & (1<type = type; - md->phys_addr = paddr; - md->virt_addr = 0; - md->num_pages = numbytes >> 12; - md->attribute = EFI_MEMORY_WB; -} - -int -build_efi_memmap(void *md, int mdsize) -{ - int numnodes = GetNumNodes() ; - int cnode,bank ; - int nasid ; - node_memmap_t membank_info ; - int bsize; - int count = 0 ; - long paddr, hole, numbytes; - - - for (cnode=0;cnode - -typedef struct sn_memmap_s -{ - short nasid ; - short cpuconfig; - node_memmap_t node_memmap ; -} sn_memmap_t ; - -typedef struct sn_config_s -{ - int cpus; - int nodes; - sn_memmap_t memmap[1]; /* start of array */ -} sn_config_t; - - -extern void build_init(unsigned long); -extern int build_efi_memmap(void *, int); -extern int GetNumNodes(void); -extern int GetNumCpus(void); -extern int IsCpuPresent(int, int); -extern int GetNasid(int); diff -Nru a/arch/ia64/sn/fprom/fprom.lds b/arch/ia64/sn/fprom/fprom.lds --- a/arch/ia64/sn/fprom/fprom.lds Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,96 +0,0 @@ - -OUTPUT_FORMAT("elf64-ia64-little") -OUTPUT_ARCH(ia64) -ENTRY(_start) -SECTIONS -{ - v = 0x0000000000000000 ; /* this symbol is here to make debugging with kdb easier... */ - - . = (0x000000000000000 + 0x100000) ; - - _text = .; - .text : AT(ADDR(.text) - 0x0000000000000000 ) - { - *(__ivt_section) - /* these are not really text pages, but the zero page needs to be in a fixed location: */ - *(__special_page_section) - __start_gate_section = .; - *(__gate_section) - __stop_gate_section = .; - *(.text) - } - - /* Global data */ - _data = .; - - .rodata : AT(ADDR(.rodata) - 0x0000000000000000 ) - { *(.rodata) *(.rodata.*) } - .opd : AT(ADDR(.opd) - 0x0000000000000000 ) - { *(.opd) } - .data : AT(ADDR(.data) - 0x0000000000000000 ) - { *(.data) *(.gnu.linkonce.d*) CONSTRUCTORS } - - __gp = ALIGN (8) + 0x200000; - - .got : AT(ADDR(.got) - 0x0000000000000000 ) - { *(.got.plt) *(.got) } - /* We want the small data sections together, so single-instruction offsets - can access them all, and initialized data all before uninitialized, so - we can shorten the on-disk segment size. */ - .sdata : AT(ADDR(.sdata) - 0x0000000000000000 ) - { *(.sdata) } - _edata = .; - _bss = .; - .sbss : AT(ADDR(.sbss) - 0x0000000000000000 ) - { *(.sbss) *(.scommon) } - .bss : AT(ADDR(.bss) - 0x0000000000000000 ) - { *(.bss) *(COMMON) } - . = ALIGN(64 / 8); - _end = .; - - /* Sections to be discarded */ - /DISCARD/ : { - *(.text.exit) - *(.data.exit) - } - - /* Stabs debugging sections. */ - .stab 0 : { *(.stab) } - .stabstr 0 : { *(.stabstr) } - .stab.excl 0 : { *(.stab.excl) } - .stab.exclstr 0 : { *(.stab.exclstr) } - .stab.index 0 : { *(.stab.index) } - .stab.indexstr 0 : { *(.stab.indexstr) } - /* DWARF debug sections. 
- Symbols in the DWARF debugging sections are relative to the beginning - of the section so we begin them at 0. */ - /* DWARF 1 */ - .debug 0 : { *(.debug) } - .line 0 : { *(.line) } - /* GNU DWARF 1 extensions */ - .debug_srcinfo 0 : { *(.debug_srcinfo) } - .debug_sfnames 0 : { *(.debug_sfnames) } - /* DWARF 1.1 and DWARF 2 */ - .debug_aranges 0 : { *(.debug_aranges) } - .debug_pubnames 0 : { *(.debug_pubnames) } - /* DWARF 2 */ - .debug_info 0 : { *(.debug_info) } - .debug_abbrev 0 : { *(.debug_abbrev) } - .debug_line 0 : { *(.debug_line) } - .debug_frame 0 : { *(.debug_frame) } - .debug_str 0 : { *(.debug_str) } - .debug_loc 0 : { *(.debug_loc) } - .debug_macinfo 0 : { *(.debug_macinfo) } - /* SGI/MIPS DWARF 2 extensions */ - .debug_weaknames 0 : { *(.debug_weaknames) } - .debug_funcnames 0 : { *(.debug_funcnames) } - .debug_typenames 0 : { *(.debug_typenames) } - .debug_varnames 0 : { *(.debug_varnames) } - /* These must appear regardless of . */ - /* Discard them for now since Intel SoftSDV cannot handle them. - .comment 0 : { *(.comment) } - .note 0 : { *(.note) } - */ - /DISCARD/ : { *(.comment) } - /DISCARD/ : { *(.note) } -} diff -Nru a/arch/ia64/sn/fprom/fpromasm.S b/arch/ia64/sn/fprom/fpromasm.S --- a/arch/ia64/sn/fprom/fpromasm.S Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,314 +0,0 @@ -/* - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * (Code copied from or=ther files) - * Copyright (C) 1998-2000 Hewlett-Packard Co - * Copyright (C) 1998-2000 David Mosberger-Tang - * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) - */ - - - -#define __ASSEMBLY__ 1 -#include "asm/processor.h" - -/* - * This file contains additional set up code that is needed to get going on - * Medusa. This code should disappear once real hw is available. - * - * On entry to this routine, the following register values are assumed: - * - * gr[8] - BSP cpu - * pr[9] - kernel entry address - * - * NOTE: - * This FPROM may be loaded/executed at an address different from the - * address that it was linked at. The FPROM is linked to run on node 0 - * at address 0x100000. If the code in loaded into another node, it - * must be loaded at offset 0x100000 of the node. In addition, the - * FPROM does the following things: - * - determine the base address of the node it is loaded on - * - add the node base to _gp. - * - add the node base to all addresses derived from "movl" - * instructions. (I couldnt get GPREL addressing to work) - * (maybe newer versions of the tools will support this) - * - scan the .got section and add the node base to all - * pointers in this section. - * - add the node base to all physical addresses in the - * SAL/PAL/EFI table built by the C code. 
(This is done - * in the C code - not here) - * - add the node base to the TLB entries for vmlinux - */ - -#define KERNEL_BASE 0xe000000000000000 -#define PAGESIZE_256M 28 - -/* - * ar.k0 gets set to IOPB_PA value, on 460gx chipset it should - * be 0x00000ffffc000000, but on snia we use the (inverse swizzled) - * IOSPEC_BASE value - */ -#define IOPB_PA 0x00000a0000000000 /* inv swizzle IOSPEC_BASE */ - -#define RR_RID 8 - - - -// ==================================================================================== - .text - .align 16 - .global _start - .proc _start -_start: - -// Setup psr and rse for system init - mov psr.l = r0;; - srlz.d;; - invala - mov ar.rsc = r0;; - loadrs - ;; - -// Set CALIAS size to zero. We dont use it. - movl r24=0x80000a0001000028;; // BR_PI_CALIAS_SIZE - st8 [r24]=r0 - -// Isolate node number we are running on. - mov r6 = ip;; - shr r5 = r6,33;; // r5 = node number - shl r6 = r5,33 // r6 = base memory address of node - -// Set & relocate gp. - movl r1= __gp;; // Add base memory address - add r1 = r1,r6 // Relocate to boot node - -// Lets figure out who we are & put it in the LID register. -// The BR_PI_SELF_CPU_NUM register gives us a value of 0-3. -// This identifies the cpu on the node. -// Merge the cpu number with the NASID to generate the LID. - movl r24=0x80000a0001000020;; // BR_PI_SELF_CPU_NUM - ld8 r25=[r24] // Fetch PI_SELF - movl r27=0x80000a0001600000;; // Fetch REVID to get local NASID - ld8 r27=[r27];; - extr.u r27=r27,32,8 - shl r26=r25,16;; // Align local cpu# to lid.eid - shl r27=r27,24;; // Align NASID to lid.id - or r26=r26,r27;; // build the LID - mov cr.lid=r26 // Now put in in the LID register - - movl r2=FPSR_DEFAULT;; - mov ar.fpsr=r2 - movl sp = bootstacke-16;; - add sp = sp,r6 // Relocate to boot node - -// Save the NASID that we are loaded on. - movl r2=base_nasid;; // Save base_nasid for C code - add r2 = r2,r6;; // Relocate to boot node - st8 [r2]=r5 // Uncond st8 - same on all cpus - -// Save the kernel entry address. It is passed in r9 on one of -// the cpus. - movl r2=bsp_entry_pc - cmp.ne p6,p0=r9,r0;; - add r2 = r2,r6;; // Relocate to boot node -(p6) st8 [r2]=r9 // Uncond st8 - same on all cpus - - -// The following can ONLY be done by 1 cpu. Lets set a lock - the -// cpu that gets it does the initilization. The rest just spin waiting -// til initilization is complete. - movl r22 = initlock;; - add r22 = r22,r6 // Relocate to boot node - mov r23 = 1;; - xchg8 r23 = [r22],r23;; - cmp.eq p6,p0 = 0,r23 -(p6) br.cond.spnt.few init -1: ld4 r23 = [r22];; - cmp.eq p6,p0 = 1,r23 -(p6) br.cond.sptk 1b - br initx - -// Add base address of node memory to each pointer in the .got section. -init: movl r16 = _GLOBAL_OFFSET_TABLE_;; - add r16 = r16,r6;; // Relocate to boot node -1: ld8 r17 = [r16];; - cmp.eq p6,p7=0,r17 -(p6) br.cond.sptk.few.clr 2f;; - add r17 = r17,r6;; // Relocate to boot node - st8 [r16] = r17,8 - br 1b -2: - mov r23 = 2;; // All done, release the spinning cpus - st4 [r22] = r23 -initx: - -// -// I/O-port space base address: -// - movl r2 = IOPB_PA;; - mov ar.k0 = r2 - - -// Now call main & pass it the current LID value. 
- alloc r0=ar.pfs,0,0,2,0 - mov r32=r26 - mov r33=r8;; - br.call.sptk.few rp=fmain - -// Initialize Region Registers -// - mov r10 = r0 - mov r2 = (13<<2) - mov r3 = r0;; -1: cmp4.gtu p6,p7 = 7, r3 - dep r10 = r3, r10, 61, 3 - dep r2 = r3, r2, RR_RID, 4;; -(p7) dep r2 = 0, r2, 0, 1;; -(p6) dep r2 = -1, r2, 0, 1;; - mov rr[r10] = r2 - add r3 = 1, r3;; - srlz.d;; - cmp4.gtu p6,p0 = 8, r3 -(p6) br.cond.sptk.few.clr 1b - -// -// Return value indicates if we are the BSP or AP. -// 1 = BSP, 0 = AP - mov cr.tpr=r0;; - cmp.eq p6,p0=r8,r0 -(p6) br.cond.spnt slave - -// -// Initialize the protection key registers with only pkr[0] = valid. -// -// Should be initialized in accordance with the OS. -// - mov r2 = 1 - mov r3 = r0;; - mov pkr[r3] = r2;; - srlz.d;; - mov r2 = r0 - -1: add r3 = r3, r0, 1;; // increment PKR - cmp.gtu p6, p0 = 16, r3;; -(p6) mov pkr[r3] = r2 -(p6) br.cond.sptk.few.clr 1b - - mov ar.rnat = r0 // clear RNAT register - -// -// Setup system address translation for kernel -// -// Note: The setup of Kernel Virtual address space can be done by the -// C code of the boot loader. -// -// - -#define LINUX_PAGE_OFFSET 0xe000000000000000 -#define ITIR(key, ps) ((key<<8) | (ps<<2)) -#define ITRGR(ed,ar,ma) ((ed<<52) | (ar<<9) | (ma<<2) | 0x61) - -#define AR_RX 1 // RX permission -#define AR_RW 4 // RW permission -#define MA_WB 0 // WRITEBACK memory attribute - -#define TLB_PAGESIZE 28 // Use 256MB pages for now. - mov r16=r5 - -// -// text section -// - movl r2 = LINUX_PAGE_OFFSET;; // Set up IFA with VPN of linux - mov cr.ifa = r2 - movl r3 = ITIR(0,TLB_PAGESIZE);; // Set ITIR to default pagesize - mov cr.itir = r3 - - shl r4 = r16,33;; // physical addr of start of node - movl r5 = ITRGR(1,AR_RX,MA_WB);; // TLB attributes - or r10=r4,r5;; - - itr.i itr[r0] = r10;; // Dropin ITR entry - srlz.i;; - -// -// data section -// - movl r2 = LINUX_PAGE_OFFSET;; // Set up IFA with VPN of linux - mov cr.ifa = r2 - movl r3 = ITIR(0,TLB_PAGESIZE);; // Set ITIR to default pagesize - mov cr.itir = r3 - - shl r4 = r16,33;; // physical addr of start of node - movl r5 = ITRGR(1,AR_RW,MA_WB);; // TLB attributes - or r10=r4,r5;; - - itr.d dtr[r0] = r10;; // Dropin DTR entry - srlz.d;; - - - - -// -// Turn on address translation, interrupt collection, psr.ed, protection key. -// Interrupts (PSR.i) are still off here. -// - - movl r3 = ( IA64_PSR_BN | \ - IA64_PSR_AC | \ - IA64_PSR_IT | \ - IA64_PSR_DB | \ - IA64_PSR_DA | \ - IA64_PSR_RT | \ - IA64_PSR_DT | \ - IA64_PSR_IC \ - ) - ;; - mov cr.ipsr = r3 - -// -// Go to kernel C startup routines -// Need to do a "rfi" in order set "it" and "ed" bits in the PSR. -// This is the only way to set them. - - movl r2=bsp_entry_pc;; - add r2 = r2,r6;; // Relocate to boot node - ld8 r2=[r2];; - mov cr.iip = r2 - srlz.d;; - rfi;; - .endp _start - -// Slave processors come here to spin til they get an interrupt. Then they launch themselves to -// the place ap_entry points. No initialization is necessary - the kernel makes no -// assumptions about state on this entry. -// Note: should verify that the interrupt we got was really the ap_wakeup -// interrupt but this should not be an issue on medusa -slave: - nop.i 0x8beef // Medusa - put cpu to sleep til interrupt occurs - mov r8=cr.irr0;; // Check for interrupt pending. - cmp.eq p6,p0=r8,r0 -(p6) br.cond.sptk slave;; - - mov r8=cr.ivr;; // Got one. 
Must read ivr to accept it - srlz.d;; - mov cr.eoi=r0;; // must write eoi to clear - movl r8=ap_entry;; // now jump to kernel entry - add r8 = r8,r6;; // Relocate to boot node - ld8 r9=[r8],8;; - ld8 r1=[r8] - mov b0=r9;; - br b0 - -// Here is the kernel stack used for the fake PROM - .bss - .align 16384 -bootstack: - .skip 16384 -bootstacke: -initlock: - data4 diff -Nru a/arch/ia64/sn/fprom/fw-emu.c b/arch/ia64/sn/fprom/fw-emu.c --- a/arch/ia64/sn/fprom/fw-emu.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,524 +0,0 @@ -/* - * PAL & SAL emulation. - * - * Copyright (C) 1998-2000 Hewlett-Packard Co - * Copyright (C) 1998-2000 David Mosberger-Tang - * - * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) - */ -#include -#include -#include -#include -#include -#include "fpmem.h" - -#define MB (1024*1024UL) -#define GB (MB*1024UL) - -#define FPROM_BUG() do {while (1);} while (0) -#define MAX_NODES 128 -#define MAX_LSAPICS 512 -#define MAX_CPUS 512 -#define MAX_CPUS_NODE 4 -#define CPUS_PER_NODE 4 -#define CPUS_PER_FSB 2 -#define CPUS_PER_FSB_MASK (CPUS_PER_FSB-1) - -#define NUM_EFI_DESCS 2 - -typedef union ia64_nasid_va { - struct { - unsigned long off : 33; /* intra-region offset */ - unsigned long nasid : 7; /* NASID */ - unsigned long off2 : 21; /* fill */ - unsigned long reg : 3; /* region number */ - } f; - unsigned long l; - void *p; -} ia64_nasid_va; - -typedef struct { - unsigned long pc; - unsigned long gp; -} func_ptr_t; - -#define IS_VIRTUAL_MODE() ({struct ia64_psr psr; asm("mov %0=psr" : "=r"(psr)); psr.dt;}) -#define ADDR_OF(p) (IS_VIRTUAL_MODE() ? ((void*)((long)(p)+PAGE_OFFSET)) : ((void*) (p))) -#define __fwtab_pa(n,x) ({ia64_nasid_va _v; _v.l = (long) (x); _v.f.nasid = (x) ? (n) : 0; _v.f.reg = 0; _v.l;}) - -/* - * The following variables are passed thru registersfrom the configuration file and - * are set via the _start function. - */ -long base_nasid; -long num_cpus; -long bsp_entry_pc=0; -long num_nodes; -long app_entry_pc; -int bsp_lid; -func_ptr_t ap_entry; - - -static efi_runtime_services_t *efi_runtime_p; -static char fw_mem[( sizeof(efi_system_table_t) - + sizeof(efi_runtime_services_t) - + NUM_EFI_DESCS*sizeof(efi_config_table_t) - + sizeof(struct ia64_sal_systab) - + sizeof(struct ia64_sal_desc_entry_point) - + sizeof(struct ia64_sal_desc_ap_wakeup) - + sizeof(acpi_rsdp_t) - + sizeof(acpi_rsdt_t) - + sizeof(acpi_sapic_t) - + MAX_LSAPICS*(sizeof(acpi_entry_lsapic_t)) - + (1+8*MAX_NODES)*(sizeof(efi_memory_desc_t)) - + sizeof(ia64_sal_desc_ptc_t) + - + MAX_NODES*sizeof(ia64_sal_ptc_domain_info_t) + - + MAX_CPUS*sizeof(ia64_sal_ptc_domain_proc_entry_t) + - + 1024)] __attribute__ ((aligned (8))); - -/* - * Very ugly, but we need this in the simulator only. Once we run on - * real hw, this can all go away. 
- */ -extern void pal_emulator_static (void); - -asm (" - .text - .proc pal_emulator_static -pal_emulator_static: - mov r8=-1;; - cmp.eq p6,p7=6,r28;; /* PAL_PTCE_INFO */ -(p7) br.cond.sptk.few 1f - ;; - mov r8=0 /* status = 0 */ - movl r9=0x500000000 /* tc.base */ - movl r10=0x0000000200000003 /* count[0], count[1] */ - movl r11=0x1000000000002000 /* stride[0], stride[1] */ - br.cond.sptk.few rp - -1: cmp.eq p6,p7=14,r28;; /* PAL_FREQ_RATIOS */ -(p7) br.cond.sptk.few 1f;; - mov r8=0 /* status = 0 */ - movl r9 =0x100000064 /* proc_ratio (1/100) */ - movl r10=0x100000100 /* bus_ratio<<32 (1/256) */ - movl r11=0x10000000a /* itc_ratio<<32 (1/100) */ - -1: cmp.eq p6,p7=22,r28;; /* PAL_MC_DRAIN */ -(p7) br.cond.sptk.few 1f;; - mov r8=0 - br.cond.sptk.few rp - -1: cmp.eq p6,p7=23,r28;; /* PAL_MC_EXPECTED */ -(p7) br.cond.sptk.few 1f;; - mov r8=0 - br.cond.sptk.few rp - -1: br.cond.sptk.few rp - .endp pal_emulator_static\n"); - - -static efi_status_t -efi_get_time (efi_time_t *tm, efi_time_cap_t *tc) -{ - if (tm) { - memset(tm, 0, sizeof(*tm)); - tm->year = 2000; - tm->month = 2; - tm->day = 13; - tm->hour = 10; - tm->minute = 11; - tm->second = 12; - } - - if (tc) { - tc->resolution = 10; - tc->accuracy = 12; - tc->sets_to_zero = 1; - } - - return EFI_SUCCESS; -} - -static void -efi_reset_system (int reset_type, efi_status_t status, unsigned long data_size, efi_char16_t *data) -{ - while(1); /* Is there a pseudo-op to stop medusa */ -} - -static efi_status_t -efi_success (void) -{ - return EFI_SUCCESS; -} - -static efi_status_t -efi_unimplemented (void) -{ - return EFI_UNSUPPORTED; -} - -static long -sal_emulator (long index, unsigned long in1, unsigned long in2, - unsigned long in3, unsigned long in4, unsigned long in5, - unsigned long in6, unsigned long in7) -{ - register long r9 asm ("r9") = 0; - register long r10 asm ("r10") = 0; - register long r11 asm ("r11") = 0; - long status; - - /* - * Don't do a "switch" here since that gives us code that - * isn't self-relocatable. - */ - status = 0; - if (index == SAL_FREQ_BASE) { - switch (in1) { - case SAL_FREQ_BASE_PLATFORM: - r9 = 500000000; - break; - - case SAL_FREQ_BASE_INTERVAL_TIMER: - /* - * Is this supposed to be the cr.itc frequency - * or something platform specific? The SAL - * doc ain't exactly clear on this... - */ - r9 = 700000000; - break; - - case SAL_FREQ_BASE_REALTIME_CLOCK: - r9 = 1; - break; - - default: - status = -1; - break; - } - } else if (index == SAL_SET_VECTORS) { - if (in1 == SAL_VECTOR_OS_BOOT_RENDEZ) { - func_ptr_t *fp; - fp = ADDR_OF(&ap_entry); - fp->pc = in2; - fp->gp = in3; - } else { - status = -1; - } - ; - } else if (index == SAL_GET_STATE_INFO) { - ; - } else if (index == SAL_GET_STATE_INFO_SIZE) { - ; - } else if (index == SAL_CLEAR_STATE_INFO) { - ; - } else if (index == SAL_MC_RENDEZ) { - ; - } else if (index == SAL_MC_SET_PARAMS) { - ; - } else if (index == SAL_CACHE_FLUSH) { - ; - } else if (index == SAL_CACHE_INIT) { - ; - } else if (index == SAL_UPDATE_PAL) { - ; - } else { - status = -1; - } - asm volatile ("" :: "r"(r9), "r"(r10), "r"(r11)); - return status; -} - - -/* - * This is here to work around a bug in egcs-1.1.1b that causes the - * compiler to crash (seems like a bug in the new alias analysis code. - */ -void * -id (long addr) -{ - return (void *) addr; -} - - -/* - * Fix the addresses in a function pointer by adding base node address - * to pc & gp. 
- */ -void -fix_function_pointer(void *fp) -{ - func_ptr_t *_fp; - - _fp = fp; - _fp->pc = __fwtab_pa(base_nasid, _fp->pc); - _fp->gp = __fwtab_pa(base_nasid, _fp->gp); -} - -void -fix_virt_function_pointer(void *fptr) -{ - func_ptr_t *fp; - - fp = fptr; - fp->pc = fp->pc | PAGE_OFFSET; - fp->gp = fp->gp | PAGE_OFFSET; -} - - -int -efi_set_virtual_address_map(void) -{ - efi_runtime_services_t *runtime; - - runtime = efi_runtime_p; - fix_virt_function_pointer((void*)runtime->get_time); - fix_virt_function_pointer((void*)runtime->set_time); - fix_virt_function_pointer((void*)runtime->get_wakeup_time); - fix_virt_function_pointer((void*)runtime->set_wakeup_time); - fix_virt_function_pointer((void*)runtime->set_virtual_address_map); - fix_virt_function_pointer((void*)runtime->get_variable); - fix_virt_function_pointer((void*)runtime->get_next_variable); - fix_virt_function_pointer((void*)runtime->set_variable); - fix_virt_function_pointer((void*)runtime->get_next_high_mono_count); - fix_virt_function_pointer((void*)runtime->reset_system); - return EFI_SUCCESS;; -} - - -void -sys_fw_init (const char *args, int arglen, int bsp) -{ - /* - * Use static variables to keep from overflowing the RSE stack - */ - static efi_system_table_t *efi_systab; - static efi_runtime_services_t *efi_runtime; - static efi_config_table_t *efi_tables; - static ia64_sal_desc_ptc_t *sal_ptc; - static ia64_sal_ptc_domain_info_t *sal_ptcdi; - static ia64_sal_ptc_domain_proc_entry_t *sal_ptclid; - static acpi_rsdp_t *acpi_systab; - static acpi_rsdt_t *acpi_rsdt; - static acpi_sapic_t *acpi_sapic; - static acpi_entry_lsapic_t *acpi_lsapic; - static struct ia64_sal_systab *sal_systab; - static efi_memory_desc_t *efi_memmap, *md; - static unsigned long *pal_desc, *sal_desc; - static struct ia64_sal_desc_entry_point *sal_ed; - static struct ia64_boot_param *bp; - static struct ia64_sal_desc_ap_wakeup *sal_apwake; - static unsigned char checksum = 0; - static char *cp, *cmd_line, *vendor; - static int mdsize, domain, last_domain ; - static int cnode, nasid, cpu, num_memmd, cpus_found; - - /* - * Pass the parameter base address to the build_efi_xxx routines. 
- */ - build_init(8LL*GB*base_nasid); - - num_nodes = GetNumNodes(); - num_cpus = GetNumCpus(); - - - memset(fw_mem, 0, sizeof(fw_mem)); - - pal_desc = (unsigned long *) &pal_emulator_static; - sal_desc = (unsigned long *) &sal_emulator; - fix_function_pointer(&pal_emulator_static); - fix_function_pointer(&sal_emulator); - - /* Align this to 16 bytes, probably EFI does this */ - mdsize = (sizeof(efi_memory_desc_t) + 15) & ~15 ; - - cp = fw_mem; - efi_systab = (void *) cp; cp += sizeof(*efi_systab); - efi_runtime_p = efi_runtime = (void *) cp; cp += sizeof(*efi_runtime); - efi_tables = (void *) cp; cp += NUM_EFI_DESCS*sizeof(*efi_tables); - sal_systab = (void *) cp; cp += sizeof(*sal_systab); - sal_ed = (void *) cp; cp += sizeof(*sal_ed); - sal_ptc = (void *) cp; cp += sizeof(*sal_ptc); - sal_apwake = (void *) cp; cp += sizeof(*sal_apwake); - acpi_systab = (void *) cp; cp += sizeof(*acpi_systab); - acpi_rsdt = (void *) cp; cp += sizeof(*acpi_rsdt); - acpi_sapic = (void *) cp; cp += sizeof(*acpi_sapic); - acpi_lsapic = (void *) cp; cp += num_cpus*sizeof(*acpi_lsapic); - vendor = (char *) cp; cp += 32; - efi_memmap = (void *) cp; cp += 8*32*sizeof(*efi_memmap); - sal_ptcdi = (void *) cp; cp += CPUS_PER_FSB*(1+num_nodes)*sizeof(*sal_ptcdi); - sal_ptclid = (void *) cp; cp += ((3+num_cpus)*sizeof(*sal_ptclid)+7)/8*8; - cmd_line = (void *) cp; - - if (args) { - if (arglen >= 1024) - arglen = 1023; - memcpy(cmd_line, args, arglen); - } else { - arglen = 0; - } - cmd_line[arglen] = '\0'; -#ifdef BRINGUP - /* for now, just bring up bash */ - strcpy(cmd_line, "init=/bin/bash"); -#else - strcpy(cmd_line, ""); -#endif - - memset(efi_systab, 0, sizeof(efi_systab)); - efi_systab->hdr.signature = EFI_SYSTEM_TABLE_SIGNATURE; - efi_systab->hdr.revision = EFI_SYSTEM_TABLE_REVISION; - efi_systab->hdr.headersize = sizeof(efi_systab->hdr); - efi_systab->fw_vendor = __fwtab_pa(base_nasid, vendor); - efi_systab->fw_revision = 1; - efi_systab->runtime = __fwtab_pa(base_nasid, efi_runtime); - efi_systab->nr_tables = 2; - efi_systab->tables = __fwtab_pa(base_nasid, efi_tables); - memcpy(vendor, "S\0i\0l\0i\0c\0o\0n\0-\0G\0r\0a\0p\0h\0i\0c\0s\0\0", 32); - - efi_runtime->hdr.signature = EFI_RUNTIME_SERVICES_SIGNATURE; - efi_runtime->hdr.revision = EFI_RUNTIME_SERVICES_REVISION; - efi_runtime->hdr.headersize = sizeof(efi_runtime->hdr); - efi_runtime->get_time = __fwtab_pa(base_nasid, &efi_get_time); - efi_runtime->set_time = __fwtab_pa(base_nasid, &efi_unimplemented); - efi_runtime->get_wakeup_time = __fwtab_pa(base_nasid, &efi_unimplemented); - efi_runtime->set_wakeup_time = __fwtab_pa(base_nasid, &efi_unimplemented); - efi_runtime->set_virtual_address_map = __fwtab_pa(base_nasid, &efi_set_virtual_address_map); - efi_runtime->get_variable = __fwtab_pa(base_nasid, &efi_unimplemented); - efi_runtime->get_next_variable = __fwtab_pa(base_nasid, &efi_unimplemented); - efi_runtime->set_variable = __fwtab_pa(base_nasid, &efi_unimplemented); - efi_runtime->get_next_high_mono_count = __fwtab_pa(base_nasid, &efi_unimplemented); - efi_runtime->reset_system = __fwtab_pa(base_nasid, &efi_reset_system); - - efi_tables->guid = SAL_SYSTEM_TABLE_GUID; - efi_tables->table = __fwtab_pa(base_nasid, sal_systab); - efi_tables++; - efi_tables->guid = ACPI_TABLE_GUID; - efi_tables->table = __fwtab_pa(base_nasid, acpi_systab); - fix_function_pointer(&efi_unimplemented); - fix_function_pointer(&efi_get_time); - fix_function_pointer(&efi_success); - fix_function_pointer(&efi_reset_system); - fix_function_pointer(&efi_set_virtual_address_map); - 
- /* fill in the ACPI system table: */ - memcpy(acpi_systab->signature, "RSD PTR ", 8); - acpi_systab->rsdt = (struct acpi_rsdt*)__fwtab_pa(base_nasid, acpi_rsdt); - - memcpy(acpi_rsdt->header.signature, "RSDT",4); - acpi_rsdt->header.length = sizeof(acpi_rsdt_t); - memcpy(acpi_rsdt->header.oem_id, "SGI", 3); - memcpy(acpi_rsdt->header.oem_table_id, "SN1", 3); - acpi_rsdt->header.oem_revision = 0x00010001; - acpi_rsdt->entry_ptrs[0] = __fwtab_pa(base_nasid, acpi_sapic); - - memcpy(acpi_sapic->header.signature, "SPIC ", 4); - acpi_sapic->header.length = sizeof(acpi_sapic_t)+num_cpus*sizeof(acpi_entry_lsapic_t); - for (cnode=0; cnodetype = ACPI_ENTRY_LOCAL_SAPIC; - acpi_lsapic->length = sizeof(acpi_entry_lsapic_t); - acpi_lsapic->acpi_processor_id = cnode*4+cpu; - acpi_lsapic->flags = LSAPIC_ENABLED|LSAPIC_PRESENT; - acpi_lsapic->eid = cpu; - acpi_lsapic->id = nasid; - acpi_lsapic++; - } - } - - - /* fill in the SAL system table: */ - memcpy(sal_systab->signature, "SST_", 4); - sal_systab->size = sizeof(*sal_systab); - sal_systab->sal_rev_minor = 1; - sal_systab->sal_rev_major = 0; - sal_systab->entry_count = 3; - - strcpy(sal_systab->oem_id, "SGI"); - strcpy(sal_systab->product_id, "SN1"); - - /* fill in an entry point: */ - sal_ed->type = SAL_DESC_ENTRY_POINT; - sal_ed->pal_proc = __fwtab_pa(base_nasid, pal_desc[0]); - sal_ed->sal_proc = __fwtab_pa(base_nasid, sal_desc[0]); - sal_ed->gp = __fwtab_pa(base_nasid, sal_desc[1]); - - /* kludge the PTC domain info */ - sal_ptc->type = SAL_DESC_PTC; - sal_ptc->num_domains = 0; - sal_ptc->domain_info = __fwtab_pa(base_nasid, sal_ptcdi); - cpus_found = 0; - last_domain = -1; - sal_ptcdi--; - for (cnode=0; cnodenum_domains++; - sal_ptcdi++; - sal_ptcdi->proc_count = 0; - sal_ptcdi->proc_list = __fwtab_pa(base_nasid, sal_ptclid); - last_domain = domain; - } - sal_ptcdi->proc_count++; - sal_ptclid->id = nasid; - sal_ptclid->eid = cpu; - sal_ptclid++; - cpus_found++; - } - } - } - - if (cpus_found != num_cpus) - FPROM_BUG(); - - /* Make the AP WAKEUP entry */ - sal_apwake->type = SAL_DESC_AP_WAKEUP; - sal_apwake->mechanism = IA64_SAL_AP_EXTERNAL_INT; - sal_apwake->vector = 18; - - for (cp = (char *) sal_systab; cp < (char *) efi_memmap; ++cp) - checksum += *cp; - - sal_systab->checksum = -checksum; - - md = &efi_memmap[0]; - num_memmd = build_efi_memmap((void *)md, mdsize) ; - - bp = id(ZERO_PAGE_ADDR + (((long)base_nasid)<<33)); - bp->efi_systab = __fwtab_pa(base_nasid, &fw_mem); - bp->efi_memmap = __fwtab_pa(base_nasid, efi_memmap); - bp->efi_memmap_size = num_memmd*mdsize; - bp->efi_memdesc_size = mdsize; - bp->efi_memdesc_version = 0x101; - bp->command_line = __fwtab_pa(base_nasid, cmd_line); - bp->console_info.num_cols = 80; - bp->console_info.num_rows = 25; - bp->console_info.orig_x = 0; - bp->console_info.orig_y = 24; - bp->num_pci_vectors = 0; - bp->fpswa = 0; - - /* - * Now pick the BSP & store it LID value in - * a global variable. Note if BSP is greater than last cpu, - * pick the last cpu. - */ - for (cnode=0; cnode 0) - continue; - return; - } - } -} diff -Nru a/arch/ia64/sn/fprom/main.c b/arch/ia64/sn/fprom/main.c --- a/arch/ia64/sn/fprom/main.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,110 +0,0 @@ -/* - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2000 Silicon Graphics, Inc. 
- * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) - */ - - - -#include -#include - -void bedrock_init(int); -void synergy_init(int, int); -void sys_fw_init (const char *args, int arglen, int bsp); - -volatile int bootmaster=0; /* Used to pick bootmaster */ -volatile int nasidmaster[128]={0}; /* Used to pick node/synergy masters */ -int init_done=0; -extern int bsp_lid; - -#define get_bit(b,p) (((*p)>>(b))&1) - -int -fmain(int lid, int bsp) { - int syn, nasid, cpu; - - /* - * First lets figure out who we are. This is done from the - * LID passed to us. - */ - nasid = (lid>>24); - syn = (lid>>17)&1; - cpu = (lid>>16)&1; - - /* - * Now pick a synergy master to initialize synergy registers. - */ - if (test_and_set_bit(syn, &nasidmaster[nasid]) == 0) { - synergy_init(nasid, syn); - test_and_set_bit(syn+2, &nasidmaster[nasid]); - } else - while (get_bit(syn+2, &nasidmaster[nasid]) == 0); - - /* - * Now pick a nasid master to initialize Bedrock registers. - */ - if (test_and_set_bit(8, &nasidmaster[nasid]) == 0) { - bedrock_init(nasid); - test_and_set_bit(9, &nasidmaster[nasid]); - } else - while (get_bit(9, &nasidmaster[nasid]) == 0); - - - /* - * Now pick a BSP & finish init. - */ - if (test_and_set_bit(0, &bootmaster) == 0) { - sys_fw_init(0, 0, bsp); - test_and_set_bit(1, &bootmaster); - } else - while (get_bit(1, &bootmaster) == 0); - - return (lid == bsp_lid); -} - - -void -bedrock_init(int nasid) -{ - nasid = nasid; /* to quiet gcc */ -} - - -void -synergy_init(int nasid, int syn) -{ - long *base; - long off; - - /* - * Enable all FSB flashed interrupts. - * ZZZ - I'd really like defines for this...... - */ - base = (long*)0x80000e0000000000LL; /* base of synergy regs */ - for (off = 0x2a0; off < 0x2e0; off+=8) /* offset for VEC_MASK_{0-3}_A/B */ - *(base+off/8) = -1LL; - - /* - * Set the NASID in the FSB_CONFIG register. - */ - base = (long*)0x80000e0000000450LL; - *base = (long)((nasid<<16)|(syn<<9)); -} - - -/* Why isnt there a bcopy/memcpy in lib64.a */ - -void* -memcpy(void * dest, const void *src, size_t count) -{ - char *s, *se, *d; - - for(d=dest, s=(char*)src, se=s+count; s] <-p> | <-k> [] - -p Create PROM control file & links - -k Create LINUX control file & links - -c Control file name [Default: cf] - Path to directory that contains the linux or PROM files. - The directory can be any of the following: - (linux simulations) - worktree - worktree/linux - any directory with vmlinux, vmlinux.sym & fprom files - (prom simulations) - worktree - worktree/stand/arcs/IP37prom/dev - any directory with fw.bin & fw.sim files - - Simulations: - sim [-X ] [-o ] [-M] [] - -c Control file name [Default: cf] - -M Pipe output thru fmtmedusa - -o Output filename (copy of all commands/output) [Default: simout] - -X Specifies number of instructions to execute [Default: 0] - (Used only in auto test mode - not described here) - -Examples: - sim -p # create control file (cf) & links for prom simulations - sim -k # create control file (cf) & links for linux simulations - sim -p -c cfprom # create a prom control file (cfprom) only. No links are made. - - sim # run medusa using previously created links & - # control file (cf). -END -exit 1 -} - -# ----------------------- create control file header -------------------- -create_cf_header() { -cat <>$CF -# -# Template for a control file for running linux kernels under medusa. -# You probably want to make mods here but this is a good starting point. 
-# - -# Preferences -setenv cpu_stepping A -setenv exceptionPrint off -setenv interrupt_messages off -setenv lastPCsize 100000 -setenv low_power_mode on -setenv partialIntelChipSet on -setenv printIntelMessages off -setenv prom_write_action halt -setenv prom_write_messages on -setenv step_quantum 100 -setenv swizzling on -setenv tsconsole on -setenv uart_echo on -symbols on - -# IDE disk params -setenv diskCylinders 611 -setenv bootDrive C -setenv diskHeads 16 -setenv diskPath idedisk -setenv diskPresent 1 -setenv diskSpt 63 - -# Hardware config -setenv coherency_type nasid -setenv cpu_cache_type default -setenv synergy_cache_type syn_cac_64m_8w - -# Numalink config -setenv route_enable on -setenv network_type xbar # Select [xbar|router] -setenv network_warning 0xff - -END -} - - -# ------------------ create control file entries for linux simulations ------------- -create_cf_linux() { -cat <>$CF -# Kernel specific options -setenv mca_on_memory_failure off -setenv LOADPC 0x00100000 # FPROM load address/entry point (8 digits!) -sr g 9 0xe000000000520000 # Kernel entry point -setenv symbol_table vmlinux.sym -load fprom -load vmlinux - -# Useful breakpoints to always have set. Add more if desired. -break 0xe000000000505e00 all # dispatch_to_fault_handler -break panic all # stop on panic -break die_if_kernel all # may as well stop - -END -} - -# ------------------ create control file entries for prom simulations --------------- -create_cf_prom() { - SYM2="" - ADDR="0x80000000ff800000" - [ "$EMBEDDED_LINUX" != "0" ] || SYM2="setenv symbol_table2 vmlinux.sym" - [ "$SIZE" = "8MB" ] || ADDR="0x80000000ffc00000" - cat <>$CF -# PROM specific options -setenv mca_on_memory_failure on -setenv LOADPC 0x80000000ffffffb0 -setenv promFile fw.bin -setenv promAddr $ADDR -setenv symbol_table fw.sym -$SYM2 - -# Useful breakpoints to always have set. Add more if desired. -break Pr_ivt_gexx all -break Pr_ivt_brk all -break Pr_PROM_Panic_Spin all -break Pr_PROM_Panic all -break Pr_PROM_C_Panic all -break Pr_fled_die all -break Pr_ResetNow all -break Pr_zzzbkpt all - -END -} - - -# ------------------ create control file entries for memory configuration ------------- -create_cf_memory() { -cat <>$CF -# CPU/Memory map format: -# setenv nodeN_memory_config 0xBSBSBSBS -# B=banksize (0=unused, 1=64M, 2=128M, .., 5-1G, c=8M, d=16M, e=32M) -# S=bank enable (0=both disable, 3=both enable, 2=bank1 enable, 1=bank0 enable) -# rightmost digits are for bank 0, the lowest address. -# setenv nodeN_nasid -# specifies the NASID for the node. This is used ONLY if booting the kernel. -# On PROM configurations, set to 0 - PROM will change it later. -# setenv nodeN_cpu_config -# Set bit number N to 1 to enable cpu N. Ex., a value of 5 enables cpu 0 & 2. -# -# Repeat the above 3 commands for each node. -# -# For kernel, default to 32MB. Although this is not a valid hardware configuration, -# it runs faster on medusa. For PROM, 64MB is smallest allowed value. 
- -setenv node0_cpu_config 0x1 # Enable only cpu 0 on the node -END - -if [ $LINUX -eq 1 ] ; then -cat <>$CF -setenv node0_nasid 0 # cnode 0 has NASID 0 -setenv node0_memory_config 0xe1 # 32MB -END -else -cat <>$CF -setenv node0_memory_config 0x11 # 64MB -END -fi -} - -# -------------------- set links to linux files ------------------------- -set_linux_links() { - if [ -d $D/linux/arch ] ; then - D=$D/linux - elif [ -d $D/arch -o -e vmlinux.sym ] ; then - D=$D - else - err "cant determine directory for linux binaries" - fi - rm -rf vmlinux vmlinux.sym fprom - ln -s $D/vmlinux vmlinux - ln -s $D/vmlinux.sym vmlinux.sym - if [ -d $D/arch ] ; then - ln -s $D/arch/ia64/sn/fprom/fprom fprom - else - ln -s $D/fprom fprom - fi - echo " .. Created links to linux files" -} - -# -------------------- set links to prom files ------------------------- -set_prom_links() { - if [ -d $D/stand ] ; then - D=$D/stand/arcs/IP37prom/dev - elif [ -d $D/sal ] ; then - D=$D - else - err "cant determine directory for PROM binaries" - fi - SETUP="$D/../../../../.setup" - grep -q '^ *setenv *PROMSIZE *8MB' $SETUP - if [ $? -eq 0 ] ; then - SIZE="8MB" - else - SIZE="4MB" - fi - grep -q '^ *setenv *LAUNCH_VMLINUX' $SETUP - EMBEDDED_LINUX=$? - rm -f fw.bin fw.map fw.sym vmlinux vmlinux.sym fprom - SDIR="SN1IA${SIZE}.O" - BIN="SN1IAip37prom${SIZE}" - ln -s $D/$SDIR/$BIN.bin fw.bin - ln -s $D/$SDIR/$BIN.map fw.map - ln -s $D/$SDIR/$BIN.sym fw.sym - echo " .. Created links to $SIZE prom files" - if [ $EMBEDDED_LINUX -eq 0 ] ; then - ln -s $D/linux/vmlinux vmlinux - ln -s $D/linux/vmlinux.sym vmlinux.sym - if [ -d linux/arch ] ; then - ln -s $D/linux/arch/ia64/sn/fprom/fprom fprom - else - ln -s $D/linux/fprom fprom - fi - echo " .. Created links to embedded linux files in prom tree" - fi -} - -# --------------- start of shell script -------------------------------- -OUT="simout" -FMTMED=0 -STEPCNT=0 -PROM=0 -LINUX=0 -NCF="cf" -while getopts "HMX:c:o:pk" c ; do - case ${c} in - H) help;; - M) FMTMED=1;; - X) STEPCNT=${OPTARG};; - c) NCF=${OPTARG};; - k) PROM=0;LINUX=1;; - p) PROM=1;LINUX=0;; - o) OUT=${OPTARG};; - \?) exit 1;; - esac -done -shift `expr ${OPTIND} - 1` - -# Check if command is for creating control file and/or links to images. -if [ $PROM -eq 1 -o $LINUX -eq 1 ] ; then - CF=$NCF - [ ! -f $CF ] || err "wont overwrite an existing control file ($CF)" - if [ $# -gt 0 ] ; then - D=$1 - [ -d $D ] || err "cannot find directory $D" - [ $PROM -eq 0 ] || set_prom_links - [ $LINUX -eq 0 ] || set_linux_links - fi - create_cf_header - [ $PROM -eq 0 ] || create_cf_prom - [ $LINUX -eq 0 ] || create_cf_linux - create_cf_memory - echo " .. Basic control file created (in $CF). You might want to edit" - echo " this file (at least, look at it)." - exit 0 -fi - -# Verify that the control file exists -CF=${1:-$NCF} -[ -f $CF ] || err "No control file exists. For help, type: $0 -H" - -# Build the .cf files from the user control file. The .cf file is -# identical except that the actual start & load addresses are inserted -# into the file. In addition, the FPROM commands for configuring memory -# and LIDs are generated. 
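As a reading aid for the 0xBSBSBSBS encoding documented in the create_cf_memory comments above, here is a minimal standalone C sketch. It is illustrative only and not part of this patch: the helper names are invented, and the size table is taken from that comment (the intermediate codes 3 and 4 are assumed to double to 256MB/512MB, which the comment only implies with "..").  Decoding 0xe1 gives the 32MB kernel default set above, and 0x11 gives the 64MB PROM minimum.

#include <stdio.h>

/* Bank-size codes per the comment: 1=64M, 2=128M, .., 5=1G, c=8M, d=16M, e=32M. */
static const int bank_mb[16] = {
	0, 64, 128, 256, 512, 1024, 0, 0,
	0, 0, 0, 0, 8, 16, 32, 0
};

/* Decode one nodeN_memory_config word; returns total enabled memory in MB.
 * Each hex-digit pair is (B = size code, S = enable bits); rightmost pair
 * describes the lowest-addressed banks. */
static int decode_memory_config(unsigned int mc)
{
	int pair, total = 0;

	for (pair = 0; pair < 4; pair++, mc >>= 8) {
		int enable = mc & 0xf;               /* S: bit 0 = bank 0, bit 1 = bank 1 */
		int mb = bank_mb[(mc >> 4) & 0xf];   /* B: size code shared by the pair  */

		if (enable & 1)
			total += mb;
		if (enable & 2)
			total += mb;
	}
	return total;
}

int main(void)
{
	printf("0xe1 -> %dMB, 0x11 -> %dMB\n",
	       decode_memory_config(0xe1), decode_memory_config(0x11));
	return 0;
}
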
- -rm -f .cf .cf1 .cf2 -awk ' -function strtonum(n) { - if (substr(n,1,2) != "0x") - return int(n) - n = substr(n,3) - r=0 - while (length(n) > 0) { - r = r*16+(index("0123456789abcdef", substr(n,1,1))-1) - n = substr(n,2) - } - return r - } -/^#/ {next} -/^$/ {next} -/^setenv *LOADPC/ {loadpc = $3; next} -/^setenv *node._cpu_config/ {n=int(substr($2,5,1)); cpuconf[n] = strtonum($3); print; next} -/^setenv *node._memory_config/ {n=int(substr($2,5,1)); memconf[n] = strtonum($3); print; next} -/^setenv *node._nasid/ {n=int(substr($2,5,1)); nasid[n] = strtonum($3); print; next} - {print} -END { - # Generate the memmap info that starts at the beginning of - # the node the kernel was loaded on. - loadnasid = nasid[0] - cnode = 0 - for (i=0; i<128; i++) { - if (memconf[i] != "") { - printf "sm 0x%x%08x 0x%x%04x%04x\n", - 2*loadnasid, 8*cnodes+8, memconf[i], cpuconf[i], nasid[i] - cnodes++ - cpus += substr("0112122312232334", cpuconf[i]+1,1) - } - } - printf "sm 0x%x00000000 0x%x%08x\n", 2*loadnasid, cnodes, cpus - printf "setenv number_of_nodes %d\n", cnodes - - # Now set the starting PC for each cpu. - cnode = 0 - lowcpu=-1 - for (i=0; i<128; i++) { - if (memconf[i] != "") { - printf "setnode %d\n", cnode - conf = cpuconf[i] - for (j=0; j<4; j++) { - if (conf != int(conf/2)*2) { - printf "setcpu %d\n", j - if (length(loadpc) == 18) - printf "sr pc %s\n", loadpc - else - printf "sr pc 0x%x%s\n", 2*loadnasid, substr(loadpc,3) - if (lowcpu == -1) - lowcpu = j - } - conf = int(conf/2) - } - cnode++ - } - } - printf "setnode 0\n" - printf "setcpu %d\n", lowcpu - } -' <$CF >.cf - -# Now build the .cf1 & .cf2 control files. -CF2_LINES="^sm |^break |^run |^si |^quit |^symbols " -egrep "$CF2_LINES" .cf >.cf2 -egrep -v "$CF2_LINES" .cf >.cf1 -if [ $STEPCNT -ne 0 ] ; then - echo "s $STEPCNT" >>.cf2 - echo "lastpc 1000" >>.cf2 - echo "q" >>.cf2 -fi -echo "script-on $OUT" >>.cf2 - -# Now start medusa.... -if [ $FMTMED -ne 0 ] ; then - $MEDUSA -system mpsn1 -c .cf1 -i .cf2 | fmtmedusa -elif [ $STEPCNT -eq 0 ] ; then - $MEDUSA -system mpsn1 -c .cf1 -i .cf2 -else - $MEDUSA -system mpsn1 -c .cf1 -i .cf2 2>&1 -fi diff -Nru a/arch/ia64/sn/io/Makefile b/arch/ia64/sn/io/Makefile --- a/arch/ia64/sn/io/Makefile Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/Makefile Tue Mar 12 13:58:15 2002 @@ -3,8 +3,7 @@ # License. See the file "COPYING" in the main directory of this archive # for more details. # -# Copyright (C) 2000 Silicon Graphics, Inc. -# Copyright (C) Jack Steiner (steiner@sgi.com) +# Copyright (C) 2000-2002 Silicon Graphics, Inc. All Rights Reserved. # # # Makefile for the linux kernel. @@ -13,20 +12,34 @@ # removes any old dependencies. DON'T put your own dependencies here # unless it's something special (ie not a .c file). # -# Note 2! The CFLAGS definitions are now in the main makefile... -EXTRA_CFLAGS := -DSN -DLANGUAGE_C=1 -D_LANGUAGE_C=1 -I. 
-DBRINGUP \ - -DDIRECT_L1_CONSOLE -DNUMA_BASE -DSIMULATED_KLGRAPH \ - -DNUMA_MIGR_CONTROL -DLITTLE_ENDIAN -DREAL_HARDWARE \ - -DNEW_INTERRUPTS +EXTRA_CFLAGS := -DLITTLE_ENDIAN + O_TARGET := sgiio.o -obj-y := stubs.o sgi_if.o pciio.o pcibr.o xtalk.o xbow.o xswitch.o hubspc.o \ - klgraph_hack.o io.o hubdev.o huberror.o \ + +ifeq ($(CONFIG_MODULES),y) +export-objs = pciio.o hcl.o +endif + +obj-y := stubs.o sgi_if.o pciio.o xtalk.o xbow.o xswitch.o klgraph_hack.o \ hcl.o labelcl.o invent.o klgraph.o klconflib.o sgi_io_sim.o \ module.o sgi_io_init.o klgraph_hack.o ml_SN_init.o \ - ml_SN_intr.o ip37.o pciba.o \ - ml_iograph.o hcl_util.o cdl.o \ - mem_refcnt.o devsupport.o alenlist.o pci_bus_cvlink.o \ - eeprom.o pci.o pci_dma.o l1.o l1_command.o ate_utils.o + ml_iograph.o hcl_util.o cdl.o hubdev.o hubspc.o \ + alenlist.o pci_bus_cvlink.o \ + eeprom.o pci.o pci_dma.o l1.o l1_command.o ate_utils.o \ + ifconfig_net.o efi-rtc.o io.o + +obj-$(CONFIG_IA64_SGI_SN1) += sn1/ml_SN_intr.o sn1/mem_refcnt.o sn1/hubcounters.o \ + sn1/ip37.o sn1/huberror.o sn1/hub_intr.o sn1/pcibr.o + +obj-$(CONFIG_IA64_SGI_SN2) += sn2/ml_SN_intr.o sn2/shub_intr.o sn2/shuberror.o \ + sn2/bte_error.o \ + sn2/pcibr/pcibr_dvr.o sn2/pcibr/pcibr_ate.o \ + sn2/pcibr/pcibr_config.o sn2/pcibr/pcibr_dvr.o \ + sn2/pcibr/pcibr_hints.o \ + sn2/pcibr/pcibr_idbg.o sn2/pcibr/pcibr_intr.o \ + sn2/pcibr/pcibr_rrb.o sn2/pcibr/pcibr_slot.o + +obj-$(CONFIG_PCIBA) += pciba.o include $(TOPDIR)/Rules.make diff -Nru a/arch/ia64/sn/io/alenlist.c b/arch/ia64/sn/io/alenlist.c --- a/arch/ia64/sn/io/alenlist.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/alenlist.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ /* Implementation of Address/Length Lists. */ @@ -13,9 +12,9 @@ #include #include +#include #include #include -#include /* * Logically, an Address/Length List is a list of Pairs, where each pair @@ -218,9 +217,9 @@ void alenlist_init(void) { - alenlist_zone = kmem_zone_init(sizeof(struct alenlist_s), "alenlist"); - alenlist_chunk_zone = kmem_zone_init(sizeof(struct alenlist_chunk_s), "alchunk"); - alenlist_cursor_zone = kmem_zone_init(sizeof(struct alenlist_cursor_s), "alcursor"); + alenlist_zone = snia_kmem_zone_init(sizeof(struct alenlist_s), "alenlist"); + alenlist_chunk_zone = snia_kmem_zone_init(sizeof(struct alenlist_chunk_s), "alchunk"); + alenlist_cursor_zone = snia_kmem_zone_init(sizeof(struct alenlist_cursor_s), "alcursor"); #if DEBUG idbg_addfunc("alenshow", alenlist_show); #endif /* DEBUG */ @@ -250,7 +249,7 @@ { alenlist_t alenlist; - alenlist = kmem_zone_alloc(alenlist_zone, flags & AL_NOSLEEP ? VM_NOSLEEP : 0); + alenlist = snia_kmem_zone_alloc(alenlist_zone, flags & AL_NOSLEEP ? 
VM_NOSLEEP : 0); if (alenlist) { INCR_COUNT(&alenlist_count); @@ -334,7 +333,7 @@ while (chunk) { freechunk = chunk; chunk = chunk->alc_next; - kmem_zone_free(alenlist_chunk_zone, freechunk); + snia_kmem_zone_free(alenlist_chunk_zone, freechunk); DECR_COUNT(&alenlist_chunk_count); } alenlist->al_actual_size = ALEN_CHUNK_SZ; @@ -407,7 +406,7 @@ alenlist_clear(alenlist); /* Now, free the alenlist itself */ - kmem_zone_free(alenlist_zone, alenlist); + snia_kmem_zone_free(alenlist_zone, alenlist); DECR_COUNT(&alenlist_count); } @@ -473,7 +472,7 @@ } else { alenlist_chunk_t new_chunk; - new_chunk = kmem_zone_alloc(alenlist_chunk_zone, + new_chunk = snia_kmem_zone_alloc(alenlist_chunk_zone, flags & AL_NOSLEEP ? VM_NOSLEEP : 0); if (new_chunk == NULL) @@ -656,7 +655,7 @@ alenlist_cursor_t cursorp; ASSERT(alenlist != NULL); - cursorp = kmem_zone_alloc(alenlist_cursor_zone, flags & AL_NOSLEEP ? VM_NOSLEEP : 0); + cursorp = snia_kmem_zone_alloc(alenlist_cursor_zone, flags & AL_NOSLEEP ? VM_NOSLEEP : 0); if (cursorp) { INCR_COUNT(&alenlist_cursor_count); alenlist_cursor_init(alenlist, 0, cursorp); @@ -671,7 +670,7 @@ alenlist_cursor_destroy(alenlist_cursor_t cursorp) { DECR_COUNT(&alenlist_cursor_count); - kmem_zone_free(alenlist_cursor_zone, cursorp); + snia_kmem_zone_free(alenlist_cursor_zone, cursorp); } @@ -752,7 +751,7 @@ maxlength -= ((alenp->al_addr + cursorp->al_bcount) & maxlen1); - length = MIN(maxlength, length); + length = min(maxlength, length); } /* Update the cursor, if desired. */ @@ -842,7 +841,7 @@ offset = poff(kvaddr); /* Handle first page */ - piece_length = MIN(NBPP - offset, length); + piece_length = min((size_t)(NBPP - offset), length); if (alenlist_append(alenlist, paddr, piece_length, flags) == ALENLIST_FAILURE) goto failure; length -= piece_length; diff -Nru a/arch/ia64/sn/io/ate_utils.c b/arch/ia64/sn/io/ate_utils.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/ate_utils.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,205 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +/* + * Allocate the map needed to allocate the ATE entries. + */ +struct map * +atemapalloc(ulong_t mapsiz) +{ + struct map *mp; + ulong_t size; + struct a { + spinlock_t lock; + sv_t sema; + } *sync; + + if (mapsiz == 0) + return(NULL); + size = sizeof(struct map) * (mapsiz + 2); + if ((mp = (struct map *) kmalloc(size, GFP_KERNEL)) == NULL) + return(NULL); + memset(mp, 0x0, size); + + sync = kmalloc(sizeof(struct a), GFP_KERNEL); + if (sync == NULL) { + kfree(mp); + return(NULL); + } + memset(sync, 0x0, sizeof(struct a)); + + mutex_spinlock_init(&sync->lock); + sv_init( &(sync->sema), &(sync->lock), SV_MON_SPIN | SV_ORDER_FIFO /*| SV_INTS*/); + mp[1].m_size = (unsigned long) &sync->lock; + mp[1].m_addr = (unsigned long) &sync->sema; + mapsize(mp) = mapsiz - 1; + return(mp); +} + +/* + * free a map structure previously allocated via rmallocmap(). 
+ */ +void +atemapfree(struct map *mp) +{ + struct a { + spinlock_t lock; + sv_t sema; + }; + /* ASSERT(sv_waitq(mapout(mp)) == 0); */ + /* sv_destroy(mapout(mp)); */ + spin_lock_destroy(maplock(mp)); + kfree((void *)mp[1].m_size); + kfree(mp); +} + +/* + * Allocate 'size' units from the given map. + * Return the base of the allocated space. + * In a map, the addresses are increasing and the + * list is terminated by a 0 size. + * Algorithm is first-fit. + */ + +ulong_t +atealloc( + struct map *mp, + size_t size) +{ + register unsigned int a; + register struct map *bp; + register unsigned long s; + + ASSERT(size >= 0); + + if (size == 0) + return((ulong_t) NULL); + + s = mutex_spinlock(maplock(mp)); + + for (bp = mapstart(mp); bp->m_size; bp++) { + if (bp->m_size >= size) { + a = bp->m_addr; + bp->m_addr += size; + if ((bp->m_size -= size) == 0) { + do { + bp++; + (bp-1)->m_addr = bp->m_addr; + } while ((((bp-1)->m_size) = (bp->m_size))); + mapsize(mp)++; + } + + ASSERT(bp->m_size < 0x80000000); + mutex_spinunlock(maplock(mp), s); + return(a); + } + } + + /* + * We did not get what we need .. we cannot sleep .. + */ + mutex_spinunlock(maplock(mp), s); + return(0); +} + +/* + * Free the previously allocated space a of size units into the specified map. + * Sort ``a'' into map and combine on one or both ends if possible. + * Returns 0 on success, 1 on failure. + */ +void +atefree(struct map *mp, size_t size, ulong_t a) +{ + register struct map *bp; + register unsigned int t; + register unsigned long s; + + ASSERT(size >= 0); + + if (size == 0) + return; + + bp = mapstart(mp); + s = mutex_spinlock(maplock(mp)); + + for ( ; bp->m_addr<=a && bp->m_size!=0; bp++) + ; + if (bp>mapstart(mp) && (bp-1)->m_addr+(bp-1)->m_size == a) { + (bp-1)->m_size += size; + if (bp->m_addr) { + /* m_addr==0 end of map table */ + ASSERT(a+size <= bp->m_addr); + if (a+size == bp->m_addr) { + + /* compress adjacent map addr entries */ + (bp-1)->m_size += bp->m_size; + while (bp->m_size) { + bp++; + (bp-1)->m_addr = bp->m_addr; + (bp-1)->m_size = bp->m_size; + } + mapsize(mp)++; + } + } + } else { + if (a+size == bp->m_addr && bp->m_size) { + bp->m_addr -= size; + bp->m_size += size; + } else { + ASSERT(size); + if (mapsize(mp) == 0) { + mutex_spinunlock(maplock(mp), s); + printk("atefree : map overflow 0x%p Lost 0x%lx items at 0x%lx", + (void *)mp, size, a) ; + return ; + } + do { + t = bp->m_addr; + bp->m_addr = a; + a = t; + t = bp->m_size; + bp->m_size = size; + bp++; + } while ((size = t)); + mapsize(mp)--; + } + } + mutex_spinunlock(maplock(mp), s); + /* + * wake up everyone waiting for space + */ + if (mapout(mp)) + ; + /* sv_broadcast(mapout(mp)); */ +} diff -Nru a/arch/ia64/sn/io/cdl.c b/arch/ia64/sn/io/cdl.c --- a/arch/ia64/sn/io/cdl.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/cdl.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. 
*/ #include @@ -17,11 +16,9 @@ #include "asm/sn/ioerror_handling.h" #include -#ifdef BRINGUP /* these get called directly in cdl_add_connpt in fops bypass hack */ extern int pcibr_attach(devfs_handle_t); extern int xbow_attach(devfs_handle_t); -#endif /* BRINGUP */ /* * cdl: Connection and Driver List @@ -37,8 +34,6 @@ int mfg_num; int (*attach) (devfs_handle_t); } dummy_reg; - -typedef struct cdl *cdl_p; #define MAX_SGI_IO_INFRA_DRVR 4 struct cdl sgi_infrastructure_drivers[MAX_SGI_IO_INFRA_DRVR] = diff -Nru a/arch/ia64/sn/io/devsupport.c b/arch/ia64/sn/io/devsupport.c --- a/arch/ia64/sn/io/devsupport.c Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,1289 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -#include -#include -#include -#include -#include -#include -#include -#include - -/* - * Interfaces in this file are all platform-independent AND IObus-independent. - * Be aware that there may be macro equivalents to each of these hiding in - * header files which supercede these functions. - */ - -/* =====Generic iobus support===== */ - -/* String table to hold names of interrupts. */ -#ifdef LATER -static struct string_table device_desc_string_table; -#endif - -/* One time initialization for device descriptor support. */ -static void -device_desc_init(void) -{ -#ifdef LATER - string_table_init(&device_desc_string_table); -#endif - FIXME("device_desc_init"); -} - - -/* Drivers use these interfaces to manage device descriptors */ -static device_desc_t -device_desc_alloc(void) -{ -#ifdef LATER - device_desc_t device_desc; - - device_desc = (device_desc_t)kmem_zalloc(sizeof(struct device_desc_s), 0); - device_desc->intr_target = GRAPH_VERTEX_NONE; - - ASSERT(device_desc->intr_policy == 0); - device_desc->intr_swlevel = -1; - ASSERT(device_desc->intr_name == NULL); - ASSERT(device_desc->flags == 0); - - ASSERT(!(device_desc->flags & D_IS_ASSOC)); - return(device_desc); -#else - FIXME("device_desc_alloc"); - return((device_desc_t)0); -#endif -} - -void -device_desc_free(device_desc_t device_desc) -{ -#ifdef LATER - if (!(device_desc->flags & D_IS_ASSOC)) /* sanity */ - kfree(device_desc); -#endif - FIXME("device_desc_free"); -} - -device_desc_t -device_desc_dup(devfs_handle_t dev) -{ -#ifdef LATER - device_desc_t orig_device_desc, new_device_desc; - - - new_device_desc = device_desc_alloc(); - orig_device_desc = device_desc_default_get(dev); - if (orig_device_desc) - *new_device_desc = *orig_device_desc;/* small structure copy */ - else { - device_driver_t driver; - ilvl_t pri; - /* - * Use the driver's thread priority in - * case the device thread priority has not - * been given. 
- */ - if (driver = device_driver_getbydev(dev)) { - pri = device_driver_thread_pri_get(driver); - device_desc_intr_swlevel_set(new_device_desc,pri); - } - } - new_device_desc->flags &= ~D_IS_ASSOC; - return(new_device_desc); -#else - FIXME("device_desc_dup"); - return((device_desc_t)0); -#endif -} - -device_desc_t -device_desc_default_get(devfs_handle_t dev) -{ -#ifdef LATER - graph_error_t rc; - device_desc_t device_desc; - - rc = hwgraph_info_get_LBL(dev, INFO_LBL_DEVICE_DESC, (arbitrary_info_t *)&device_desc); - - if (rc == GRAPH_SUCCESS) - return(device_desc); - else - return(NULL); -#else - FIXME("device_desc_default_get"); - return((device_desc_t)0); -#endif -} - -void -device_desc_default_set(devfs_handle_t dev, device_desc_t new_device_desc) -{ -#ifdef LATER - graph_error_t rc; - device_desc_t old_device_desc = NULL; - - if (new_device_desc) { - new_device_desc->flags |= D_IS_ASSOC; - rc = hwgraph_info_add_LBL(dev, INFO_LBL_DEVICE_DESC, - (arbitrary_info_t)new_device_desc); - if (rc == GRAPH_DUP) { - rc = hwgraph_info_replace_LBL(dev, INFO_LBL_DEVICE_DESC, - (arbitrary_info_t)new_device_desc, - (arbitrary_info_t *)&old_device_desc); - - ASSERT(rc == GRAPH_SUCCESS); - } - hwgraph_info_export_LBL(dev, INFO_LBL_DEVICE_DESC, - sizeof(struct device_desc_s)); - } else { - rc = hwgraph_info_remove_LBL(dev, INFO_LBL_DEVICE_DESC, - (arbitrary_info_t *)&old_device_desc); - } - - if (old_device_desc) { - ASSERT(old_device_desc->flags & D_IS_ASSOC); - old_device_desc->flags &= ~D_IS_ASSOC; - device_desc_free(old_device_desc); - } -#endif - FIXME("device_desc_default_set"); -} - -devfs_handle_t -device_desc_intr_target_get(device_desc_t device_desc) -{ -#ifdef LATER - return(device_desc->intr_target); -#else - FIXME("device_desc_intr_target_get"); - return((devfs_handle_t)0); -#endif -} - -int -device_desc_intr_policy_get(device_desc_t device_desc) -{ -#ifdef LATER - return(device_desc->intr_policy); -#else - FIXME("device_desc_intr_policy_get"); - return(0); -#endif -} - -ilvl_t -device_desc_intr_swlevel_get(device_desc_t device_desc) -{ -#ifdef LATER - return(device_desc->intr_swlevel); -#else - FIXME("device_desc_intr_swlevel_get"); - return((ilvl_t)0); -#endif -} - -char * -device_desc_intr_name_get(device_desc_t device_desc) -{ -#ifdef LATER - return(device_desc->intr_name); -#else - FIXME("device_desc_intr_name_get"); - return(NULL); -#endif -} - -int -device_desc_flags_get(device_desc_t device_desc) -{ -#ifdef LATER - return(device_desc->flags); -#else - FIXME("device_desc_flags_get"); - return(0); -#endif -} - -void -device_desc_intr_target_set(device_desc_t device_desc, devfs_handle_t target) -{ - if ( device_desc != (device_desc_t)0 ) - device_desc->intr_target = target; -} - -void -device_desc_intr_policy_set(device_desc_t device_desc, int policy) -{ - if ( device_desc != (device_desc_t)0 ) - device_desc->intr_policy = policy; -} - -void -device_desc_intr_swlevel_set(device_desc_t device_desc, ilvl_t swlevel) -{ - if ( device_desc != (device_desc_t)0 ) - device_desc->intr_swlevel = swlevel; -} - -void -device_desc_intr_name_set(device_desc_t device_desc, char *name) -{ -#ifdef LATER - if ( device_desc != (device_desc_t)0 ) - device_desc->intr_name = string_table_insert(&device_desc_string_table, name); -#else - FIXME("device_desc_intr_name_set"); -#endif -} - -void -device_desc_flags_set(device_desc_t device_desc, int flags) -{ - if ( device_desc != (device_desc_t)0 ) - device_desc->flags = flags; -} - - - -/*============= device admin registry routines ===================== */ - 
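Before the registry routines that follow, a hypothetical usage sketch of the device_desc accessors defined above. These paths are stubbed out behind LATER/FIXME in this tree, so this is illustrative only and relies on those paths being enabled; the function name is invented, and every call used (device_desc_dup, device_desc_intr_swlevel_set, device_desc_default_set) comes from the code above.

/* Hypothetical example: copy a vertex's default descriptor, adjust the
 * interrupt software level, and install the copy back as the default. */
static void example_raise_intr_swlevel(devfs_handle_t dev, ilvl_t swlevel)
{
	device_desc_t desc;

	desc = device_desc_dup(dev);            /* copy of the default, or a fresh one */
	if (desc == (device_desc_t)0)
		return;
	device_desc_intr_swlevel_set(desc, swlevel);
	device_desc_default_set(dev, desc);     /* becomes the vertex's default descriptor */
}
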
-/* Linked list of pairs */ -typedef struct dev_admin_list_s { - struct dev_admin_list_s *admin_next; /* next entry in the - * list - */ - char *admin_name; /* info label */ - char *admin_val; /* actual info */ -} dev_admin_list_t; - -/* Device/Driver administration registry */ -typedef struct dev_admin_registry_s { - mrlock_t reg_lock; /* To allow - * exclusive - * access - */ - dev_admin_list_t *reg_first; /* first entry in - * the list - */ - dev_admin_list_t **reg_last; /* pointer to the - * next to last entry - * in the last which - * is also the place - * where the new - * entry gets - * inserted - */ -} dev_admin_registry_t; - -/* -** device_driver_s associates a device driver prefix with device switch entries. -*/ -struct device_driver_s { - struct device_driver_s *dd_next; /* next element on hash chain */ - struct device_driver_s *dd_prev; /* previous element on hash chain */ - char *dd_prefix; /* driver prefix string */ - struct bdevsw *dd_bdevsw; /* driver's bdevsw */ - struct cdevsw *dd_cdevsw; /* driver's cdevsw */ - - /* driver administration specific data structures need to - * maintain the list of pairs - */ - dev_admin_registry_t dd_dev_admin_registry; - ilvl_t dd_thread_pri; /* default thread priority for - * all this driver's - * threads. - */ - -}; - -#define NEW(_p) (_p = kmalloc(sizeof(*_p), GFP_KERNEL)) -#define FREE(_p) (kmem_free(_p)) - -/* - * helpful lock macros - */ - -#define DEV_ADMIN_REGISTRY_INITLOCK(lockp,name) mrinit(lockp,name) -#define DEV_ADMIN_REGISTRY_RDLOCK(lockp) mraccess(lockp) -#define DEV_ADMIN_REGISTRY_WRLOCK(lockp) mrupdate(lockp) -#define DEV_ADMIN_REGISTRY_UNLOCK(lockp) mrunlock(lockp) - -/* Initialize the registry - */ -static void -dev_admin_registry_init(dev_admin_registry_t *registry) -{ -#ifdef LATER - if ( registry != (dev_admin_registry_t *)0 ) - DEV_ADMIN_REGISTRY_INITLOCK(®istry->reg_lock, - "dev_admin_registry_lock"); - registry->reg_first = NULL; - registry->reg_last = ®istry->reg_first; - } -#else - FIXME("dev_admin_registry_init"); -#endif -} - -/* - * add an entry to the dev admin registry. - * if the name already exists in the registry then change the - * value iff the new value differs from the old value. - * if the name doesn't exist a new list entry is created and put - * at the end. - */ -static void -dev_admin_registry_add(dev_admin_registry_t *registry, - char *name, - char *val) -{ -#ifdef LATER - dev_admin_list_t *reg_entry; - dev_admin_list_t *scan = 0; - - DEV_ADMIN_REGISTRY_WRLOCK(®istry->reg_lock); - - /* check if the name already exists in the registry */ - scan = registry->reg_first; - - while (scan) { - if (strcmp(scan->admin_name,name) == 0) { - /* name is there in the registry */ - if (strcmp(scan->admin_val,val)) { - /* old value != new value - * reallocate memory and copy the new value - */ - FREE(scan->admin_val); - scan->admin_val = - (char *)kern_calloc(1,strlen(val)+1); - strcpy(scan->admin_val,val); - goto out; - } - goto out; /* old value == new value */ - } - scan = scan->admin_next; - } - - /* name is not there in the registry. 
- * allocate memory for the new registry entry - */ - NEW(reg_entry); - - reg_entry->admin_next = 0; - reg_entry->admin_name = (char *)kern_calloc(1,strlen(name)+1); - strcpy(reg_entry->admin_name,name); - reg_entry->admin_val = (char *)kern_calloc(1,strlen(val)+1); - strcpy(reg_entry->admin_val,val); - - /* add the entry at the end of the registry */ - - *(registry->reg_last) = reg_entry; - registry->reg_last = ®_entry->admin_next; - -out: DEV_ADMIN_REGISTRY_UNLOCK(®istry->reg_lock); -#endif - FIXME("dev_admin_registry_add"); -} -/* - * check if there is an info corr. to a particular - * name starting from the cursor position in the - * registry - */ -static char * -dev_admin_registry_find(dev_admin_registry_t *registry,char *name) -{ -#ifdef LATER - dev_admin_list_t *scan = 0; - - DEV_ADMIN_REGISTRY_RDLOCK(®istry->reg_lock); - scan = registry->reg_first; - - while (scan) { - if (strcmp(scan->admin_name,name) == 0) { - DEV_ADMIN_REGISTRY_UNLOCK(®istry->reg_lock); - return scan->admin_val; - } - scan = scan->admin_next; - } - DEV_ADMIN_REGISTRY_UNLOCK(®istry->reg_lock); - return 0; -#else - FIXME("dev_admin_registry_find"); - return(NULL); -#endif -} -/*============= MAIN DEVICE/ DRIVER ADMINISTRATION INTERFACE================ */ -/* - * return any labelled info associated with a device. - * called by any kernel code including device drivers. - */ -char * -device_admin_info_get(devfs_handle_t dev_vhdl, - char *info_lbl) -{ -#ifdef LATER - char *info = 0; - - /* return value need not be GRAPH_SUCCESS as the labelled - * info may not be present - */ - (void)hwgraph_info_get_LBL(dev_vhdl,info_lbl, - (arbitrary_info_t *)&info); - - - return info; -#else - FIXME("device_admin_info_get"); - return(NULL); -#endif -} - -/* - * set labelled info associated with a device. - * called by hwgraph infrastructure . may also be called - * by device drivers etc. - */ -int -device_admin_info_set(devfs_handle_t dev_vhdl, - char *dev_info_lbl, - char *dev_info_val) -{ -#ifdef LATER - graph_error_t rv; - arbitrary_info_t old_info; - - /* Handle the labelled info - * intr_target - * sw_level - * in a special way. These are part of device_desc_t - * Right now this is the only case where we have - * a set of related device_admin attributes which - * are grouped together. - * In case there is a need for another set we need to - * take a more generic approach to solving this. - * Basically a registry should be implemented. This - * registry is initialized with the callbacks for the - * attributes which need to handled in a special way - * For example: - * Consider - * device_desc - * intr_target - * intr_swlevel - * register "do_intr_target" for intr_target - * register "do_intr_swlevel" for intr_swlevel. - * When the device_admin interface layer gets an pair - * it looks in the registry to see if there is a function registered to - * handle "attr. If not follow the default path of setting the - * as labelled information hanging off the vertex. - * In the above example: - * "do_intr_target" does what is being done below for the ADMIN_LBL_INTR_TARGET - * case - */ - if (!strcmp(dev_info_lbl,ADMIN_LBL_INTR_TARGET) || - !strcmp(dev_info_lbl,ADMIN_LBL_INTR_SWLEVEL)) { - - device_desc_t device_desc; - - /* Check if there is a default device descriptor - * information for this vertex. If not dup one . 
- */ - if (!(device_desc = device_desc_default_get(dev_vhdl))) { - device_desc = device_desc_dup(dev_vhdl); - device_desc_default_set(dev_vhdl,device_desc); - - } - if (!strcmp(dev_info_lbl,ADMIN_LBL_INTR_TARGET)) { - /* Check if a target cpu has been specified - * for this device by a device administration - * directive - */ -#ifdef DEBUG - printf(ADMIN_LBL_INTR_TARGET - " dev = 0x%x " - "dev_admin_info = %s" - " target = 0x%x\n", - dev_vhdl, - dev_info_lbl, - hwgraph_path_to_vertex(dev_info_val)); -#endif - - device_desc->intr_target = - hwgraph_path_to_vertex(dev_info_val); - } else if (!strcmp(dev_info_lbl,ADMIN_LBL_INTR_SWLEVEL)) { - /* Check if the ithread priority level has been - * specified for this device by a device administration - * directive - */ -#ifdef DEBUG - printf(ADMIN_LBL_INTR_SWLEVEL - " dev = 0x%x " - "dev_admin_info = %s" - " sw level = 0x%x\n", - dev_vhdl, - dev_info_lbl, - atoi(dev_info_val)); -#endif - device_desc->intr_swlevel = atoi(dev_info_val); - } - - } - if (!dev_info_val) - rv = hwgraph_info_remove_LBL(dev_vhdl, - dev_info_lbl, - &old_info); - else { - - rv = hwgraph_info_add_LBL(dev_vhdl, - dev_info_lbl, - (arbitrary_info_t)dev_info_val); - - if (rv == GRAPH_DUP) { - rv = hwgraph_info_replace_LBL(dev_vhdl, - dev_info_lbl, - (arbitrary_info_t)dev_info_val, - &old_info); - } - } - ASSERT(rv == GRAPH_SUCCESS); -#endif - FIXME("device_admin_info_set"); - return 0; -} - -/* - * return labelled info associated with a device driver - * called by kernel code including device drivers - */ -char * -device_driver_admin_info_get(char *driver_prefix, - char *driver_info_lbl) -{ -#ifdef LATER - device_driver_t driver; - - driver = device_driver_get(driver_prefix); - return (dev_admin_registry_find(&driver->dd_dev_admin_registry, - driver_info_lbl)); -#else - FIXME("device_driver_admin_info_get"); - return(NULL); -#endif -} - -/* - * set labelled info associated with a device driver. - * called by hwgraph infrastructure . may also be called - * from drivers etc. - */ -int -device_driver_admin_info_set(char *driver_prefix, - char *driver_info_lbl, - char *driver_info_val) -{ -#ifdef LATER - device_driver_t driver; - - driver = device_driver_get(driver_prefix); - dev_admin_registry_add(&driver->dd_dev_admin_registry, - driver_info_lbl, - driver_info_val); -#endif - FIXME("device_driver_admin_info_set"); - return 0; -} -/*================== device / driver admin support routines================*/ - -/* static tables created by lboot */ -extern dev_admin_info_t dev_admin_table[]; -extern dev_admin_info_t drv_admin_table[]; -extern int dev_admin_table_size; -extern int drv_admin_table_size; - -/* Extend the device admin table to allow the kernel startup code to - * provide some device specific administrative hints - */ -#define ADMIN_TABLE_CHUNK 100 -static dev_admin_info_t extended_dev_admin_table[ADMIN_TABLE_CHUNK]; -static int extended_dev_admin_table_size = 0; -static mrlock_t extended_dev_admin_table_lock; - -/* Initialize the extended device admin table */ -void -device_admin_table_init(void) -{ -#ifdef LATER - extended_dev_admin_table_size = 0; - mrinit(&extended_dev_admin_table_lock, - "extended_dev_admin_table_lock"); -#endif - FIXME("device_admin_table_init"); -} -/* Add triple to - * the extended device administration info table. 
This is helpful - * for kernel startup code to put some hints before the hwgraph - * is setup - */ -void -device_admin_table_update(char *name,char *label,char *value) -{ -#ifdef LATER - dev_admin_info_t *p; - - mrupdate(&extended_dev_admin_table_lock); - - /* Safety check that we haven't exceeded array limits */ - ASSERT(extended_dev_admin_table_size < ADMIN_TABLE_CHUNK); - - if (extended_dev_admin_table_size == ADMIN_TABLE_CHUNK) - goto out; - - /* Get the pointer to the entry in the table where we are - * going to put the new information - */ - p = &extended_dev_admin_table[extended_dev_admin_table_size++]; - - /* Allocate memory for the strings and copy them in */ - p->dai_name = (char *)kern_calloc(1,strlen(name)+1); - strcpy(p->dai_name,name); - p->dai_param_name = (char *)kern_calloc(1,strlen(label)+1); - strcpy(p->dai_param_name,label); - p->dai_param_val = (char *)kern_calloc(1,strlen(value)+1); - strcpy(p->dai_param_val,value); - -out: mrunlock(&extended_dev_admin_table_lock); -#endif - FIXME("device_admin_table_update"); -} -/* Extend the device driver admin table to allow the kernel startup code to - * provide some device driver specific administrative hints - */ - -static dev_admin_info_t extended_drv_admin_table[ADMIN_TABLE_CHUNK]; -static int extended_drv_admin_table_size = 0; -mrlock_t extended_drv_admin_table_lock; - -/* Initialize the extended device driver admin table */ -void -device_driver_admin_table_init(void) -{ -#ifdef LATER - extended_drv_admin_table_size = 0; - mrinit(&extended_drv_admin_table_lock, - "extended_drv_admin_table_lock"); -#endif - FIXME("device_driver_admin_table_init"); -} -/* Add triple to - * the extended device administration info table. This is helpful - * for kernel startup code to put some hints before the hwgraph - * is setup - */ -void -device_driver_admin_table_update(char *name,char *label,char *value) -{ -#ifdef LATER - dev_admin_info_t *p; - - mrupdate(&extended_dev_admin_table_lock); - - /* Safety check that we haven't exceeded array limits */ - ASSERT(extended_drv_admin_table_size < ADMIN_TABLE_CHUNK); - - if (extended_drv_admin_table_size == ADMIN_TABLE_CHUNK) - goto out; - - /* Get the pointer to the entry in the table where we are - * going to put the new information - */ - p = &extended_drv_admin_table[extended_drv_admin_table_size++]; - - /* Allocate memory for the strings and copy them in */ - p->dai_name = (char *)kern_calloc(1,strlen(name)+1); - strcpy(p->dai_name,name); - p->dai_param_name = (char *)kern_calloc(1,strlen(label)+1); - strcpy(p->dai_param_name,label); - p->dai_param_val = (char *)kern_calloc(1,strlen(value)+1); - strcpy(p->dai_param_val,value); - -out: mrunlock(&extended_drv_admin_table_lock); -#endif - FIXME("device_driver_admin_table_update"); -} -/* - * keeps on adding the labelled info for each new (lbl,value) pair - * that it finds in the static dev admin table ( created by lboot) - * and the extended dev admin table ( created if at all by the kernel startup - * code) corresponding to a device in the hardware graph. 
- */ -void -device_admin_info_update(devfs_handle_t dev_vhdl) -{ -#ifdef LATER - int i = 0; - dev_admin_info_t *scan; - devfs_handle_t scan_vhdl; - - /* Check the static device administration info table */ - scan = dev_admin_table; - while (i < dev_admin_table_size) { - - scan_vhdl = hwgraph_path_to_dev(scan->dai_name); - if (scan_vhdl == dev_vhdl) { - device_admin_info_set(dev_vhdl, - scan->dai_param_name, - scan->dai_param_val); - } - if (scan_vhdl != NODEV) - hwgraph_vertex_unref(scan_vhdl); - scan++;i++; - - } - i = 0; - /* Check the extended device administration info table */ - scan = extended_dev_admin_table; - while (i < extended_dev_admin_table_size) { - scan_vhdl = hwgraph_path_to_dev(scan->dai_name); - if (scan_vhdl == dev_vhdl) { - device_admin_info_set(dev_vhdl, - scan->dai_param_name, - scan->dai_param_val); - } - if (scan_vhdl != NODEV) - hwgraph_vertex_unref(scan_vhdl); - scan++;i++; - - } - - -#endif - FIXME("device_admin_info_update"); -} - -/* looks up the static drv admin table ( created by the lboot) and the extended - * drv admin table (created if at all by the kernel startup code) - * for this driver specific administration info and adds it to the admin info - * associated with this device driver's object - */ -void -device_driver_admin_info_update(device_driver_t driver) -{ -#ifdef LATER - int i = 0; - dev_admin_info_t *scan; - - /* Check the static device driver administration info table */ - scan = drv_admin_table; - while (i < drv_admin_table_size) { - - if (strcmp(scan->dai_name,driver->dd_prefix) == 0) { - dev_admin_registry_add(&driver->dd_dev_admin_registry, - scan->dai_param_name, - scan->dai_param_val); - } - scan++;i++; - } - i = 0; - /* Check the extended device driver administration info table */ - scan = extended_drv_admin_table; - while (i < extended_drv_admin_table_size) { - - if (strcmp(scan->dai_name,driver->dd_prefix) == 0) { - dev_admin_registry_add(&driver->dd_dev_admin_registry, - scan->dai_param_name, - scan->dai_param_val); - } - scan++;i++; - } -#endif - FIXME("device_driver_admin_info_update"); -} - -/* =====Device Driver Support===== */ - - - -/* -** Generic device driver support routines for use by kernel modules that -** deal with device drivers (but NOT for use by the drivers themselves). -** EVERY registered driver currently in the system -- static or loadable -- -** has an entry in the device_driver_hash table. A pointer to such an entry -** serves as a generic device driver handle. -*/ - -#define DEVICE_DRIVER_HASH_SIZE 32 -#ifdef LATER -lock_t device_driver_lock[DEVICE_DRIVER_HASH_SIZE]; -device_driver_t device_driver_hash[DEVICE_DRIVER_HASH_SIZE]; -static struct string_table driver_prefix_string_table; -#endif - -/* -** Initialize device driver infrastructure. -*/ -void -device_driver_init(void) -{ -#ifdef LATER - int i; - extern void alenlist_init(void); - extern void hwgraph_init(void); - extern void device_desc_init(void); - - ASSERT(DEVICE_DRIVER_NONE == NULL); - alenlist_init(); - hwgraph_init(); - device_desc_init(); - - string_table_init(&driver_prefix_string_table); - - for (i=0; isdd_prefix); - if (!driver) - driver = device_driver_alloc(desc->sdd_prefix); - pri = device_driver_sysgen_thread_pri_get(desc->sdd_prefix); - device_driver_thread_pri_set(driver, pri); - device_driver_devsw_put(driver, desc->sdd_bdevsw, desc->sdd_cdevsw); - } -#endif - FIXME("device_driver_init"); -} - -/* -** Hash a prefix string into a hash table chain. 
-*/ -static int -driver_prefix_hash(char *prefix) -{ -#ifdef LATER - int accum = 0; - char nextchar; - - while (nextchar = *prefix++) - accum = accum ^ nextchar; - - return(accum % DEVICE_DRIVER_HASH_SIZE); -#else - FIXME("driver_prefix_hash"); - return(0); -#endif -} - - -/* -** Allocate a driver handle. -** Returns the driver handle, or NULL if the driver prefix -** already has a handle. -** -** Upper layers prevent races among device_driver_alloc, -** device_driver_free, and device_driver_get*. -*/ -device_driver_t -device_driver_alloc(char *prefix) -{ -#ifdef LATER - int which_hash; - device_driver_t new_driver; - unsigned long s; - - which_hash = driver_prefix_hash(prefix); - - new_driver = kern_calloc(1, sizeof(*new_driver)); - ASSERT(new_driver != NULL); - new_driver->dd_prev = NULL; - new_driver->dd_prefix = string_table_insert(&driver_prefix_string_table, prefix); - new_driver->dd_bdevsw = NULL; - new_driver->dd_cdevsw = NULL; - - dev_admin_registry_init(&new_driver->dd_dev_admin_registry); - device_driver_admin_info_update(new_driver); - - s = mutex_spinlock(&device_driver_lock[which_hash]); - -#if DEBUG - { - device_driver_t drvscan; - - /* Make sure we haven't already added a driver with this prefix */ - drvscan = device_driver_hash[which_hash]; - while (drvscan && - strcmp(drvscan->dd_prefix, prefix)) { - drvscan = drvscan->dd_next; - } - - ASSERT(!drvscan); - } -#endif /* DEBUG */ - - - /* Add new_driver to front of hash chain. */ - new_driver->dd_next = device_driver_hash[which_hash]; - if (new_driver->dd_next) - new_driver->dd_next->dd_prev = new_driver; - device_driver_hash[which_hash] = new_driver; - - mutex_spinunlock(&device_driver_lock[which_hash], s); - - return(new_driver); -#else - FIXME("device_driver_alloc"); - return((device_driver_t)0); -#endif -} - -/* -** Free a driver handle. -** -** Statically loaded drivers should never device_driver_free. -** Dynamically loaded drivers device_driver_free when either an -** unloaded driver is unregistered, or when an unregistered driver -** is unloaded. -*/ -void -device_driver_free(device_driver_t driver) -{ -#ifdef LATER - int which_hash; - unsigned long s; - - if (!driver) - return; - - which_hash = driver_prefix_hash(driver->dd_prefix); - - s = mutex_spinlock(&device_driver_lock[which_hash]); - -#if DEBUG - { - device_driver_t drvscan; - - /* Make sure we're dealing with the right list */ - drvscan = device_driver_hash[which_hash]; - while (drvscan && (drvscan != driver)) - drvscan = drvscan->dd_next; - - ASSERT(drvscan); - } -#endif /* DEBUG */ - - if (driver->dd_next) - driver->dd_next->dd_prev = driver->dd_prev; - - if (driver->dd_prev) - driver->dd_prev->dd_next = driver->dd_next; - else - device_driver_hash[which_hash] = driver->dd_next; - - mutex_spinunlock(&device_driver_lock[which_hash], s); - - driver->dd_next = NULL; /* sanity */ - driver->dd_prev = NULL; /* sanity */ - driver->dd_prefix = NULL; /* sanity */ - - if (driver->dd_bdevsw) { - driver->dd_bdevsw->d_driver = NULL; - driver->dd_bdevsw = NULL; - } - - if (driver->dd_cdevsw) { - if (driver->dd_cdevsw->d_str) { - str_free_mux_node(driver); - } - driver->dd_cdevsw->d_driver = NULL; - driver->dd_cdevsw = NULL; - } - - kern_free(driver); -#endif - FIXME("device_driver_free"); -} - - -/* -** Given a device driver prefix, return a handle to the caller. 
-*/ -device_driver_t -device_driver_get(char *prefix) -{ -#ifdef LATER - int which_hash; - device_driver_t drvscan; - unsigned long s; - - if (prefix == NULL) - return(NULL); - - which_hash = driver_prefix_hash(prefix); - - s = mutex_spinlock(&device_driver_lock[which_hash]); - - drvscan = device_driver_hash[which_hash]; - while (drvscan && strcmp(drvscan->dd_prefix, prefix)) - drvscan = drvscan->dd_next; - - mutex_spinunlock(&device_driver_lock[which_hash], s); - - return(drvscan); -#else - FIXME("device_driver_get"); - return((device_driver_t)0); -#endif -} - - -/* -** Given a block or char special file devfs_handle_t, find the -** device driver that controls it. -*/ -device_driver_t -device_driver_getbydev(devfs_handle_t device) -{ -#ifdef LATER - struct bdevsw *my_bdevsw; - struct cdevsw *my_cdevsw; - - my_cdevsw = get_cdevsw(device); - if (my_cdevsw != NULL) - return(my_cdevsw->d_driver); - - my_bdevsw = get_bdevsw(device); - if (my_bdevsw != NULL) - return(my_bdevsw->d_driver); - -#endif - FIXME("device_driver_getbydev"); - return((device_driver_t)0); -} - - -/* -** Associate a driver with bdevsw/cdevsw pointers. -** -** Statically loaded drivers are permanently and automatically associated -** with the proper bdevsw/cdevsw. Dynamically loaded drivers associate -** themselves when the driver is registered, and disassociate when the -** driver unregisters. -** -** Returns 0 on success, -1 on failure (devsw already associated with driver) -*/ -int -device_driver_devsw_put(device_driver_t driver, - struct bdevsw *my_bdevsw, - struct cdevsw *my_cdevsw) -{ -#ifdef LATER - int i; - - if (!driver) - return(-1); - - /* Trying to re-register data? */ - if (((my_bdevsw != NULL) && (driver->dd_bdevsw != NULL)) || - ((my_cdevsw != NULL) && (driver->dd_cdevsw != NULL))) - return(-1); - - if (my_bdevsw != NULL) { - driver->dd_bdevsw = my_bdevsw; - my_bdevsw->d_driver = driver; - for (i = 0; i < bdevmax; i++) { - if (driver->dd_bdevsw->d_flags == bdevsw[i].d_flags) { - bdevsw[i].d_driver = driver; - break; - } - } - } - - if (my_cdevsw != NULL) { - driver->dd_cdevsw = my_cdevsw; - my_cdevsw->d_driver = driver; - for (i = 0; i < cdevmax; i++) { - if (driver->dd_cdevsw->d_flags == cdevsw[i].d_flags) { - cdevsw[i].d_driver = driver; - break; - } - } - } -#endif - FIXME("device_driver_devsw_put"); - return(0); -} - - -/* -** Given a driver, return the corresponding bdevsw and cdevsw pointers. -*/ -void -device_driver_devsw_get( device_driver_t driver, - struct bdevsw **bdevswp, - struct cdevsw **cdevswp) -{ - if (!driver) { - *bdevswp = NULL; - *cdevswp = NULL; - } else { - *bdevswp = driver->dd_bdevsw; - *cdevswp = driver->dd_cdevsw; - } -} - -/* - * device_driver_thread_pri_set - * Given a driver try to set its thread priority. - * Returns 0 on success , -1 on failure. - */ -int -device_driver_thread_pri_set(device_driver_t driver,ilvl_t pri) -{ - if (!driver) - return(-1); - driver->dd_thread_pri = pri; - return(0); -} -/* - * device_driver_thread_pri_get - * Given a driver return the driver thread priority. - * If the driver is NULL return invalid driver thread - * priority. - */ -ilvl_t -device_driver_thread_pri_get(device_driver_t driver) -{ - if (driver) - return(driver->dd_thread_pri); - else - return(DRIVER_THREAD_PRI_INVALID); -} -/* -** Given a device driver, return it's handle (prefix). 
-*/ -void -device_driver_name_get(device_driver_t driver, char *buffer, int length) -{ - if (driver == NULL) - return; - - strncpy(buffer, driver->dd_prefix, length); -} - - -/* -** Associate a pointer-sized piece of information with a device. -*/ -void -device_info_set(devfs_handle_t device, void *info) -{ -#ifdef LATER - hwgraph_fastinfo_set(device, (arbitrary_info_t)info); -#endif - FIXME("device_info_set"); -} - - -/* -** Retrieve a pointer-sized piece of information associated with a device. -*/ -void * -device_info_get(devfs_handle_t device) -{ -#ifdef LATER - return((void *)hwgraph_fastinfo_get(device)); -#else - FIXME("device_info_get"); - return(NULL); -#endif -} - -/* - * Find the thread priority for a device, from the various - * sysgen files. - */ -int -device_driver_sysgen_thread_pri_get(char *dev_prefix) -{ -#ifdef LATER - int pri; - char *pri_s; - char *class; - - extern default_intr_pri; - extern disk_intr_pri; - extern serial_intr_pri; - extern parallel_intr_pri; - extern tape_intr_pri; - extern graphics_intr_pri; - extern network_intr_pri; - extern scsi_intr_pri; - extern audio_intr_pri; - extern video_intr_pri; - extern external_intr_pri; - extern tserialio_intr_pri; - - /* Check if there is a thread priority specified for - * this driver's thread thru admin hints. If so - * use that value. Otherwise set it to its default - * class value, otherwise set it to the default - * value. - */ - - if (pri_s = device_driver_admin_info_get(dev_prefix, - ADMIN_LBL_THREAD_PRI)) { - pri = atoi(pri_s); - } else if (class = device_driver_admin_info_get(dev_prefix, - ADMIN_LBL_THREAD_CLASS)) { - if (strcmp(class, "disk") == 0) - pri = disk_intr_pri; - else if (strcmp(class, "serial") == 0) - pri = serial_intr_pri; - else if (strcmp(class, "parallel") == 0) - pri = parallel_intr_pri; - else if (strcmp(class, "tape") == 0) - pri = tape_intr_pri; - else if (strcmp(class, "graphics") == 0) - pri = graphics_intr_pri; - else if (strcmp(class, "network") == 0) - pri = network_intr_pri; - else if (strcmp(class, "scsi") == 0) - pri = scsi_intr_pri; - else if (strcmp(class, "audio") == 0) - pri = audio_intr_pri; - else if (strcmp(class, "video") == 0) - pri = video_intr_pri; - else if (strcmp(class, "external") == 0) - pri = external_intr_pri; - else if (strcmp(class, "tserialio") == 0) - pri = tserialio_intr_pri; - else - pri = default_intr_pri; - } else - pri = default_intr_pri; - - if (pri > 255) - pri = 255; - else if (pri < 0) - pri = 0; - return pri; -#else - FIXME("device_driver_sysgen_thread_pri_get"); - return(-1); -#endif -} diff -Nru a/arch/ia64/sn/io/eeprom.c b/arch/ia64/sn/io/eeprom.c --- a/arch/ia64/sn/io/eeprom.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/io/eeprom.c Tue Mar 12 13:58:14 2002 @@ -1,14 +1,11 @@ /* - * * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) + * Copyright (C) 1999-2002 Silicon Graphics, Inc. All rights reserved. */ - /* * WARNING: There is more than one copy of this file in different isms. * All copies must be kept exactly in sync. @@ -28,37 +25,24 @@ * */ -/************************************************************************** - * * - * Copyright (C) 1999 Silicon Graphics, Inc. 
* - * * - * These coded instructions, statements, and computer programs contain * - * unpublished proprietary information of Silicon Graphics, Inc., and * - * are protected by Federal copyright law. They may not be disclosed * - * to third parties or copied or duplicated in any form, in whole or * - * in part, without the prior written consent of Silicon Graphics, Inc. * - * * - ************************************************************************** - */ - - #include #include #include #include +#include #include #include #include #include #include #include -#include -/* #include */ #include #include #include #include #include +#include +#include #if defined(EEPROM_DEBUG) #define db_printf(x) printk x @@ -421,7 +405,7 @@ } else { scp = ≻ - sc_init( &sc, nasid, BRL1_LOCALUART ); + sc_init( &sc, nasid, BRL1_LOCALHUB_UART ); } /* fill in msg with the opcode & params */ @@ -472,14 +456,12 @@ if ( IS_RUNNING_ON_SIMULATOR() ) return EEP_L1; -#ifdef BRINGUP #define FAIL \ { \ *uid = rtc_time(); \ printk( "rbrick_uid_get failed; using current time as uid\n" ); \ return EEP_OK; \ } -#endif /* BRINGUP */ ROUTER_LOCK(path); sc_init( &sc, nasid, path ); @@ -593,12 +575,10 @@ extern char *nic_vertex_info_get( devfs_handle_t ); extern void nic_vmc_check( devfs_handle_t, char * ); -#ifdef BRINGUP /* the following were lifted from nic.c - change later? */ #define MAX_INFO 2048 #define NEWSZ(ptr,sz) ((ptr) = kern_malloc((sz))) #define DEL(ptr) (kern_free((ptr))) -#endif /* BRINGUP */ char *eeprom_vertex_info_set( int component, int nasid, devfs_handle_t v, net_vec_t path ) @@ -1068,7 +1048,7 @@ if( (checksum & 0xff) != 0 ) { db_printf(( "read_chassis_ia: bad checksum\n" )); - db_printf(( "read_chassis_ia: target 0x%x uart 0x%x\n", + db_printf(( "read_chassis_ia: target 0x%x uart 0x%lx\n", sc->subch[subch].target, sc->uart )); return EEP_BAD_CHECKSUM; } @@ -1199,7 +1179,7 @@ if( (checksum & 0xff) != 0 ) { db_printf(( "read_board_ia: bad checksum\n" )); - db_printf(( "read_board_ia: target 0x%x uart 0x%x\n", + db_printf(( "read_board_ia: target 0x%x uart 0x%lx\n", sc->subch[subch].target, sc->uart )); return EEP_BAD_CHECKSUM; } diff -Nru a/arch/ia64/sn/io/efi-rtc.c b/arch/ia64/sn/io/efi-rtc.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/efi-rtc.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,185 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001 Silicon Graphics, Inc. + * Copyright (C) 2001 by Ralf Baechle + */ +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * No locking necessary when this is called from efirtc which protects us + * from racing by efi_rtc_lock. 
+ */ +#define __swizzle(addr) ((u8 *)((unsigned long)(addr) ^ 3)) +#define read_io_port(addr) (*(volatile u8 *) __swizzle(addr)) +#define write_io_port(addr, data) (*(volatile u8 *) __swizzle(addr) = (data)) + +#define TOD_SGS_M48T35 1 +#define TOD_DALLAS_DS1386 2 + +static unsigned long nvram_base = 0; +static int tod_chip_type; + +static int +get_tod_chip_type(void) +{ + unsigned char testval; + + write_io_port(RTC_DAL_CONTROL_ADDR, RTC_DAL_UPDATE_DISABLE); + write_io_port(RTC_DAL_DAY_ADDR, 0xff); + write_io_port(RTC_DAL_CONTROL_ADDR, RTC_DAL_UPDATE_ENABLE); + + testval = read_io_port(RTC_DAL_DAY_ADDR); + if (testval == 0xff) + return TOD_SGS_M48T35; + + return TOD_DALLAS_DS1386; +} + +efi_status_t +ioc3_get_time(efi_time_t *time, efi_time_cap_t *caps) +{ + if (!nvram_base) { + printk(KERN_CRIT "nvram_base is zero\n"); + return EFI_UNSUPPORTED; + } + + memset(time, 0, sizeof(*time)); + + switch (tod_chip_type) { + case TOD_SGS_M48T35: + write_io_port(RTC_SGS_CONTROL_ADDR, RTC_SGS_READ_PROTECT); + + time->year = BCD_TO_INT(read_io_port(RTC_SGS_YEAR_ADDR)) + YRREF; + time->month = BCD_TO_INT(read_io_port(RTC_SGS_MONTH_ADDR)); + time->day = BCD_TO_INT(read_io_port(RTC_SGS_DATE_ADDR)); + time->hour = BCD_TO_INT(read_io_port(RTC_SGS_HOUR_ADDR)); + time->minute = BCD_TO_INT(read_io_port(RTC_SGS_MIN_ADDR)); + time->second = BCD_TO_INT(read_io_port(RTC_SGS_SEC_ADDR)); + time->nanosecond = 0; + + write_io_port(RTC_SGS_CONTROL_ADDR, 0); + break; + + case TOD_DALLAS_DS1386: + write_io_port(RTC_DAL_CONTROL_ADDR, RTC_DAL_UPDATE_DISABLE); + + time->nanosecond = 0; + time->second = BCD_TO_INT(read_io_port(RTC_DAL_SEC_ADDR)); + time->minute = BCD_TO_INT(read_io_port(RTC_DAL_MIN_ADDR)); + time->hour = BCD_TO_INT(read_io_port(RTC_DAL_HOUR_ADDR)); + time->day = BCD_TO_INT(read_io_port(RTC_DAL_DATE_ADDR)); + time->month = BCD_TO_INT(read_io_port(RTC_DAL_MONTH_ADDR)); + time->year = BCD_TO_INT(read_io_port(RTC_DAL_YEAR_ADDR)) + YRREF; + + write_io_port(RTC_DAL_CONTROL_ADDR, RTC_DAL_UPDATE_ENABLE); + break; + + default: + break; + } + + if (caps) { + caps->resolution = 50000000; /* 50PPM */ + caps->accuracy = 1000; /* 1ms */ + caps->sets_to_zero = 0; + } + + return EFI_SUCCESS; +} + +static efi_status_t ioc3_set_time (efi_time_t *t) +{ + if (!nvram_base) { + printk(KERN_CRIT "nvram_base is zero\n"); + return EFI_UNSUPPORTED; + } + + switch (tod_chip_type) { + case TOD_SGS_M48T35: + write_io_port(RTC_SGS_CONTROL_ADDR, RTC_SGS_WRITE_ENABLE); + write_io_port(RTC_SGS_YEAR_ADDR, INT_TO_BCD((t->year - YRREF))); + write_io_port(RTC_SGS_MONTH_ADDR,INT_TO_BCD(t->month)); + write_io_port(RTC_SGS_DATE_ADDR, INT_TO_BCD(t->day)); + write_io_port(RTC_SGS_HOUR_ADDR, INT_TO_BCD(t->hour)); + write_io_port(RTC_SGS_MIN_ADDR, INT_TO_BCD(t->minute)); + write_io_port(RTC_SGS_SEC_ADDR, INT_TO_BCD(t->second)); + write_io_port(RTC_SGS_CONTROL_ADDR, 0); + break; + + case TOD_DALLAS_DS1386: + write_io_port(RTC_DAL_CONTROL_ADDR, RTC_DAL_UPDATE_DISABLE); + write_io_port(RTC_DAL_SEC_ADDR, INT_TO_BCD(t->second)); + write_io_port(RTC_DAL_MIN_ADDR, INT_TO_BCD(t->minute)); + write_io_port(RTC_DAL_HOUR_ADDR, INT_TO_BCD(t->hour)); + write_io_port(RTC_DAL_DATE_ADDR, INT_TO_BCD(t->day)); + write_io_port(RTC_DAL_MONTH_ADDR,INT_TO_BCD(t->month)); + write_io_port(RTC_DAL_YEAR_ADDR, INT_TO_BCD((t->year - YRREF))); + write_io_port(RTC_DAL_CONTROL_ADDR, RTC_DAL_UPDATE_ENABLE); + break; + + default: + break; + } + + return EFI_SUCCESS; +} + +/* The following two are not supported atm. 
*/ +static efi_status_t +ioc3_get_wakeup_time (efi_bool_t *enabled, efi_bool_t *pending, efi_time_t *tm) +{ + return EFI_UNSUPPORTED; +} + +static efi_status_t +ioc3_set_wakeup_time (efi_bool_t enabled, efi_time_t *tm) +{ + return EFI_UNSUPPORTED; +} + +/* + * It looks like the master IOC3 is usually on bus 0, device 4. Hope + * that's right + */ +static __init int efi_ioc3_time_init(void) +{ + struct pci_dev *dev; + static struct ioc3 *ioc3; + + dev = pci_find_slot(0, PCI_DEVFN(4, 0)); + if (!dev) { + printk(KERN_CRIT "Couldn't find master IOC3\n"); + + return -ENODEV; + } + + ioc3 = ioremap(pci_resource_start(dev, 0), pci_resource_len(dev, 0)); + nvram_base = (unsigned long) ioc3 + IOC3_BYTEBUS_DEV0; + + tod_chip_type = get_tod_chip_type(); + if (tod_chip_type == 1) + printk(KERN_NOTICE "TOD type is SGS M48T35\n"); + else if (tod_chip_type == 2) + printk(KERN_NOTICE "TOD type is Dallas DS1386\n"); + else + printk(KERN_CRIT "No or unknown TOD\n"); + + efi.get_time = ioc3_get_time; + efi.set_time = ioc3_set_time; + efi.get_wakeup_time = ioc3_get_wakeup_time; + efi.set_wakeup_time = ioc3_set_wakeup_time; + + return 0; +} + +module_init(efi_ioc3_time_init); diff -Nru a/arch/ia64/sn/io/hcl.c b/arch/ia64/sn/io/hcl.c --- a/arch/ia64/sn/io/hcl.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/hcl.c Tue Mar 12 13:58:15 2002 @@ -6,8 +6,7 @@ * * hcl - SGI's Hardware Graph compatibility layer. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #include @@ -49,7 +48,6 @@ /* * Some Global definitions. */ -spinlock_t hcl_spinlock; devfs_handle_t hcl_handle = NULL; invplace_t invplace_none = { @@ -142,6 +140,7 @@ { extern void string_table_init(struct string_table *); extern struct string_table label_string_table; + extern int init_ifconfig_net(void); int rv = 0; #if defined(CONFIG_HCL_DEBUG) && !defined(MODULE) @@ -153,8 +152,6 @@ printk ("\n%s: boot_options: 0x%0x\n", HCL_NAME, boot_options); #endif - spin_lock_init(&hcl_spinlock); - /* * Create the hwgraph_root on devfs. */ @@ -192,6 +189,12 @@ return(0); } + /* + * Initialize the ifconfgi_net driver that does network devices + * Persistent Naming. + */ + init_ifconfig_net(); + return(0); } @@ -238,8 +241,7 @@ { if (hcl_debug) { - printk("HCL: hwgraph_fastinfo_set handle 0x%p fastinfo %ld\n", - de, fastinfo); + printk("HCL: hwgraph_fastinfo_set handle 0x%p fastinfo %ld\n", (void *)de, fastinfo); } labelcl_info_replace_IDX(de, HWGRAPH_FASTINFO, fastinfo, NULL); @@ -466,7 +468,7 @@ * We need to clean up! */ printk(KERN_WARNING "HCL: Unable to set the connect point to it's parent 0x%p\n", - new_devfs_handle); + (void *)new_devfs_handle); } /* @@ -1044,30 +1046,6 @@ } /* - * hwgraph_cdevsw_get - returns the fops of the given devfs entry. - */ -struct file_operations * -hwgraph_cdevsw_get(devfs_handle_t de) -{ - struct file_operations *fops = devfs_get_ops(de); - - devfs_put_ops(de); /* FIXME: this may need to be moved to callers */ - return(fops); -} - -/* - * hwgraph_bdevsw_get - returns the fops of the given devfs entry. -*/ -struct file_operations * /* FIXME: shouldn't this be a blkdev? */ -hwgraph_bdevsw_get(devfs_handle_t de) -{ - struct file_operations *fops = devfs_get_ops(de); - - devfs_put_ops(de); /* FIXME: this may need to be moved to callers */ - return(fops); -} - -/* ** Inventory is now associated with a vertex in the graph. For items that ** belong in the inventory but have no vertex ** (e.g. 
old non-graph-aware drivers), we create a bogus vertex under the @@ -1550,6 +1528,4 @@ EXPORT_SYMBOL(hwgraph_path_to_dev); EXPORT_SYMBOL(hwgraph_block_device_get); EXPORT_SYMBOL(hwgraph_char_device_get); -EXPORT_SYMBOL(hwgraph_cdevsw_get); -EXPORT_SYMBOL(hwgraph_bdevsw_get); EXPORT_SYMBOL(hwgraph_vertex_name_get); diff -Nru a/arch/ia64/sn/io/hcl_util.c b/arch/ia64/sn/io/hcl_util.c --- a/arch/ia64/sn/io/hcl_util.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/hcl_util.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #include @@ -14,6 +13,7 @@ #include #include #include +#include #include #include #include diff -Nru a/arch/ia64/sn/io/hubdev.c b/arch/ia64/sn/io/hubdev.c --- a/arch/ia64/sn/io/hubdev.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/hubdev.c Tue Mar 12 13:58:15 2002 @@ -4,13 +4,14 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ +#include #include #include #include +#include #include #include #include @@ -42,7 +43,7 @@ ASSERT(attach_method); - callout = (hubdev_callout_t *)kmem_zalloc(sizeof(hubdev_callout_t), KM_SLEEP); + callout = (hubdev_callout_t *)snia_kmem_zalloc(sizeof(hubdev_callout_t), KM_SLEEP); ASSERT(callout); mutex_lock(&hubdev_callout_mutex); @@ -104,6 +105,9 @@ * Given a hub vertex, return the base address of the Hspec space * for that hub. */ + +#if defined(CONFIG_IA64_SGI_SN1) + caddr_t hubdev_prombase_get(devfs_handle_t hub) { @@ -124,3 +128,5 @@ return hinfo->h_cnodeid; } + +#endif /* CONFIG_IA64_SGI_SN1 */ diff -Nru a/arch/ia64/sn/io/huberror.c b/arch/ia64/sn/io/huberror.c --- a/arch/ia64/sn/io/huberror.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,475 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. 
- * Copyright (C) 2000 by Alan Mayer - */ - - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -extern void hubni_eint_init(cnodeid_t cnode); -extern void hubii_eint_init(cnodeid_t cnode); -extern void hubii_eint_handler (int irq, void *arg, struct pt_regs *ep); -extern void snia_error_intr_handler(int irq, void *devid, struct pt_regs *pt_regs); - -extern int maxcpus; - -#define HUB_ERROR_PERIOD (120 * HZ) /* 2 minutes */ - - -void -hub_error_clear(nasid_t nasid) -{ - int i; - hubreg_t idsr; - int sn; - - for(sn=0; snel_spool_cur_addr[0] = - SN0_ERROR_LOG(cnode)->el_spool_last_addr[0] = - REMOTE_HUB_PI_L(nasid, sn, PI_ERR_STACK_ADDR_A); - } - - if (REMOTE_HUB_PI_L(nasid, sn, PI_CPU_PRESENT_B)) { - SN0_ERROR_LOG(cnode)->el_spool_cur_addr[1] = - SN0_ERROR_LOG(cnode)->el_spool_last_addr[1] = - REMOTE_HUB_PI_L(nasid, sn, PI_ERR_STACK_ADDR_B); - } - } - - - PI_SPOOL_SIZE_BYTES = - ERR_STACK_SIZE_BYTES(REMOTE_HUB_L(nasid, PI_ERR_STACK_SIZE)); - -#ifdef BRINGUP -/* BRINGUP: The following code looks like a check to make sure -the prom set up the error spool correctly for 2 processors. I -don't think it is needed. */ - for(sn=0; snel_spool_cur_addr[1] = - SN0_ERROR_LOG(cnode)->el_spool_last_addr[1] = - REMOTE_HUB_PI_L(nasid, sn, PI_ERR_STACK_ADDR_B); - - } - } - } -#endif /* BRINGUP */ - - /* programming our own hub. Enable error_int_pend intr. - * If both present, CPU A takes CPU b's error interrupts and any - * generic ones. CPU B takes CPU A error ints. - */ - if (cause_intr_connect (SRB_ERR_IDX, - (intr_func_t)(hubpi_eint_handler), - SR_ALL_MASK|SR_IE)) { - cmn_err(ERR_WARN, - "hub_error_init: cause_intr_connect failed on %d", cnode); - } - } - else { - /* programming remote hub. The only valid reason that this - * is called will be on headless hubs. No interrupts - */ - for(sn=0; snhuberror_ticks = HUB_ERROR_PERIOD; - return; -} - -/* - * Function : hubii_eint_init - * Parameters : cnode - * Purpose : to initialize the hub iio error interrupt. - * Assumptions : Called once per hub, by the cpu which will ultimately - * handle this interrupt. - * Returns : None. - */ - - -void -hubii_eint_init(cnodeid_t cnode) -{ - int bit, rv; - ii_iidsr_u_t hubio_eint; - hubinfo_t hinfo; - cpuid_t intr_cpu; - devfs_handle_t hub_v; - ii_ilcsr_u_t ilcsr; - - hub_v = (devfs_handle_t)cnodeid_to_vertex(cnode); - ASSERT_ALWAYS(hub_v); - hubinfo_get(hub_v, &hinfo); - - ASSERT(hinfo); - ASSERT(hinfo->h_cnodeid == cnode); - - ilcsr.ii_ilcsr_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_ILCSR); - - if ((ilcsr.ii_ilcsr_fld_s.i_llp_stat & 0x2) == 0) { - /* - * HUB II link is not up. - * Just disable LLP, and don't connect any interrupts. 
- */ - ilcsr.ii_ilcsr_fld_s.i_llp_en = 0; - REMOTE_HUB_S(hinfo->h_nasid, IIO_ILCSR, ilcsr.ii_ilcsr_regval); - return; - } - /* Select a possible interrupt target where there is a free interrupt - * bit and also reserve the interrupt bit for this IO error interrupt - */ - intr_cpu = intr_heuristic(hub_v,0,INTRCONNECT_ANYBIT,II_ERRORINT,hub_v, - "HUB IO error interrupt",&bit); - if (intr_cpu == CPU_NONE) { - printk("hubii_eint_init: intr_reserve_level failed, cnode %d", cnode); - return; - } - - rv = intr_connect_level(intr_cpu, bit, 0,(intr_func_t)(NULL), - (void *)(long)hub_v, NULL); - synergy_intr_connect(bit, intr_cpu); - request_irq(bit_pos_to_irq(bit) + (intr_cpu << 8), hubii_eint_handler, 0, NULL, (void *)hub_v); - ASSERT_ALWAYS(rv >= 0); - hubio_eint.ii_iidsr_regval = 0; - hubio_eint.ii_iidsr_fld_s.i_enable = 1; - hubio_eint.ii_iidsr_fld_s.i_level = bit;/* Take the least significant bits*/ - hubio_eint.ii_iidsr_fld_s.i_node = COMPACT_TO_NASID_NODEID(cnode); - hubio_eint.ii_iidsr_fld_s.i_pi_id = cpuid_to_subnode(intr_cpu); - REMOTE_HUB_S(hinfo->h_nasid, IIO_IIDSR, hubio_eint.ii_iidsr_regval); - -} - -void -hubni_eint_init(cnodeid_t cnode) -{ - int intr_bit; - cpuid_t targ; - - - if ((targ = cnodeid_to_cpuid(cnode)) == CPU_NONE) - return; - - /* The prom chooses which cpu gets these interrupts, but we - * don't know which one it chose. We will register all of the - * cpus to be sure. This only costs us an irqaction per cpu. - */ - for (; targ < CPUS_PER_NODE; targ++) { - if (!cpu_enabled(targ) ) continue; - /* connect the INTEND1 bits. */ - for (intr_bit = XB_ERROR; intr_bit <= MSC_PANIC_INTR; intr_bit++) { - intr_connect_level(targ, intr_bit, II_ERRORINT, NULL, NULL, NULL); - } - request_irq(SGI_HUB_ERROR_IRQ + (targ << 8), snia_error_intr_handler, 0, NULL, NULL); - /* synergy masks are initialized in the prom to enable all interrupts. */ - /* We'll just leave them that way, here, for these interrupts. */ - } -} - - -/*ARGSUSED*/ -void -hubii_eint_handler (int irq, void *arg, struct pt_regs *ep) -{ - devfs_handle_t hub_v; - hubinfo_t hinfo; - ii_wstat_u_t wstat; - hubreg_t idsr; - - panic("Hubii interrupt\n"); -#ifdef ajm - /* - * If the NI has a problem, everyone has a problem. We shouldn't - * even attempt to handle other errors when an NI error is present. - */ - if (check_ni_errors()) { - hubni_error_handler("II interrupt", 1); - /* NOTREACHED */ - } - - /* two levels of casting avoids compiler warning.!! */ - hub_v = (devfs_handle_t)(long)(arg); - ASSERT(hub_v); - - hubinfo_get(hub_v, &hinfo); - - /* - * Identify the reason for error. - */ - wstat.ii_wstat_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_WSTAT); - - if (wstat.ii_wstat_fld_s.w_crazy) { - char *reason; - /* - * We can do a couple of things here. - * Look at the fields TX_MX_RTY/XT_TAIL_TO/XT_CRD_TO to check - * which of these caused the CRAZY bit to be set. - * You may be able to check if the Link is up really. - */ - if (wstat.ii_wstat_fld_s.w_tx_mx_rty) - reason = "Micro Packet Retry Timeout"; - else if (wstat.ii_wstat_fld_s.w_xt_tail_to) - reason = "Crosstalk Tail Timeout"; - else if (wstat.ii_wstat_fld_s.w_xt_crd_to) - reason = "Crosstalk Credit Timeout"; - else { - hubreg_t hubii_imem; - /* - * Check if widget 0 has been marked as shutdown, or - * if BTE 0/1 has been marked. 
- */ - hubii_imem = REMOTE_HUB_L(hinfo->h_nasid, IIO_IMEM); - if (hubii_imem & IIO_IMEM_W0ESD) - reason = "Hub Widget 0 has been Shutdown"; - else if (hubii_imem & IIO_IMEM_B0ESD) - reason = "BTE 0 has been shutdown"; - else if (hubii_imem & IIO_IMEM_B1ESD) - reason = "BTE 1 has been shutdown"; - else reason = "Unknown"; - - } - /* - * Note: we may never be able to print this, if the II talking - * to Xbow which hosts the console is dead. - */ - printk("Hub %d to Xtalk Link failed (II_ECRAZY) Reason: %s", - hinfo->h_cnodeid, reason); - } - - /* - * It's a toss as to which one among PRB/CRB to check first. - * Current decision is based on the severity of the errors. - * IO CRB errors tend to be more severe than PRB errors. - * - * It is possible for BTE errors to have been handled already, so we - * may not see any errors handled here. - */ - (void)hubiio_crb_error_handler(hub_v, hinfo); - (void)hubiio_prb_error_handler(hub_v, hinfo); - /* - * If we reach here, it indicates crb/prb handlers successfully - * handled the error. So, re-enable II to send more interrupt - * and return. - */ - REMOTE_HUB_S(hinfo->h_nasid, IIO_IECLR, 0xffffff); - idsr = REMOTE_HUB_L(hinfo->h_nasid, IIO_IIDSR) & ~IIO_IIDSR_SENT_MASK; - REMOTE_HUB_S(hinfo->h_nasid, IIO_IIDSR, idsr); -#endif /* ajm */ -} diff -Nru a/arch/ia64/sn/io/hubspc.c b/arch/ia64/sn/io/hubspc.c --- a/arch/ia64/sn/io/hubspc.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/io/hubspc.c Tue Mar 12 13:58:14 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ /* @@ -19,6 +18,8 @@ #include #include #include +#include +#include #include #include #include @@ -26,18 +27,12 @@ #include #include #include -#include -#include +#include #include - - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include -#include +#include +#include #include -#endif - -#include +#include /* Uncomment the following line for tracing */ @@ -45,10 +40,6 @@ int hubspc_devflag = D_MP; -extern void *device_info_get(devfs_handle_t device); -extern void device_info_set(devfs_handle_t device, void *info); - - /***********************************************************************/ /* CPU Prom Space */ @@ -61,7 +52,7 @@ }cpuprom_info_t; static cpuprom_info_t *cpuprom_head; -static spinlock_t cpuprom_spinlock; +spinlock_t cpuprom_spinlock; #define PROM_LOCK() mutex_spinlock(&cpuprom_spinlock) #define PROM_UNLOCK(s) mutex_spinunlock(&cpuprom_spinlock, (s)) @@ -127,9 +118,8 @@ return 0; } -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) +#if defined(CONFIG_IA64_SGI_SN1) #define SN_PROMVERSION INV_IP35PROM -#endif /* Add "detailed" labelled inventory information to the * prom vertex @@ -159,7 +149,6 @@ cpuprom_inventory_info->im_rev = IP27CONFIG.pvers_rev; cpuprom_inventory_info->im_version = IP27CONFIG.pvers_vers; - /* Store this info as labelled information hanging off the * prom device vertex */ @@ -172,41 +161,17 @@ sizeof(invent_miscinfo_t)); } -int -cpuprom_attach(devfs_handle_t node) -{ - devfs_handle_t prom_dev; - - hwgraph_char_device_add(node, EDGE_LBL_PROM, "hubspc_", &prom_dev); -#ifdef HUBSPC_DEBUG - printf("hubspc: prom_attach hub: 0x%x prom: 0x%x\n", node, prom_dev); -#endif /* HUBSPC_DEBUG */ - device_inventory_add(prom_dev, INV_PROM, 
SN_PROMVERSION, - (major_t)0, (minor_t)0, 0); - - /* Add additional inventory info about the cpu prom like - * revision & version numbers etc. - */ - cpuprom_detailed_inventory_info_add(prom_dev,node); - device_info_set(prom_dev, (void*)(ulong)HUBSPC_PROM); - prominfo_add(node, prom_dev); - - return (0); -} - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) #define FPROM_CONFIG_ADDR MD_JUNK_BUS_TIMING #define FPROM_ENABLE_MASK MJT_FPROM_ENABLE_MASK #define FPROM_ENABLE_SHFT MJT_FPROM_ENABLE_SHFT #define FPROM_SETUP_MASK MJT_FPROM_SETUP_MASK #define FPROM_SETUP_SHFT MJT_FPROM_SETUP_SHFT -#endif /*ARGSUSED*/ int cpuprom_map(devfs_handle_t dev, vhandl_t *vt, off_t addr, size_t len) { - int errcode; + int errcode = 0; caddr_t kvaddr; devfs_handle_t node; cnodeid_t cnode; @@ -220,7 +185,7 @@ kvaddr = hubdev_prombase_get(node); cnode = hubdev_cnodeid_get(node); #ifdef HUBSPC_DEBUG - printf("cpuprom_map: hubnode %d kvaddr 0x%x\n", node, kvaddr); + printk("cpuprom_map: hubnode %d kvaddr 0x%x\n", node, kvaddr); #endif if (len > RBOOT_SIZE) @@ -251,6 +216,7 @@ } return (errcode); } +#endif /* CONFIG_IA64_SGI_SN1 */ /*ARGSUSED*/ int @@ -263,8 +229,6 @@ /* Base Hub Space Driver */ /***********************************************************************/ -// extern int l1_attach( devfs_handle_t ); - /* * hubspc_init * Registration of the hubspc devices with the hub manager @@ -277,24 +241,21 @@ */ /* The reference counters */ +#if defined(CONFIG_IA64_SGI_SN1) hubdev_register(mem_refcnt_attach); - - /* Prom space */ - hubdev_register(cpuprom_attach); +#endif #if defined(CONFIG_SERIAL_SGI_L1_PROTOCOL) /* L1 system controller link */ if ( !IS_RUNNING_ON_SIMULATOR() ) { /* initialize the L1 link */ - void l1_cons_init( l1sc_t *sc ); - elsc_t *get_elsc(void); - - l1_cons_init((l1sc_t *)get_elsc()); + extern void l1_init(void); + l1_init(); } #endif #ifdef HUBSPC_DEBUG - printf("hubspc_init: Completed\n"); + printk("hubspc_init: Completed\n"); #endif /* HUBSPC_DEBUG */ /* Initialize spinlocks */ mutex_spinlock_init(&cpuprom_spinlock); @@ -304,26 +265,7 @@ int hubspc_open(devfs_handle_t *devp, mode_t oflag, int otyp, cred_t *crp) { - int errcode = 0; - - switch ((hubspc_subdevice_t)(ulong)device_info_get(*devp)) { - case HUBSPC_REFCOUNTERS: - errcode = mem_refcnt_open(devp, oflag, otyp, crp); - break; - - case HUBSPC_PROM: - break; - - default: - errcode = ENODEV; - } - -#ifdef HUBSPC_DEBUG - printf("hubspc_open: Completed open for type %d\n", - (hubspc_subdevice_t)(ulong)device_info_get(*devp)); -#endif /* HUBSPC_DEBUG */ - - return (errcode); + return (0); } @@ -331,25 +273,7 @@ int hubspc_close(devfs_handle_t dev, int oflag, int otyp, cred_t *crp) { - int errcode = 0; - - switch ((hubspc_subdevice_t)(ulong)device_info_get(dev)) { - case HUBSPC_REFCOUNTERS: - errcode = mem_refcnt_close(dev, oflag, otyp, crp); - break; - - case HUBSPC_PROM: - break; - default: - errcode = ENODEV; - } - -#ifdef HUBSPC_DEBUG - printf("hubspc_close: Completed close for type %d\n", - (hubspc_subdevice_t)(ulong)device_info_get(dev)); -#endif /* HUBSPC_DEBUG */ - - return (errcode); + return (0); } /* ARGSUSED */ @@ -357,7 +281,6 @@ hubspc_map(devfs_handle_t dev, vhandl_t *vt, off_t off, size_t len, uint prot) { /*REFERENCED*/ - hubspc_subdevice_t subdevice; int errcode = 0; /* check validity of request */ @@ -365,30 +288,6 @@ return ENXIO; } - subdevice = (hubspc_subdevice_t)(ulong)device_info_get(dev); - -#ifdef HUBSPC_DEBUG - printf("hubspc_map: subdevice: %d vaddr: 0x%x phyaddr: 
0x%x len: 0x%x\n", - subdevice, v_getaddr(vt), off, len); -#endif /* HUBSPC_DEBUG */ - - switch ((hubspc_subdevice_t)(ulong)device_info_get(dev)) { - case HUBSPC_REFCOUNTERS: - errcode = mem_refcnt_mmap(dev, vt, off, len, prot); - break; - - case HUBSPC_PROM: - errcode = cpuprom_map(dev, vt, off, len); - break; - default: - errcode = ENODEV; - } - -#ifdef HUBSPC_DEBUG - printf("hubspc_map finished: spctype: %d vaddr: 0x%x len: 0x%x\n", - (hubspc_subdevice_t)(ulong)device_info_get(dev), v_getaddr(vt), len); -#endif /* HUBSPC_DEBUG */ - return errcode; } @@ -396,21 +295,7 @@ int hubspc_unmap(devfs_handle_t dev, vhandl_t *vt) { - int errcode = 0; - - switch ((hubspc_subdevice_t)(ulong)device_info_get(dev)) { - case HUBSPC_REFCOUNTERS: - errcode = mem_refcnt_unmap(dev, vt); - break; - - case HUBSPC_PROM: - errcode = cpuprom_unmap(dev, vt); - break; - - default: - errcode = ENODEV; - } - return errcode; + return (0); } @@ -423,19 +308,6 @@ cred_t *cred_p, int *rvalp) { - int errcode = 0; - - switch ((hubspc_subdevice_t)(ulong)device_info_get(dev)) { - case HUBSPC_REFCOUNTERS: - errcode = mem_refcnt_ioctl(dev, cmd, arg, mode, cred_p, rvalp); - break; - - case HUBSPC_PROM: - break; - - default: - errcode = ENODEV; - } - return errcode; + return (0); } diff -Nru a/arch/ia64/sn/io/ifconfig_net.c b/arch/ia64/sn/io/ifconfig_net.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/ifconfig_net.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,298 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * ifconfig_net - SGI's Persistent Network Device names. + * + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define SGI_IFCONFIG_NET "SGI-PERSISTENT NETWORK DEVICE NAME DRIVER" +#define SGI_IFCONFIG_NET_VERSION "1.0" + +/* + * Some Global definitions. + */ +devfs_handle_t ifconfig_net_handle = NULL; +unsigned long ifconfig_net_debug = 0; + +/* + * ifconfig_net_open - Opens the special device node "/devhw/.ifconfig_net". + */ +static int ifconfig_net_open(struct inode * inode, struct file * filp) +{ + if (ifconfig_net_debug) { + printk("ifconfig_net_open called.\n"); + } + + return(0); + +} + +/* + * ifconfig_net_close - Closes the special device node "/devhw/.ifconfig_net". + */ +static int ifconfig_net_close(struct inode * inode, struct file * filp) +{ + + if (ifconfig_net_debug) { + printk("ifconfig_net_close called.\n"); + } + + return(0); +} + +/* + * assign_ifname - Assign the next available interface name from the persistent list. + */ +void +assign_ifname(struct net_device *dev, + struct ifname_num *ifname_num) + +{ + + /* + * Handle eth devices. + */ + if ( (memcmp(dev->name, "eth", 3) == 0) ) { + if (ifname_num->next_eth != -1) { + /* + * Assign it the next available eth interface number. + */ + memset(dev->name, 0, strlen(dev->name)); + sprintf(dev->name, "eth%d", (int)ifname_num->next_eth); + ifname_num->next_eth++; + } + + return; + } + + /* + * Handle fddi devices. + */ + if ( (memcmp(dev->name, "fddi", 4) == 0) ) { + if (ifname_num->next_fddi != -1) { + /* + * Assign it the next available fddi interface number. 
+ */ + memset(dev->name, 0, strlen(dev->name)); + sprintf(dev->name, "fddi%d", (int)ifname_num->next_fddi); + ifname_num->next_fddi++; + } + + return; + } + + /* + * Handle hip devices. + */ + if ( (memcmp(dev->name, "hip", 3) == 0) ) { + if (ifname_num->next_hip != -1) { + /* + * Assign it the next available hip interface number. + */ + memset(dev->name, 0, strlen(dev->name)); + sprintf(dev->name, "hip%d", (int)ifname_num->next_hip); + ifname_num->next_hip++; + } + + return; + } + + /* + * Handle tr devices. + */ + if ( (memcmp(dev->name, "tr", 2) == 0) ) { + if (ifname_num->next_tr != -1) { + /* + * Assign it the next available tr interface number. + */ + memset(dev->name, 0, strlen(dev->name)); + sprintf(dev->name, "tr%d", (int)ifname_num->next_tr); + ifname_num->next_tr++; + } + + return; + } + + /* + * Handle fc devices. + */ + if ( (memcmp(dev->name, "fc", 2) == 0) ) { + if (ifname_num->next_fc != -1) { + /* + * Assign it the next available fc interface number. + */ + memset(dev->name, 0, strlen(dev->name)); + sprintf(dev->name, "fc%d", (int)ifname_num->next_fc); + ifname_num->next_fc++; + } + + return; + } +} + +/* + * find_persistent_ifname: Returns the entry that was seen in previous boot. + */ +struct ifname_MAC * +find_persistent_ifname(struct net_device *dev, + struct ifname_MAC *ifname_MAC) + +{ + + while (ifname_MAC->addr_len) { + if (memcmp(dev->dev_addr, ifname_MAC->dev_addr, dev->addr_len) == 0) + return(ifname_MAC); + + ifname_MAC++; + } + + return(NULL); +} + +/* + * ifconfig_net_ioctl: ifconfig_net driver ioctl interface. + */ +static int ifconfig_net_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, unsigned long arg) +{ + + extern struct net_device *__dev_get_by_name(const char *); +#ifdef CONFIG_NET + struct net_device *dev; + struct ifname_MAC *found; + char temp[64]; +#endif + struct ifname_MAC *ifname_MAC; + struct ifname_MAC *new_devices, *temp_new_devices; + struct ifname_num *ifname_num; + unsigned long size; + + + if (ifconfig_net_debug) { + printk("HCL: hcl_ioctl called.\n"); + } + + /* + * Read in the header and see how big of a buffer we really need to + * allocate. + */ + ifname_num = (struct ifname_num *) kmalloc(sizeof(struct ifname_num), + GFP_KERNEL); + copy_from_user( ifname_num, (char *) arg, sizeof(struct ifname_num)); + size = ifname_num->size; + kfree(ifname_num); + ifname_num = (struct ifname_num *) kmalloc(size, GFP_KERNEL); + ifname_MAC = (struct ifname_MAC *) ((char *)ifname_num + (sizeof(struct ifname_num)) ); + + copy_from_user( ifname_num, (char *) arg, size); + new_devices = kmalloc(size - sizeof(struct ifname_num), GFP_KERNEL); + temp_new_devices = new_devices; + + memset(new_devices, 0, size - sizeof(struct ifname_num)); + +#ifdef CONFIG_NET + /* + * Go through the net device entries and make them persistent! + */ + for (dev = dev_base; dev != NULL; dev = dev->next) { + /* + * Skip NULL entries or "lo" + */ + if ( (dev->addr_len == 0) || ( !strncmp(dev->name, "lo", strlen(dev->name))) ){ + continue; + } + + /* + * See if we have a persistent interface name for this device. + */ + found = NULL; + found = find_persistent_ifname(dev, ifname_MAC); + if (found) { + strcpy(dev->name, found->name); + } else { + /* Never seen this before .. */ + assign_ifname(dev, ifname_num); + + /* + * Save the information for the next boot. 
+ */ + sprintf(temp,"%s %02x:%02x:%02x:%02x:%02x:%02x\n", dev->name, + dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2], + dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]); + strcpy(temp_new_devices->name, dev->name); + temp_new_devices->addr_len = dev->addr_len; + memcpy(temp_new_devices->dev_addr, dev->dev_addr, dev->addr_len); + temp_new_devices++; + } + + } +#endif + + /* + * Copy back to the User Buffer area any new devices encountered. + */ + copy_to_user((char *)arg + (sizeof(struct ifname_num)), new_devices, + size - sizeof(struct ifname_num)); + + return(0); + +} + +struct file_operations ifconfig_net_fops = { + ioctl:ifconfig_net_ioctl, /* ioctl */ + open:ifconfig_net_open, /* open */ + release:ifconfig_net_close /* release */ +}; + + +/* + * init_ifconfig_net() - Boot time initialization. Ensure that it is called + * after devfs has been initialized. + * + */ +#ifdef MODULE +int init_module (void) +#else +int __init init_ifconfig_net(void) +#endif +{ + ifconfig_net_handle = NULL; + ifconfig_net_handle = hwgraph_register(hwgraph_root, ".ifconfig_net", + 0, DEVFS_FL_AUTO_DEVNUM, + 0, 0, + S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0, + &ifconfig_net_fops, NULL); + + if (ifconfig_net_handle == NULL) { + panic("Unable to create SGI PERSISTENT NETWORK DEVICE Name Driver.\n"); + } + + return(0); + +} diff -Nru a/arch/ia64/sn/io/invent.c b/arch/ia64/sn/io/invent.c --- a/arch/ia64/sn/io/invent.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/invent.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ /* diff -Nru a/arch/ia64/sn/io/io.c b/arch/ia64/sn/io/io.c --- a/arch/ia64/sn/io/io.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/io.c Tue Mar 12 13:58:15 2002 @@ -1,36 +1,50 @@ -/* $Id$ +/* $Id: io.c,v 1.2 2001/06/26 14:02:43 pfg Exp $ * * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All Rights Reserved. */ #include -#include #include #include #include -#include +#include #include #include #include #include +#include #include #include #include #include #include -#include #include #include #include -#include #include extern xtalk_provider_t hub_provider; +extern void hub_intr_init(devfs_handle_t hubv); + + +/* + * hub_device_desc_update + * Update the passed in device descriptor with the actual the + * target cpu number and interrupt priority level. + * NOTE : These might be the same as the ones passed in thru + * the descriptor. + */ +void +hub_device_desc_update(device_desc_t dev_desc, + ilvl_t intr_swlevel, + cpuid_t cpu) +{ +} + /* * Perform any initializations needed to support hub-based I/O. @@ -63,7 +77,7 @@ /* * Setup pio structures needed for a particular hub. 
*/ -static void +void hub_pio_init(devfs_handle_t hubv) { xwidgetnum_t widget; @@ -386,7 +400,7 @@ /* ARGSUSED */ -static void +void hub_dma_init(devfs_handle_t hubv) { } @@ -411,7 +425,7 @@ xwidgetnum_t widget = xwidget_info_id_get(widget_info); devfs_handle_t hubv = xwidget_info_master_get(widget_info); - dmamap = kern_malloc(sizeof(struct hub_dmamap_s)); + dmamap = kmalloc(sizeof(struct hub_dmamap_s), GFP_ATOMIC); dmamap->hdma_xtalk_info.xd_dev = dev; dmamap->hdma_xtalk_info.xd_target = widget; dmamap->hdma_hub = hubv; @@ -454,9 +468,9 @@ if (!(dmamap->hdma_flags & HUB_DMAMAP_IS_FIXED)) { vhdl = dmamap->hdma_xtalk_info.xd_dev; #if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("%v: hub_dmamap_addr re-uses dmamap.\n",vhdl); + printk(KERN_WARNING "%v: hub_dmamap_addr re-uses dmamap.\n",vhdl); #else - PRINT_WARNING("0x%x: hub_dmamap_addr re-uses dmamap.\n", vhdl); + printk(KERN_WARNING "%p: hub_dmamap_addr re-uses dmamap.\n", (void *)vhdl); #endif } } else { @@ -487,9 +501,9 @@ if (!(hub_dmamap->hdma_flags & HUB_DMAMAP_IS_FIXED)) { vhdl = hub_dmamap->hdma_xtalk_info.xd_dev; #if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("%v: hub_dmamap_list re-uses dmamap\n",vhdl); + printk(KERN_WARNING "%v: hub_dmamap_list re-uses dmamap\n",vhdl); #else - PRINT_WARNING("0x%x: hub_dmamap_list re-uses dmamap\n", vhdl); + printk(KERN_WARNING "%p: hub_dmamap_list re-uses dmamap\n", (void *)vhdl); #endif } } else { @@ -516,9 +530,9 @@ if (!(hub_dmamap->hdma_flags & HUB_DMAMAP_IS_FIXED)) { vhdl = hub_dmamap->hdma_xtalk_info.xd_dev; #if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("%v: hub_dmamap_done already done with dmamap\n",vhdl); + printk(KERN_WARNING "%v: hub_dmamap_done already done with dmamap\n",vhdl); #else - PRINT_WARNING("0x%x: hub_dmamap_done already done with dmamap\n", vhdl); + printk(KERN_WARNING "%p: hub_dmamap_done already done with dmamap\n", (void *)vhdl); #endif } } @@ -581,329 +595,6 @@ -/* INTERRUPT MANAGEMENT */ - -/* ARGSUSED */ -static void -hub_intr_init(devfs_handle_t hubv) -{ -} - -/* - * hub_device_desc_update - * Update the passed in device descriptor with the actual the - * target cpu number and interrupt priority level. - * NOTE : These might be the same as the ones passed in thru - * the descriptor. - */ -static void -hub_device_desc_update(device_desc_t dev_desc, - ilvl_t intr_swlevel, - cpuid_t cpu) -{ - char cpuname[40]; - - /* Store the interrupt priority level in the device descriptor */ - device_desc_intr_swlevel_set(dev_desc, intr_swlevel); - - /* Convert the cpuid to the vertex handle in the hwgraph and - * save it in the device descriptor. - */ - sprintf(cpuname,"/hw/cpunum/%ld",cpu); - device_desc_intr_target_set(dev_desc, - hwgraph_path_to_dev(cpuname)); -} - -int allocate_my_bit = INTRCONNECT_ANYBIT; - -/* - * Allocate resources required for an interrupt as specified in dev_desc. - * Returns a hub interrupt handle on success, or 0 on failure. 
- */ -static hub_intr_t -do_hub_intr_alloc(devfs_handle_t dev, /* which crosstalk device */ - device_desc_t dev_desc, /* device descriptor */ - devfs_handle_t owner_dev, /* owner of this interrupt, if known */ - int uncond_nothread) /* unconditionally non-threaded */ -{ - cpuid_t cpu = (cpuid_t)0; /* cpu to receive interrupt */ - int cpupicked = 0; - int bit; /* interrupt vector */ - /*REFERENCED*/ - int intr_resflags = 0; - hub_intr_t intr_hdl; - cnodeid_t nodeid; /* node to receive interrupt */ - /*REFERENCED*/ - nasid_t nasid; /* nasid to receive interrupt */ - struct xtalk_intr_s *xtalk_info; - iopaddr_t xtalk_addr; /* xtalk addr on hub to set intr */ - xwidget_info_t xwidget_info; /* standard crosstalk widget info handle */ - char *intr_name = NULL; - ilvl_t intr_swlevel; - extern int default_intr_pri; -#ifdef CONFIG_IA64_SGI_SN1 - extern void synergy_intr_alloc(int, int); -#endif - - /* - * If caller didn't explicily specify a device descriptor, see if there's - * a default descriptor associated with the device. - */ - if (!dev_desc) - dev_desc = device_desc_default_get(dev); - - if (dev_desc) { - intr_name = device_desc_intr_name_get(dev_desc); - intr_swlevel = device_desc_intr_swlevel_get(dev_desc); - if (dev_desc->flags & D_INTR_ISERR) { - intr_resflags = II_ERRORINT; - } else if (!uncond_nothread && !(dev_desc->flags & D_INTR_NOTHREAD)) { - intr_resflags = II_THREADED; - } else { - /* Neither an error nor a thread. */ - intr_resflags = 0; - } - } else { - intr_swlevel = default_intr_pri; - if (!uncond_nothread) - intr_resflags = II_THREADED; - } - - /* XXX - Need to determine if the interrupt should be threaded. */ - - /* If the cpu has not been picked already then choose a candidate - * interrupt target and reserve the interrupt bit - */ -#if defined(NEW_INTERRUPTS) - if (!cpupicked) { - cpu = intr_heuristic(dev,dev_desc,allocate_my_bit, - intr_resflags,owner_dev, - intr_name,&bit); - } -#endif - - /* At this point we SHOULD have a valid cpu */ - if (cpu == CPU_NONE) { -#if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("%v hub_intr_alloc could not allocate interrupt\n", - owner_dev); -#else - PRINT_WARNING("0x%x hub_intr_alloc could not allocate interrupt\n", - owner_dev); -#endif - return(0); - - } - - /* If the cpu has been picked already (due to the bridge data - * corruption bug) then try to reserve an interrupt bit . - */ -#if defined(NEW_INTERRUPTS) - if (cpupicked) { - bit = intr_reserve_level(cpu, allocate_my_bit, - intr_resflags, - owner_dev, intr_name); - if (bit < 0) { -#if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("Could not reserve an interrupt bit for cpu " - " %d and dev %v\n", - cpu,owner_dev); -#else - PRINT_WARNING("Could not reserve an interrupt bit for cpu " - " %d and dev 0x%x\n", - cpu, owner_dev); -#endif - - return(0); - } - } -#endif /* NEW_INTERRUPTS */ - - nodeid = cpuid_to_cnodeid(cpu); - nasid = cpuid_to_nasid(cpu); - xtalk_addr = HUBREG_AS_XTALKADDR(nasid, PIREG(PI_INT_PEND_MOD, cpuid_to_subnode(cpu))); - - /* - * Allocate an interrupt handle, and fill it in. There are two - * pieces to an interrupt handle: the piece needed by generic - * xtalk code which is used by crosstalk device drivers, and - * the piece needed by low-level IP27 hardware code. - */ - intr_hdl = kmem_alloc_node(sizeof(struct hub_intr_s), KM_NOSLEEP, nodeid); - ASSERT_ALWAYS(intr_hdl); - - /* - * Fill in xtalk information for generic xtalk interfaces that - * operate on xtalk_intr_hdl's. 
- */ - xtalk_info = &intr_hdl->i_xtalk_info; - xtalk_info->xi_dev = dev; - xtalk_info->xi_vector = bit; - xtalk_info->xi_addr = xtalk_addr; - - /* - * Regardless of which CPU we ultimately interrupt, a given crosstalk - * widget always handles interrupts (and PIO and DMA) through its - * designated "master" crosstalk provider. - */ - xwidget_info = xwidget_info_get(dev); - if (xwidget_info) - xtalk_info->xi_target = xwidget_info_masterid_get(xwidget_info); - - /* Fill in low level hub information for hub_* interrupt interface */ - intr_hdl->i_swlevel = intr_swlevel; - intr_hdl->i_cpuid = cpu; - intr_hdl->i_bit = bit; - intr_hdl->i_flags = HUB_INTR_IS_ALLOCED; - - /* Store the actual interrupt priority level & interrupt target - * cpu back in the device descriptor. - */ - hub_device_desc_update(dev_desc, intr_swlevel, cpu); -#ifdef CONFIG_IA64_SGI_SN1 - synergy_intr_alloc((int)bit, (int)cpu); -#endif - return(intr_hdl); -} - -/* - * Allocate resources required for an interrupt as specified in dev_desc. - * Returns a hub interrupt handle on success, or 0 on failure. - */ -hub_intr_t -hub_intr_alloc( devfs_handle_t dev, /* which crosstalk device */ - device_desc_t dev_desc, /* device descriptor */ - devfs_handle_t owner_dev) /* owner of this interrupt, if known */ -{ - return(do_hub_intr_alloc(dev, dev_desc, owner_dev, 0)); -} - -/* - * Allocate resources required for an interrupt as specified in dev_desc. - * Uncondtionally request non-threaded, regardless of what the device - * descriptor might say. - * Returns a hub interrupt handle on success, or 0 on failure. - */ -hub_intr_t -hub_intr_alloc_nothd(devfs_handle_t dev, /* which crosstalk device */ - device_desc_t dev_desc, /* device descriptor */ - devfs_handle_t owner_dev) /* owner of this interrupt, if known */ -{ - return(do_hub_intr_alloc(dev, dev_desc, owner_dev, 1)); -} - -/* - * Free resources consumed by intr_alloc. - */ -void -hub_intr_free(hub_intr_t intr_hdl) -{ - cpuid_t cpu = intr_hdl->i_cpuid; - int bit = intr_hdl->i_bit; - xtalk_intr_t xtalk_info; - - if (intr_hdl->i_flags & HUB_INTR_IS_CONNECTED) { - /* Setting the following fields in the xtalk interrupt info - * clears the interrupt target register in the xtalk user - */ - xtalk_info = &intr_hdl->i_xtalk_info; - xtalk_info->xi_dev = NODEV; - xtalk_info->xi_vector = 0; - xtalk_info->xi_addr = 0; - hub_intr_disconnect(intr_hdl); - } - - if (intr_hdl->i_flags & HUB_INTR_IS_ALLOCED) - kfree(intr_hdl); - -#if defined(NEW_INTERRUPTS) - intr_unreserve_level(cpu, bit); -#endif -} - - -/* - * Associate resources allocated with a previous hub_intr_alloc call with the - * described handler, arg, name, etc. 
- */ -/*ARGSUSED*/ -int -hub_intr_connect( hub_intr_t intr_hdl, /* xtalk intr resource handle */ - intr_func_t intr_func, /* xtalk intr handler */ - void *intr_arg, /* arg to intr handler */ - xtalk_intr_setfunc_t setfunc, /* func to set intr hw */ - void *setfunc_arg, /* arg to setfunc */ - void *thread) /* intr thread to use */ -{ - int rv; - cpuid_t cpu = intr_hdl->i_cpuid; - int bit = intr_hdl->i_bit; -#ifdef CONFIG_IA64_SGI_SN1 - extern int synergy_intr_connect(int, int); -#endif - - ASSERT(intr_hdl->i_flags & HUB_INTR_IS_ALLOCED); - -#if defined(NEW_INTERRUPTS) - rv = intr_connect_level(cpu, bit, intr_hdl->i_swlevel, - intr_func, intr_arg, NULL); - if (rv < 0) - return(rv); - -#endif - intr_hdl->i_xtalk_info.xi_setfunc = setfunc; - intr_hdl->i_xtalk_info.xi_sfarg = setfunc_arg; - - if (setfunc) (*setfunc)((xtalk_intr_t)intr_hdl); - - intr_hdl->i_flags |= HUB_INTR_IS_CONNECTED; -#ifdef CONFIG_IA64_SGI_SN1 - return(synergy_intr_connect((int)bit, (int)cpu)); -#endif -} - - -/* - * Disassociate handler with the specified interrupt. - */ -void -hub_intr_disconnect(hub_intr_t intr_hdl) -{ - /*REFERENCED*/ - int rv; - cpuid_t cpu = intr_hdl->i_cpuid; - int bit = intr_hdl->i_bit; - xtalk_intr_setfunc_t setfunc; - - setfunc = intr_hdl->i_xtalk_info.xi_setfunc; - - /* TBD: send disconnected interrupts somewhere harmless */ - if (setfunc) (*setfunc)((xtalk_intr_t)intr_hdl); - -#if defined(NEW_INTERRUPTS) - rv = intr_disconnect_level(cpu, bit); - ASSERT(rv == 0); -#endif - - intr_hdl->i_flags &= ~HUB_INTR_IS_CONNECTED; -} - - -/* - * Return a hwgraph vertex that represents the CPU currently - * targeted by an interrupt. - */ -devfs_handle_t -hub_intr_cpu_get(hub_intr_t intr_hdl) -{ - cpuid_t cpuid = intr_hdl->i_cpuid; - ASSERT(cpuid != CPU_NONE); - - return(cpuid_to_vertex(cpuid)); -} - - - /* CONFIGURATION MANAGEMENT */ /* @@ -912,6 +603,9 @@ void hub_provider_startup(devfs_handle_t hubv) { + extern void hub_dma_init(devfs_handle_t hubv); + extern void hub_pio_init(devfs_handle_t hubv); + hub_pio_init(hubv); hub_dma_init(hubv); hub_intr_init(hubv); @@ -1170,58 +864,6 @@ return rv; } -#if ((defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC)) && defined(BRINGUP)) -/* BRINGUP: This ought to be useful for IP27 too but, for now, - * make it SN1 only because `ii_ixtt_u_t' is not in IP27/hubio.h - * (or anywhere else :-). 
- */ -int -hubii_ixtt_set(devfs_handle_t widget_vhdl, ii_ixtt_u_t *ixtt) -{ - xwidget_info_t widget_info = xwidget_info_get(widget_vhdl); - devfs_handle_t hub_vhdl = xwidget_info_master_get(widget_info); - hubinfo_t hub_info = 0; - nasid_t nasid; - unsigned long s; - - /* Use the nasid from the hub info hanging off the hub vertex - * and widget number from the widget vertex - */ - hubinfo_get(hub_vhdl, &hub_info); - /* Being over cautious by grabbing a lock */ - s = mutex_spinlock(&hub_info->h_bwlock); - nasid = hub_info->h_nasid; - - REMOTE_HUB_S(nasid, IIO_IXTT, ixtt->ii_ixtt_regval); - - mutex_spinunlock(&hub_info->h_bwlock, s); - return 0; -} - -int -hubii_ixtt_get(devfs_handle_t widget_vhdl, ii_ixtt_u_t *ixtt) -{ - xwidget_info_t widget_info = xwidget_info_get(widget_vhdl); - devfs_handle_t hub_vhdl = xwidget_info_master_get(widget_info); - hubinfo_t hub_info = 0; - nasid_t nasid; - unsigned long s; - - /* Use the nasid from the hub info hanging off the hub vertex - * and widget number from the widget vertex - */ - hubinfo_get(hub_vhdl, &hub_info); - /* Being over cautious by grabbing a lock */ - s = mutex_spinlock(&hub_info->h_bwlock); - nasid = hub_info->h_nasid; - - ixtt->ii_ixtt_regval = REMOTE_HUB_L(nasid, IIO_IXTT); - - mutex_spinunlock(&hub_info->h_bwlock, s); - return 0; -} -#endif /* CONFIG_IA64_SGI_SN1 */ - /* * hub_device_inquiry * Find out the xtalk widget related information stored in this @@ -1259,7 +901,7 @@ #if defined(SUPPORT_PRINTING_V_FORMAT) printk("Inquiry Info for %v\n", xconn); #else - printk("Inquiry Info for 0x%x\n", xconn); + printk("Inquiry Info for %p\n", (void *)xconn); #endif printk("\tDevices shutdown [ "); diff -Nru a/arch/ia64/sn/io/ip37.c b/arch/ia64/sn/io/ip37.c --- a/arch/ia64/sn/io/ip37.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,121 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -/* - * ip37.c - * Support for IP35/IP37 machines - */ - -#include -#include - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include -#include -#include -#include /* for bridge_t */ - - -xwidgetnum_t -hub_widget_id(nasid_t nasid) -{ - hubii_wcr_t ii_wcr; /* the control status register */ - - ii_wcr.wcr_reg_value = REMOTE_HUB_L(nasid,IIO_WCR); - - return ii_wcr.wcr_fields_s.wcr_widget_id; -} - -/* - * get_nasid() returns the physical node id number of the caller. 
- */ -nasid_t -get_nasid(void) -{ - return (nasid_t)((LOCAL_HUB_L(LB_REV_ID) & LRI_NODEID_MASK) >> LRI_NODEID_SHFT); -} - -int -get_slice(void) -{ - return LOCAL_HUB_L(PI_CPU_NUM); -} - -int -is_fine_dirmode(void) -{ - return (((LOCAL_HUB_L(LB_REV_ID) & LRI_SYSTEM_SIZE_MASK) - >> LRI_SYSTEM_SIZE_SHFT) == SYSTEM_SIZE_SMALL); - -} - -hubreg_t -get_hub_chiprev(nasid_t nasid) -{ - - return ((REMOTE_HUB_L(nasid, LB_REV_ID) & LRI_REV_MASK) - >> LRI_REV_SHFT); -} - -int -verify_snchip_rev(void) -{ - int hub_chip_rev; - int i; - static int min_hub_rev = 0; - nasid_t nasid; - static int first_time = 1; - extern int maxnodes; - - - if (first_time) { - for (i = 0; i < maxnodes; i++) { - nasid = COMPACT_TO_NASID_NODEID(i); - hub_chip_rev = get_hub_chiprev(nasid); - - if ((hub_chip_rev < min_hub_rev) || (i == 0)) - min_hub_rev = hub_chip_rev; - } - - - first_time = 0; - } - - return min_hub_rev; - -} - -#ifdef SN1_USE_POISON_BITS -int -hub_bte_poison_ok(void) -{ - /* - * For now, assume poisoning is ok. If it turns out there are chip - * bugs that prevent its use in early revs, there is some neat code - * to steal from the IP27 equivalent of this code. - */ - -#ifdef BRINGUP /* temp disable BTE poisoning - might be sw bugs in this area */ - return 0; -#else - return 1; -#endif -} -#endif /* SN1_USE_POISON_BITS */ - - -void -ni_reset_port(void) -{ - LOCAL_HUB_S(NI_RESET_ENABLE, NRE_RESETOK); - LOCAL_HUB_S(NI_PORT_RESET, NPR_PORTRESET | NPR_LOCALRESET); -} - -#endif /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ diff -Nru a/arch/ia64/sn/io/klconflib.c b/arch/ia64/sn/io/klconflib.c --- a/arch/ia64/sn/io/klconflib.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/klconflib.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ @@ -13,12 +12,13 @@ #include #include #include +#include +#include +#include #include #include #include #include - -#include #include #include #include @@ -40,6 +40,8 @@ static void sort_nic_names(lboard_t *) ; +u64 klgraph_addr[MAX_COMPACT_NODES]; + lboard_t * find_lboard(lboard_t *start, unsigned char brd_type) { @@ -213,14 +215,13 @@ { lboard_t *board; -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC -/* BRINGUP: If this works then look for callers of is_master_baseio() +#if defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) +/* If this works then look for callers of is_master_baseio() * (e.g. 
iograph.c) and let them pass in a slot if they want */ board = find_lboard_module((lboard_t *)KL_CONFIG_INFO(nasid), module); #else - board = find_lboard_modslot((lboard_t *)KL_CONFIG_INFO(nasid), - module, slot); + board = find_lboard_modslot((lboard_t *)KL_CONFIG_INFO(nasid), module, slot); #endif #ifndef _STANDALONE @@ -228,7 +229,7 @@ cnodeid_t cnode = NASID_TO_COMPACT_NODEID(nasid); if (!board && (NODEPDA(cnode)->xbow_peer != INVALID_NASID)) -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC +#if defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) board = find_lboard_module((lboard_t *) KL_CONFIG_INFO(NODEPDA(cnode)->xbow_peer), module); @@ -300,16 +301,6 @@ return(brd); } -int -get_cpu_slice(cpuid_t cpu) -{ - klcpu_t *acpu; - if ((acpu = get_cpuinfo(cpu)) == NULL) - return -1; - return acpu->cpu_info.physid; -} - - /* * get_actual_nasid * @@ -366,10 +357,6 @@ { moduleid_t modnum; char *board_name; -#if !defined(CONFIG_SGI_IP35) && !defined(CONFIG_IA64_SGI_SN1) && !defined(CONFIG_IA64_GENERIC) - slotid_t slot; - char slot_name[SLOTNUM_MAXLENGTH]; -#endif ASSERT(brd); @@ -431,7 +418,7 @@ { lboard_t *brd; - brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_IP27); + brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA); if (!brd) return INVALID_MODULE; @@ -569,8 +556,8 @@ if (component_serial_number_get(board, hub->hub_mfg_nic, serial_number, -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) - "IP35")) +#if defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) + "IP37")) #else "IP27")) /* Try with IP31 key if IP27 key fails */ @@ -578,7 +565,7 @@ hub->hub_mfg_nic, serial_number, "IP31")) -#endif /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ +#endif /* CONFIG_IA64_SGI_SN1 */ return(1); break; } @@ -875,10 +862,11 @@ } -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) char brick_types[MAX_BRICK_TYPES + 1] = "crikxdp789012345"; +#if defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) + /* * Format a module id for printing. */ @@ -1009,7 +997,7 @@ return (int)(unsigned short)m; } -#else /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ +#else /* CONFIG_IA64_SGI_SN1 */ /* * Format a module id for printing. @@ -1038,8 +1026,8 @@ if (strstr(buffer, EDGE_LBL_MODULE "/") == buffer) buffer += strlen(EDGE_LBL_MODULE "/"); - m = 0; - while(c = *buffer++) { + for (m = 0; *buffer; buffer++) { + c = *buffer; if (!isdigit(c)) return -1; m = 10 * m + (c - '0'); @@ -1049,6 +1037,6 @@ return (int)(unsigned short)m; } -#endif /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ +#endif /* CONFIG_IA64_SGI_SN1 */ diff -Nru a/arch/ia64/sn/io/klgraph.c b/arch/ia64/sn/io/klgraph.c --- a/arch/ia64/sn/io/klgraph.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/io/klgraph.c Tue Mar 12 13:58:14 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ /* @@ -18,12 +17,12 @@ #include #include #include +#include +#include #include #include #include #include - -#include #include #include #include @@ -43,8 +42,7 @@ #include extern char arg_maxnodes[]; -extern int maxnodes; - +extern u64 klgraph_addr[]; /* * Support for verbose inventory via hardware graph. 
@@ -139,193 +137,63 @@ void klhwg_add_hub(devfs_handle_t node_vertex, klhub_t *hub, cnodeid_t cnode) { +#if defined(CONFIG_IA64_SGI_SN1) devfs_handle_t myhubv; + devfs_handle_t hub_mon; + devfs_handle_t synergy; + devfs_handle_t fsb0; + devfs_handle_t fsb1; int rc; + extern struct file_operations hub_mon_fops; GRPRINTF(("klhwg_add_hub: adding %s\n", EDGE_LBL_HUB)); (void) hwgraph_path_add(node_vertex, EDGE_LBL_HUB, &myhubv); rc = device_master_set(myhubv, node_vertex); -#ifdef LATER /* - * Activate when we support hub stats. + * hub perf stats. */ rc = hwgraph_info_add_LBL(myhubv, INFO_LBL_HUB_INFO, (arbitrary_info_t)(&NODEPDA(cnode)->hubstats)); -#endif if (rc != GRAPH_SUCCESS) { - PRINT_WARNING("klhwg_add_hub: Can't add hub info label 0x%p, code %d", - myhubv, rc); + printk(KERN_WARNING "klhwg_add_hub: Can't add hub info label 0x%p, code %d", + (void *)myhubv, rc); } klhwg_hub_invent_info(myhubv, cnode, hub); -#ifndef BRINGUP + hub_mon = hwgraph_register(myhubv, EDGE_LBL_PERFMON, + 0, DEVFS_FL_AUTO_DEVNUM, + 0, 0, + S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0, + &hub_mon_fops, + (void *)(long)cnode); + init_hub_stats(cnode, NODEPDA(cnode)); - sndrv_attach(myhubv); -#else + /* - * Need to call our driver to do the attach? + * synergy perf */ - FIXME("klhwg_add_hub: Need to add code to do the attach.\n"); -#endif + (void) hwgraph_path_add(myhubv, EDGE_LBL_SYNERGY, &synergy); + (void) hwgraph_path_add(synergy, "0", &fsb0); + (void) hwgraph_path_add(synergy, "1", &fsb1); + + fsb0 = hwgraph_register(fsb0, EDGE_LBL_PERFMON, + 0, DEVFS_FL_AUTO_DEVNUM, + 0, 0, + S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0, + &synergy_mon_fops, (void *)SYNERGY_PERF_INFO(cnode, 0)); + + fsb1 = hwgraph_register(fsb1, EDGE_LBL_PERFMON, + 0, DEVFS_FL_AUTO_DEVNUM, + 0, 0, + S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0, + &synergy_mon_fops, (void *)SYNERGY_PERF_INFO(cnode, 1)); +#endif /* CONFIG_IA64_SGI_SN1 */ } -#ifndef BRINGUP - -void -klhwg_add_rps(devfs_handle_t node_vertex, cnodeid_t cnode, int flag) -{ - devfs_handle_t myrpsv; - invent_rpsinfo_t *rps_invent; - int rc; - - if(cnode == CNODEID_NONE) - return; - - GRPRINTF(("klhwg_add_rps: adding %s to vertex 0x%x\n", EDGE_LBL_RPS, - node_vertex)); - - rc = hwgraph_path_add(node_vertex, EDGE_LBL_RPS, &myrpsv); - if (rc != GRAPH_SUCCESS) - return; - - device_master_set(myrpsv, node_vertex); - - rps_invent = (invent_rpsinfo_t *) - klhwg_invent_alloc(cnode, INV_RPS, sizeof(invent_rpsinfo_t)); - - if (!rps_invent) - return; - - rps_invent->ir_xbox = 0; /* not an xbox RPS */ - - if (flag) - rps_invent->ir_gen.ig_flag = INVENT_ENABLED; - else - rps_invent->ir_gen.ig_flag = 0x0; - - hwgraph_info_add_LBL(myrpsv, INFO_LBL_DETAIL_INVENT, - (arbitrary_info_t) rps_invent); - hwgraph_info_export_LBL(myrpsv, INFO_LBL_DETAIL_INVENT, - sizeof(invent_rpsinfo_t)); - -} - -/* - * klhwg_update_rps gets invoked when the system controller sends an - * interrupt indicating the power supply has lost/regained the redundancy. - * It's responsible for updating the Hardware graph information. - * rps_state = 0 -> if the rps lost the redundancy - * = 1 -> If it is redundant. 
- */ -void -klhwg_update_rps(cnodeid_t cnode, int rps_state) -{ - devfs_handle_t node_vertex; - devfs_handle_t rpsv; - invent_rpsinfo_t *rps_invent; - int rc; - if(cnode == CNODEID_NONE) - return; - - node_vertex = cnodeid_to_vertex(cnode); - rc = hwgraph_edge_get(node_vertex, EDGE_LBL_RPS, &rpsv); - if (rc != GRAPH_SUCCESS) { - return; - } - - rc = hwgraph_info_get_LBL(rpsv, INFO_LBL_DETAIL_INVENT, - (arbitrary_info_t *)&rps_invent); - if (rc != GRAPH_SUCCESS) { - return; - } - - if (rps_state == 0 ) - rps_invent->ir_gen.ig_flag = 0; - else - rps_invent->ir_gen.ig_flag = INVENT_ENABLED; -} - -void -klhwg_add_xbox_rps(devfs_handle_t node_vertex, cnodeid_t cnode, int flag) -{ - devfs_handle_t myrpsv; - invent_rpsinfo_t *rps_invent; - int rc; - - if(cnode == CNODEID_NONE) - return; - - GRPRINTF(("klhwg_add_rps: adding %s to vertex 0x%x\n", - EDGE_LBL_XBOX_RPS, node_vertex)); - - rc = hwgraph_path_add(node_vertex, EDGE_LBL_XBOX_RPS, &myrpsv); - if (rc != GRAPH_SUCCESS) - return; - - device_master_set(myrpsv, node_vertex); - - rps_invent = (invent_rpsinfo_t *) - klhwg_invent_alloc(cnode, INV_RPS, sizeof(invent_rpsinfo_t)); - - if (!rps_invent) - return; - - rps_invent->ir_xbox = 1; /* xbox RPS */ - - if (flag) - rps_invent->ir_gen.ig_flag = INVENT_ENABLED; - else - rps_invent->ir_gen.ig_flag = 0x0; - - hwgraph_info_add_LBL(myrpsv, INFO_LBL_DETAIL_INVENT, - (arbitrary_info_t) rps_invent); - hwgraph_info_export_LBL(myrpsv, INFO_LBL_DETAIL_INVENT, - sizeof(invent_rpsinfo_t)); - -} - -/* - * klhwg_update_xbox_rps gets invoked when the xbox system controller - * polls the status register and discovers that the power supply has - * lost/regained the redundancy. - * It's responsible for updating the Hardware graph information. - * rps_state = 0 -> if the rps lost the redundancy - * = 1 -> If it is redundant. 
- */ -void -klhwg_update_xbox_rps(cnodeid_t cnode, int rps_state) -{ - devfs_handle_t node_vertex; - devfs_handle_t rpsv; - invent_rpsinfo_t *rps_invent; - int rc; - if(cnode == CNODEID_NONE) - return; - - node_vertex = cnodeid_to_vertex(cnode); - rc = hwgraph_edge_get(node_vertex, EDGE_LBL_XBOX_RPS, &rpsv); - if (rc != GRAPH_SUCCESS) { - return; - } - - rc = hwgraph_info_get_LBL(rpsv, INFO_LBL_DETAIL_INVENT, - (arbitrary_info_t *)&rps_invent); - if (rc != GRAPH_SUCCESS) { - return; - } - - if (rps_state == 0 ) - rps_invent->ir_gen.ig_flag = 0; - else - rps_invent->ir_gen.ig_flag = INVENT_ENABLED; -} - -#endif /* BRINGUP */ - void klhwg_add_xbow(cnodeid_t cnode, nasid_t nasid) { @@ -338,11 +206,8 @@ /*REFERENCED*/ graph_error_t err; -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || defined(CONFIG_IA64_GENERIC) - if ((brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), - KLTYPE_IOBRICK_XBOW)) == NULL) + if ((brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_IOBRICK_XBOW)) == NULL) return; -#endif if (KL_CONFIG_DUPLICATE_BOARD(brd)) return; @@ -372,7 +237,7 @@ hub_nasid = XBOW_PORT_NASID(xbow_p, widgetnum); if (hub_nasid == INVALID_NASID) { - PRINT_WARNING("hub widget %d, skipping xbow graph\n", widgetnum); + printk(KERN_WARNING "hub widget %d, skipping xbow graph\n", widgetnum); continue; } @@ -387,13 +252,13 @@ err = hwgraph_path_add(hubv, EDGE_LBL_XTALK, &xbow_v); if (err != GRAPH_SUCCESS) { if (err == GRAPH_DUP) - PRINT_WARNING("klhwg_add_xbow: Check for " + printk(KERN_WARNING "klhwg_add_xbow: Check for " "working routers and router links!"); PRINT_PANIC("klhwg_add_xbow: Failed to add " - "edge: vertex 0x%p (0x%p) to vertex 0x%p (0x%p)," + "edge: vertex 0x%p to vertex 0x%p," "error %d\n", - hubv, hubv, xbow_v, xbow_v, err); + (void *)hubv, (void *)xbow_v, err); } xswitch_vertex_init(xbow_v); @@ -416,7 +281,7 @@ err = hwgraph_edge_add(hubv, xbow_v, EDGE_LBL_XTALK); if (err != GRAPH_SUCCESS) { if (err == GRAPH_DUP) - PRINT_WARNING("klhwg_add_xbow: Check for " + printk(KERN_WARNING "klhwg_add_xbow: Check for " "working routers and router links!"); PRINT_PANIC("klhwg_add_xbow: Failed to add " @@ -443,7 +308,7 @@ int board_disabled = 0; nasid = COMPACT_TO_NASID_NODEID(cnode); - brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_IP27); + brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA); GRPRINTF(("klhwg_add_node: Adding cnode %d, nasid %d, brd 0x%p\n", cnode, nasid, brd)); ASSERT(brd); @@ -495,7 +360,7 @@ brd = KLCF_NEXT(brd); if (brd) - brd = find_lboard(brd, KLTYPE_IP27); + brd = find_lboard(brd, KLTYPE_SNIA); else break; } while(brd); @@ -513,7 +378,7 @@ char path_buffer[100]; int rv; - for (cnode = 0; cnode < maxnodes; cnode++) { + for (cnode = 0; cnode < numnodes; cnode++) { nasid = COMPACT_TO_NASID_NODEID(cnode); GRPRINTF(("klhwg_add_all_routers: adding router on cnode %d\n", @@ -594,7 +459,7 @@ return; if (rc != GRAPH_SUCCESS) - PRINT_WARNING("Can't find router: %s", path_buffer); + printk(KERN_WARNING "Can't find router: %s", path_buffer); /* We don't know what to do with multiple router components */ if (brd->brd_numcompts != 1) { @@ -650,7 +515,7 @@ if (rc != GRAPH_SUCCESS && !is_specified(arg_maxnodes)) PRINT_PANIC("Can't create edge: %s/%s to vertex 0x%p error 0x%x\n", - path_buffer, dest_path, dest_hndl, rc); + path_buffer, dest_path, (void *)dest_hndl, rc); } } @@ -663,7 +528,7 @@ cnodeid_t cnode; lboard_t *brd; - for (cnode = 0; cnode < maxnodes; cnode++) { + for (cnode = 0; cnode < numnodes; cnode++) { nasid = COMPACT_TO_NASID_NODEID(cnode); 
GRPRINTF(("klhwg_connect_routers: Connecting routers on cnode %d\n", @@ -703,14 +568,13 @@ char dest_path[50]; graph_error_t rc; - for (cnode = 0; cnode < maxnodes; cnode++) { + for (cnode = 0; cnode < numnodes; cnode++) { nasid = COMPACT_TO_NASID_NODEID(cnode); GRPRINTF(("klhwg_connect_hubs: Connecting hubs on cnode %d\n", cnode)); - brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), - KLTYPE_IP27); + brd = find_lboard((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA); ASSERT(brd); hub = (klhub_t *)find_first_component(brd, KLSTRUCT_HUB); @@ -732,7 +596,7 @@ rc = hwgraph_traverse(hwgraph_root, path_buffer, &hub_hndl); if (rc != GRAPH_SUCCESS) - PRINT_WARNING("Can't find hub: %s", path_buffer); + printk(KERN_WARNING "Can't find hub: %s", path_buffer); dest_brd = (lboard_t *)NODE_OFFSET_TO_K0( hub->hub_port.port_nasid, @@ -757,7 +621,7 @@ if (rc != GRAPH_SUCCESS) PRINT_PANIC("Can't create edge: %s/%s to vertex 0x%p, error 0x%x\n", - path_buffer, dest_path, dest_hndl, rc); + path_buffer, dest_path, (void *)dest_hndl, rc); } } diff -Nru a/arch/ia64/sn/io/klgraph_hack.c b/arch/ia64/sn/io/klgraph_hack.c --- a/arch/ia64/sn/io/klgraph_hack.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/io/klgraph_hack.c Tue Mar 12 13:58:14 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ @@ -18,6 +17,7 @@ #include #include #include +#include #include void * real_port; @@ -28,11 +28,11 @@ kl_config_hdr_t *linux_klcfg; -#ifdef BRINGUP +#ifdef DEFINE_DUMP_RTNS /* forward declarations */ -extern void dump_ii(void), dump_lb(void), dump_crossbow(void); -extern void clear_ii_error(void); -#endif /* BRINGUP */ +static void dump_ii(void), dump_crossbow(void); +static void clear_ii_error(void); +#endif /* DEFINE_DUMP_RTNS */ #define SYNERGY_WIDGET ((char *)0xc0000e0000000000) #define SYNERGY_SWIZZLE ((char *)0xc0000e0000000400) @@ -45,115 +45,10 @@ #define HUBREG ((char *)0xc0000a0001e00000) #define WIDGET0 ((char *)0xc0000a0000000000) -int test = 0; - -/* - * Hack to loop for test. 
- */ -void -test_io_regs(void) -{ - - uint32_t reg_32bits; - uint64_t reg_64bits; - - while (test) { - - reg_32bits = (uint32_t)(*(volatile uint32_t *) SYNERGY_WIDGET); - reg_64bits = (uint64_t) (*(volatile uint64_t *) SYNERGY_WIDGET); - - } - - printk("Synergy Widget Address = 0x%p, Value = 0x%lx\n", SYNERGY_WIDGET, (uint64_t)*(SYNERGY_WIDGET)); - - printk("Synergy swizzle Address = 0x%p, Value = 0x%lx\n", SYNERGY_SWIZZLE, (uint64_t)*(SYNERGY_SWIZZLE)); - printk("HUBREG Address = 0x%p, Value = 0x%lx\n", HUBREG, (uint64_t)*(HUBREG)); - printk("WIDGET0 Address = 0x%p, Value = 0x%lx\n", WIDGET0, (uint64_t)*(WIDGET0)); - printk("WIDGET4 Address = 0x%p, Value = 0x%x\n", WIDGET4, (uint32_t)*(WIDGET4)); - -} - void klgraph_hack_init(void) { - kl_config_hdr_t *kl_hdr_ptr; - lboard_t *lb_ptr; - lboard_t *temp_ptr; - klhub_t *klhub_ptr; - klioc3_t *klioc3_ptr; - klbri_t *klbri_ptr; - klxbow_t *klxbow_ptr; - klinfo_t *klinfo_ptr; - klcomp_t *klcomp_ptr; -#if 0 - uint64_t *tmp; - volatile u32 *tmp32; - - /* Preset some values */ - /* Write IOERR clear to clear the CRAZY bit in the status */ - tmp = (uint64_t *)0xc0000a0001c001f8; *tmp = (uint64_t)0xffffffff; - /* set widget control register...setting bedrock widget id to b */ - /* tmp = (uint64_t *)0xc0000a0001c00020; *tmp = (uint64_t)0x801b; */ - /* set io outbound widget access...allow all */ - tmp = (uint64_t *)0xc0000a0001c00110; *tmp = (uint64_t)0xff01; - /* set io inbound widget access...allow all */ - tmp = (uint64_t *)0xc0000a0001c00118; *tmp = (uint64_t)0xff01; - /* set io crb timeout to max */ - tmp = (uint64_t *)0xc0000a0001c003c0; *tmp = (uint64_t)0xffffff; - tmp = (uint64_t *)0xc0000a0001c003c0; *tmp = (uint64_t)0xffffff; - - /* set local block io permission...allow all */ - tmp = (uint64_t *)0xc0000a0001e04010; *tmp = (uint64_t)0xfffffffffffffff; - - /* clear any errors */ - clear_ii_error(); - - /* set default read response buffers in bridge */ - tmp32 = (volatile u32 *)0xc0000a000f000280L; - *tmp32 = 0xba98; - tmp32 = (volatile u32 *)0xc0000a000f000288L; - *tmp32 = 0xba98; - -printk("Widget ID Address 0x%p Value 0x%lx\n", (uint64_t *)0xc0000a0001e00000, *( (volatile uint64_t *)0xc0000a0001e00000) ); - -printk("Widget ID Address 0x%p Value 0x%lx\n", (uint64_t *)0xc0000a0001c00000, *( (volatile uint64_t *)0xc0000a0001c00000) ); - -printk("Widget ID Address 0x%p Value 0x%lx\n", (uint64_t *)0xc000020001e00000, *( (volatile uint64_t *)0xc000020001e00000) ); - - -printk("Widget ID Address 0x%p Value 0x%lx\n", (uint64_t *)0xc000020001c00000, *( (volatile uint64_t *)0xc000020001c00000) ); - -printk("Widget ID Address 0x%p Value 0x%lx\n", (uint64_t *)0xc0000a0001e00000, *( (volatile uint64_t *)0xc0000a0001e00000) ); - -printk("Xbow ID Address 0x%p Value 0x%x\n", (uint64_t *)0xc0000a0000000000, *( (volatile uint32_t *)0xc0000a0000000000) ); - -printk("Xbow ID Address 0x%p Value 0x%x\n", (uint64_t *)0xc000020000000004, *( (volatile uint32_t *)0xc000020000000004) ); - -#endif - - if ( test ) - test_io_regs(); - /* - * Klconfig header. 
- */ - kl_hdr_ptr = kmalloc(sizeof(kl_config_hdr_t), GFP_KERNEL); - kl_hdr_ptr->ch_magic = 0xbeedbabe; - kl_hdr_ptr->ch_version = 0x0; - kl_hdr_ptr->ch_malloc_hdr_off = 0x48; - kl_hdr_ptr->ch_cons_off = 0x18; - kl_hdr_ptr->ch_board_info = 0x0; - kl_hdr_ptr->ch_cons_info.uart_base = 0x920000000f820178; - kl_hdr_ptr->ch_cons_info.config_base = 0x920000000f024000; - kl_hdr_ptr->ch_cons_info.memory_base = 0x920000000f800000; - kl_hdr_ptr->ch_cons_info.baud = 0x2580; - kl_hdr_ptr->ch_cons_info.flag = 0x1; - kl_hdr_ptr->ch_cons_info.type = 0x300fafa; - kl_hdr_ptr->ch_cons_info.nasid = 0x0; - kl_hdr_ptr->ch_cons_info.wid = 0xf; - kl_hdr_ptr->ch_cons_info.npci = 0x4; - kl_hdr_ptr->ch_cons_info.baseio_nic = 0x0; - /* * We need to know whether we are booting from PROM or * boot from disk. @@ -162,520 +57,44 @@ if (linux_klcfg->ch_magic == 0xbeedbabe) { return; } else { - linux_klcfg = kl_hdr_ptr; + panic("klgraph_hack_init: Unable to locate KLCONFIG TABLE\n"); } - /* - * lboard KLTYPE_IP35 - */ - lb_ptr = kmalloc(sizeof(lboard_t), GFP_KERNEL); - kl_hdr_ptr->ch_board_info = (klconf_off_t) lb_ptr; - temp_ptr = lb_ptr; - printk("First Lboard = %p\n", temp_ptr); - - lb_ptr->brd_next = 0; - lb_ptr->struct_type = 0x1; - lb_ptr->brd_type = 0x11; - lb_ptr->brd_sversion = 0x3; - lb_ptr->brd_brevision = 0x1; - lb_ptr->brd_promver = 0x1; - lb_ptr->brd_promver = 0x1; - lb_ptr->brd_slot = 0x0; - lb_ptr->brd_debugsw = 0x0; - lb_ptr->brd_module = 0x145; - lb_ptr->brd_partition = 0x0; - lb_ptr->brd_diagval = 0x0; - lb_ptr->brd_diagparm = 0x0; - lb_ptr->brd_inventory = 0x0; - lb_ptr->brd_numcompts = 0x5; - lb_ptr->brd_nic = 0x2a0aed35; - lb_ptr->brd_nasid = 0x0; - lb_ptr->brd_errinfo = 0x0; - lb_ptr->brd_parent = 0x0; - lb_ptr->brd_graph_link = (devfs_handle_t)0x26; - lb_ptr->brd_owner = 0x0; - lb_ptr->brd_nic_flags = 0x0; - memcpy(&lb_ptr->brd_name[0], "IP35", 4); - - /* - * Hub Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klhub_ptr = (klhub_t *)klcomp_ptr; - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[0] = (klconf_off_t)klcomp_ptr; - printk("hub info = %p lboard = %p\n", klhub_ptr, lb_ptr); - - klinfo_ptr = (klinfo_t *)klhub_ptr; - klinfo_ptr->struct_type = 0x2; - klinfo_ptr->struct_version = 0x1; - klinfo_ptr->flags = 0x1; - klinfo_ptr->revision = 0x1; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0x2a0aed35; - klinfo_ptr->physid = 0x0; - klinfo_ptr->virtid = 0x0; - klinfo_ptr->widid = 0x0; - klinfo_ptr->nasid = 0x0; - - klhub_ptr->hub_flags = 0x0; - klhub_ptr->hub_port.port_nasid = (nasid_t)0x0ffffffff; - klhub_ptr->hub_port.port_flag = 0x0; - klhub_ptr->hub_port.port_offset = 0x0; - klhub_ptr->hub_box_nic = 0x0; - klhub_ptr->hub_mfg_nic = 0x3f420; - klhub_ptr->hub_speed = 0xbebc200; - - /* - * Memory Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[1] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x3; - klinfo_ptr->struct_version = 0x2; - klinfo_ptr->flags = 0x1; - klinfo_ptr->revision = 0xff; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0xff; - klinfo_ptr->virtid = 0xffffffff; - klinfo_ptr->widid = 0x0; - klinfo_ptr->nasid = 0x0; - - /* - * KLSTRUCT_HUB_UART Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klinfo_ptr = (klinfo_t 
*)klcomp_ptr; - lb_ptr->brd_compts[2] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x11; - klinfo_ptr->struct_version = 0x1; - klinfo_ptr->flags = 0x31; - klinfo_ptr->revision = 0xff; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0x0; - klinfo_ptr->virtid = 0x0; - klinfo_ptr->widid = 0x0; - klinfo_ptr->nasid = 0x0; - - /* - * KLSTRUCT_CPU Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[3] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x1; - klinfo_ptr->struct_version = 0x2; - klinfo_ptr->flags = 0x1; - klinfo_ptr->revision = 0xff; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0x0; - klinfo_ptr->virtid = 0x0; - klinfo_ptr->widid = 0x0; - klinfo_ptr->nasid = 0x0; - - /* - * KLSTRUCT_CPU Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[4] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x1; - klinfo_ptr->struct_version = 0x2; - klinfo_ptr->flags = 0x1; - klinfo_ptr->revision = 0xff; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0x1; - klinfo_ptr->virtid = 0x1; - klinfo_ptr->widid = 0x0; - klinfo_ptr->nasid = 0x0; - - lb_ptr->brd_compts[5] = 0; /* Set the next one to 0 .. end */ - lb_ptr->brd_numcompts = 5; /* 0 to 4 */ - - /* - * lboard(0x42) KLTYPE_PBRICK_XBOW - */ - lb_ptr = kmalloc(sizeof(lboard_t), GFP_KERNEL); - temp_ptr->brd_next = (klconf_off_t)lb_ptr; /* Let the previous point at the new .. 
*/ - temp_ptr = lb_ptr; - printk("Second Lboard = %p\n", temp_ptr); - - lb_ptr->brd_next = 0; - lb_ptr->struct_type = 0x1; - lb_ptr->brd_type = 0x42; - lb_ptr->brd_sversion = 0x2; - lb_ptr->brd_brevision = 0x0; - lb_ptr->brd_promver = 0x1; - lb_ptr->brd_promver = 0x1; - lb_ptr->brd_slot = 0x0; - lb_ptr->brd_debugsw = 0x0; - lb_ptr->brd_module = 0x145; - lb_ptr->brd_partition = 0x1; - lb_ptr->brd_diagval = 0x0; - lb_ptr->brd_diagparm = 0x0; - lb_ptr->brd_inventory = 0x0; - lb_ptr->brd_numcompts = 0x1; - lb_ptr->brd_nic = 0xffffffffffffffff; - lb_ptr->brd_nasid = 0x0; - lb_ptr->brd_errinfo = 0x0; - lb_ptr->brd_parent = (struct lboard_s *)0x9600000000030070; - lb_ptr->brd_graph_link = (devfs_handle_t)0xffffffff; - lb_ptr->brd_owner = 0x0; - lb_ptr->brd_nic_flags = 0x0; - memcpy(&lb_ptr->brd_name[0], "IOBRICK", 7); - - /* - * KLSTRUCT_XBOW Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - memset(klcomp_ptr, 0, sizeof(klcomp_t)); - klxbow_ptr = (klxbow_t *)klcomp_ptr; - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[0] = (klconf_off_t)klcomp_ptr; - printk("xbow_p 0x%p\n", klcomp_ptr); - - klinfo_ptr->struct_type = 0x4; - klinfo_ptr->struct_version = 0x1; - klinfo_ptr->flags = 0x1; - klinfo_ptr->revision = 0x2; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0xff; - klinfo_ptr->virtid = 0x0; - klinfo_ptr->widid = 0x0; - klinfo_ptr->nasid = 0x0; - - klxbow_ptr->xbow_master_hub_link = 0xb; - klxbow_ptr->xbow_port_info[0].port_nasid = 0x0; - klxbow_ptr->xbow_port_info[0].port_flag = 0x0; - klxbow_ptr->xbow_port_info[0].port_offset = 0x0; - - klxbow_ptr->xbow_port_info[1].port_nasid = 0x401; - klxbow_ptr->xbow_port_info[1].port_flag = 0x0; - klxbow_ptr->xbow_port_info[1].port_offset = 0x0; - - klxbow_ptr->xbow_port_info[2].port_nasid = 0x0; - klxbow_ptr->xbow_port_info[2].port_flag = 0x0; - klxbow_ptr->xbow_port_info[2].port_offset = 0x0; - - klxbow_ptr->xbow_port_info[3].port_nasid = 0x0; /* ffffffff */ - klxbow_ptr->xbow_port_info[3].port_flag = 0x6; - klxbow_ptr->xbow_port_info[3].port_offset = 0x30070; - - klxbow_ptr->xbow_port_info[4].port_nasid = 0x0; /* ffffff00; */ - klxbow_ptr->xbow_port_info[4].port_flag = 0x0; - klxbow_ptr->xbow_port_info[4].port_offset = 0x0; - - klxbow_ptr->xbow_port_info[5].port_nasid = 0x0; - klxbow_ptr->xbow_port_info[5].port_flag = 0x0; - klxbow_ptr->xbow_port_info[5].port_offset = 0x0; - klxbow_ptr->xbow_port_info[6].port_nasid = 0x0; - klxbow_ptr->xbow_port_info[6].port_flag = 0x5; - klxbow_ptr->xbow_port_info[6].port_offset = 0x30210; - klxbow_ptr->xbow_port_info[7].port_nasid = 0x3; - klxbow_ptr->xbow_port_info[7].port_flag = 0x5; - klxbow_ptr->xbow_port_info[7].port_offset = 0x302e0; - - lb_ptr->brd_compts[1] = 0; - lb_ptr->brd_numcompts = 1; - - - /* - * lboard KLTYPE_PBRICK - */ - lb_ptr = kmalloc(sizeof(lboard_t), GFP_KERNEL); - temp_ptr->brd_next = (klconf_off_t)lb_ptr; /* Let the previous point at the new .. 
*/ - temp_ptr = lb_ptr; - printk("Third Lboard %p\n", lb_ptr); - - lb_ptr->brd_next = 0; - lb_ptr->struct_type = 0x1; - lb_ptr->brd_type = 0x72; - lb_ptr->brd_sversion = 0x2; - lb_ptr->brd_brevision = 0x0; - lb_ptr->brd_promver = 0x1; - lb_ptr->brd_promver = 0x41; - lb_ptr->brd_slot = 0xe; - lb_ptr->brd_debugsw = 0x0; - lb_ptr->brd_module = 0x145; - lb_ptr->brd_partition = 0x1; - lb_ptr->brd_diagval = 0x0; - lb_ptr->brd_diagparm = 0x0; - lb_ptr->brd_inventory = 0x0; - lb_ptr->brd_numcompts = 0x1; - lb_ptr->brd_nic = 0x30e3fd; - lb_ptr->brd_nasid = 0x0; - lb_ptr->brd_errinfo = 0x0; - lb_ptr->brd_parent = (struct lboard_s *)0x9600000000030140; - lb_ptr->brd_graph_link = (devfs_handle_t)0xffffffff; - lb_ptr->brd_owner = 0x0; - lb_ptr->brd_nic_flags = 0x0; - memcpy(&lb_ptr->brd_name[0], "IP35", 4); - - /* - * KLSTRUCT_BRI Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klbri_ptr = (klbri_t *)klcomp_ptr; - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[0] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x5; - klinfo_ptr->struct_version = 0x2; - klinfo_ptr->flags = 0x1; - klinfo_ptr->revision = 0x2; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0xd002; - klinfo_ptr->nic = 0x30e3fd; - klinfo_ptr->physid = 0xe; - klinfo_ptr->virtid = 0xe; - klinfo_ptr->widid = 0xe; - klinfo_ptr->nasid = 0x0; - - klbri_ptr->bri_eprominfo = 0xff; - klbri_ptr->bri_bustype = 0x7; - klbri_ptr->bri_mfg_nic = 0x3f4a8; - - lb_ptr->brd_compts[1] = 0; - lb_ptr->brd_numcompts = 1; - - /* - * lboard KLTYPE_PBRICK - */ - lb_ptr = kmalloc(sizeof(lboard_t), GFP_KERNEL); - temp_ptr->brd_next = (klconf_off_t)lb_ptr; /* Let the previous point at the new .. */ - temp_ptr = lb_ptr; - printk("Fourth Lboard %p\n", lb_ptr); - - lb_ptr->brd_next = 0x0; - lb_ptr->struct_type = 0x1; - lb_ptr->brd_type = 0x72; - lb_ptr->brd_sversion = 0x2; - lb_ptr->brd_brevision = 0x0; - lb_ptr->brd_promver = 0x1; - lb_ptr->brd_promver = 0x31; - lb_ptr->brd_slot = 0xf; - lb_ptr->brd_debugsw = 0x0; - lb_ptr->brd_module = 0x145; - lb_ptr->brd_partition = 0x1; - lb_ptr->brd_diagval = 0x0; - lb_ptr->brd_diagparm = 0x0; - lb_ptr->brd_inventory = 0x0; - lb_ptr->brd_numcompts = 0x6; - lb_ptr->brd_nic = 0x30e3fd; - lb_ptr->brd_nasid = 0x0; - lb_ptr->brd_errinfo = 0x0; - lb_ptr->brd_parent = (struct lboard_s *)0x9600000000030140; - lb_ptr->brd_graph_link = (devfs_handle_t)0xffffffff; - lb_ptr->brd_owner = 0x0; - lb_ptr->brd_nic_flags = 0x0; - memcpy(&lb_ptr->brd_name[0], "IP35", 4); - - - /* - * KLSTRUCT_BRI Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klbri_ptr = (klbri_t *)klcomp_ptr; - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[0] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x5; - klinfo_ptr->struct_version = 0x2; - klinfo_ptr->flags = 0x1; - klinfo_ptr->revision = 0x2; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0xd002; - klinfo_ptr->nic = 0x30e3fd; - klinfo_ptr->physid = 0xf; - klinfo_ptr->virtid = 0xf; - klinfo_ptr->widid = 0xf; - klinfo_ptr->nasid = 0x0; - - klbri_ptr->bri_eprominfo = 0xff; - klbri_ptr->bri_bustype = 0x7; - klbri_ptr->bri_mfg_nic = 0x3f528; - - /* - * KLSTRUCT_SCSI component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[1] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0xb; - klinfo_ptr->struct_version = 0x1; - klinfo_ptr->flags 
= 0x31; - klinfo_ptr->revision = 0x5; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0x1; - klinfo_ptr->virtid = 0x0; - klinfo_ptr->widid = 0xf; - klinfo_ptr->nasid = 0x0; - - /* - * KLSTRUCT_IOC3 Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klioc3_ptr = (klioc3_t *)klcomp_ptr; - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[2] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x6; - klinfo_ptr->struct_version = 0x1; - klinfo_ptr->flags = 0x31; - klinfo_ptr->revision = 0x1; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0x4; - klinfo_ptr->virtid = 0x0; - klinfo_ptr->widid = 0xf; - klinfo_ptr->nasid = 0x0; - - klioc3_ptr->ioc3_ssram = 0x0; - klioc3_ptr->ioc3_nvram = 0x0; - - /* - * KLSTRUCT_UNKNOWN Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[3] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x0; - klinfo_ptr->struct_version = 0x1; - klinfo_ptr->flags = 0x31; - klinfo_ptr->revision = 0xff; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0x5; - klinfo_ptr->virtid = 0x0; - klinfo_ptr->widid = 0xf; - klinfo_ptr->nasid = 0x0; - - /* - * KLSTRUCT_SCSI Component - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[4] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0xb; - klinfo_ptr->struct_version = 0x1; - klinfo_ptr->flags = 0x31; - klinfo_ptr->revision = 0x1; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0x6; - klinfo_ptr->virtid = 0x5; - klinfo_ptr->widid = 0xf; - klinfo_ptr->nasid = 0x0; - - /* - * KLSTRUCT_UNKNOWN - */ - klcomp_ptr = kmalloc(sizeof(klcomp_t), GFP_KERNEL); - klinfo_ptr = (klinfo_t *)klcomp_ptr; - lb_ptr->brd_compts[5] = (klconf_off_t)klcomp_ptr; - - klinfo_ptr->struct_type = 0x0; - klinfo_ptr->struct_version = 0x1; - klinfo_ptr->flags = 0x31; - klinfo_ptr->revision = 0xff; - klinfo_ptr->diagval = 0x0; - klinfo_ptr->diagparm = 0x0; - klinfo_ptr->inventory = 0x0; - klinfo_ptr->partid = 0x0; - klinfo_ptr->nic = 0xffffffffffffffff; - klinfo_ptr->physid = 0x7; - klinfo_ptr->virtid = 0x0; - klinfo_ptr->widid = 0xf; - klinfo_ptr->nasid = 0x0; - - lb_ptr->brd_compts[6] = 0; - lb_ptr->brd_numcompts = 6; - } -#ifdef BRINGUP +#ifdef DEFINE_DUMP_RTNS /* * these were useful for printing out registers etc * during bringup */ -void +static void xdump(long long *addr, int count) { int ii; volatile long long *xx = addr; for ( ii = 0; ii < count; ii++, xx++ ) { - printk("0x%p : 0x%p\n", xx, *xx); + printk("0x%p : 0x%p\n", (void *)xx, (void *)*xx); } } -void +static void xdump32(unsigned int *addr, int count) { int ii; volatile unsigned int *xx = addr; for ( ii = 0; ii < count; ii++, xx++ ) { - printk("0x%p : 0x%0x\n", xx, *xx); + printk("0x%p : 0x%0x\n", (void *)xx, (int)*xx); } } - - -void +static void clear_ii_error(void) { volatile long long *tmp; @@ -716,8 +135,8 @@ } -void -dump_ii() +static void +dump_ii(void) { printk("===== Dump the II regs 
=====\n"); xdump((long long *)0xc0000a0001c00000, 2); @@ -746,23 +165,8 @@ xdump((long long *)0xc0000a000f000000, 1); } -void -dump_lb() -{ - printk("===== Dump the LB regs =====\n"); - xdump((long long *)0xc0000a0001e00000, 1); - xdump((long long *)0xc0000a0001e04000, 13); - xdump((long long *)0xc0000a0001e04100, 2); - xdump((long long *)0xc0000a0001e04200, 2); - xdump((long long *)0xc0000a0001e08000, 5); - xdump((long long *)0xc0000a0001e08040, 2); - xdump((long long *)0xc0000a0001e08050, 3); - xdump((long long *)0xc0000a0001e0c000, 3); - xdump((long long *)0xc0000a0001e0c020, 4); -} - -void -dump_crossbow() +static void +dump_crossbow(void) { printk("===== Dump the Crossbow regs =====\n"); clear_ii_error(); @@ -793,4 +197,4 @@ } -#endif /* BRINGUP */ +#endif /* DEFINE_DUMP_RTNS */ diff -Nru a/arch/ia64/sn/io/l1.c b/arch/ia64/sn/io/l1.c --- a/arch/ia64/sn/io/l1.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/l1.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ /* In general, this file is organized in a hierarchy from lower-level @@ -16,18 +15,12 @@ * System controller "message" interface (allows multiplexing * of various kinds of requests and responses with * console I/O) - * Console interfaces (there are two): - * (1) "elscuart", used in the IP35prom and (maybe) some - * debugging situations elsewhere, and - * (2) "l1_cons", the glue that allows the L1 to act + * Console interface: + * "l1_cons", the glue that allows the L1 to act * as the system console for the stdio libraries * * Routines making use of the system controller "message"-style interface - * can be found in l1_command.c. Their names are leftover from early SN0, - * when the "module system controller" (msc) was known as the "entry level - * system controller" (elsc). The names and signatures of those functions - * remain unchanged in order to keep the SN0 -> SN1 system controller - * changes fairly localized. + * can be found in l1_command.c. */ @@ -35,45 +28,29 @@ #include #include #include +#include #include +#include #include #include #include #include #include #include -#include #include #include #include #include #include +#include +#include +#include +#include -#include -/* - * Delete this when atomic_clear is part of atomic.h. - */ -static __inline__ int -atomic_clear (int i, atomic_t *v) -{ - __s32 old, new; - - do { - old = atomic_read(v); - new = old & ~i; - } while (ia64_cmpxchg("acq", v, old, new, sizeof(atomic_t)) != old); - return new; -} - -#if defined(EEPROM_DEBUG) -#define db_printf(x) printk x -#else -#define db_printf(x) -#endif +/* Make all console writes atomic */ +#define SYNC_CONSOLE_WRITE 1 -// From irix/kern/sys/SN/SN1/bdrkhspecregs.h -#define HSPEC_UART_0 0x00000080 /* UART Registers */ /********************************************************************* * Hardware-level (UART) driver routines. 
@@ -81,28 +58,33 @@ /* macros for reading/writing registers */ -#define LD(x) (*(volatile uint64_t *)(x)) -#define SD(x, v) (LD(x) = (uint64_t) (v)) +#define LD(x) (*(volatile uint64_t *)(x)) +#define SD(x, v) (LD(x) = (uint64_t) (v)) /* location of uart receive/xmit data register */ -#define L1_UART_BASE(n) ((ulong)REMOTE_HSPEC_ADDR((n), HSPEC_UART_0)) -#define LOCAL_HUB LOCAL_HUB_ADDR -#define LOCK_HUB REMOTE_HUB_ADDR - -#define ADDR_L1_REG(n, r) \ - (L1_UART_BASE(n) | ( (r) << 3 )) +#if defined(CONFIG_IA64_SGI_SN1) +#define L1_UART_BASE(n) ((ulong)REMOTE_HSPEC_ADDR((n), 0x00000080)) +#define LOCK_HUB REMOTE_HUB_ADDR +#elif defined(CONFIG_IA64_SGI_SN2) +#define L1_UART_BASE(n) ((ulong)REMOTE_HUB((n), SH_JUNK_BUS_UART0)) +#define LOCK_HUB REMOTE_HUB +typedef u64 rtc_time_t; +#endif -#define READ_L1_UART_REG(n, r) \ - ( LD(ADDR_L1_REG((n), (r))) ) -#define WRITE_L1_UART_REG(n, r, v) \ - ( SD(ADDR_L1_REG((n), (r)), (v)) ) +#define ADDR_L1_REG(n, r) ( L1_UART_BASE(n) | ( (r) << 3 ) ) +#define READ_L1_UART_REG(n, r) ( LD(ADDR_L1_REG((n), (r))) ) +#define WRITE_L1_UART_REG(n, r, v) ( SD(ADDR_L1_REG((n), (r)), (v)) ) + +/* upper layer interface calling methods */ +#define SERIAL_INTERRUPT_MODE 0 +#define SERIAL_POLLED_MODE 1 /* UART-related #defines */ #define UART_BAUD_RATE 57600 -#define UART_FIFO_DEPTH 0xf0 +#define UART_FIFO_DEPTH 16 #define UART_DELAY_SPAN 10 #define UART_PUTC_TIMEOUT 50000 #define UART_INIT_TIMEOUT 100000 @@ -114,17 +96,32 @@ #define UART_NO_CHAR (-3) #define UART_VECTOR (-4) -#ifdef BRINGUP -#define UART_DELAY(x) { int i; i = x * 1000; while (--i); } -#else -#define UART_DELAY(x) us_delay(x) -#endif +#define UART_DELAY(x) udelay(x) + +/* Some debug counters */ +#define L1C_INTERRUPTS 0 +#define L1C_OUR_R_INTERRUPTS 1 +#define L1C_OUR_X_INTERRUPTS 2 +#define L1C_SEND_CALLUPS 3 +#define L1C_RECEIVE_CALLUPS 4 +#define L1C_SET_BAUD 5 +#define L1C_ALREADY_LOCKED L1C_SET_BAUD +#define L1C_R_IRQ 6 +#define L1C_R_IRQ_RET 7 +#define L1C_LOCK_TIMEOUTS 8 +#define L1C_LOCK_COUNTER 9 +#define L1C_UNLOCK_COUNTER 10 +#define L1C_REC_STALLS 11 +#define L1C_CONNECT_CALLS 12 +#define L1C_SIZE L1C_CONNECT_CALLS /* Set to the last one */ + +uint64_t L1_collectibles[L1C_SIZE + 1]; + /* * Some macros for handling Endian-ness */ -#ifdef LITTLE_ENDIAN #define COPY_INT_TO_BUFFER(_b, _i, _n) \ { \ _b[_i++] = (_n >> 24) & 0xff; \ @@ -149,41 +146,61 @@ _xyz[1] = _b[_i++]; \ _xyz[0] = _b[_i++]; \ } -#else /* BIG_ENDIAN */ -extern char *bcopy(const char * src, char * dest, int count); +void snia_kmem_free(void *where, int size); -#define COPY_INT_TO_BUFFER(_b, _i, _n) \ - { \ - bcopy((char *)&_n, _b, sizeof(_n)); \ - _i += sizeof(_n); \ - } +#define ALREADY_LOCKED 1 +#define NOT_LOCKED 0 +static int early_l1_serial_out(nasid_t, char *, int, int /* defines above*/ ); -#define COPY_BUFFER_TO_INT(_b, _i, _n) \ - { \ - bcopy(&_b[_i], &_n, sizeof(_n)); \ - _i += sizeof(_n); \ - } - -#define COPY_BUFFER_TO_BUFFER(_b, _i, _bn) \ - { \ - bcopy(&(_b[_i]), _bn, sizeof(int)); \ - _i += sizeof(int); \ - } -#endif /* LITTLE_ENDIAN */ +#define BCOPY(x,y,z) memcpy(y,x,z) -void kmem_free(void *where, int size); +uint8_t L1_interrupts_connected; /* Non-zero when we are in interrupt mode */ -#define BCOPY(x,y,z) memcpy(y,x,z) /* * Console locking defines and functions. 
* */ -#ifdef BRINGUP -#define FORCE_CONSOLE_NASID -#endif +uint8_t L1_cons_is_inited = 0; /* non-zero when console is init'd */ +nasid_t Master_console_nasid = (nasid_t)-1; +extern nasid_t console_nasid; + +u64 ia64_sn_get_console_nasid(void); + +inline nasid_t +get_master_nasid(void) +{ +#if defined(CONFIG_IA64_SGI_SN1) + nasid_t nasid = Master_console_nasid; + + if ( nasid == (nasid_t)-1 ) { + nasid = (nasid_t)ia64_sn_get_console_nasid(); + if ( (nasid < 0) || (nasid >= MAX_NASIDS) ) { + /* Out of bounds, use local */ + console_nasid = nasid = get_nasid(); + } + else { + /* Got a valid nasid, set the console_nasid */ + char xx[100]; +/* zzzzzz - force nasid to 0 for now */ + sprintf(xx, "Master console is set to nasid %d (%d)\n", 0, (int)nasid); +nasid = 0; +/* end zzzzzz */ + xx[99] = (char)0; + early_l1_serial_out(nasid, xx, strlen(xx), NOT_LOCKED); + Master_console_nasid = console_nasid = nasid; + } + } + return(nasid); +#else + return((nasid_t)0); +#endif /* CONFIG_IA64_SGI_SN1 */ +} + + +#if defined(CONFIG_IA64_SGI_SN1) #define HUB_LOCK 16 @@ -199,7 +216,6 @@ #define RTC_TIME_MAX ((rtc_time_t) ~0ULL) - /* * primary_lock * @@ -295,26 +311,37 @@ #define LOCK_TIMEOUT (0x1500000 * 1) /* 0x1500000 is ~30 sec */ -inline void +void lock_console(nasid_t nasid) { int ret; + /* If we already have it locked, just return */ + L1_collectibles[L1C_LOCK_COUNTER]++; + ret = hub_lock_timeout(nasid, HUB_LOCK, (rtc_time_t)LOCK_TIMEOUT); if ( ret != 0 ) { + L1_collectibles[L1C_LOCK_TIMEOUTS]++; /* timeout */ hub_unlock(nasid, HUB_LOCK); /* If the 2nd lock fails, just pile ahead.... */ hub_lock_timeout(nasid, HUB_LOCK, (rtc_time_t)LOCK_TIMEOUT); + L1_collectibles[L1C_LOCK_TIMEOUTS]++; } } inline void unlock_console(nasid_t nasid) { + L1_collectibles[L1C_UNLOCK_COUNTER]++; hub_unlock(nasid, HUB_LOCK); } +#else /* SN2 */ +inline void lock_console(nasid_t n) {} +inline void unlock_console(nasid_t n) {} + +#endif /* CONFIG_IA64_SGI_SN1 */ int get_L1_baud(void) @@ -325,27 +352,18 @@ /* uart driver functions */ -static void +static inline void uart_delay( rtc_time_t delay_span ) { UART_DELAY( delay_span ); } -#define UART_PUTC_READY(n) ( (READ_L1_UART_REG((n), REG_LSR) & LSR_XHRE) && (READ_L1_UART_REG((n), REG_MSR) & MSR_CTS) ) +#define UART_PUTC_READY(n) (READ_L1_UART_REG((n), REG_LSR) & LSR_XHRE) static int uart_putc( l1sc_t *sc ) { -#ifdef BRINGUP - /* need a delay to avoid dropping chars */ - UART_DELAY(57); -#endif -#ifdef FORCE_CONSOLE_NASID - /* We need this for the console write path _elscuart_flush() -> brl1_send() */ - sc->nasid = 0; -#endif - WRITE_L1_UART_REG( sc->nasid, REG_DAT, - sc->send[sc->sent] ); + WRITE_L1_UART_REG( sc->nasid, REG_DAT, sc->send[sc->sent] ); return UART_SUCCESS; } @@ -356,10 +374,6 @@ u_char lsr_reg = 0; nasid_t nasid = sc->nasid; -#ifdef FORCE_CONSOLE_NASID - nasid = sc->nasid = 0; -#endif - if( (lsr_reg = READ_L1_UART_REG( nasid, REG_LSR )) & (LSR_RCA | LSR_PARERR | LSR_FRMERR) ) { @@ -396,9 +410,10 @@ } } - if ( sc->uart == BRL1_LOCALUART ) + if ( sc->uart == BRL1_LOCALHUB_UART ) lock_console(nasid); + /* Setup for the proper baud rate */ WRITE_L1_UART_REG( nasid, REG_LCR, LCR_DLAB ); uart_delay( UART_DELAY_SPAN ); WRITE_L1_UART_REG( nasid, REG_DLH, (clkdiv >> 8) & 0xff ); @@ -407,6 +422,8 @@ uart_delay( UART_DELAY_SPAN ); /* set operating parameters and set DLAB to 0 */ + + /* 8bit, one stop, clear request to send, auto flow control */ WRITE_L1_UART_REG( nasid, REG_LCR, LCR_BITS8 | LCR_STOP1 ); uart_delay( UART_DELAY_SPAN ); WRITE_L1_UART_REG( nasid, REG_MCR, MCR_RTS | 
MCR_AFE ); @@ -416,23 +433,28 @@ WRITE_L1_UART_REG( nasid, REG_ICR, 0x0 ); uart_delay( UART_DELAY_SPAN ); - /* enable FIFO mode and reset both FIFOs */ + /* enable FIFO mode and reset both FIFOs, trigger on 1 */ WRITE_L1_UART_REG( nasid, REG_FCR, FCR_FIFOEN ); uart_delay( UART_DELAY_SPAN ); - WRITE_L1_UART_REG( nasid, REG_FCR, - FCR_FIFOEN | FCR_RxFIFO | FCR_TxFIFO ); + WRITE_L1_UART_REG( nasid, REG_FCR, FCR_FIFOEN | FCR_RxFIFO | FCR_TxFIFO | RxLVL0); - if ( sc->uart == BRL1_LOCALUART ) + if ( sc->uart == BRL1_LOCALHUB_UART ) unlock_console(nasid); } /* This requires the console lock */ + +#if defined(CONFIG_IA64_SGI_SN1) + static void uart_intr_enable( l1sc_t *sc, u_char mask ) { u_char lcr_reg, icr_reg; nasid_t nasid = sc->nasid; + if ( sc->uart == BRL1_LOCALHUB_UART ) + lock_console(nasid); + /* make sure that the DLAB bit in the LCR register is 0 */ lcr_reg = READ_L1_UART_REG( nasid, REG_LCR ); @@ -444,6 +466,9 @@ icr_reg = READ_L1_UART_REG( nasid, REG_ICR ); icr_reg |= mask; WRITE_L1_UART_REG( nasid, REG_ICR, icr_reg /*(ICR_RIEN | ICR_TIEN)*/ ); + + if ( sc->uart == BRL1_LOCALHUB_UART ) + unlock_console(nasid); } /* This requires the console lock */ @@ -453,6 +478,9 @@ u_char lcr_reg, icr_reg; nasid_t nasid = sc->nasid; + if ( sc->uart == BRL1_LOCALHUB_UART ) + lock_console(nasid); + /* make sure that the DLAB bit in the LCR register is 0 */ lcr_reg = READ_L1_UART_REG( nasid, REG_LCR ); @@ -464,7 +492,11 @@ icr_reg = READ_L1_UART_REG( nasid, REG_ICR ); icr_reg &= mask; WRITE_L1_UART_REG( nasid, REG_ICR, icr_reg /*(ICR_RIEN | ICR_TIEN)*/ ); + + if ( sc->uart == BRL1_LOCALHUB_UART ) + unlock_console(nasid); } +#endif /* CONFIG_IA64_SGI_SN1 */ #define uart_enable_xmit_intr(sc) \ uart_intr_enable((sc), ICR_TIEN) @@ -511,10 +543,6 @@ net_vec_t path = sc->uart; rtc_time_t expire = rtc_time() + RTR_UART_PUTC_TIMEOUT; -#ifdef FORCE_CONSOLE_NASID - /* We need this for the console write path _elscuart_flush() -> brl1_send() */ - nasid = sc->nasid = 0; -#endif c = (sc->send[sc->sent] & 0xffULL); while( 1 ) @@ -543,10 +571,6 @@ nasid_t nasid = sc->nasid; net_vec_t path = sc->uart; -#ifdef FORCE_CONSOLE_NASID - nasid = sc->nasid = 0; -#endif - READ_RTR_L1_UART_REG( path, nasid, REG_LSR, ®val ); if( regval & (LSR_RCA | LSR_PARERR | LSR_FRMERR) ) { @@ -617,6 +641,15 @@ return 0; } +/********************************************************************* + * locking macros + */ + +#define L1SC_SEND_LOCK(l,p) { if ((l)->uart == BRL1_LOCALHUB_UART) spin_lock_irqsave(&((l)->send_lock),p); } +#define L1SC_SEND_UNLOCK(l,p) { if ((l)->uart == BRL1_LOCALHUB_UART) spin_unlock_irqrestore(&((l)->send_lock), p); } +#define L1SC_RECV_LOCK(l,p) { if ((l)->uart == BRL1_LOCALHUB_UART) spin_lock_irqsave(&((l)->recv_lock), p); } +#define L1SC_RECV_UNLOCK(l,p) { if ((l)->uart == BRL1_LOCALHUB_UART) spin_unlock_irqrestore(&((l)->recv_lock), p); } + /********************************************************************* * subchannel manipulation @@ -626,31 +659,43 @@ * associated with particular subchannels (e.g., receive queues). 
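 * A short usage sketch follows; it is illustrative only and assumes the
 * spinlock-based SUBCH_LOCK()/SUBCH_UNLOCK() macros defined just below,
 * together with the l1sc_t/brl1_sch_t types used throughout this file
 * (the function name is hypothetical).
 */

static void example_reserve_subchannel(l1sc_t *sc, int ch)
{
	unsigned long pl = 0;			/* flags word saved by the irqsave variants */
	brl1_sch_t *subch = &(sc->subch[ch]);

	SUBCH_LOCK(sc, pl);			/* protects the subchannel table      */
	subch->use = BRL1_SUBCH_RSVD;		/* e.g. mark this subchannel reserved */
	SUBCH_UNLOCK(sc, pl);
}

/*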
* */ +#define SUBCH_LOCK(sc, p) spin_lock_irqsave( &((sc)->subch_lock), p ) +#define SUBCH_UNLOCK(sc, p) spin_unlock_irqrestore( &((sc)->subch_lock), p ) +#define SUBCH_DATA_LOCK(sbch, p) spin_lock_irqsave( &((sbch)->data_lock), p ) +#define SUBCH_DATA_UNLOCK(sbch, p) spin_unlock_irqrestore( &((sbch)->data_lock), p ) -#ifdef SPINLOCKS_WORK -#define SUBCH_LOCK(sc) spin_lock_irq( &((sc)->subch_lock) ) -#define SUBCH_UNLOCK(sc) spin_unlock_irq( &((sc)->subch_lock) ) -#define SUBCH_DATA_LOCK(sbch) spin_lock_irq( &((sbch)->data_lock) ) -#define SUBCH_DATA_UNLOCK(sbch) spin_unlock_irq( &((sbch)->data_lock) ) -#else -#define SUBCH_LOCK(sc) -#define SUBCH_UNLOCK(sc) -#define SUBCH_DATA_LOCK(sbch) -#define SUBCH_DATA_UNLOCK(sbch) -#endif - -/* get_myid is an internal function that reads the PI_CPU_NUM - * register of the local bedrock to determine which of the - * four possible CPU's "this" one is +/* + * set a function to be called for subchannel ch in the event of + * a transmission low-water interrupt from the uart */ -static int -get_myid( void ) +void +subch_set_tx_notify( l1sc_t *sc, int ch, brl1_notif_t func ) { - return( LD(LOCAL_HUB(PI_CPU_NUM)) ); + unsigned long pl = 0; + + L1SC_SEND_LOCK( sc, pl ); +#if !defined(SYNC_CONSOLE_WRITE) + if ( func && !sc->send_in_use ) + uart_enable_xmit_intr( sc ); +#endif + sc->subch[ch].tx_notify = func; + L1SC_SEND_UNLOCK(sc, pl ); } +/* + * set a function to be called for subchannel ch when data is received + */ +void +subch_set_rx_notify( l1sc_t *sc, int ch, brl1_notif_t func ) +{ + unsigned long pl = 0; + brl1_sch_t *subch = &(sc->subch[ch]); + SUBCH_DATA_LOCK( subch, pl ); + sc->subch[ch].rx_notify = func; + SUBCH_DATA_UNLOCK( subch, pl ); +} /********************************************************************* * Queue manipulation macros @@ -767,14 +812,16 @@ * brl1_discard_packet is a dummy "receive callback" used to get rid * of packets we don't want */ -void brl1_discard_packet( l1sc_t *sc, int ch ) +void brl1_discard_packet( int dummy0, void *dummy1, struct pt_regs *dummy2, l1sc_t *sc, int ch ) { + unsigned long pl = 0; brl1_sch_t *subch = &sc->subch[ch]; + sc_cq_t *q = subch->iqp; - SUBCH_DATA_LOCK( subch ); + SUBCH_DATA_LOCK( subch, pl ); q->opos = q->ipos; - atomic_clear( &(subch->packet_arrived), ~((unsigned)0) ); - SUBCH_DATA_UNLOCK( subch ); + atomic_set(&(subch->packet_arrived), 0); + SUBCH_DATA_UNLOCK( subch, pl ); } @@ -789,17 +836,15 @@ static int brl1_send_chars( l1sc_t *sc ) { - /* In the kernel, we track the depth of the C brick's UART's + /* We track the depth of the C brick's UART's * fifo in software, and only check if the UART is accepting * characters when our count indicates that the fifo should * be full. * - * For remote (router) UARTs, and also for the local (C brick) - * UART in the prom, we check with the UART before sending every + * For remote (router) UARTs, we check with the UART before sending every * character. */ - if( sc->uart == BRL1_LOCALUART ) - { + if( sc->uart == BRL1_LOCALHUB_UART ) { if( !(sc->fifo_space) && UART_PUTC_READY( sc->nasid ) ) sc->fifo_space = UART_FIFO_DEPTH; @@ -809,16 +854,10 @@ sc->sent++; } } + else { - else - - /* The following applies to all UARTs in the prom, and to remote - * (router) UARTs in the kernel... - */ - -#define TIMEOUT_RETRIES 30 + /* remote (router) UARTs */ - { int result; int tries = 0; @@ -831,7 +870,7 @@ if( result == UART_TIMEOUT ) { tries++; /* send this character in TIMEOUT_RETRIES... 
*/ - if( tries < TIMEOUT_RETRIES ) { + if( tries < 30 /* TIMEOUT_RETRIES */ ) { continue; } /* ...or else... */ @@ -864,33 +903,39 @@ static int brl1_send( l1sc_t *sc, char *msg, int len, u_char type_and_subch, int wait ) { + unsigned long pl = 0; int index; int pkt_len = 0; unsigned short crc = INIT_CRC; char *send_ptr = sc->send; -#ifdef BRINGUP - /* We want to be sure that we are sending the entire packet before returning */ - wait = 1; -#endif - if ( sc->uart == BRL1_LOCALUART ) - lock_console(sc->nasid); - if( sc->send_in_use ) { - if( !wait ) { - if ( sc->uart == BRL1_LOCALUART ) - unlock_console(sc->nasid); - return 0; /* couldn't send anything; wait for buffer to drain */ - } - else { - /* buffer's in use, but we're synchronous I/O, so we're going - * to send whatever's in there right now and take the buffer - */ - while( sc->sent < sc->send_len ) + if( sc->send_in_use && !(wait) ) { + /* We are in the middle of sending, but can wait until done */ + return 0; + } + else if( sc->send_in_use ) { + /* buffer's in use, but we're synchronous I/O, so we're going + * to send whatever's in there right now and take the buffer + */ + int counter = 0; + + if ( sc->uart == BRL1_LOCALHUB_UART ) + lock_console(sc->nasid); + L1SC_SEND_LOCK(sc, pl); + while( sc->sent < sc->send_len ) { brl1_send_chars( sc ); + if ( counter++ > 0xfffff ) { + char *str = "Looping waiting for uart to clear (1)\n"; + early_l1_serial_out(sc->nasid, str, strlen(str), ALREADY_LOCKED); + break; + } } } else { + if ( sc->uart == BRL1_LOCALHUB_UART ) + lock_console(sc->nasid); + L1SC_SEND_LOCK(sc, pl); sc->send_in_use = 1; } *send_ptr++ = BRL1_FLAG_CH; @@ -948,23 +993,100 @@ sc->send_len = pkt_len; sc->sent = 0; - do { - brl1_send_chars( sc ); - } while( (sc->sent < sc->send_len) && wait ); + { + int counter = 0; + do { + brl1_send_chars( sc ); + if ( counter++ > 0xfffff ) { + char *str = "Looping waiting for uart to clear (2)\n"; + early_l1_serial_out(sc->nasid, str, strlen(str), ALREADY_LOCKED); + break; + } + } while( (sc->sent < sc->send_len) && wait ); + } + + if ( sc->uart == BRL1_LOCALHUB_UART ) + unlock_console(sc->nasid); if( sc->sent == sc->send_len ) { - /* success! release the send buffer */ + /* success! release the send buffer and call the callup */ +#if !defined(SYNC_CONSOLE_WRITE) + brl1_notif_t callup; +#endif + sc->send_in_use = 0; + /* call any upper layer that's asked for notification */ +#if defined(XX_SYNC_CONSOLE_WRITE) + /* + * This is probably not a good idea - since the l1_ write func can be called multiple + * time within the callup function. + */ + callup = subch->tx_notify; + if( callup && (SUBCH(type_and_subch) == SC_CONS_SYSTEM) ) { + L1_collectibles[L1C_SEND_CALLUPS]++; + (*callup)(sc->subch[SUBCH(type_and_subch)].irq_frame.bf_irq, + sc->subch[SUBCH(type_and_subch)].irq_frame.bf_dev_id, + sc->subch[SUBCH(type_and_subch)].irq_frame.bf_regs, sc, SUBCH(type_and_subch)); + } +#endif /* SYNC_CONSOLE_WRITE */ } - else if( !wait ) { +#if !defined(SYNC_CONSOLE_WRITE) + else if ( !wait ) { /* enable low-water interrupts so buffer will be drained */ uart_enable_xmit_intr(sc); } - if ( sc->uart == BRL1_LOCALUART ) - unlock_console(sc->nasid); +#endif + + L1SC_SEND_UNLOCK(sc, pl); + return len; } +/* brl1_send_cont is intended to be called as an interrupt service + * routine. It sends until the UART won't accept any more characters, + * or until an error is encountered (in which case we surrender the + * send buffer and give up trying to send the packet). 
Once the + * last character in the packet has been sent, this routine releases + * the send buffer and calls any previously-registered "low-water" + * output routines. + */ + +#if !defined(SYNC_CONSOLE_WRITE) + +int +brl1_send_cont( l1sc_t *sc ) +{ + unsigned long pl = 0; + int done = 0; + brl1_notif_t callups[BRL1_NUM_SUBCHANS]; + brl1_notif_t *callup; + brl1_sch_t *subch; + int index; + + /* + * I'm not sure how I think this is to be handled - whether the lock is held + * over the interrupt - but it seems like it is a bad idea.... + */ + + if ( sc->uart == BRL1_LOCALHUB_UART ) + lock_console(sc->nasid); + L1SC_SEND_LOCK(sc, pl); + brl1_send_chars( sc ); + done = (sc->sent == sc->send_len); + if( done ) { + sc->send_in_use = 0; +#if !defined(SYNC_CONSOLE_WRITE) + uart_disable_xmit_intr(sc); +#endif + } + if ( sc->uart == BRL1_LOCALHUB_UART ) + unlock_console(sc->nasid); + /* Release the lock */ + L1SC_SEND_UNLOCK(sc, pl); + + return 0; +} +#endif /* SYNC_CONSOLE_WRITE */ /* internal function -- used by brl1_receive to read a character * from the uart and check whether errors occurred in the process. @@ -1046,40 +1168,33 @@ * error (parity error, bad header, bad CRC, etc.). */ -#define STATE_SET(l,s) ((l)->brl1_state = (s)) -#define STATE_GET(l) ((l)->brl1_state) +#define STATE_SET(l,s) ((l)->brl1_state = (s)) +#define STATE_GET(l) ((l)->brl1_state) #define LAST_HDR_SET(l,h) ((l)->brl1_last_hdr = (h)) #define LAST_HDR_GET(l) ((l)->brl1_last_hdr) -#define SEQSTAMP_INCR(l) -#define SEQSTAMP_GET(l) - #define VALID_HDR(c) \ ( SUBCH((c)) <= SC_CONS_SYSTEM \ ? PKT_TYPE((c)) == BRL1_REQUEST \ : ( PKT_TYPE((c)) == BRL1_RESPONSE || \ PKT_TYPE((c)) == BRL1_EVENT ) ) -#define IS_TTY_PKT(l) \ - ( SUBCH(LAST_HDR_GET(l)) <= SC_CONS_SYSTEM ? 1 : 0 ) +#define IS_TTY_PKT(l) ( SUBCH(LAST_HDR_GET(l)) <= SC_CONS_SYSTEM ? 1 : 0 ) int -brl1_receive( l1sc_t *sc ) +brl1_receive( l1sc_t *sc, int mode ) { int result; /* value to be returned by brl1_receive */ int c; /* most-recently-read character */ int done; /* set done to break out of recv loop */ + unsigned long pl = 0, cpl = 0; sc_cq_t *q; /* pointer to queue we're working with */ result = BRL1_NO_MESSAGE; -#ifdef FORCE_CONSOLE_NASID - sc->nasid = 0; -#endif - if ( sc->uart == BRL1_LOCALUART ) - lock_console(sc->nasid); + L1SC_RECV_LOCK(sc, cpl); done = 0; while( !done ) @@ -1210,8 +1325,7 @@ * starting a new packet */ STATE_SET( sc, BRL1_FLAG ); - SEQSTAMP_INCR(sc); /* bump the packet sequence counter */ - + /* if the packet body has less than 2 characters, * it can't be a well-formed packet. Discard it. 
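 * The first byte of each packet body is a header that combines the
 * packet type with the subchannel number.  A small, purely illustrative
 * sketch of how that byte is built and decoded, using the BRL1_REQUEST,
 * PKT_TYPE() and SUBCH() names that appear in this file (the function
 * itself is hypothetical):
 */

static int example_header_round_trip(int ch)
{
	u_char hdr = BRL1_REQUEST | ((u_char)ch);	/* packet type OR'd with the subchannel, as the send path does */

	/* PKT_TYPE()/SUBCH() recover the two halves on the receive side */
	return (PKT_TYPE(hdr) == BRL1_REQUEST) && (SUBCH(hdr) == ch);
}

/*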
*/ @@ -1258,7 +1372,7 @@ /* get the subchannel and lock it */ subch = &(sc->subch[SUBCH( LAST_HDR_GET(sc) )]); - SUBCH_DATA_LOCK( subch ); + SUBCH_DATA_LOCK( subch, pl ); /* if this isn't a console packet, we need to record * a length byte @@ -1276,14 +1390,16 @@ */ atomic_inc(&(subch->packet_arrived)); callup = subch->rx_notify; - SUBCH_DATA_UNLOCK( subch ); + SUBCH_DATA_UNLOCK( subch, pl ); - if( callup ) { - if ( sc->uart == BRL1_LOCALUART ) - unlock_console(sc->nasid); - (*callup)( sc, SUBCH(LAST_HDR_GET(sc)) ); - if ( sc->uart == BRL1_LOCALUART ) - lock_console(sc->nasid); + if( callup && (mode == SERIAL_INTERRUPT_MODE) ) { + L1SC_RECV_UNLOCK( sc, cpl ); + L1_collectibles[L1C_RECEIVE_CALLUPS]++; + (*callup)( sc->subch[SUBCH(LAST_HDR_GET(sc))].irq_frame.bf_irq, + sc->subch[SUBCH(LAST_HDR_GET(sc))].irq_frame.bf_dev_id, + sc->subch[SUBCH(LAST_HDR_GET(sc))].irq_frame.bf_regs, + sc, SUBCH(LAST_HDR_GET(sc)) ); + L1SC_RECV_LOCK( sc, cpl ); } continue; /* go back for more! */ } @@ -1351,9 +1467,8 @@ } /* end of switch( STATE_GET(sc) ) */ } /* end of while(!done) */ - - if ( sc->uart == BRL1_LOCALUART ) - unlock_console(sc->nasid); + + L1SC_RECV_UNLOCK( sc, cpl ); return result; } @@ -1370,13 +1485,10 @@ brl1_sch_t *subch; bzero( sc, sizeof( *sc ) ); -#ifdef FORCE_CONSOLE_NASID - nasid = (nasid_t)0; -#endif sc->nasid = nasid; sc->uart = uart; - sc->getc_f = (uart == BRL1_LOCALUART ? uart_getc : rtr_uart_getc); - sc->putc_f = (uart == BRL1_LOCALUART ? uart_putc : rtr_uart_putc); + sc->getc_f = (uart == BRL1_LOCALHUB_UART ? uart_getc : rtr_uart_getc); + sc->putc_f = (uart == BRL1_LOCALHUB_UART ? uart_putc : rtr_uart_putc); sc->sol = 1; subch = sc->subch; @@ -1403,9 +1515,8 @@ spin_lock_init( &(subch->data_lock) ); sv_init( &(subch->arrive_sv), &subch->data_lock, SV_MON_SPIN | SV_ORDER_FIFO /* | SV_INTS */ ); subch->tx_notify = NULL; - if( sc->uart == BRL1_LOCALUART ) { - subch->iqp = kmem_zalloc_node( sizeof(sc_cq_t), KM_NOSLEEP, - NASID_TO_COMPACT_NODEID(nasid) ); + if( sc->uart == BRL1_LOCALHUB_UART ) { + subch->iqp = snia_kmem_zalloc_node( sizeof(sc_cq_t), KM_NOSLEEP, NASID_TO_COMPACT_NODEID(nasid) ); ASSERT( subch->iqp ); cq_init( subch->iqp ); subch->rx_notify = NULL; @@ -1440,8 +1551,10 @@ /* initialize synchronization structures */ spin_lock_init( &(sc->subch_lock) ); + spin_lock_init( &(sc->send_lock) ); + spin_lock_init( &(sc->recv_lock) ); - if( sc->uart == BRL1_LOCALUART ) { + if( sc->uart == BRL1_LOCALHUB_UART ) { uart_init( sc, UART_BAUD_RATE ); } else { @@ -1461,23 +1574,397 @@ } } +/********************************************************************* + * These are interrupt-related functions used in the kernel to service + * the L1. + */ + +/* + * brl1_intrd is the function which is called on a console interrupt. 
+ */ + +#if defined(CONFIG_IA64_SGI_SN1) + +static void +brl1_intrd(int irq, void *dev_id, struct pt_regs *stuff) +{ + u_char isr_reg; + l1sc_t *sc = get_elsc(); + int ret; + + L1_collectibles[L1C_INTERRUPTS]++; + isr_reg = READ_L1_UART_REG(sc->nasid, REG_ISR); + + /* Save for callup args in console */ + sc->subch[SC_CONS_SYSTEM].irq_frame.bf_irq = irq; + sc->subch[SC_CONS_SYSTEM].irq_frame.bf_dev_id = dev_id; + sc->subch[SC_CONS_SYSTEM].irq_frame.bf_regs = stuff; + +#if defined(SYNC_CONSOLE_WRITE) + while( isr_reg & ISR_RxRDY ) +#else + while( isr_reg & (ISR_RxRDY | ISR_TxRDY) ) +#endif + { + if( isr_reg & ISR_RxRDY ) { + L1_collectibles[L1C_OUR_R_INTERRUPTS]++; + ret = brl1_receive(sc, SERIAL_INTERRUPT_MODE); + if ( (ret != BRL1_VALID) && (ret != BRL1_NO_MESSAGE) && (ret != BRL1_PROTOCOL) && (ret != BRL1_CRC) ) + L1_collectibles[L1C_REC_STALLS] = ret; + } +#if !defined(SYNC_CONSOLE_WRITE) + if( (isr_reg & ISR_TxRDY) || (sc->send_in_use && UART_PUTC_READY(sc->nasid)) ) { + L1_collectibles[L1C_OUR_X_INTERRUPTS]++; + brl1_send_cont(sc); + } +#endif /* SYNC_CONSOLE_WRITE */ + isr_reg = READ_L1_UART_REG(sc->nasid, REG_ISR); + } +} +#endif /* CONFIG_IA64_SGI_SN1 */ + + +/* + * Install a callback function for the system console subchannel + * to allow an upper layer to be notified when the send buffer + * has been emptied. + */ +static inline void +l1_tx_notif( brl1_notif_t func ) +{ + subch_set_tx_notify( &NODEPDA(NASID_TO_COMPACT_NODEID(get_master_nasid()))->module->elsc, + SC_CONS_SYSTEM, func ); +} + + +/* + * Install a callback function for the system console subchannel + * to allow an upper layer to be notified when a packet has been + * received. + */ +static inline void +l1_rx_notif( brl1_notif_t func ) +{ + subch_set_rx_notify( &NODEPDA(NASID_TO_COMPACT_NODEID(get_master_nasid()))->module->elsc, + SC_CONS_SYSTEM, func ); +} + + +/* brl1_intr is called directly from the uart interrupt; after it runs, the + * interrupt "daemon" xthread is signalled to continue. + */ +void +brl1_intr( void ) +{ +} + +#define BRL1_INTERRUPT_LEVEL 65 /* linux request_irq() value */ + +/* Return the current interrupt level */ + +//#define CONSOLE_POLLING_ALSO + +int +l1_get_intr_value( void ) +{ +#ifdef CONSOLE_POLLING_ALSO + return(0); +#else + return(BRL1_INTERRUPT_LEVEL); +#endif +} + +/* Disconnect the callup functions - throw away interrupts */ + +void +l1_unconnect_intr(void) +{ + /* UnRegister the upper-level callup functions */ + l1_rx_notif((brl1_notif_t)NULL); + l1_tx_notif((brl1_notif_t)NULL); + /* We do NOT unregister the interrupts */ +} + +/* Set up uart interrupt handling for this node's uart */ + +void +l1_connect_intr(void *rx_notify, void *tx_notify) +{ + l1sc_t *sc; + nasid_t nasid; +#if defined(CONFIG_IA64_SGI_SN1) + int tmp; +#endif + nodepda_t *console_nodepda; + int intr_connect_level(cpuid_t, int, ilvl_t, intr_func_t); + + if ( L1_interrupts_connected ) { + /* Interrupts are connected, so just register the callups */ + l1_rx_notif((brl1_notif_t)rx_notify); + l1_tx_notif((brl1_notif_t)tx_notify); + + L1_collectibles[L1C_CONNECT_CALLS]++; + return; + } + else + L1_interrupts_connected = 1; + + nasid = get_master_nasid(); + console_nodepda = NODEPDA(NASID_TO_COMPACT_NODEID(nasid)); + sc = &console_nodepda->module->elsc; + sc->intr_cpu = console_nodepda->node_first_cpu; + +#if defined(CONFIG_IA64_SGI_SN1) + if ( intr_connect_level(sc->intr_cpu, UART_INTR, INTPEND0_MAXMASK, (intr_func_t)brl1_intr) ) { + L1_interrupts_connected = 0; /* FAILS !! 
*/ + } + else { + void synergy_intr_connect(int, int); + + synergy_intr_connect(UART_INTR, sc->intr_cpu); + L1_collectibles[L1C_R_IRQ]++; + tmp = request_irq(BRL1_INTERRUPT_LEVEL, brl1_intrd, SA_INTERRUPT | SA_SHIRQ, "l1_protocol_driver", (void *)sc); + L1_collectibles[L1C_R_IRQ_RET] = (uint64_t)tmp; + if ( tmp ) { + L1_interrupts_connected = 0; /* FAILS !! */ + } + else { + /* Register the upper-level callup functions */ + l1_rx_notif((brl1_notif_t)rx_notify); + l1_tx_notif((brl1_notif_t)tx_notify); + + /* Set the uarts the way we like it */ + uart_enable_recv_intr( sc ); + uart_disable_xmit_intr( sc ); + } + } +#endif /* CONFIG_IA64_SGI_SN1 */ +} + + +/* Set the line speed */ + +void +l1_set_baud(int baud) +{ +#if 0 + nasid_t nasid; + static void uart_init(l1sc_t *, int); +#endif + + L1_collectibles[L1C_SET_BAUD]++; + +#if 0 + if ( L1_cons_is_inited ) { + nasid = get_master_nasid(); + if ( NODEPDA(NASID_TO_COMPACT_NODEID(nasid))->module != (module_t *)0 ) + uart_init(&NODEPDA(NASID_TO_COMPACT_NODEID(nasid))->module->elsc, baud); + } +#endif + return; +} + /* These are functions to use from serial_in/out when in protocol * mode to send and receive uart control regs. These are external * interfaces into the protocol driver. */ + void l1_control_out(int offset, int value) { - nasid_t nasid = 0; //(get_elsc())->nasid; + nasid_t nasid = get_master_nasid(); WRITE_L1_UART_REG(nasid, offset, value); } +/* Console input exported interface. Return a register value. */ + +int +l1_control_in_polled(int offset) +{ + static int l1_control_in_local(int, int); + + return(l1_control_in_local(offset, SERIAL_POLLED_MODE)); +} + int l1_control_in(int offset) { - nasid_t nasid = 0; //(get_elsc())->nasid; - return(READ_L1_UART_REG(nasid, offset)); + static int l1_control_in_local(int, int); + + return(l1_control_in_local(offset, SERIAL_INTERRUPT_MODE)); +} + +static int +l1_control_in_local(int offset, int mode) +{ + nasid_t nasid; + int ret, input; + static int l1_poll(l1sc_t *, int); + + nasid = get_master_nasid(); + ret = READ_L1_UART_REG(nasid, offset); + + if ( offset == REG_LSR ) { + ret |= (LSR_XHRE | LSR_XSRE); /* can send anytime */ + if ( L1_cons_is_inited ) { + if ( NODEPDA(NASID_TO_COMPACT_NODEID(nasid))->module != (module_t *)0 ) { + input = l1_poll(&NODEPDA(NASID_TO_COMPACT_NODEID(nasid))->module->elsc, mode); + if ( input ) { + ret |= LSR_RCA; + } + } + } + } + return(ret); +} + +/* + * Console input exported interface. Return a character (if one is available) + */ + +int +l1_serial_in_polled(void) +{ + static int l1_serial_in_local(int mode); + + return(l1_serial_in_local(SERIAL_POLLED_MODE)); +} + +int +l1_serial_in(void) +{ + static int l1_serial_in_local(int mode); + + return(l1_serial_in_local(SERIAL_INTERRUPT_MODE)); +} + +static int +l1_serial_in_local(int mode) +{ + nasid_t nasid; + l1sc_t *sc; + int value; + static int l1_getc( l1sc_t *, int ); + static inline l1sc_t *early_sc_init(nasid_t); + + nasid = get_master_nasid(); + sc = early_sc_init(nasid); + if ( L1_cons_is_inited ) { + if ( NODEPDA(NASID_TO_COMPACT_NODEID(nasid))->module != (module_t *)0 ) { + sc = &NODEPDA(NASID_TO_COMPACT_NODEID(nasid))->module->elsc; + } + } + value = l1_getc(sc, mode); + return(value); +} + +/* Console output exported interface. Write message to the console. 
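 * A brief usage sketch (illustrative only): echoing polled console
 * input back out through these exported entry points, i.e.
 * l1_serial_in_polled() above and l1_serial_out() just below
 * (the function name here is hypothetical).
 */

static void example_echo_console_char(void)
{
	int c = l1_serial_in_polled();		/* returns 0 when no character is pending */

	if (c) {
		char ch = (char)c;

		l1_serial_out(&ch, 1);		/* write the single byte back to the console */
	}
}

/*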
*/ + +int +l1_serial_out( char *str, int len ) +{ + nasid_t nasid = get_master_nasid(); + int l1_write(l1sc_t *, char *, int, int); + + if ( L1_cons_is_inited ) { + if ( NODEPDA(NASID_TO_COMPACT_NODEID(nasid))->module != (module_t *)0 ) + return(l1_write(&NODEPDA(NASID_TO_COMPACT_NODEID(nasid))->module->elsc, str, len, +#if defined(SYNC_CONSOLE_WRITE) + 1 +#else + !L1_interrupts_connected +#endif + )); + } + return(early_l1_serial_out(nasid, str, len, NOT_LOCKED)); +} + + +/* + * These are the 'early' functions - when we need to do things before we have + * all the structs setup. + */ + +static l1sc_t Early_console; /* fake l1sc_t */ +static int Early_console_inited = 0; + +static void +early_brl1_init( l1sc_t *sc, nasid_t nasid, net_vec_t uart ) +{ + int i; + brl1_sch_t *subch; + + bzero( sc, sizeof( *sc ) ); + sc->nasid = nasid; + sc->uart = uart; + sc->getc_f = (uart == BRL1_LOCALHUB_UART ? uart_getc : rtr_uart_getc); + sc->putc_f = (uart == BRL1_LOCALHUB_UART ? uart_putc : rtr_uart_putc); + sc->sol = 1; + subch = sc->subch; + + /* initialize L1 subchannels + */ + + /* assign processor TTY channels */ + for( i = 0; i < CPUS_PER_NODE; i++, subch++ ) { + subch->use = BRL1_SUBCH_RSVD; + subch->packet_arrived = ATOMIC_INIT(0); + subch->tx_notify = NULL; + subch->rx_notify = NULL; + subch->iqp = &sc->garbage_q; + } + + /* assign system TTY channel (first free subchannel after each + * processor's individual TTY channel has been assigned) + */ + subch->use = BRL1_SUBCH_RSVD; + subch->packet_arrived = ATOMIC_INIT(0); + subch->tx_notify = NULL; + subch->rx_notify = NULL; + if( sc->uart == BRL1_LOCALHUB_UART ) { + static sc_cq_t x_iqp; + + subch->iqp = &x_iqp; + ASSERT( subch->iqp ); + cq_init( subch->iqp ); + } + else { + /* we shouldn't be getting console input from remote UARTs */ + subch->iqp = &sc->garbage_q; + } + subch++; i++; + + /* "reserved" subchannels (0x05-0x0F); for now, throw away + * incoming packets + */ + for( ; i < 0x10; i++, subch++ ) { + subch->use = BRL1_SUBCH_FREE; + subch->packet_arrived = ATOMIC_INIT(0); + subch->tx_notify = NULL; + subch->rx_notify = NULL; + subch->iqp = &sc->garbage_q; + } + + /* remaining subchannels are free */ + for( ; i < BRL1_NUM_SUBCHANS; i++, subch++ ) { + subch->use = BRL1_SUBCH_FREE; + subch->packet_arrived = ATOMIC_INIT(0); + subch->tx_notify = NULL; + subch->rx_notify = NULL; + subch->iqp = &sc->garbage_q; + } +} + +static inline l1sc_t * +early_sc_init(nasid_t nasid) +{ + /* This is for early I/O */ + if ( Early_console_inited == 0 ) { + early_brl1_init(&Early_console, nasid, BRL1_LOCALHUB_UART); + Early_console_inited = 1; + } + return(&Early_console); } #define PUTCHAR(ch) \ @@ -1487,15 +1974,34 @@ WRITE_L1_UART_REG( nasid, REG_DAT, (ch) ); \ } -int -l1_serial_out( char *str, int len ) +static int +early_l1_serial_out( nasid_t nasid, char *str, int len, int lock_state ) { - int sent = len; + int ret, sent = 0; + char *msg = str; + static int early_l1_send( nasid_t nasid, char *str, int len, int lock_state ); + + while ( sent < len ) { + ret = early_l1_send(nasid, msg, len - sent, lock_state); + sent += ret; + msg += ret; + } + return(len); +} + +static inline int +early_l1_send( nasid_t nasid, char *str, int len, int lock_state ) +{ + int sent; char crc_char; unsigned short crc = INIT_CRC; - nasid_t nasid = 0; //(get_elsc())->nasid; - lock_console(nasid); + if( len > (BRL1_QSIZE - 1) ) + len = (BRL1_QSIZE - 1); + + sent = len; + if ( lock_state == NOT_LOCKED ) + lock_console(nasid); PUTCHAR( BRL1_FLAG_CH ); PUTCHAR( BRL1_EVENT | 
SC_CONS_SYSTEM ); @@ -1531,16 +2037,9 @@ PUTCHAR( crc_char ); PUTCHAR( BRL1_FLAG_CH ); - unlock_console(nasid); - return sent - len; -} - -int -l1_serial_in(void) -{ - static int l1_cons_getc( l1sc_t *sc ); - - return(l1_cons_getc(get_elsc())); + if ( lock_state == NOT_LOCKED ) + unlock_console(nasid); + return sent; } @@ -1555,12 +2054,16 @@ */ static int -l1_cons_poll( l1sc_t *sc ) +l1_poll( l1sc_t *sc, int mode ) { + int ret; + /* in case this gets called before the l1sc_t structure for the module_t * struct for this node is initialized (i.e., if we're called with a * zero l1sc_t pointer)... */ + + if( !sc ) { return 0; } @@ -1569,7 +2072,9 @@ return 1; } - brl1_receive( sc ); + ret = brl1_receive( sc, mode ); + if ( (ret != BRL1_VALID) && (ret != BRL1_NO_MESSAGE) && (ret != BRL1_PROTOCOL) && (ret != BRL1_CRC) ) + L1_collectibles[L1C_REC_STALLS] = ret; if( atomic_read(&sc->subch[SC_CONS_SYSTEM].packet_arrived) ) { return 1; @@ -1581,43 +2086,65 @@ /* pull a character off of the system console queue (if one is available) */ static int -l1_cons_getc( l1sc_t *sc ) +l1_getc( l1sc_t *sc, int mode ) { + unsigned long pl = 0; int c; brl1_sch_t *subch = &(sc->subch[SC_CONS_SYSTEM]); sc_cq_t *q = subch->iqp; - if( !l1_cons_poll( sc ) ) { + if( !l1_poll( sc, mode ) ) { return 0; } - SUBCH_DATA_LOCK( subch ); + SUBCH_DATA_LOCK( subch, pl ); if( cq_empty( q ) ) { atomic_set(&subch->packet_arrived, 0); - SUBCH_DATA_UNLOCK( subch ); + SUBCH_DATA_UNLOCK( subch, pl ); return 0; } cq_rem( q, c ); if( cq_empty( q ) ) atomic_set(&subch->packet_arrived, 0); - SUBCH_DATA_UNLOCK( subch ); + SUBCH_DATA_UNLOCK( subch, pl ); return c; } +/* + * Write a message to the L1 on the system console subchannel. + * + * Danger: don't use a non-zero value for the wait parameter unless you're + * someone important (like a kernel error message). + */ + +int +l1_write( l1sc_t *sc, char *msg, int len, int wait ) +{ + int sent = 0, ret = 0; + + if ( wait ) { + while ( sent < len ) { + ret = brl1_send( sc, msg, len - sent, (SC_CONS_SYSTEM | BRL1_EVENT), wait ); + sent += ret; + msg += ret; + } + ret = len; + } + else { + ret = brl1_send( sc, msg, len, (SC_CONS_SYSTEM | BRL1_EVENT), wait ); + } + return(ret); +} /* initialize the system console subchannel */ void -l1_cons_init( l1sc_t *sc ) +l1_init(void) { - brl1_sch_t *subch = &(sc->subch[SC_CONS_SYSTEM]); - - SUBCH_DATA_LOCK( subch ); - atomic_set(&subch->packet_arrived, 0); - cq_init( subch->iqp ); - SUBCH_DATA_UNLOCK( subch ); + /* All we do now is remember that we have been called */ + L1_cons_is_inited = 1; } @@ -1637,16 +2164,18 @@ #define L1_DBG_PRF(x) #endif -/* sc_data_ready is called to signal threads that are blocked on - * l1 input. +/* + * sc_data_ready is called to signal threads that are blocked on l1 input. */ void -sc_data_ready( l1sc_t *sc, int ch ) +sc_data_ready( int dummy0, void *dummy1, struct pt_regs *dummy2, l1sc_t *sc, int ch ) { + unsigned long pl = 0; + brl1_sch_t *subch = &(sc->subch[ch]); - SUBCH_DATA_LOCK( subch ); + SUBCH_DATA_LOCK( subch, pl ); sv_signal( &(subch->arrive_sv) ); - SUBCH_DATA_UNLOCK( subch ); + SUBCH_DATA_UNLOCK( subch, pl ); } /* sc_open reserves a subchannel to send a request to the L1 (the @@ -1661,9 +2190,10 @@ * subchannel assignment. */ int ch; + unsigned long pl = 0; brl1_sch_t *subch; - SUBCH_LOCK( sc ); + SUBCH_LOCK( sc, pl ); /* Look for a free subchannel. Subchannels 0-15 are reserved * for other purposes. @@ -1676,12 +2206,12 @@ if( ch == BRL1_NUM_SUBCHANS ) { /* there were no subchannels available! 
*/ - SUBCH_UNLOCK( sc ); + SUBCH_UNLOCK( sc, pl ); return SC_NSUBCH; } subch->use = BRL1_SUBCH_RSVD; - SUBCH_UNLOCK( sc ); + SUBCH_UNLOCK( sc, pl ); atomic_set(&subch->packet_arrived, 0); subch->target = target; @@ -1689,7 +2219,7 @@ sv_init( &(subch->arrive_sv), &(subch->data_lock), SV_MON_SPIN | SV_ORDER_FIFO /* | SV_INTS */); subch->tx_notify = NULL; subch->rx_notify = sc_data_ready; - subch->iqp = kmem_zalloc_node( sizeof(sc_cq_t), KM_NOSLEEP, + subch->iqp = snia_kmem_zalloc_node( sizeof(sc_cq_t), KM_NOSLEEP, NASID_TO_COMPACT_NODEID(sc->nasid) ); ASSERT( subch->iqp ); cq_init( subch->iqp ); @@ -1703,29 +2233,31 @@ int sc_close( l1sc_t *sc, int ch ) { + unsigned long pl = 0; brl1_sch_t *subch; - SUBCH_LOCK( sc ); + SUBCH_LOCK( sc, pl ); subch = &(sc->subch[ch]); if( subch->use != BRL1_SUBCH_RSVD ) { /* we're trying to close a subchannel that's not open */ + SUBCH_UNLOCK( sc, pl ); return SC_NOPEN; } atomic_set(&subch->packet_arrived, 0); subch->use = BRL1_SUBCH_FREE; - SUBCH_DATA_LOCK( subch ); sv_broadcast( &(subch->arrive_sv) ); sv_destroy( &(subch->arrive_sv) ); - SUBCH_DATA_UNLOCK( subch ); spin_lock_destroy( &(subch->data_lock) ); ASSERT( subch->iqp && (subch->iqp != &sc->garbage_q) ); - kmem_free( subch->iqp, sizeof(sc_cq_t) ); + snia_kmem_free( subch->iqp, sizeof(sc_cq_t) ); subch->iqp = &sc->garbage_q; + subch->tx_notify = NULL; + subch->rx_notify = brl1_discard_packet; - SUBCH_UNLOCK( sc ); + SUBCH_UNLOCK( sc, pl ); return SC_SUCCESS; } @@ -2033,11 +2565,10 @@ /* Verify that this is an open subchannel */ - if( sc->subch[ch].use == BRL1_SUBCH_FREE ) - { + if( sc->subch[ch].use == BRL1_SUBCH_FREE ) { return SC_NOPEN; } - + type_and_subch = (BRL1_REQUEST | ((u_char)ch)); result = brl1_send( sc, msg, len, type_and_subch, wait ); @@ -2115,6 +2646,7 @@ sc_recv_poll( l1sc_t *sc, int ch, char *msg, int *len, uint64_t block ) { int is_msg = 0; + unsigned long pl = 0; brl1_sch_t *subch = &(sc->subch[ch]); rtc_time_t exp_time = rtc_time() + block; @@ -2127,7 +2659,7 @@ /* kick the next lower layer and see if it pulls anything in */ - brl1_receive( sc ); + brl1_receive( sc, SERIAL_POLLED_MODE ); is_msg = atomic_read(&subch->packet_arrived); } while( block && !is_msg && (rtc_time() < exp_time) ); @@ -2137,9 +2669,9 @@ return( SC_NMSG ); } - SUBCH_DATA_LOCK( subch ); + SUBCH_DATA_LOCK( subch, pl ); subch_pull_msg( subch, msg, len ); - SUBCH_DATA_UNLOCK( subch ); + SUBCH_DATA_UNLOCK( subch, pl ); return( SC_SUCCESS ); } @@ -2156,10 +2688,11 @@ sc_recv_intr( l1sc_t *sc, int ch, char *msg, int *len, uint64_t block ) { int is_msg = 0; + unsigned long pl = 0; brl1_sch_t *subch = &(sc->subch[ch]); do { - SUBCH_DATA_LOCK(subch); + SUBCH_DATA_LOCK(subch, pl); is_msg = atomic_read(&subch->packet_arrived); if( !is_msg && block ) { /* wake me when you've got something */ @@ -2178,12 +2711,12 @@ if( !is_msg ) { /* no message and we didn't care to wait for one */ - SUBCH_DATA_UNLOCK( subch ); + SUBCH_DATA_UNLOCK( subch, pl ); return( SC_NMSG ); } subch_pull_msg( subch, msg, len ); - SUBCH_DATA_UNLOCK( subch ); + SUBCH_DATA_UNLOCK( subch, pl ); return( SC_SUCCESS ); } @@ -2206,7 +2739,7 @@ * rewriting of the L1 command interface anyway.) 
*/ #define __RETRIES 50 -#define __WAIT_SEND ( sc->uart != BRL1_LOCALUART ) +#define __WAIT_SEND 1 // ( sc->uart != BRL1_LOCALHUB_UART ) #define __WAIT_RECV 10000000 @@ -2237,13 +2770,10 @@ } /* block on sc_recv_* */ -#ifdef LATER - if( sc->uart == BRL1_LOCALUART ) { + if( (sc->uart == BRL1_LOCALHUB_UART) && L1_interrupts_connected ) { return( sc_recv_intr( sc, ch, resp, len, __WAIT_RECV ) ); } - else -#endif /* LATER */ - { + else { return( sc_recv_poll( sc, ch, resp, len, __WAIT_RECV ) ); } #endif /* CONFIG_SERIAL_SGI_L1_PROTOCOL */ @@ -2254,6 +2784,7 @@ * delayed until the send buffer clears. sc_command should be used instead * under most circumstances. */ + int sc_command_kern( l1sc_t *sc, int ch, char *cmd, char *resp, int *len ) { @@ -2282,6 +2813,7 @@ * Returns 1 if input is available on the given queue, * 0 otherwise. */ + int sc_poll( l1sc_t *sc, int ch ) { @@ -2290,7 +2822,7 @@ if( atomic_read(&subch->packet_arrived) ) return 1; - brl1_receive( sc ); + brl1_receive( sc, SERIAL_POLLED_MODE ); if( atomic_read(&subch->packet_arrived) ) return 1; @@ -2298,8 +2830,8 @@ return 0; } -/* for now, sc_init just calls brl1_init - */ +/* for now, sc_init just calls brl1_init */ + void sc_init( l1sc_t *sc, nasid_t nasid, net_vec_t uart ) { @@ -2311,7 +2843,7 @@ * network's environmental monitor tasks. */ -#ifdef LINUX_KERNEL_THREADS +#if defined(LINUX_KERNEL_THREADS) static void sc_dispatch_env_event( uint code, int argc, char *args, int maxlen ) @@ -2366,14 +2898,12 @@ i++ ); } } -#endif /* LINUX_KERNEL_THREADS */ /* sc_event waits for events to arrive from the system controller, and * prints appropriate messages to the syslog. */ -#ifdef LINUX_KERNEL_THREADS static void sc_event( l1sc_t *sc, int ch ) { @@ -2394,7 +2924,7 @@ */ result = sc_recv_intr( sc, ch, event, &event_len, 1 ); if( result != SC_SUCCESS ) { - PRINT_WARNING("Error receiving sysctl event on nasid %d\n", + printk(KERN_WARNING "Error receiving sysctl event on nasid %d\n", sc->nasid ); } else { @@ -2438,53 +2968,50 @@ } } -#endif /* LINUX_KERNEL_THREADS */ /* sc_listen sets up a service thread to listen for incoming events. */ + void sc_listen( l1sc_t *sc ) { int result; + unsigned long pl = 0; brl1_sch_t *subch; char msg[BRL1_QSIZE]; int len; /* length of message being sent */ int ch; /* system controller subchannel used */ -#ifdef LINUX_KERNEL_THREADS extern int msc_shutdown_pri; -#endif /* grab the designated "event subchannel" */ - SUBCH_LOCK( sc ); + SUBCH_LOCK( sc, pl ); subch = &(sc->subch[BRL1_EVENT_SUBCH]); if( subch->use != BRL1_SUBCH_FREE ) { - SUBCH_UNLOCK( sc ); - PRINT_WARNING("sysctl event subchannel in use! " + SUBCH_UNLOCK( sc, pl ); + printk(KERN_WARNING "sysctl event subchannel in use! 
" "Not monitoring sysctl events.\n" ); return; } subch->use = BRL1_SUBCH_RSVD; - SUBCH_UNLOCK( sc ); + SUBCH_UNLOCK( sc, pl ); atomic_set(&subch->packet_arrived, 0); - subch->target = BRL1_LOCALUART; + subch->target = BRL1_LOCALHUB_UART; spin_lock_init( &(subch->data_lock) ); sv_init( &(subch->arrive_sv), &(subch->data_lock), SV_MON_SPIN | SV_ORDER_FIFO /* | SV_INTS */); subch->tx_notify = NULL; subch->rx_notify = sc_data_ready; - subch->iqp = kmem_zalloc_node( sizeof(sc_cq_t), KM_NOSLEEP, + subch->iqp = snia_kmem_zalloc_node( sizeof(sc_cq_t), KM_NOSLEEP, NASID_TO_COMPACT_NODEID(sc->nasid) ); ASSERT( subch->iqp ); cq_init( subch->iqp ); -#ifdef LINUX_KERNEL_THREADS /* set up a thread to listen for events */ sthread_create( "sysctl event handler", 0, 0, 0, msc_shutdown_pri, KT_PS, (st_func_t *) sc_event, (void *)sc, (void *)(uint64_t)BRL1_EVENT_SUBCH, 0, 0 ); -#endif /* signal the L1 to begin sending events */ bzero( msg, BRL1_QSIZE ); @@ -2522,276 +3049,8 @@ err_return: /* there was a problem; complain */ - PRINT_WARNING("failed to set sysctl event-monitoring subchannel. " + printk(KERN_WARNING "failed to set sysctl event-monitoring subchannel. " "Sysctl events will not be monitored.\n" ); } - -/********************************************************************* - * elscuart functions. These provide a uart-like interface to the - * bedrock/l1 protocol console channels. They are similar in form - * and intent to the elscuart_* functions defined for SN0 in elsc.c. - * - */ - -int _elscuart_flush( l1sc_t *sc ); - -/* Leave room in queue for CR/LF */ -#define ELSCUART_LINE_MAX (BRL1_QSIZE - 2) - - -/* - * _elscuart_putc provides an entry point to the L1 interface driver; - * writes a single character to the output queue. Flushes at the - * end of each line, and translates newlines into CR/LF. - * - * The kernel should generally use l1_cons_write instead, since it assumes - * buffering, translation, prefixing, etc. are done at a higher - * level. - * - */ -int -_elscuart_putc( l1sc_t *sc, int c ) -{ - sc_cq_t *q; - - q = &(sc->oq[ MAP_OQ(L1_ELSCUART_SUBCH(get_myid())) ]); - - if( c != '\n' && c != '\r' && cq_used(q) >= ELSCUART_LINE_MAX ) { - cq_add( q, '\r' ); - cq_add( q, '\n' ); - _elscuart_flush( sc ); - sc->sol = 1; - } - - if( sc->sol && c != '\r' ) { - char prefix[16], *s; - - if( cq_room( q ) < 8 && _elscuart_flush(sc) < 0 ) - { - return -1; - } - - if( sc->verbose ) - { -#ifdef SUPPORT_PRINTING_M_FORMAT - sprintf( prefix, - "%c %d%d%d %M:", - 'A' + get_myid(), - sc->nasid / 100, - (sc->nasid / 10) % 10, - sc->nasid / 10, - sc->modid ); -#else - sprintf( prefix, - "%c %d%d%d 0x%x:", - 'A' + get_myid(), - sc->nasid / 100, - (sc->nasid / 10) % 10, - sc->nasid / 10, - sc->modid ); -#endif - - for( s = prefix; *s; s++ ) - cq_add( q, *s ); - } - sc->sol = 0; - - } - - if( cq_room( q ) < 2 && _elscuart_flush(sc) < 0 ) - { - return -1; - } - - if( c == '\n' ) { - cq_add( q, '\r' ); - sc->sol = 1; - } - - cq_add( q, (u_char) c ); - - if( c == '\n' ) { - /* flush buffered line */ - if( _elscuart_flush( sc ) < 0 ) - { - return -1; - } - } - - if( c== '\r' ) - { - sc->sol = 1; - } - - return 0; -} - - -/* - * _elscuart_getc reads a character from the input queue. This - * routine blocks. 
- */ -int -_elscuart_getc( l1sc_t *sc ) -{ - int r; - - while( (r = _elscuart_poll( sc )) == 0 ); - - if( r < 0 ) { - /* some error occurred */ - return r; - } - - return _elscuart_readc( sc ); -} - - - -/* - * _elscuart_poll returns 1 if characters are ready for the - * calling processor, 0 if they are not - */ -int -_elscuart_poll( l1sc_t *sc ) -{ - int result; - - if( sc->cons_listen ) { - result = l1_cons_poll( sc ); - if( result ) - return result; - } - - return sc_poll( sc, L1_ELSCUART_SUBCH(get_myid()) ); -} - - - -/* _elscuart_readc is to be used only when _elscuart_poll has - * indicated that a character is waiting. Pulls a character - * of this processor's console queue and returns it. - * - */ -int -_elscuart_readc( l1sc_t *sc ) -{ - int c; - sc_cq_t *q; - brl1_sch_t *subch; - - if( sc->cons_listen ) { - subch = &(sc->subch[ SC_CONS_SYSTEM ]); - q = subch->iqp; - - SUBCH_DATA_LOCK( subch ); - if( !cq_empty( q ) ) { - cq_rem( q, c ); - if( cq_empty( q ) ) { - atomic_set(&subch->packet_arrived, 0); - } - SUBCH_DATA_UNLOCK( subch ); - return c; - } - SUBCH_DATA_UNLOCK( subch ); - } - - subch = &(sc->subch[ L1_ELSCUART_SUBCH(get_myid()) ]); - q = subch->iqp; - - SUBCH_DATA_LOCK( subch ); - if( cq_empty( q ) ) { - SUBCH_DATA_UNLOCK( subch ); - return -1; - } - - cq_rem( q, c ); - if( cq_empty ( q ) ) { - atomic_set(&subch->packet_arrived, 0); - } - SUBCH_DATA_UNLOCK( subch ); - - return c; -} - - -/* - * _elscuart_flush flushes queued output to the L1. - * This routine blocks until the queue is flushed. - */ -int -_elscuart_flush( l1sc_t *sc ) -{ - int r, n; - char buf[BRL1_QSIZE]; - sc_cq_t *q = &(sc->oq[ MAP_OQ(L1_ELSCUART_SUBCH(get_myid())) ]); - - while( (n = cq_used(q)) ) { - - /* buffer queue contents */ - r = BRL1_QSIZE - q->opos; - - if( n > r ) { - BCOPY( q->buf + q->opos, buf, r ); - BCOPY( q->buf, buf + r, n - r ); - } else { - BCOPY( q->buf + q->opos, buf, n ); - } - - /* attempt to send buffer contents */ - r = brl1_send( sc, buf, cq_used( q ), - (BRL1_EVENT | L1_ELSCUART_SUBCH(get_myid())), 1 ); - - /* if no error, dequeue the sent characters; otherwise, - * return the error - */ - if( r >= SC_SUCCESS ) { - q->opos = (q->opos + r) % BRL1_QSIZE; - } - else { - return r; - } - } - - return 0; -} - - - -/* _elscuart_probe returns non-zero if the L1 (and - * consequently the elscuart) can be accessed - */ -int -_elscuart_probe( l1sc_t *sc ) -{ -#ifndef CONFIG_SERIAL_SGI_L1_PROTOCOL - return 0; -#else - char ver[BRL1_QSIZE]; - extern int elsc_version( l1sc_t *, char * ); - - if ( IS_RUNNING_ON_SIMULATOR() ) - return 0; - return( elsc_version(sc, ver) >= 0 ); -#endif /* CONFIG_SERIAL_SGI_L1_PROTOCOL */ -} - - - -/* _elscuart_init zeroes out the l1sc_t console - * queues for this processor's console subchannel. - */ -void -_elscuart_init( l1sc_t *sc ) -{ - brl1_sch_t *subch = &sc->subch[L1_ELSCUART_SUBCH(get_myid())]; - - SUBCH_DATA_LOCK(subch); - - atomic_set(&subch->packet_arrived, 0); - cq_init( subch->iqp ); - cq_init( &sc->oq[MAP_OQ(L1_ELSCUART_SUBCH(get_myid()))] ); - - SUBCH_DATA_UNLOCK(subch); -} +#endif /* LINUX_KERNEL_THREADS */ diff -Nru a/arch/ia64/sn/io/l1_command.c b/arch/ia64/sn/io/l1_command.c --- a/arch/ia64/sn/io/l1_command.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/l1_command.c Tue Mar 12 13:58:15 2002 @@ -4,20 +4,20 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. 
- * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000 - 2001 Silicon Graphics, Inc. + * All rights reserved. */ #include #include #include +#include #include #include #include #include #include #include -#include #include #include #include @@ -27,7 +27,6 @@ #define ELSC_TIMEOUT 1000000 /* ELSC response timeout (usec) */ #define LOCK_TIMEOUT 5000000 /* Hub lock timeout (usec) */ -#define LOCAL_HUB LOCAL_HUB_ADDR #define LD(x) (*(volatile uint64_t *)(x)) #define SD(x, v) (LD(x) = (uint64_t) (v)) @@ -75,7 +74,7 @@ void elsc_init(elsc_t *e, nasid_t nasid) { - sc_init((l1sc_t *)e, nasid, BRL1_LOCALUART); + sc_init((l1sc_t *)e, nasid, BRL1_LOCALHUB_UART); } @@ -1376,85 +1375,4 @@ sprintf( result, "%d.%d.%d", major, minor, bugfix ); return 0; -} - - - -/* elscuart routines - * - * Most of the elscuart functionality is implemented in l1.c. The following - * is directly "recycled" from elsc.c. - */ - - -/* - * _elscuart_puts - */ - -int _elscuart_puts(elsc_t *e, char *s) -{ - int c; - - if (s == 0) - s = ""; - - while ((c = LBYTE(s)) != 0) { - if (_elscuart_putc(e, c) < 0) - return -1; - s++; - } - - return 0; -} - - -/* - * elscuart wrapper routines - * - * The following routines are similar to their counterparts in l1.c, - * except instead of taking an elsc_t pointer directly, they call - * a global routine "get_elsc" to obtain the pointer. - * This is useful when the elsc is employed for stdio. - */ - -int elscuart_probe(void) -{ - return _elscuart_probe(get_elsc()); -} - -void elscuart_init(void *init_data) -{ - _elscuart_init(get_elsc()); - /* dummy variable included for driver compatability */ - init_data = init_data; -} - -int elscuart_poll(void) -{ - return _elscuart_poll(get_elsc()); -} - -int elscuart_readc(void) -{ - return _elscuart_readc(get_elsc()); -} - -int elscuart_getc(void) -{ - return _elscuart_getc(get_elsc()); -} - -int elscuart_puts(char *s) -{ - return _elscuart_puts(get_elsc(), s); -} - -int elscuart_putc(int c) -{ - return _elscuart_putc(get_elsc(), c); -} - -int elscuart_flush(void) -{ - return _elscuart_flush(get_elsc()); } diff -Nru a/arch/ia64/sn/io/labelcl.c b/arch/ia64/sn/io/labelcl.c --- a/arch/ia64/sn/io/labelcl.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/labelcl.c Tue Mar 12 13:58:15 2002 @@ -1,21 +1,10 @@ /* labelcl - SGI's Hwgraph Compatibility Layer. - - This library is free software; you can redistribute it and/or - modify it under the terms of the GNU Library General Public - License as published by the Free Software Foundation; either - version 2 of the License, or (at your option) any later version. - - This library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - Library General Public License for more details. - - You should have received a copy of the GNU Library General Public - License along with this library; if not, write to the Free - Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - - Colin Ngam may be reached by email at cngam@sgi.com - + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001 Silicon Graphics, Inc. All rights reserved. */ #include @@ -286,7 +275,7 @@ if (!strcmp(info_name, old_label_list[i].name)) { /* Not allowed to add duplicate labelled info names. 
*/ kfree(new_label_list); - printk(KERN_WARNING "labelcl_info_add_LBL: Duplicate label name %s for vertex 0x%p\n", info_name, de); + printk(KERN_WARNING "labelcl_info_add_LBL: Duplicate label name %s for vertex 0x%p\n", info_name, (void *)de); return(-1); } new_label_list[i] = old_label_list[i]; /* structure copy */ diff -Nru a/arch/ia64/sn/io/mem_refcnt.c b/arch/ia64/sn/io/mem_refcnt.c --- a/arch/ia64/sn/io/mem_refcnt.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,222 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -// From numa_hw.h - -#define MIGR_COUNTER_MAX_GET(nodeid) \ - (NODEPDA_MCD((nodeid))->migr_system_kparms.migr_threshold_reference) -/* - * Get the Absolute Theshold - */ -#define MIGR_THRESHOLD_ABS_GET(nodeid) ( \ - MD_MIG_VALUE_THRESH_GET(COMPACT_TO_NASID_NODEID(nodeid))) -/* - * Get the current Differential Threshold - */ -#define MIGR_THRESHOLD_DIFF_GET(nodeid) \ - (NODEPDA_MCD(nodeid)->migr_as_kparms.migr_base_threshold) - -#define NUM_OF_HW_PAGES_PER_SW_PAGE() (NBPP / MD_PAGE_SIZE) - -// #include "migr_control.h" - -int -mem_refcnt_attach(devfs_handle_t hub) -{ - devfs_handle_t refcnt_dev; - - hwgraph_char_device_add(hub, - "refcnt", - "hubspc_", - &refcnt_dev); - device_info_set(refcnt_dev, (void*)(ulong)HUBSPC_REFCOUNTERS); - - return (0); -} - - -/*ARGSUSED*/ -int -mem_refcnt_open(devfs_handle_t *devp, mode_t oflag, int otyp, cred_t *crp) -{ - cnodeid_t node; - - ASSERT( (hubspc_subdevice_t)(ulong)device_info_get(*devp) == HUBSPC_REFCOUNTERS ); - - node = master_node_get(*devp); - - ASSERT( (node >= 0) && (node < numnodes) ); - - if (NODEPDA(node)->migr_refcnt_counterbuffer == NULL) { - return (ENODEV); - } - - ASSERT( NODEPDA(node)->migr_refcnt_counterbase != NULL ); - ASSERT( NODEPDA(node)->migr_refcnt_cbsize != (size_t)0 ); - - return (0); -} - -/*ARGSUSED*/ -int -mem_refcnt_close(devfs_handle_t dev, int oflag, int otyp, cred_t *crp) -{ - return 0; -} - -/*ARGSUSED*/ -int -mem_refcnt_mmap(devfs_handle_t dev, vhandl_t *vt, off_t off, size_t len, uint prot) -{ - cnodeid_t node; - int errcode; - char* buffer; - size_t blen; - - ASSERT( (hubspc_subdevice_t)(ulong)device_info_get(dev) == HUBSPC_REFCOUNTERS ); - - node = master_node_get(dev); - - ASSERT( (node >= 0) && (node < numnodes) ); - - ASSERT( NODEPDA(node)->migr_refcnt_counterbuffer != NULL); - ASSERT( NODEPDA(node)->migr_refcnt_counterbase != NULL ); - ASSERT( NODEPDA(node)->migr_refcnt_cbsize != 0 ); - - /* - * XXXX deal with prot's somewhere around here.... - */ - - buffer = NODEPDA(node)->migr_refcnt_counterbuffer; - blen = NODEPDA(node)->migr_refcnt_cbsize; - - /* - * Force offset to be a multiple of sizeof(refcnt_t) - * We round up. 
- */ - - off = (((off - 1)/sizeof(refcnt_t)) + 1) * sizeof(refcnt_t); - - if ( ((buffer + blen) - (buffer + off + len)) < 0 ) { - return (EPERM); - } - - errcode = v_mapphys(vt, - buffer + off, - len); - - return errcode; -} - -/*ARGSUSED*/ -int -mem_refcnt_unmap(devfs_handle_t dev, vhandl_t *vt) -{ - return 0; -} - -/* ARGSUSED */ -int -mem_refcnt_ioctl(devfs_handle_t dev, - int cmd, - void *arg, - int mode, - cred_t *cred_p, - int *rvalp) -{ - cnodeid_t node; - int errcode; - extern int numnodes; - - ASSERT( (hubspc_subdevice_t)(ulong)device_info_get(dev) == HUBSPC_REFCOUNTERS ); - - node = master_node_get(dev); - - ASSERT( (node >= 0) && (node < numnodes) ); - - ASSERT( NODEPDA(node)->migr_refcnt_counterbuffer != NULL); - ASSERT( NODEPDA(node)->migr_refcnt_counterbase != NULL ); - ASSERT( NODEPDA(node)->migr_refcnt_cbsize != 0 ); - - errcode = 0; - - switch (cmd) { - case RCB_INFO_GET: - { - rcb_info_t rcb; - - rcb.rcb_len = NODEPDA(node)->migr_refcnt_cbsize; - - rcb.rcb_sw_sets = NODEPDA(node)->migr_refcnt_numsets; - rcb.rcb_sw_counters_per_set = numnodes; - rcb.rcb_sw_counter_size = sizeof(refcnt_t); - - rcb.rcb_base_pages = NODEPDA(node)->migr_refcnt_numsets / - NUM_OF_HW_PAGES_PER_SW_PAGE(); - rcb.rcb_base_page_size = NBPP; - rcb.rcb_base_paddr = ctob(slot_getbasepfn(node, 0)); - - rcb.rcb_cnodeid = node; - rcb.rcb_granularity = MD_PAGE_SIZE; -#ifdef LATER - rcb.rcb_hw_counter_max = MIGR_COUNTER_MAX_GET(node); - rcb.rcb_diff_threshold = MIGR_THRESHOLD_DIFF_GET(node); -#endif - rcb.rcb_abs_threshold = MIGR_THRESHOLD_ABS_GET(node); - rcb.rcb_num_slots = node_getnumslots(node); - - if (COPYOUT(&rcb, arg, sizeof(rcb_info_t))) { - errcode = EFAULT; - } - - break; - } - case RCB_SLOT_GET: - { - rcb_slot_t slot[MAX_MEM_SLOTS]; - int s; - int nslots; - - nslots = node_getnumslots(node); - ASSERT(nslots <= MAX_MEM_SLOTS); - for (s = 0; s < nslots; s++) { - slot[s].base = (uint64_t)ctob(slot_getbasepfn(node, s)); -#ifdef LATER - slot[s].size = (uint64_t)ctob(slot_getsize(node, s)); -#else - slot[s].size = (uint64_t)1; -#endif - } - if (COPYOUT(&slot[0], arg, nslots * sizeof(rcb_slot_t))) { - errcode = EFAULT; - } - - *rvalp = nslots; - break; - } - - default: - errcode = EINVAL; - break; - - } - - return errcode; -} diff -Nru a/arch/ia64/sn/io/ml_SN_init.c b/arch/ia64/sn/io/ml_SN_init.c --- a/arch/ia64/sn/io/ml_SN_init.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/io/ml_SN_init.c Tue Mar 12 13:58:14 2002 @@ -4,37 +4,30 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. 
*/ #include #include #include +#include #include +#include #include #include #include #include -#include #include #include #include -#include - - -#if defined (CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include -#include -#include -#endif /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ - +#include extern int numcpus; extern char arg_maxnodes[]; extern cpuid_t master_procid; -extern void * kmem_alloc_node(register size_t, register int , cnodeid_t); +#if defined(CONFIG_IA64_SGI_SN1) extern synergy_da_t *Synergy_da_indr[]; +#endif extern int hasmetarouter; @@ -45,18 +38,8 @@ extern xwidgetnum_t hub_widget_id(nasid_t); -static int fine_mode = 0; - -static cnodemask_t hub_init_mask; /* Mask of cpu in a node doing init */ -static volatile cnodemask_t hub_init_done_mask; - /* Node mask where we wait for - * per hub initialization - */ -spinlock_t hub_mask_lock; /* Lock for hub_init_mask above. */ - extern int valid_icache_reasons; /* Reasons to flush the icache */ extern int valid_dcache_reasons; /* Reasons to flush the dcache */ -extern int numnodes; extern u_char miniroot; extern volatile int need_utlbmiss_patch; extern void iograph_early_init(void); @@ -83,20 +66,6 @@ */ master_nasid = get_nasid(); set_master_bridge_base(); - FIXME("mlreset: Enable when we support ioc3 .."); -#ifdef LATER - if (get_console_nasid() == master_nasid) - /* Set up the IOC3 */ - ioc3_mlreset((ioc3_cfg_t *)KL_CONFIG_CH_CONS_INFO(master_nasid)->config_base, - (ioc3_mem_t *)KL_CONFIG_CH_CONS_INFO(master_nasid)->memory_base); - - /* - * Initialize Master nvram base. - */ - nvram_baseinit(); - - fine_mode = is_fine_dirmode(); -#endif /* LATER */ /* We're the master processor */ master_procid = smp_processor_id(); @@ -108,75 +77,12 @@ */ ASSERT_ALWAYS(master_nasid == get_nasid()); -#ifdef LATER - - /* - * Activate when calias is implemented. - */ - /* Set all nodes' calias sizes to 8k */ - for (i = 0; i < maxnodes; i++) { - nasid_t nasid; - int sn; - - nasid = COMPACT_TO_NASID_NODEID(i); - - /* - * Always have node 0 in the region mask, otherwise CALIAS accesses - * get exceptions since the hub thinks it is a node 0 address. - */ - for (sn=0; snpdinfo = (void *)hubinfo; hubinfo->h_nodepda = npda; hubinfo->h_cnodeid = node; @@ -230,92 +119,37 @@ hubinfo->h_widgetid = hub_widget_id(hubinfo->h_nasid); npda->xbow_peer = INVALID_NASID; - /* Initialize the linked list of + + /* + * Initialize the linked list of * router info pointers to the dependent routers */ npda->npda_rip_first = NULL; - /* npda_rip_last always points to the place + + /* + * npda_rip_last always points to the place * where the next element is to be inserted * into the list */ npda->npda_rip_last = &npda->npda_rip_first; - npda->dependent_routers = 0; npda->module_id = INVALID_MODULE; +#ifdef CONFIG_IA64_SGI_SN1 /* - * Initialize the subnodePDA. - */ + * Initialize the interrupts. + * On sn2, this is done at pci init time, + * because sn2 needs the cpus checked in + * when it initializes interrupts. This is + * so we don't see all the nodes as headless. + */ for (sn=0; snprof_count = 0; - SNPDA(npda,sn)->next_prof_timeout = 0; intr_init_vecblk(npda, node, sn); } +#endif /* CONFIG_IA64_SGI_SN1 */ - npda->vector_unit_busy = 0; - - spin_lock_init(&npda->vector_lock); mutex_init_locked(&npda->xbow_sema); /* init it locked? 
*/ - spin_lock_init(&npda->fprom_lock); - spin_lock_init(&npda->node_utlbswitchlock); - npda->ni_error_print = 0; #ifdef LATER - if (need_utlbmiss_patch) { - npda->node_need_utlbmiss_patch = 1; - npda->node_utlbmiss_patched = 1; - } -#endif - - /* - * Clear out the nasid mask. - */ - for (i = 0; i < NASID_MASK_BYTES; i++) - npda->nasid_mask[i] = 0; - - for (i = 0; i < numnodes; i++) { - nasid_t nasid = COMPACT_TO_NASID_NODEID(i); - - /* Set my mask bit */ - npda->nasid_mask[nasid / 8] |= (1 << nasid % 8); - } - -#ifdef LATER - npda->node_first_cpu = get_cnode_cpu(node); -#endif - - if (npda->node_first_cpu != CPU_NONE) { - /* - * Count number of cpus only if first CPU is valid. - */ - numcpus_p = &npda->node_num_cpus; - *numcpus_p = 0; - for (i = npda->node_first_cpu; i < MAXCPUS; i++) { - if (CPUID_TO_COMPACT_NODEID(i) != node) - break; - else - (*numcpus_p)++; - } - } else { - npda->node_num_cpus = 0; - } - - /* Allocate memory for the dump stack on each node - * This is useful during nmi handling since we - * may not be guaranteed shared memory at that time - * which precludes depending on a global dump stack - */ -#ifdef LATER - npda->dump_stack = (uint64_t *)kmem_zalloc_node(DUMP_STACK_SIZE,VM_NOSLEEP, - node); - ASSERT_ALWAYS(npda->dump_stack); - ASSERT(npda->dump_stack); -#endif - /* Initialize the counter which prevents - * both the cpus on a node to proceed with nmi - * handling. - */ -#ifdef LATER - npda->dump_count = 0; /* Setup the (module,slot) --> nic mapping for all the routers * in the system. This is useful during error handling when @@ -325,17 +159,9 @@ /* Allocate memory for the per-node router traversal queue */ router_queue_init(npda,node); - npda->sbe_info = kmem_zalloc_node_hint(sizeof (sbe_info_t), 0, node); + npda->sbe_info = alloc_bootmem_node(NODE_DATA(node), sizeof (sbe_info_t)); ASSERT(npda->sbe_info); -#ifdef CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC - /* - * Initialize bte info pointers to NULL - */ - for (i = 0; i < BTES_PER_NODE; i++) { - npda->node_bte_info[i] = (bteinfo_t *)NULL; - } -#endif #endif /* LATER */ } @@ -345,260 +171,44 @@ * Must be done _after_ init_platform_nodepda(). * If we need a lock here, something else is wrong! */ -// void init_platform_pda(pda_t *ppda, cpuid_t cpu) void init_platform_pda(cpuid_t cpu) { +#if defined(CONFIG_IA64_SGI_SN1) hub_intmasks_t *intmasks; -#ifdef LATER - cpuinfo_t cpuinfo; -#endif - int i; + int i, subnode; cnodeid_t cnode; synergy_da_t *sda; int which_synergy; -#ifdef LATER - /* Allocate per-cpu platform-dependent data */ - cpuinfo = (cpuinfo_t)kmem_alloc_node(sizeof(struct cpuinfo_s), GFP_ATOMIC, cputocnode(cpu)); - ASSERT_ALWAYS(cpuinfo); - ppda->pdinfo = (void *)cpuinfo; - cpuinfo->ci_cpupda = ppda; - cpuinfo->ci_cpuid = cpu; -#endif cnode = cpuid_to_cnodeid(cpu); which_synergy = cpuid_to_synergy(cpu); + sda = Synergy_da_indr[(cnode * 2) + which_synergy]; - // intmasks = &ppda->p_intmasks; intmasks = &sda->s_intmasks; -#ifdef LATER - ASSERT_ALWAYS(&ppda->p_nodepda); -#endif - /* Clear INT_PEND0 masks. */ for (i = 0; i < N_INTPEND0_MASKS; i++) intmasks->intpend0_masks[i] = 0; /* Set up pointer to the vector block in the nodepda. 
*/ /* (Cant use SUBNODEPDA - not working yet) */ - intmasks->dispatch0 = &Nodepdaindr[cnode]->snpda[cpuid_to_subnode(cpu)].intr_dispatch0; - intmasks->dispatch1 = &Nodepdaindr[cnode]->snpda[cpuid_to_subnode(cpu)].intr_dispatch1; + subnode = cpuid_to_subnode(cpu); + intmasks->dispatch0 = &NODEPDA(cnode)->snpda[cpuid_to_subnode(cpu)].intr_dispatch0; + intmasks->dispatch1 = &NODEPDA(cnode)->snpda[cpuid_to_subnode(cpu)].intr_dispatch1; + if (intmasks->dispatch0 != &SUBNODEPDA(cnode, subnode)->intr_dispatch0 || + intmasks->dispatch1 != &SUBNODEPDA(cnode, subnode)->intr_dispatch1) + panic("xxx"); + intmasks->dispatch0 = &SUBNODEPDA(cnode, subnode)->intr_dispatch0; + intmasks->dispatch1 = &SUBNODEPDA(cnode, subnode)->intr_dispatch1; /* Clear INT_PEND1 masks. */ for (i = 0; i < N_INTPEND1_MASKS; i++) intmasks->intpend1_masks[i] = 0; - - -#ifdef LATER - /* Don't read the routers unless we're the master. */ - ppda->p_routertick = 0; -#endif - +#endif /* CONFIG_IA64_SGI_SN1 */ } -#if (defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC)) && !defined(BRINGUP) /* protect low mem for IP35/7 */ -#error "need protect_hub_calias, protect_nmi_handler_data" -#endif - -#ifdef LATER -/* - * For now, just protect the first page (exception handlers). We - * may want to protect more stuff later. - */ void -protect_hub_calias(nasid_t nasid) -{ - paddr_t pa = NODE_OFFSET(nasid) + 0; /* page 0 on node nasid */ - int i; - - for (i = 0; i < MAX_REGIONS; i++) { - if (i == nasid_to_region(nasid)) - continue; - } -} - -/* - * Protect the page of low memory used to communicate with the NMI handler. - */ -void -protect_nmi_handler_data(nasid_t nasid, int slice) -{ - paddr_t pa = NODE_OFFSET(nasid) + NMI_OFFSET(nasid, slice); - int i; - - for (i = 0; i < MAX_REGIONS; i++) { - if (i == nasid_to_region(nasid)) - continue; - } -} -#endif /* LATER */ - - -#ifdef LATER -/* - * Protect areas of memory that we access uncached by marking them as - * poisoned so the T5 can't read them speculatively and erroneously - * mark them dirty in its cache only to write them back with old data - * later. - */ -static void -protect_low_memory(nasid_t nasid) -{ - /* Protect low memory directory */ - poison_state_alter_range(KLDIR_ADDR(nasid), KLDIR_SIZE, 1); - - /* Protect klconfig area */ - poison_state_alter_range(KLCONFIG_ADDR(nasid), KLCONFIG_SIZE(nasid), 1); - - /* Protect the PI error spool area. */ - poison_state_alter_range(PI_ERROR_ADDR(nasid), PI_ERROR_SIZE(nasid), 1); - - /* Protect CPU A's cache error eframe area. */ - poison_state_alter_range(TO_NODE_UNCAC(nasid, CACHE_ERR_EFRAME), - CACHE_ERR_AREA_SIZE, 1); - - /* Protect CPU B's area */ - poison_state_alter_range(TO_NODE_UNCAC(nasid, CACHE_ERR_EFRAME) - ^ UALIAS_FLIP_BIT, - CACHE_ERR_AREA_SIZE, 1); -#error "SN1 not handled correctly" -} -#endif /* LATER */ - -/* - * per_hub_init - * - * This code is executed once for each Hub chip. - */ -void -per_hub_init(cnodeid_t cnode) -{ - uint64_t done; - nasid_t nasid; - nodepda_t *npdap; -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) /* SN1 specific */ - ii_icmr_u_t ii_icmr; - ii_ibcr_u_t ii_ibcr; -#endif -#ifdef LATER - int i; -#endif - - nasid = COMPACT_TO_NASID_NODEID(cnode); - - ASSERT(nasid != INVALID_NASID); - ASSERT(NASID_TO_COMPACT_NODEID(nasid) == cnode); - - /* Grab the hub_mask lock. */ - spin_lock(&hub_mask_lock); - - /* Test our bit. */ - if (!(done = CNODEMASK_TSTB(hub_init_mask, cnode))) { - - /* Turn our bit on in the mask. 
*/ - CNODEMASK_SETB(hub_init_mask, cnode); - } - -#if defined(SN0_HWDEBUG) - hub_config_setup(); -#endif - /* Release the hub_mask lock. */ - spin_unlock(&hub_mask_lock); - - /* - * Do the actual initialization if it hasn't been done yet. - * We don't need to hold a lock for this work. - */ - if (!done) { - npdap = NODEPDA(cnode); - -#if defined(CONFIG_IA64_SGI_SYNERGY_PERF) - /* initialize per-node synergy perf instrumentation */ - npdap->synergy_perf_enabled = 0; /* off by default */ - npdap->synergy_perf_lock = SPIN_LOCK_UNLOCKED; - npdap->synergy_perf_freq = SYNERGY_PERF_FREQ_DEFAULT; - npdap->synergy_inactive_intervals = 0; - npdap->synergy_active_intervals = 0; - npdap->synergy_perf_data = NULL; - npdap->synergy_perf_first = NULL; -#endif /* CONFIG_IA64_SGI_SYNERGY_PERF */ - - npdap->hub_chip_rev = get_hub_chiprev(nasid); - -#ifdef LATER - for (i = 0; i < CPUS_PER_NODE; i++) { - cpu = cnode_slice_to_cpuid(cnode, i); - if (!cpu_enabled(cpu)) - SET_CPU_LEDS(nasid, i, 0xf); - } -#endif /* LATER */ - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) /* SN1 specific */ - - /* - * Set the total number of CRBs that can be used. - */ - ii_icmr.ii_icmr_regval= 0x0; - ii_icmr.ii_icmr_fld_s.i_c_cnt = 0xF; - REMOTE_HUB_S(nasid, IIO_ICMR, ii_icmr.ii_icmr_regval); - - /* - * Set the number of CRBs that both of the BTEs combined - * can use minus 1. - */ - ii_ibcr.ii_ibcr_regval= 0x0; - ii_ibcr.ii_ibcr_fld_s.i_count = 0x8; - REMOTE_HUB_S(nasid, IIO_IBCR, ii_ibcr.ii_ibcr_regval); - - /* - * Set CRB timeout to be 10ms. - */ - REMOTE_HUB_S(nasid, IIO_ICTP, 0x1000 ); - REMOTE_HUB_S(nasid, IIO_ICTO, 0xff); - -#endif /* SN0_HWDEBUG */ - - - - /* Reserve all of the hardwired interrupt levels. */ - intr_reserve_hardwired(cnode); - - /* Initialize error interrupts for this hub. */ - hub_error_init(cnode); - -#ifdef LATER - /* Set up correctable memory/directory ECC error interrupt. */ - install_eccintr(cnode); - - /* Protect our exception vectors from accidental corruption. */ - protect_hub_calias(nasid); - - /* Enable RT clock interrupts */ - hub_rtc_init(cnode); - hub_migrintr_init(cnode); /* Enable migration interrupt */ -#endif /* LATER */ - - spin_lock(&hub_mask_lock); - CNODEMASK_SETB(hub_init_done_mask, cnode); - spin_unlock(&hub_mask_lock); - - } else { - /* - * Wait for the other CPU to complete the initialization. - */ - while (CNODEMASK_TSTB(hub_init_done_mask, cnode) == 0) { - /* - * On SNIA64 we should never get here .. - */ - printk("WARNING: per_hub_init: Should NEVER get here!\n"); - /* LOOP */ - ; - } - } -} - -extern void update_node_information(cnodeid_t cnodeid) { nodepda_t *npda = NODEPDA(cnodeid); @@ -623,22 +233,3 @@ npda_rip = npda_rip->router_next; } } - -hubreg_t -get_region(cnodeid_t cnode) -{ - if (fine_mode) - return COMPACT_TO_NASID_NODEID(cnode) >> NASID_TO_FINEREG_SHFT; - else - return COMPACT_TO_NASID_NODEID(cnode) >> NASID_TO_COARSEREG_SHFT; -} - -hubreg_t -nasid_to_region(nasid_t nasid) -{ - if (fine_mode) - return nasid >> NASID_TO_FINEREG_SHFT; - else - return nasid >> NASID_TO_COARSEREG_SHFT; -} - diff -Nru a/arch/ia64/sn/io/ml_SN_intr.c b/arch/ia64/sn/io/ml_SN_intr.c --- a/arch/ia64/sn/io/ml_SN_intr.c Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,1728 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. 
- * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Alan Mayer - */ - -/* - * intr.c- - * This file contains all of the routines necessary to set up and - * handle interrupts on an IP27 board. - */ - -#ident "$Revision: 1.167 $" - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#if DEBUG_INTR_TSTAMP_DEBUG -#include -#include -#include -void do_splx_log(int, int); -void spldebug_log_event(int); -#endif - -// FIXME - BRINGUP -#ifdef CONFIG_SMP -extern unsigned long cpu_online_map; -#endif -#define cpu_allows_intr(cpu) (1) -// If I understand what's going on with this, 32 should work. -// physmem_maxradius seems to be the maximum number of router -// hops to get from one end of the system to the other. With -// a maximally configured machine, with the dumbest possible -// topology, we would make 32 router hops. For what we're using -// it for, the dumbest possible should suffice. -#define physmem_maxradius() 32 - -#define SUBNODE_ANY -1 - -extern int nmied; -extern int hub_intr_wakeup_cnt; -extern synergy_da_t *Synergy_da_indr[]; -extern cpuid_t master_procid; - -extern cnodeid_t master_node_get(devfs_handle_t vhdl); - -extern snia_error_intr_handler(int irq, void *devid, struct pt_regs *pt_regs); - - -#define INTR_LOCK(vecblk) \ - (s = mutex_spinlock(&(vecblk)->vector_lock)) -#define INTR_UNLOCK(vecblk) \ - mutex_spinunlock(&(vecblk)->vector_lock, s) - -/* - * REACT/Pro - */ - - - -/* - * Find first bit set - * Used outside this file also - */ -int ms1bit(unsigned long x) -{ - int b; - - if (x >> 32) b = 32, x >>= 32; - else b = 0; - if (x >> 16) b += 16, x >>= 16; - if (x >> 8) b += 8, x >>= 8; - if (x >> 4) b += 4, x >>= 4; - if (x >> 2) b += 2, x >>= 2; - - return b + (int) (x >> 1); -} - -/* ARGSUSED */ -void -intr_stray(void *lvl) -{ - PRINT_WARNING("Stray Interrupt - level %ld to cpu %d", (long)lvl, cpuid()); -} - -#if defined(DEBUG) - -/* Infrastructure to gather the device - target cpu mapping info */ -#define MAX_DEVICES 1000 /* Reasonable large number . Need not be - * the exact maximum # devices possible. - */ -#define MAX_NAME 100 -typedef struct { - dev_t dev; /* device */ - cpuid_t cpuid; /* target cpu */ - cnodeid_t cnodeid;/* node on which the target cpu is present */ - int bit; /* intr bit reserved */ - char intr_name[MAX_NAME]; /* name of the interrupt */ -} intr_dev_targ_map_t; - -intr_dev_targ_map_t intr_dev_targ_map[MAX_DEVICES]; -uint64_t intr_dev_targ_map_size; -spinlock_t intr_dev_targ_map_lock; - -/* Print out the device - target cpu mapping. - * This routine is used only in the idbg command - * "intrmap" - */ -void -intr_dev_targ_map_print(cnodeid_t cnodeid) -{ - int i,j,size = 0; - int print_flag = 0,verbose = 0; - char node_name[10]; - - if (cnodeid != CNODEID_NONE) { - nodepda_t *npda; - - npda = NODEPDA(cnodeid); - for (j=0; jintr_dispatch0.info[i].ii_flags); - qprintf("\n INT_PEND1: "); - for(i = 0 ; i < N_INTPEND_BITS ; i++) - qprintf("%d",SNPDA(npda,j)->intr_dispatch1.info[i].ii_flags); - } - verbose = 1; - } - qprintf("\n Device - Target Map [Interrupts: %s Node%s]\n\n", - (verbose ? "All" : "Non-hardwired"), - (cnodeid == CNODEID_NONE) ? 
"s: All" : node_name); - - qprintf("Device\tCpu\tCnode\tIntr_bit\tIntr_name\n"); - for (i = 0 ; i < intr_dev_targ_map_size ; i++) { - - print_flag = 0; - if (verbose) { - if (cnodeid != CNODEID_NONE) { - if (cnodeid == intr_dev_targ_map[i].cnodeid) - print_flag = 1; - } else { - print_flag = 1; - } - } else { - if (intr_dev_targ_map[i].dev != 0) { - if (cnodeid != CNODEID_NONE) { - if (cnodeid == - intr_dev_targ_map[i].cnodeid) - print_flag = 1; - } else { - print_flag = 1; - } - } - } - if (print_flag) { - size++; - qprintf("%d\t%d\t%d\t%d\t%s\n", - intr_dev_targ_map[i].dev, - intr_dev_targ_map[i].cpuid, - intr_dev_targ_map[i].cnodeid, - intr_dev_targ_map[i].bit, - intr_dev_targ_map[i].intr_name); - } - - } - qprintf("\nTotal : %d\n",size); -} -#endif /* DEBUG */ - -/* - * The spinlocks have already been initialized. Now initialize the interrupt - * vectors. One processor on each hub does the work. - */ -void -intr_init_vecblk(nodepda_t *npda, cnodeid_t node, int sn) -{ - int i, ip=0; - intr_vecblk_t *vecblk; - subnode_pda_t *snpda; - - - snpda = SNPDA(npda,sn); - do { - if (ip == 0) { - vecblk = &snpda->intr_dispatch0; - } else { - vecblk = &snpda->intr_dispatch1; - } - - /* Initialize this vector. */ - for (i = 0; i < N_INTPEND_BITS; i++) { - vecblk->vectors[i].iv_func = intr_stray; - vecblk->vectors[i].iv_prefunc = NULL; - vecblk->vectors[i].iv_arg = (void *)(__psint_t)(ip * N_INTPEND_BITS + i); - - vecblk->info[i].ii_owner_dev = 0; - strcpy(vecblk->info[i].ii_name, "Unused"); - vecblk->info[i].ii_flags = 0; /* No flags */ - vecblk->vectors[i].iv_mustruncpu = -1; /* No CPU yet. */ - - } - - mutex_spinlock_init(&vecblk->vector_lock); - - vecblk->vector_count = 0; - for (i = 0; i < CPUS_PER_SUBNODE; i++) - vecblk->cpu_count[i] = 0; - - vecblk->vector_state = VECTOR_UNINITED; - - } while (++ip < 2); - -} - - -/* - * do_intr_reserve_level(cpuid_t cpu, int bit, int resflags, int reserve, - * devfs_handle_t owner_dev, char *name) - * Internal work routine to reserve or unreserve an interrupt level. - * cpu is the CPU to which the interrupt will be sent. - * bit is the level bit to reserve. -1 means any level - * resflags should include II_ERRORINT if this is an - * error interrupt, II_THREADED if the interrupt handler - * will be threaded, or 0 otherwise. - * reserve should be set to II_RESERVE or II_UNRESERVE - * to get or clear a reservation. - * owner_dev is the device that "owns" this interrupt, if supplied - * name is a human-readable name for this interrupt, if supplied - * intr_reserve_level returns the bit reserved or -1 to indicate an error - */ -static int -do_intr_reserve_level(cpuid_t cpu, int bit, int resflags, int reserve, - devfs_handle_t owner_dev, char *name) -{ - intr_vecblk_t *vecblk; - hub_intmasks_t *hub_intmasks; - unsigned long s; - int rv = 0; - int ip; - synergy_da_t *sda; - int which_synergy; - cnodeid_t cnode; - - ASSERT(bit < N_INTPEND_BITS * 2); - - cnode = cpuid_to_cnodeid(cpu); - which_synergy = cpuid_to_synergy(cpu); - sda = Synergy_da_indr[(cnode * 2) + which_synergy]; - hub_intmasks = &sda->s_intmasks; - // hub_intmasks = &pdaindr[cpu].pda->p_intmasks; - - // if (pdaindr[cpu].pda == NULL) return -1; - if ((bit < N_INTPEND_BITS) && !(resflags & II_ERRORINT)) { - vecblk = hub_intmasks->dispatch0; - ip = 0; - } else { - ASSERT((bit >= N_INTPEND_BITS) || (bit == -1)); - bit -= N_INTPEND_BITS; /* Get position relative to INT_PEND1 reg. 
*/ - vecblk = hub_intmasks->dispatch1; - ip = 1; - } - - INTR_LOCK(vecblk); - - if (bit <= -1) { - bit = 0; - ASSERT(reserve == II_RESERVE); - /* Choose any available level */ - for (; bit < N_INTPEND_BITS; bit++) { - if (!(vecblk->info[bit].ii_flags & II_RESERVE)) { - rv = bit; - break; - } - } - - /* Return -1 if all interrupt levels int this register are taken. */ - if (bit == N_INTPEND_BITS) - rv = -1; - - } else { - /* Reserve a particular level if it's available. */ - if ((vecblk->info[bit].ii_flags & II_RESERVE) == reserve) { - /* Can't (un)reserve a level that's already (un)reserved. */ - rv = -1; - } else { - rv = bit; - } - } - - /* Reserve the level and bump the count. */ - if (rv != -1) { - if (reserve) { - int maxlen = sizeof(vecblk->info[bit].ii_name) - 1; - int namelen; - vecblk->info[bit].ii_flags |= (II_RESERVE | resflags); - vecblk->info[bit].ii_owner_dev = owner_dev; - /* Copy in the name. */ - namelen = name ? strlen(name) : 0; - strncpy(vecblk->info[bit].ii_name, name, MIN(namelen, maxlen)); - vecblk->info[bit].ii_name[maxlen] = '\0'; - vecblk->vector_count++; - } else { - vecblk->info[bit].ii_flags = 0; /* Clear all the flags */ - vecblk->info[bit].ii_owner_dev = 0; - /* Clear the name. */ - vecblk->info[bit].ii_name[0] = '\0'; - vecblk->vector_count--; - } - } - - INTR_UNLOCK(vecblk); - -#if defined(DEBUG) - if (rv >= 0) { - int namelen = name ? strlen(name) : 0; - /* Gather this device - target cpu mapping information - * in a table which can be used later by the idbg "intrmap" - * command - */ - s = mutex_spinlock(&intr_dev_targ_map_lock); - if (intr_dev_targ_map_size < MAX_DEVICES) { - intr_dev_targ_map_t *p; - - p = &intr_dev_targ_map[intr_dev_targ_map_size]; - p->dev = owner_dev; - p->cpuid = cpu; - p->cnodeid = cputocnode(cpu); - p->bit = ip * N_INTPEND_BITS + rv; - strncpy(p->intr_name, - name, - MIN(MAX_NAME,namelen)); - intr_dev_targ_map_size++; - } - mutex_spinunlock(&intr_dev_targ_map_lock,s); - } -#endif /* DEBUG */ - - return (((rv == -1) ? rv : (ip * N_INTPEND_BITS) + rv)) ; -} - - -/* - * WARNING: This routine should only be called from within ml/SN. - * Reserve an interrupt level. - */ -int -intr_reserve_level(cpuid_t cpu, int bit, int resflags, devfs_handle_t owner_dev, char *name) -{ - return(do_intr_reserve_level(cpu, bit, resflags, II_RESERVE, owner_dev, name)); -} - - -/* - * WARNING: This routine should only be called from within ml/SN. - * Unreserve an interrupt level. 
- */ -void -intr_unreserve_level(cpuid_t cpu, int bit) -{ - (void)do_intr_reserve_level(cpu, bit, 0, II_UNRESERVE, 0, NULL); -} - -/* - * Get values that vary depending on which CPU and bit we're operating on - */ -static hub_intmasks_t * -intr_get_ptrs(cpuid_t cpu, int bit, - int *new_bit, /* Bit relative to the register */ - hubreg_t **intpend_masks, /* Masks for this register */ - intr_vecblk_t **vecblk, /* Vecblock for this interrupt */ - int *ip) /* Which intpend register */ -{ - hub_intmasks_t *hub_intmasks; - synergy_da_t *sda; - int which_synergy; - cnodeid_t cnode; - - ASSERT(bit < N_INTPEND_BITS * 2); - - cnode = cpuid_to_cnodeid(cpu); - which_synergy = cpuid_to_synergy(cpu); - sda = Synergy_da_indr[(cnode * 2) + which_synergy]; - hub_intmasks = &sda->s_intmasks; - - // hub_intmasks = &pdaindr[cpu].pda->p_intmasks; - - if (bit < N_INTPEND_BITS) { - *intpend_masks = hub_intmasks->intpend0_masks; - *vecblk = hub_intmasks->dispatch0; - *ip = 0; - *new_bit = bit; - } else { - *intpend_masks = hub_intmasks->intpend1_masks; - *vecblk = hub_intmasks->dispatch1; - *ip = 1; - *new_bit = bit - N_INTPEND_BITS; - } - - return hub_intmasks; -} - - -/* - * intr_connect_level(cpuid_t cpu, int bit, ilvl_t intr_swlevel, - * intr_func_t intr_func, void *intr_arg); - * This is the lowest-level interface to the interrupt code. It shouldn't - * be called from outside the ml/SN directory. - * intr_connect_level hooks up an interrupt to a particular bit in - * the INT_PEND0/1 masks. Returns 0 on success. - * cpu is the CPU to which the interrupt will be sent. - * bit is the level bit to connect to - * intr_swlevel tells which software level to use - * intr_func is the interrupt handler - * intr_arg is an arbitrary argument interpreted by the handler - * intr_prefunc is a prologue function, to be called - * with interrupts disabled, to disable - * the interrupt at source. It is called - * with the same argument. Should be NULL for - * typical interrupts, which can be masked - * by the infrastructure at the level bit. - * intr_connect_level returns 0 on success or nonzero on an error - */ -/* ARGSUSED */ -int -intr_connect_level(cpuid_t cpu, int bit, ilvl_t intr_swlevel, - intr_func_t intr_func, void *intr_arg, - intr_func_t intr_prefunc) -{ - intr_vecblk_t *vecblk; - hubreg_t *intpend_masks; - int rv = 0; - int ip; - unsigned long s; - - ASSERT(bit < N_INTPEND_BITS * 2); - - (void)intr_get_ptrs(cpu, bit, &bit, &intpend_masks, - &vecblk, &ip); - - INTR_LOCK(vecblk); - - if ((vecblk->info[bit].ii_flags & II_INUSE) || - (!(vecblk->info[bit].ii_flags & II_RESERVE))) { - /* Can't assign to a level that's in use or isn't reserved. */ - rv = -1; - } else { - /* Stuff parameters into vector and info */ - vecblk->vectors[bit].iv_func = intr_func; - vecblk->vectors[bit].iv_prefunc = intr_prefunc; - vecblk->vectors[bit].iv_arg = intr_arg; - vecblk->info[bit].ii_flags |= II_INUSE; - } - - /* Now stuff the masks if everything's okay. */ - if (!rv) { - int lslice; - volatile hubreg_t *mask_reg; - // nasid_t nasid = COMPACT_TO_NASID_NODEID(cputocnode(cpu)); - nasid_t nasid = cpuid_to_nasid(cpu); - int subnode = cpuid_to_subnode(cpu); - - /* Make sure it's not already pending when we connect it. 
*/ - REMOTE_HUB_PI_CLR_INTR(nasid, subnode, bit + ip * N_INTPEND_BITS); - - intpend_masks[0] |= (1ULL << (uint64_t)bit); - - lslice = cputolocalslice(cpu); - vecblk->cpu_count[lslice]++; -#if SN1 - /* - * On SN1, there are 8 interrupt mask registers per node: - * PI_0 MASK_0 A - * PI_0 MASK_1 A - * PI_0 MASK_0 B - * PI_0 MASK_1 B - * PI_1 MASK_0 A - * PI_1 MASK_1 A - * PI_1 MASK_0 B - * PI_1 MASK_1 B - */ -#endif - if (ip == 0) { - mask_reg = REMOTE_HUB_PI_ADDR(nasid, subnode, - PI_INT_MASK0_A + PI_INT_MASK_OFFSET * lslice); - } else { - mask_reg = REMOTE_HUB_PI_ADDR(nasid, subnode, - PI_INT_MASK1_A + PI_INT_MASK_OFFSET * lslice); - } - - HUB_S(mask_reg, intpend_masks[0]); - } - - INTR_UNLOCK(vecblk); - - return rv; -} - - -/* - * intr_disconnect_level(cpuid_t cpu, int bit) - * - * This is the lowest-level interface to the interrupt code. It should - * not be called from outside the ml/SN directory. - * intr_disconnect_level removes a particular bit from an interrupt in - * the INT_PEND0/1 masks. Returns 0 on success or nonzero on failure. - */ -int -intr_disconnect_level(cpuid_t cpu, int bit) -{ - intr_vecblk_t *vecblk; - hubreg_t *intpend_masks; - unsigned long s; - int rv = 0; - int ip; - - (void)intr_get_ptrs(cpu, bit, &bit, &intpend_masks, - &vecblk, &ip); - - INTR_LOCK(vecblk); - - if ((vecblk->info[bit].ii_flags & (II_RESERVE | II_INUSE)) != - ((II_RESERVE | II_INUSE))) { - /* Can't remove a level that's not in use or isn't reserved. */ - rv = -1; - } else { - /* Stuff parameters into vector and info */ - vecblk->vectors[bit].iv_func = (intr_func_t)NULL; - vecblk->vectors[bit].iv_prefunc = (intr_func_t)NULL; - vecblk->vectors[bit].iv_arg = 0; - vecblk->info[bit].ii_flags &= ~II_INUSE; -#ifdef BASE_ITHRTEAD - vecblk->vectors[bit].iv_mustruncpu = -1; /* No mustrun CPU any more. */ -#endif - } - - /* Now clear the masks if everything's okay. */ - if (!rv) { - int lslice; - volatile hubreg_t *mask_reg; - - intpend_masks[0] &= ~(1ULL << (uint64_t)bit); - lslice = cputolocalslice(cpu); - vecblk->cpu_count[lslice]--; - mask_reg = REMOTE_HUB_PI_ADDR(COMPACT_TO_NASID_NODEID(cputocnode(cpu)), - cpuid_to_subnode(cpu), - ip == 0 ? PI_INT_MASK0_A : PI_INT_MASK1_A); - mask_reg = (volatile hubreg_t *)((__psunsigned_t)mask_reg + - (PI_INT_MASK_OFFSET * lslice)); - *mask_reg = intpend_masks[0]; - } - - INTR_UNLOCK(vecblk); - - return rv; -} - -/* - * Actually block or unblock an interrupt - */ -void -do_intr_block_bit(cpuid_t cpu, int bit, int block) -{ - intr_vecblk_t *vecblk; - int ip; - unsigned long s; - hubreg_t *intpend_masks; - volatile hubreg_t mask_value; - volatile hubreg_t *mask_reg; - - intr_get_ptrs(cpu, bit, &bit, &intpend_masks, &vecblk, &ip); - - INTR_LOCK(vecblk); - - if (block) - /* Block */ - intpend_masks[0] &= ~(1ULL << (uint64_t)bit); - else - /* Unblock */ - intpend_masks[0] |= (1ULL << (uint64_t)bit); - - if (ip == 0) { - mask_reg = REMOTE_HUB_PI_ADDR(COMPACT_TO_NASID_NODEID(cputocnode(cpu)), - cpuid_to_subnode(cpu), PI_INT_MASK0_A); - } else { - mask_reg = REMOTE_HUB_PI_ADDR(COMPACT_TO_NASID_NODEID(cputocnode(cpu)), - cpuid_to_subnode(cpu), PI_INT_MASK1_A); - } - - HUB_S(mask_reg, intpend_masks[0]); - - /* - * Wait for it to take effect. (One read should suffice.) - * This is only necessary when blocking an interrupt - */ - if (block) - while ((mask_value = HUB_L(mask_reg)) != intpend_masks[0]) - ; - - INTR_UNLOCK(vecblk); -} - - -/* - * Block a particular interrupt (cpu/bit pair). 
- */ -/* ARGSUSED */ -void -intr_block_bit(cpuid_t cpu, int bit) -{ - do_intr_block_bit(cpu, bit, 1); -} - - -/* - * Unblock a particular interrupt (cpu/bit pair). - */ -/* ARGSUSED */ -void -intr_unblock_bit(cpuid_t cpu, int bit) -{ - do_intr_block_bit(cpu, bit, 0); -} - - -/* verifies that the specified CPUID is on the specified SUBNODE (if any) */ -#define cpu_on_subnode(cpuid, which_subnode) \ - (((which_subnode) == SUBNODE_ANY) || (cpuid_to_subnode(cpuid) == (which_subnode))) - - -/* - * Choose one of the CPUs on a specified node or subnode to receive - * interrupts. Don't pick a cpu which has been specified as a NOINTR cpu. - * - * Among all acceptable CPUs, the CPU that has the fewest total number - * of interrupts targetted towards it is chosen. Note that we never - * consider how frequent each of these interrupts might occur, so a rare - * hardware error interrupt is weighted equally with a disk interrupt. - */ -static cpuid_t -do_intr_cpu_choose(cnodeid_t cnode, int which_subnode) -{ - cpuid_t cpu, best_cpu = CPU_NONE; - int slice, min_count=1000; - - min_count = 1000; - for (slice=0; slice < CPUS_PER_NODE; slice++) { - intr_vecblk_t *vecblk0, *vecblk1; - int total_intrs_to_slice; - subnode_pda_t *snpda; - int local_cpu_num; - - cpu = cnode_slice_to_cpuid(cnode, slice); - if (cpu == CPU_NONE) - continue; - - /* If this cpu isn't enabled for interrupts, skip it */ - if (!cpu_enabled(cpu) || !cpu_allows_intr(cpu)) - continue; - - /* If this isn't the right subnode, skip it */ - if (!cpu_on_subnode(cpu, which_subnode)) - continue; - - /* OK, this one's a potential CPU for interrupts */ - snpda = SUBNODEPDA(cnode,SUBNODE(slice)); - vecblk0 = &snpda->intr_dispatch0; - vecblk1 = &snpda->intr_dispatch1; - local_cpu_num = LOCALCPU(slice); - total_intrs_to_slice = vecblk0->cpu_count[local_cpu_num] + - vecblk1->cpu_count[local_cpu_num]; - - if (min_count > total_intrs_to_slice) { - min_count = total_intrs_to_slice; - best_cpu = cpu; - } - } - return best_cpu; -} - -/* - * Choose an appropriate interrupt target CPU on a specified node. - * If which_subnode is SUBNODE_ANY, then subnode is not considered. - * Otherwise, the chosen CPU must be on the specified subnode. - */ -static cpuid_t -intr_cpu_choose_from_node(cnodeid_t cnode, int which_subnode) -{ - return(do_intr_cpu_choose(cnode, which_subnode)); -} - - -#ifdef LATER -/* - * Convert a subnode vertex into a (cnodeid, which_subnode) pair. - * Return 0 on success, non-zero on failure. - */ -static int -subnodevertex_to_subnode(devfs_handle_t vhdl, cnodeid_t *cnodeidp, int *which_subnodep) -{ - arbitrary_info_t which_subnode; - cnodeid_t cnodeid; - - /* Try to grab subnode information */ - if (hwgraph_info_get_LBL(vhdl, INFO_LBL_CPUBUS, &which_subnode) != GRAPH_SUCCESS) - return(-1); - - /* On which node? 
*/ - cnodeid = master_node_get(vhdl); - if (cnodeid == CNODEID_NONE) - return(-1); - - *which_subnodep = (int)which_subnode; - *cnodeidp = cnodeid; - return(0); /* success */ -} - -#endif /* LATER */ - -/* Make it easy to identify subnode vertices in the hwgraph */ -void -mark_subnodevertex_as_subnode(devfs_handle_t vhdl, int which_subnode) -{ - graph_error_t rv; - - ASSERT(0 <= which_subnode); - ASSERT(which_subnode < NUM_SUBNODES); - - rv = hwgraph_info_add_LBL(vhdl, INFO_LBL_CPUBUS, (arbitrary_info_t)which_subnode); - ASSERT_ALWAYS(rv == GRAPH_SUCCESS); - - rv = hwgraph_info_export_LBL(vhdl, INFO_LBL_CPUBUS, sizeof(arbitrary_info_t)); - ASSERT_ALWAYS(rv == GRAPH_SUCCESS); -} - - -/* - * Given a device descriptor, extract interrupt target information and - * choose an appropriate CPU. Return CPU_NONE if we can't make sense - * out of the target information. - * TBD: Should this be considered platform-independent code? - */ - -#ifdef LATER -static cpuid_t -intr_target_from_desc(device_desc_t dev_desc, int favor_subnode) -{ - cpuid_t cpuid = CPU_NONE; - cnodeid_t cnodeid; - int which_subnode; - devfs_handle_t intr_target_dev; - - if ((intr_target_dev = device_desc_intr_target_get(dev_desc)) != GRAPH_VERTEX_NONE) { - /* - * A valid device was specified. If it's a particular - * CPU, then use that CPU as target. - */ - cpuid = cpuvertex_to_cpuid(intr_target_dev); - if (cpuid != CPU_NONE) - goto cpuchosen; - - /* If a subnode vertex was specified, pick a CPU on that subnode. */ - if (subnodevertex_to_subnode(intr_target_dev, &cnodeid, &which_subnode) == 0) { - cpuid = intr_cpu_choose_from_node(cnodeid, which_subnode); - goto cpuchosen; - } - - /* - * Otherwise, pick a CPU on the node that owns the - * specified target. Favor "favor_subnode", if specified. - */ - cnodeid = master_node_get(intr_target_dev); - if (cnodeid != CNODEID_NONE) { - cpuid = intr_cpu_choose_from_node(cnodeid, favor_subnode); - goto cpuchosen; - } - } - -cpuchosen: - return(cpuid); -} -#endif /* LATER */ - - -#ifdef LATER -/* - * Check if we had already visited this candidate cnode - */ -static void * -intr_cnode_seen(cnodeid_t candidate, - void *arg1, - void *arg2) -{ - int i; - cnodeid_t *visited_cnodes = (cnodeid_t *)arg1; - int *num_visited_cnodes = (int *)arg2; - - ASSERT(visited_cnodes); - ASSERT(*num_visited_cnodes <= numnodes); - for(i = 0 ; i < *num_visited_cnodes; i++) { - if (candidate == visited_cnodes[i]) - return(NULL); - } - return(visited_cnodes); -} - -#endif /* LATER */ - - - -/* - * intr_bit_reserve_test(cpuid,which_subnode,cnode,req_bit,intr_resflags, - * owner_dev,intr_name,*resp_bit) - * Either cpuid is not CPU_NONE or cnodeid not CNODE_NONE but - * not both. - * 1. If cpuid is specified, this routine tests if this cpu can be a valid - * interrupt target candidate. - * 2. If cnodeid is specified, this routine tests if there is a cpu on - * this node which can be a valid interrupt target candidate. - * 3. If a valid interrupt target cpu candidate is found then an attempt at - * reserving an interrupt bit on the corresponding cnode is made. 
- * - * If steps 1 & 2 both fail or step 3 fails then we are not able to get a valid - * interrupt target cpu then routine returns CPU_NONE (failure) - * Otherwise routine returns cpuid of interrupt target (success) - */ -static cpuid_t -intr_bit_reserve_test(cpuid_t cpuid, - int favor_subnode, - cnodeid_t cnodeid, - int req_bit, - int intr_resflags, - devfs_handle_t owner_dev, - char *intr_name, - int *resp_bit) -{ - - ASSERT((cpuid==CPU_NONE) || (cnodeid==CNODEID_NONE)); - - if (cnodeid != CNODEID_NONE) { - /* Try to choose a interrupt cpu candidate */ - cpuid = intr_cpu_choose_from_node(cnodeid, favor_subnode); - } - - if (cpuid != CPU_NONE) { - /* Try to reserve an interrupt bit on the hub - * corresponding to the canidate cnode. If we - * are successful then we got a cpu which can - * act as an interrupt target for the io device. - * Otherwise we need to continue the search - * further. - */ - *resp_bit = do_intr_reserve_level(cpuid, - req_bit, - intr_resflags, - II_RESERVE, - owner_dev, - intr_name); - - if (*resp_bit >= 0) - /* The interrupt target specified was fine */ - return(cpuid); - } - return(CPU_NONE); -} -/* - * intr_heuristic(dev_t dev,device_desc_t dev_desc, - * int req_bit,int intr_resflags,dev_t owner_dev, - * char *intr_name,int *resp_bit) - * - * Choose an interrupt destination for an interrupt. - * dev is the device for which the interrupt is being set up - * dev_desc is a description of hardware and policy that could - * help determine where this interrupt should go - * req_bit is the interrupt bit requested - * (can be INTRCONNECT_ANY_BIT in which the first available - * interrupt bit is used) - * intr_resflags indicates whether we want to (un)reserve bit - * owner_dev is the owner device - * intr_name is the readable interrupt name - * resp_bit indicates whether we succeeded in getting the required - * action { (un)reservation} done - * negative value indicates failure - * - */ -/* ARGSUSED */ -cpuid_t -intr_heuristic(devfs_handle_t dev, - device_desc_t dev_desc, - int req_bit, - int intr_resflags, - devfs_handle_t owner_dev, - char *intr_name, - int *resp_bit) -{ - cpuid_t cpuid; /* possible intr targ*/ - cnodeid_t candidate; /* possible canidate */ -#ifdef LATER - cnodeid_t visited_cnodes[MAX_NASIDS], /* nodes seen so far */ - center, /* node we are on */ - candidate; /* possible canidate */ - int num_visited_cnodes = 0; /* # nodes seen */ - - int radius = 1, /* start looking at the - * current node - */ - maxradius = physmem_maxradius(); - void *rv; -#endif /* LATER */ - int which_subnode = SUBNODE_ANY; - -/* SN1 + pcibr Addressing Limitation */ - { - devfs_handle_t pconn_vhdl; - pcibr_soft_t pcibr_soft; - - /* - * This combination of SN1 and Bridge hardware has an odd "limitation". - * Due to the choice of addresses for PI0 and PI1 registers on SN1 - * and historical limitations in Bridge, Bridge is unable to - * send interrupts to both PI0 CPUs and PI1 CPUs -- we have - * to choose one set or the other. That choice is implicitly - * made when Bridge first attaches its error interrupt. After - * that point, all subsequent interrupts are restricted to the - * same PI number (though it's possible to send interrupts to - * the same PI number on a different node). - * - * Since neither SN1 nor Bridge designers are willing to admit a - * bug, we can't really call this a "workaround". It's a permanent - * solution for an SN1-specific and Bridge-specific hardware - * limitation that won't ever be lifted. 
- */ - if ((hwgraph_edge_get(dev, EDGE_LBL_PCI, &pconn_vhdl) == GRAPH_SUCCESS) && - ((pcibr_soft = pcibr_soft_get(pconn_vhdl)) != NULL)) { - /* - * We "know" that the error interrupt is the first - * interrupt set up by pcibr_attach. Send all interrupts - * on this bridge to the same subnode number. - */ - if (pcibr_soft->bsi_err_intr) { - which_subnode = cpuid_to_subnode(((hub_intr_t) pcibr_soft->bsi_err_intr)->i_cpuid); - } - } - } - -#ifdef LATER - /* - * If an interrupt target was specified for this - * interrupt allocation, try to use it. - */ - if (dev_desc) { - - /* Try to see if the interrupt target specified in the - * device descriptor is a legal candidate. - */ - cpuid = intr_bit_reserve_test(intr_target_from_desc(dev_desc, which_subnode), - which_subnode, - CNODEID_NONE, - req_bit, - intr_resflags, - owner_dev, - intr_name, - resp_bit); - - if (cpuid != CPU_NONE) { - if (cpu_on_subnode(cpuid, which_subnode)) - return(cpuid); /* got a valid interrupt target */ - - printk("Override explicit interrupt targetting: %v (0x%x)\n", - owner_dev, owner_dev); - - intr_unreserve_level(cpuid, *resp_bit); - } - - /* Fall through on to the next step in the search for - * the interrupt candidate. - */ - - } -#endif /* LATER */ - - /* Check if we can find a valid interrupt target candidate on - * the master node for the device. - */ - cpuid = intr_bit_reserve_test(CPU_NONE, - which_subnode, - master_node_get(dev), - req_bit, - intr_resflags, - owner_dev, - intr_name, - resp_bit); - - if (cpuid != CPU_NONE) { - if (cpu_on_subnode(cpuid, which_subnode)) - return(cpuid); /* got a valid interrupt target */ - else - intr_unreserve_level(cpuid, *resp_bit); - } - - PRINT_WARNING("Cannot target interrupts to closest node(%d): %ld (0x%lx)\n", - master_node_get(dev),(long) owner_dev, (unsigned long)owner_dev); - - /* Fall through into the default algorithm - * (exhaustive-search-for-the-nearest-possible-interrupt-target) - * for finding the interrupt target - */ - -#ifndef BRINGUP - // Use of this algorithm is deferred until the supporting - // code has been implemented. - /* - * No valid interrupt specification exists. - * Try to find a node which is closest to the current node - * which can process interrupts from a device - */ - - center = cpuid_to_cnodeid(smp_processor_id()); - while (radius <= maxradius) { - - /* Try to find a node at the given radius and which - * we haven't seen already. - */ - rv = physmem_select_neighbor_node(center,radius,&candidate, - intr_cnode_seen, - (void *)visited_cnodes, - (void *)&num_visited_cnodes); - if (!rv) { - /* We have seen all the nodes at this particular radius - * Go on to the next radius level. - */ - radius++; - continue; - } - /* We are seeing this candidate cnode for the first time - */ - visited_cnodes[num_visited_cnodes++] = candidate; - - cpuid = intr_bit_reserve_test(CPU_NONE, - which_subnode, - candidate, - req_bit, - intr_resflags, - owner_dev, - intr_name, - resp_bit); - - if (cpuid != CPU_NONE) { - if (cpu_on_subnode(cpuid, which_subnode)) - return(cpuid); /* got a valid interrupt target */ - else - intr_unreserve_level(cpuid, *resp_bit); - } - } -#else /* BRINGUP */ - { - // Do a stupid round-robin assignment of the node. 
- static cnodeid_t last_node = -1; - - if (last_node >= numnodes) last_node = 0; - for (candidate = last_node + 1; candidate != last_node; candidate++) { - if (candidate == numnodes) candidate = 0; - cpuid = intr_bit_reserve_test(CPU_NONE, - which_subnode, - candidate, - req_bit, - intr_resflags, - owner_dev, - intr_name, - resp_bit); - - if (cpuid != CPU_NONE) { - if (cpu_on_subnode(cpuid, which_subnode)) { - last_node = candidate; - return(cpuid); /* got a valid interrupt target */ - } - else - intr_unreserve_level(cpuid, *resp_bit); - } - } - last_node = candidate; - } -#endif - - PRINT_WARNING("Cannot target interrupts to any close node: %ld (0x%lx)\n", - (long)owner_dev, (unsigned long)owner_dev); - - /* In the worst case try to allocate interrupt bits on the - * master processor's node. We may get here during error interrupt - * allocation phase when the topology matrix is not yet setup - * and hence cannot do an exhaustive search. - */ - ASSERT(cpu_allows_intr(master_procid)); - cpuid = intr_bit_reserve_test(master_procid, - which_subnode, - CNODEID_NONE, - req_bit, - intr_resflags, - owner_dev, - intr_name, - resp_bit); - - if (cpuid != CPU_NONE) { - if (cpu_on_subnode(cpuid, which_subnode)) - return(cpuid); - else - intr_unreserve_level(cpuid, *resp_bit); - } - - PRINT_WARNING("Cannot target interrupts: %ld (0x%lx)\n", - (long)owner_dev, (unsigned long)owner_dev); - - return(CPU_NONE); /* Should never get here */ -} - - - - -#ifndef BRINGUP -/* - * Should never receive an exception while running on the idle - * stack. It IS possible to handle *interrupts* while on the - * idle stack, but a non-interrupt *exception* is a problem. - */ -void -idle_err(inst_t *epc, uint cause, void *fep, void *sp) -{ - eframe_t *ep = (eframe_t *)fep; - - if ((cause & CAUSE_EXCMASK) == EXC_IBE || - (cause & CAUSE_EXCMASK) == EXC_DBE) { - (void)dobuserre((eframe_t *)ep, epc, 0); - } - - /* XXX - This will have to change to deal with various SN errors. */ - panic( "exception on IDLE stack " - "ep:0x%x epc:0x%x cause:0x%w32x sp:0x%x badvaddr:0x%x", - ep, epc, cause, sp, getbadvaddr()); - /* NOTREACHED */ -} - - -/* - * earlynofault - handle very early global faults - usually just while - * sizing memory - * Returns: 1 if should do nofault - * 0 if not - */ -/* ARGSUSED */ -int -earlynofault(eframe_t *ep, uint code) -{ - switch(code) { - case EXC_DBE: - return(1); - default: - return(0); - } -} - - - -/* ARGSUSED */ -static void -cpuintr(void *arg1, void *arg2) -{ -#if RTE - static int rte_intrdebug = 1; -#endif - /* - * Frame Scheduler - */ - LOG_TSTAMP_EVENT(RTMON_INTR, TSTAMP_EV_CPUINTR, NULL, NULL, - NULL, NULL); - - /* - * Hardware clears the IO interrupts, but we need to clear software- - * generated interrupts. - */ - LOCAL_HUB_CLR_INTR(CPU_ACTION_A + cputolocalslice(cpuid())); - -#if 0 - /* XXX - Handle error interrupts. 
*/ - if (error_intr_reason) - error_intr(); -#endif /* 0 */ - - /* - * If we're headed for panicspin and it is due to a NMI, save the - * eframe in the NMI area - */ - if (private.p_va_panicspin && nmied) { - caddr_t nmi_save_area; - - nmi_save_area = (caddr_t) (TO_UNCAC(TO_NODE( - cputonasid(cpuid()), IP27_NMI_EFRAME_OFFSET)) + - cputoslice(cpuid()) * IP27_NMI_EFRAME_SIZE); - bcopy((caddr_t) arg2, nmi_save_area, sizeof(eframe_t)); - } - - doacvec(); -#if RTE - if (private.p_flags & PDAF_ISOLATED && !rte_intrdebug) - goto end_cpuintr; -#endif - doactions(); -#if RTE -end_cpuintr: -#endif - LOG_TSTAMP_EVENT(RTMON_INTR, TSTAMP_EV_INTREXIT, TSTAMP_EV_CPUINTR, NULL, NULL, NULL); -} - -void -install_cpuintr(cpuid_t cpu) -{ - int intr_bit = CPU_ACTION_A + cputolocalslice(cpu); - - if (intr_connect_level(cpu, intr_bit, INTPEND0_MAXMASK, - (intr_func_t) cpuintr, NULL, NULL)) - panic("install_cpuintr: Can't connect interrupt."); -} -#endif /* BRINGUP */ - -#ifdef DEBUG_INTR_TSTAMP -/* We allocate an array, but only use element number 64. This guarantees that - * the entry is in a cacheline by itself. - */ -#define DINTR_CNTIDX 32 -#define DINTR_TSTAMP1 48 -#define DINTR_TSTAMP2 64 -volatile long long dintr_tstamp_cnt[128]; -int dintr_debug_output=0; -extern void idbg_tstamp_debug(void); -#ifdef SPLDEBUG -extern void idbg_splx_log(int); -#endif -#if DEBUG_INTR_TSTAMP_DEBUG -int dintr_enter_symmon=1000; /* 1000 microseconds is 1 millisecond */ -#endif - -#ifndef BRINGUP -/* ARGSUSED */ -static void -cpulatintr(void *arg) -{ - /* - * Hardware only clears IO interrupts so we have to clear our level - * here. - */ - LOCAL_HUB_CLR_INTR(CPU_INTRLAT_A + cputolocalslice(cpuid())); - -#if DEBUG_INTR_TSTAMP_DEBUG - dintr_tstamp_cnt[DINTR_TSTAMP2] = GET_LOCAL_RTC; - if ((dintr_tstamp_cnt[DINTR_TSTAMP2] - dintr_tstamp_cnt[DINTR_TSTAMP1]) - > dintr_enter_symmon) { -#ifdef SPLDEBUG - extern int spldebug_log_off; - - spldebug_log_off = 1; -#endif /* SPLDEBUG */ - debug("ring"); -#ifdef SPLDEBUG - spldebug_log_off = 0; -#endif /* SPLDEBUG */ - } -#endif - dintr_tstamp_cnt[DINTR_CNTIDX]++; - - return; -} - -static int install_cpulat_first=0; - -void -install_cpulatintr(cpuid_t cpu) -{ - int intr_bit; - devfs_handle_t cpuv = cpuid_to_vertex(cpu); - - intr_bit = CPU_INTRLAT_A + cputolocalslice(cpu); - if (intr_bit != intr_reserve_level(cpu, intr_bit, II_THREADED, - cpuv, "intrlat")) - panic( "install_cpulatintr: Can't reserve interrupt."); - - if (intr_connect_level(cpu, intr_bit, INTPEND0_MAXMASK, - cpulatintr, NULL, NULL)) - panic( "install_cpulatintr: Can't connect interrupt."); - - if (!install_cpulat_first) { - install_cpulat_first++; - idbg_addfunc("tstamp_debug", (void (*)())idbg_tstamp_debug); -#if defined(SPLDEBUG) || defined(SPLDEBUG_CPU_EVENTS) - idbg_addfunc("splx_log", (void (*)())idbg_splx_log); -#endif /* SPLDEBUG || SPLDEBUG_CPU_EVENTS */ - } -} -#endif /* BRINGUP */ - -#endif /* DEBUG_INTR_TSTAMP */ - -#ifndef BRINGUP -/* ARGSUSED */ -static void -dbgintr(void *arg) -{ - /* - * Hardware only clears IO interrupts so we have to clear our level - * here. - */ - LOCAL_HUB_CLR_INTR(N_INTPEND_BITS + DEBUG_INTR_A + cputolocalslice(cpuid())); - - debug("zing"); - return; -} - - -void -install_dbgintr(cpuid_t cpu) -{ - int intr_bit; - devfs_handle_t cpuv = cpuid_to_vertex(cpu); - - intr_bit = N_INTPEND_BITS + DEBUG_INTR_A + cputolocalslice(cpu); - if (intr_bit != intr_reserve_level(cpu, intr_bit, 1, cpuv, "DEBUG")) - panic("install_dbgintr: Can't reserve interrupt. 
" - " intr_bit %d" ,intr_bit); - - if (intr_connect_level(cpu, intr_bit, INTPEND1_MAXMASK, - dbgintr, NULL, NULL)) - panic("install_dbgintr: Can't connect interrupt."); - -#ifdef DEBUG_INTR_TSTAMP - /* Set up my interrupt latency test interrupt */ - install_cpulatintr(cpu); -#endif -} - -/* ARGSUSED */ -static void -tlbintr(void *arg) -{ - extern void tlbflush_rand(void); - - /* - * Hardware only clears IO interrupts so we have to clear our level - * here. - */ - LOCAL_HUB_CLR_INTR(N_INTPEND_BITS + TLB_INTR_A + cputolocalslice(cpuid())); - - tlbflush_rand(); - return; -} - - -void -install_tlbintr(cpuid_t cpu) -{ - int intr_bit; - devfs_handle_t cpuv = cpuid_to_vertex(cpu); - - intr_bit = N_INTPEND_BITS + TLB_INTR_A + cputolocalslice(cpu); - if (intr_bit != intr_reserve_level(cpu, intr_bit, 1, cpuv, "DEBUG")) - panic("install_tlbintr: Can't reserve interrupt. " - " intr_bit %d" ,intr_bit); - - if (intr_connect_level(cpu, intr_bit, INTPEND1_MAXMASK, - tlbintr, NULL, NULL)) - panic("install_tlbintr: Can't connect interrupt."); - -} - - -/* - * Send an interrupt to all nodes. Don't panic if we get an error. - * Returns 1 if any exceptions occurred. - */ -int -protected_broadcast(hubreg_t intrbit) -{ - nodepda_t *npdap = private.p_nodepda; - int byte, bit, sn; - int error = 0; - - extern int _wbadaddr_val(volatile void *, int, volatile int *); - - /* Send rather than clear an interrupt. */ - intrbit |= 0x100; - - for (byte = 0; byte < NASID_MASK_BYTES; byte++) { - for (bit = 0; bit < 8; bit++) { - if (npdap->nasid_mask[byte] & (1 << bit)) { - nasid_t nasid = byte * 8 + bit; - for (sn=0; snii_name, - vector->iv_func, vector->iv_arg, vector->iv_prefunc); - pf(" vertex 0x%x %s%s", - info->ii_owner_dev, - ((info->ii_flags) & II_RESERVE) ? "R" : "U", - ((info->ii_flags) & II_INUSE) ? "C" : "-"); - pf("%s%s%s%s", - ip & value ? "P" : "-", - ima & value ? "A" : "-", - imb & value ? "B" : "-", - ((info->ii_flags) & II_ERRORINT) ? "E" : "-"); - pf("\n"); -} - - -/* - * Dump information about interrupt vector assignment. - */ -void -intr_dumpvec(cnodeid_t cnode, void (*pf)(char *, ...)) -{ - nodepda_t *npda; - int ip, sn, bit; - intr_vecblk_t *dispatch; - hubreg_t ipr, ima, imb; - nasid_t nasid; - - if ((cnode < 0) || (cnode >= numnodes)) { - pf("intr_dumpvec: cnodeid out of range: %d\n", cnode); - return ; - } - - nasid = COMPACT_TO_NASID_NODEID(cnode); - - if (nasid == INVALID_NASID) { - pf("intr_dumpvec: Bad cnodeid: %d\n", cnode); - return ; - } - - - npda = NODEPDA(cnode); - - for (sn = 0; sn < NUM_SUBNODES; sn++) { - for (ip = 0; ip < 2; ip++) { - dispatch = ip ? &(SNPDA(npda,sn)->intr_dispatch1) : &(SNPDA(npda,sn)->intr_dispatch0); - ipr = REMOTE_HUB_PI_L(nasid, sn, ip ? PI_INT_PEND1 : PI_INT_PEND0); - ima = REMOTE_HUB_PI_L(nasid, sn, ip ? PI_INT_MASK1_A : PI_INT_MASK0_A); - imb = REMOTE_HUB_PI_L(nasid, sn, ip ? 
PI_INT_MASK1_B : PI_INT_MASK0_B); - - pf("Node %d INT_PEND%d:\n", cnode, ip); - - if (dispatch->ithreads_enabled) - pf(" Ithreads enabled\n"); - else - pf(" Ithreads disabled\n"); - pf(" vector_count = %d, vector_state = %d\n", - dispatch->vector_count, - dispatch->vector_state); - pf(" CPU A count %d, CPU B count %d\n", - dispatch->cpu_count[0], - dispatch->cpu_count[1]); - pf(" &vector_lock = 0x%x\n", - &(dispatch->vector_lock)); - for (bit = 0; bit < N_INTPEND_BITS; bit++) { - if ((dispatch->info[bit].ii_flags & II_RESERVE) || - (ipr & (1L << bit))) { - dump_vector(&(dispatch->info[bit]), - &(dispatch->vectors[bit]), - bit, ipr, ima, imb, pf); - } - } - pf("\n"); - } - } -} - diff -Nru a/arch/ia64/sn/io/ml_iograph.c b/arch/ia64/sn/io/ml_iograph.c --- a/arch/ia64/sn/io/ml_iograph.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/ml_iograph.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #include @@ -13,6 +12,9 @@ #include #include #include +#include +#include +#include #include #include #include @@ -30,8 +32,6 @@ #include #include -extern int maxnodes; - /* #define IOGRAPH_DEBUG */ #ifdef IOGRAPH_DEBUG #define DBG(x...) printk(x) @@ -107,10 +107,10 @@ #ifdef LATER if (!is_headless_node_vertex(master)) { #if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("volunteer for widgets: vertex %v has no info label", + printk(KERN_WARNING "volunteer for widgets: vertex %v has no info label", xswitch); #else - PRINT_WARNING("volunteer for widgets: vertex 0x%x has no info label", + printk(KERN_WARNING "volunteer for widgets: vertex 0x%x has no info label", xswitch); #endif } @@ -155,11 +155,11 @@ #ifdef LATER if (!is_headless_node_vertex(hubv)) { #if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("assign_widgets_to_volunteers:vertex %v has " + printk(KERN_WARNING "assign_widgets_to_volunteers:vertex %v has " " no info label", xswitch); #else - PRINT_WARNING("assign_widgets_to_volunteers:vertex 0x%x has " + printk(KERN_WARNING "assign_widgets_to_volunteers:vertex 0x%x has " " no info label", xswitch); #endif @@ -184,9 +184,6 @@ */ for (widgetnum=HUB_WIDGET_ID_MIN; widgetnum <= HUB_WIDGET_ID_MAX; widgetnum++) { -#ifndef BRINGUP - int i; -#endif /* * Ignore disabled/empty ports. */ @@ -244,7 +241,7 @@ cnodeid_t cnode; nasid_t nasid; lboard_t *board; - + /* * Init. the board-to-hwgraph link early, so FRU analyzer * doesn't trip on leftover values if we panic early on. @@ -267,55 +264,6 @@ hubio_init(); } -#ifdef LATER -/* There is an identical definition of this in os/scheduler/runq.c */ -#define INIT_COOKIE(cookie) cookie.must_run = 0; cookie.cpu = PDA_RUNANYWHERE -/* - * These functions absolutely doesn't belong here. It's here, though, - * until the scheduler provides a platform-independent version - * that works the way it should. The interface will definitely change, - * too. Currently used only in this file and by io/cdl.c in order to - * bind various I/O threads to a CPU on the proper node. - */ -cpu_cookie_t -setnoderun(cnodeid_t cnodeid) -{ - int i; - cpuid_t cpunum; - cpu_cookie_t cookie; - - INIT_COOKIE(cookie); - if (cnodeid == CNODEID_NONE) - return(cookie); - - /* - * Do a setmustrun to one of the CPUs on the specified - * node. 
- */ - if ((cpunum = CNODE_TO_CPU_BASE(cnodeid)) == CPU_NONE) { - return(cookie); - } - - cpunum += CNODE_NUM_CPUS(cnodeid) - 1; - - for (i = 0; i < CNODE_NUM_CPUS(cnodeid); i++, cpunum--) { - - if (cpu_enabled(cpunum)) { - cookie = setmustrun(cpunum); - break; - } - } - - return(cookie); -} - -void -restorenoderun(cpu_cookie_t cookie) -{ - restoremustrun(cookie); -} -#endif /* LATER */ - #ifdef LINUX_KERNEL_THREADS static struct semaphore io_init_sema; #endif @@ -445,6 +393,7 @@ slotid_t slot; lboard_t *board = NULL; char buffer[16]; + slotid_t get_widget_slotnum(int xbow, int widget); DBG("\nio_xswitch_widget_init: hubv 0x%p, xswitchv 0x%p, widgetnum 0x%x\n", hubv, xswitchv, widgetnum); /* @@ -507,6 +456,7 @@ { lboard_t dummy; + if (board) { DBG("io_xswitch_widget_init: Found KLTYPE_IOBRICK Board 0x%p brd_type 0x%x\n", board, board->brd_type); } else { @@ -517,7 +467,6 @@ } /* - * BRINGUP * Make sure we really want to say xbrick, pbrick, * etc. rather than XIO, graphics, etc. */ @@ -534,14 +483,10 @@ "%cbrick" "/%s/%d", buffer, #endif -#ifdef BRINGUP (board->brd_type == KLTYPE_IBRICK) ? 'I' : (board->brd_type == KLTYPE_PBRICK) ? 'P' : (board->brd_type == KLTYPE_XBRICK) ? 'X' : '?', -#else - toupper(MODULE_GET_BTCHAR(NODEPDA(cnode)->module_id)), -#endif /* BRINGUP */ EDGE_LBL_XTALK, widgetnum); } @@ -563,11 +508,7 @@ */ if (is_master_baseio(nasid, NODEPDA(cnode)->module_id, -#ifdef BRINGUP get_widget_slotnum(0,widgetnum))) { -#else - <<< BOMB! >>> Need a new way to get slot numbers on IP35/IP37 -#endif extern void klhwg_baseio_inventory_add(devfs_handle_t, cnodeid_t); module = NODEPDA(cnode)->module_id; @@ -582,7 +523,6 @@ (lboard_t *)KL_CONFIG_INFO(nasid), module); /* - * BRINGUP * Change iobrick to correct i/o brick */ #ifdef SUPPORT_PRINTING_M_FORMAT @@ -594,11 +534,7 @@ NODEPDA(cnode)->module_id, EDGE_LBL_XTALK, widgetnum); } else { -#ifdef BRINGUP slot = get_widget_slotnum(0, widgetnum); -#else - <<< BOMB! 
Need a new way to get slot numbers on IP35/IP37 -#endif board = get_board_name(nasid, module, slot, new_name); /* @@ -729,41 +665,25 @@ GRAPH_SUCCESS) continue; -#if defined (CONFIG_SGI_IP35) || defined (CONFIG_IA64_SGI_SN1) || defined (CONFIG_IA64_GENERIC) board = find_lboard_module((lboard_t *)KL_CONFIG_INFO(nasid), NODEPDA(cnodeid)->module_id); -#else - { - slotid_t slot; - slot = get_widget_slotnum(xbow_num, widgetnum); - board = find_lboard_modslot((lboard_t *)KL_CONFIG_INFO(nasid), - NODEPDA(cnodeid)->module_id, slot); - } -#endif /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ if (board == NULL && peer_nasid != INVALID_NASID) { /* * Try to find the board on our peer */ -#if defined (CONFIG_SGI_IP35) || defined (CONFIG_IA64_SGI_SN1) || defined (CONFIG_IA64_GENERIC) board = find_lboard_module( (lboard_t *)KL_CONFIG_INFO(peer_nasid), NODEPDA(cnodeid)->module_id); - -#else - board = find_lboard_modslot((lboard_t *)KL_CONFIG_INFO(peer_nasid), - NODEPDA(cnodeid)->module_id, slot); - -#endif /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ } if (board == NULL) { #if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("Could not find PROM info for vertex %v, " + printk(KERN_WARNING "Could not find PROM info for vertex %v, " "FRU analyzer may fail", vhdl); #else - PRINT_WARNING("Could not find PROM info for vertex 0x%x, " + printk(KERN_WARNING "Could not find PROM info for vertex 0x%p, " "FRU analyzer may fail", - vhdl); + (void *)vhdl); #endif return; } @@ -918,7 +838,6 @@ DBG("io_init_node: Found XBOW widget_partnum= 0x%x\n", widget_partnum); npdap->basew_id = 0; -#if defined(BRINGUP) } else if (widget_partnum == XG_WIDGET_PART_NUM) { /* * OK, WTF do we do here if we have an XG direct connected to a HUB/Bedrock??? @@ -926,11 +845,10 @@ */ npdap->basew_id = 0; npdap->basew_id = (((*(volatile int32_t *)(NODE_SWIN_BASE(COMPACT_TO_NASID_NODEID(cnodeid), 0) + BRIDGE_WID_CONTROL))) & WIDGET_WIDGET_ID); -#endif } else { npdap->basew_id = (((*(volatile int32_t *)(NODE_SWIN_BASE(COMPACT_TO_NASID_NODEID(cnodeid), 0) + BRIDGE_WID_CONTROL))) & WIDGET_WIDGET_ID); - panic(" ****io_init_node: Unknown Widget Part Number 0x%x Widgt ID 0x%x attached to Hubv 0x%p ****\n", widget_partnum, npdap->basew_id, hubv); + panic(" ****io_init_node: Unknown Widget Part Number 0x%x Widgt ID 0x%x attached to Hubv 0x%p ****\n", widget_partnum, npdap->basew_id, (void *)hubv); /*NOTREACHED*/ } @@ -1037,7 +955,7 @@ #define __DEVSTR3 "/lun/0/disk/partition/" #define __DEVSTR4 "/../ef" -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC +#if defined(CONFIG_IA64_SGI_SN1) /* * Currently, we need to allow for 5 IBrick slots with 1 FC each * plus an internal 1394. @@ -1045,6 +963,8 @@ * ioconfig starts numbering SCSI's at NUM_BASE_IO_SCSI_CTLR. */ #define NUM_BASE_IO_SCSI_CTLR 6 +#else +#define NUM_BASE_IO_SCSI_CTLR 6 #endif /* * This tells ioconfig where it can start numbering scsi controllers. 
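[Illustrative aside, not part of the patch] The numbering convention described in the comments above can be sketched in a few lines: controller numbers below NUM_BASE_IO_SCSI_CTLR are reserved for the base I/O (the five IBrick FC ports plus the internal 1394), and ioconfig starts numbering the SCSI controllers it discovers at NUM_BASE_IO_SCSI_CTLR. The helper assign_scsi_ctlr_num() and the counter next_ctlr_index below are hypothetical names used only for this sketch; they do not appear in the patched code.

/*
 * Minimal sketch of the controller-numbering convention, assuming
 * NUM_BASE_IO_SCSI_CTLR as defined above.  Hypothetical helper --
 * shown for illustration only.
 */
static int next_ctlr_index;	/* count of controllers found after the base I/O */

static int assign_scsi_ctlr_num(void)
{
	/* 0 .. NUM_BASE_IO_SCSI_CTLR-1 belong to the base I/O; everything
	 * discovered later is numbered after that block. */
	return NUM_BASE_IO_SCSI_CTLR + next_ctlr_index++;
}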
@@ -1072,7 +992,6 @@ for (i=0; i -extern devfs_handle_t ioc3_console_vhdl_get(void); devfs_handle_t sys_critical_graph_root = GRAPH_VERTEX_NONE; /* Define the system critical vertices and connect them through @@ -1251,6 +1166,7 @@ { char name[MAXDEVNAME]; devfs_handle_t console_vhdl, pci_vhdl, enet_vhdl; + devfs_handle_t ioc3_console_vhdl_get(void); DBG("baseio_ctlr_num_set; FIXME\n"); @@ -1335,7 +1251,7 @@ rtn_val = pcibr_alloc_all_rrbs(vhdl, 0, 4,1, 4,0, 0,0, 0,0); } if (rtn_val) - PRINT_WARNING("sn00_rrb_alloc: pcibr_alloc_all_rrbs failed"); + printk(KERN_WARNING "sn00_rrb_alloc: pcibr_alloc_all_rrbs failed"); if ((vendor_list[5] != PCIIO_VENDOR_ID_NONE) && (vendor_list[7] != PCIIO_VENDOR_ID_NONE)) { @@ -1355,7 +1271,7 @@ rtn_val = pcibr_alloc_all_rrbs(vhdl, 1, 4,1, 4,0, 0,0, 0,0); } if (rtn_val) - PRINT_WARNING("sn00_rrb_alloc: pcibr_alloc_all_rrbs failed"); + printk(KERN_WARNING "sn00_rrb_alloc: pcibr_alloc_all_rrbs failed"); } @@ -1379,7 +1295,7 @@ #endif active = 0; - for (cnodeid = 0; cnodeid < maxnodes; cnodeid++) { + for (cnodeid = 0; cnodeid < numnodes; cnodeid++) { #ifdef LINUX_KERNEL_THREADS char thread_name[16]; extern int io_init_pri; @@ -1428,7 +1344,7 @@ #endif /* LINUX_KERNEL_THREADS */ - for (cnodeid = 0; cnodeid < maxnodes; cnodeid++) + for (cnodeid = 0; cnodeid < numnodes; cnodeid++) /* * Update information generated by IO init. */ diff -Nru a/arch/ia64/sn/io/module.c b/arch/ia64/sn/io/module.c --- a/arch/ia64/sn/io/module.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/module.c Tue Mar 12 13:58:15 2002 @@ -4,13 +4,14 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #include #include #include +#include +#include #include #include #include @@ -25,7 +26,7 @@ #include -/* #define LDEBUG 1 */ +/* #define LDEBUG 1 */ #ifdef LDEBUG #define DPRINTF printk @@ -173,9 +174,35 @@ lboard_t *board; klmod_serial_num_t *comp; char * bcopy(const char * src, char * dest, int count); + char serial_number[16]; + + /* + * record brick serial number + */ + board = find_lboard((lboard_t *) KL_CONFIG_INFO(nasid), KLTYPE_SNIA); + + if (! board || KL_CONFIG_DUPLICATE_BOARD(board)) + { +#if LDEBUG + printf ("module_probe_snum: no IP35 board found!\n"); +#endif + return 0; + } + + board_serial_number_get( board, serial_number ); + if( serial_number[0] != '\0' ) { + encode_str_serial( serial_number, m->snum.snum_str ); + m->snum_valid = 1; + } +#if LDEBUG + else { + printf("module_probe_snum: brick serial number is null!\n"); + } + printf("module_probe_snum: brick serial number == %s\n", serial_number); +#endif /* DEBUG */ board = find_lboard((lboard_t *) KL_CONFIG_INFO(nasid), - KLTYPE_MIDPLANE8); + KLTYPE_IOBRICK_XBOW); if (! 
board || KL_CONFIG_DUPLICATE_BOARD(board)) return 0; @@ -196,13 +223,13 @@ if (comp->snum.snum_str[0] != '\0') { bcopy(comp->snum.snum_str, - m->snum.snum_str, + m->sys_snum, MAX_SERIAL_NUM_SIZE); - m->snum_valid = 1; + m->sys_snum_valid = 1; } } - if (m->snum_valid) + if (m->sys_snum_valid) return 1; else { DPRINTF("Invalid serial number for module %d, " @@ -227,8 +254,7 @@ for (node = 0; node < numnodes; node++) { nasid = COMPACT_TO_NASID_NODEID(node); - board = find_lboard((lboard_t *) KL_CONFIG_INFO(nasid), - KLTYPE_IP27); + board = find_lboard((lboard_t *) KL_CONFIG_INFO(nasid), KLTYPE_SNIA); ASSERT(board); m = module_add_node(board->brd_module, node); @@ -241,7 +267,7 @@ nserial); if (nserial == 0) - PRINT_WARNING("io_module_init: No serial number found.\n"); + printk(KERN_WARNING "io_module_init: No serial number found.\n"); } elsc_t *get_elsc(void) diff -Nru a/arch/ia64/sn/io/pci.c b/arch/ia64/sn/io/pci.c --- a/arch/ia64/sn/io/pci.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/pci.c Tue Mar 12 13:58:15 2002 @@ -1,12 +1,12 @@ /* * + * SNI64 specific PCI support for SNI IO. + * * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * - * SNI64 specific PCI support for SNI IO. - * - * Copyright (C) 1997, 1998, 2000 Colin Ngam + * Copyright (c) 1997, 1998, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #include #include @@ -14,7 +14,8 @@ #include #include #include -#include +#include +#include #include #include #include @@ -237,10 +238,6 @@ sgi_master_io_infr_init(); -#ifdef BRINGUP - if ( IS_RUNNING_ON_SIMULATOR() ) - return; -#endif /* sn1_io_infrastructure_init(); */ pci_conf = snia64_pci_ops; } @@ -251,8 +248,6 @@ int i; unsigned int size; - devfs_handle_t bridge_vhdl = pci_bus_to_vertex(d->bus->number); - /* IOC3 only decodes 0x20 bytes of the config space, reading * beyond that is relatively benign but writing beyond that * (especially the base address registers) will shut down the @@ -294,5 +289,12 @@ d->subsystem_device = 0; } + +#else +void sn1_pci_find_bios(void) {} +void pci_fixup_ioc3(struct pci_dev *d) {} +struct list_head pci_root_buses; +struct list_head pci_root_buses; +struct list_head pci_devices; #endif /* CONFIG_PCI */ diff -Nru a/arch/ia64/sn/io/pci_bus_cvlink.c b/arch/ia64/sn/io/pci_bus_cvlink.c --- a/arch/ia64/sn/io/pci_bus_cvlink.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/pci_bus_cvlink.c Tue Mar 12 13:58:15 2002 @@ -4,19 +4,21 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. 
*/ +#include #include #include #include +#include #include #include #include #include #include -#include +#include +#include #include #include #include @@ -26,20 +28,19 @@ #include #include #include -#include #include #include #include -#include - +#include #include -// #include #include #include -extern int bridge_rev_b_data_check_disable; #include +#include +#include + +extern int bridge_rev_b_data_check_disable; -#define MAX_PCI_XWIDGET 256 devfs_handle_t busnum_to_pcibr_vhdl[MAX_PCI_XWIDGET]; nasid_t busnum_to_nid[MAX_PCI_XWIDGET]; void * busnum_to_atedmamaps[MAX_PCI_XWIDGET]; @@ -55,6 +56,8 @@ struct ioports_to_tlbs_s ioports_to_tlbs[MAX_IOPORTS_CHUNKS]; unsigned long sn1_allocate_ioports(unsigned long pci_address); +extern void sn1_init_irq_desc(void); + /* @@ -104,7 +107,7 @@ int func = 0; char name[16]; devfs_handle_t pci_bus = NULL; - devfs_handle_t device_vertex = NULL; + devfs_handle_t device_vertex = (devfs_handle_t)NULL; /* * Go get the pci bus vertex. @@ -129,11 +132,25 @@ slot = PCI_SLOT(devfn); func = PCI_FUNC(devfn); - if (func == 0) + /* + * For a NON Multi-function card the name of the device looks like: + * ../pci/1, ../pci/2 .. + */ + if (func == 0) { sprintf(name, "%d", slot); - else - sprintf(name, "%d%c", slot, 'a'+func); - + if (hwgraph_traverse(pci_bus, name, &device_vertex) == + GRAPH_SUCCESS) { + if (device_vertex) { + return(device_vertex); + } + } + } + + /* + * This maybe a multifunction card. It's names look like: + * ../pci/1a, ../pci/1b, etc. + */ + sprintf(name, "%d%c", slot, 'a'+func); if (hwgraph_traverse(pci_bus, name, &device_vertex) != GRAPH_SUCCESS) { if (!device_vertex) { return(NULL); @@ -144,6 +161,36 @@ } /* + * For the given device, initialize the addresses for both the Device(x) Flush + * Write Buffer register and the Xbow Flush Register for the port the PCI bus + * is connected. + */ +static void +set_flush_addresses(struct pci_dev *device_dev, + struct sn1_device_sysdata *device_sysdata) +{ + pciio_info_t pciio_info = pciio_info_get(device_sysdata->vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + + device_sysdata->dma_buf_sync = (volatile unsigned int *) + &(bridge->b_wr_req_buf[pciio_slot].reg); + device_sysdata->xbow_buf_sync = (volatile unsigned int *) + XBOW_PRIO_LINKREGS_PTR(NODE_SWIN_BASE(get_nasid(), 0), + pcibr_soft->bs_xid); +#ifdef DEBUG + + printk("set_flush_addresses: dma_buf_sync %p xbow_buf_sync %p\n", + device_sysdata->dma_buf_sync, device_sysdata->xbow_buf_sync); + + while((volatile unsigned int )*device_sysdata->dma_buf_sync); + while((volatile unsigned int )*device_sysdata->xbow_buf_sync); +#endif + +} + +/* * Most drivers currently do not properly tell the arch specific pci dma * interfaces whether they can handle A64. Here is where we privately * keep track of this. @@ -189,7 +236,7 @@ * Address. This address via the tlb entries generates the PCI Address * allocated by the SN1 IO Infrastructure Layer. */ -static unsigned long sn1_ioport_num = 0x100; /* Reserve room for Legacy stuff */ +static unsigned long sn1_ioport_num = 0x1000; /* Reserve room for Legacy stuff */ unsigned long sn1_allocate_ioports(unsigned long pci_address) { @@ -209,17 +256,19 @@ * Manual for details. 
*/ ioport_index = sn1_ioport_num / SN1_IOPORTS_UNIT; - ioports_to_tlbs[ioport_index].ppn = pci_address; + ioports_to_tlbs[ioport_index].p = 1; /* Present Bit */ - ioports_to_tlbs[ioport_index].ma = 5; /* Memory Attributes */ - ioports_to_tlbs[ioport_index].a = 0; /* Set Data Access Bit Fault */ - ioports_to_tlbs[ioport_index].d = 0; /* Dirty Bit */ - ioports_to_tlbs[ioport_index].pl = 3;/* Privilege Level - All levels can R/W*/ - ioports_to_tlbs[ioport_index].ar = 2; /* Access Rights - R/W only*/ + ioports_to_tlbs[ioport_index].rv_1 = 0; /* 1 Bit */ + ioports_to_tlbs[ioport_index].ma = 4; /* Memory Attributes 3 bits*/ + ioports_to_tlbs[ioport_index].a = 1; /* Set Data Access Bit Fault 1 Bit*/ + ioports_to_tlbs[ioport_index].d = 1; /* Dirty Bit */ + ioports_to_tlbs[ioport_index].pl = 0;/* Privilege Level - All levels can R/W*/ + ioports_to_tlbs[ioport_index].ar = 3; /* Access Rights - R/W only*/ + ioports_to_tlbs[ioport_index].ppn = pci_address >> 12; /* 4K page size */ ioports_to_tlbs[ioport_index].ed = 0; /* Exception Deferral Bit */ ioports_to_tlbs[ioport_index].ig = 0; /* Ignored */ - printk("sn1_allocate_ioports: ioport_index 0x%x ioports_to_tlbs 0x%p\n", ioport_index, ioports_to_tlbs[ioport_index].ppn); + /* printk("sn1_allocate_ioports: ioport_index 0x%x ioports_to_tlbs 0x%p\n", ioport_index, ioports_to_tlbs[ioport_index]); */ sn1_ioport_num += SN1_IOPORTS_UNIT; @@ -241,18 +290,29 @@ struct pci_dev *device_dev = NULL; struct sn1_widget_sysdata *widget_sysdata; struct sn1_device_sysdata *device_sysdata; +#ifdef SN1_IOPORTS unsigned long ioport; +#endif pciio_intr_t intr_handle; int cpuid, bit; - devfs_handle_t *device_vertex; + devfs_handle_t device_vertex; pciio_intr_line_t lines; extern void sn1_pci_find_bios(void); +#ifdef CONFIG_IA64_SGI_SN2 + extern int numnodes; + int cnode; +#endif /* CONFIG_IA64_SGI_SN2 */ -unsigned long res; - if (arg == 0) { + sn1_init_irq_desc(); sn1_pci_find_bios(); +#ifdef CONFIG_IA64_SGI_SN2 + for (cnode = 0; cnode < numnodes; cnode++) { + extern void intr_init_vecblk(nodepda_t *npda, cnodeid_t, int); + intr_init_vecblk(NODEPDA(cnode), cnode, 0); + } +#endif /* CONFIG_IA64_SGI_SN2 */ return; } @@ -274,13 +334,6 @@ #endif done_probing = 1; - if ( IS_RUNNING_ON_SIMULATOR() ) { - printk("sn1_pci_fixup not supported on simulator.\n"); - return; - } - -#ifdef REAL_HARDWARE - /* * Initialize the pci bus vertex in the pci_bus struct. */ @@ -296,8 +349,35 @@ * set the root start and end so that drivers calling check_region() * won't see a conflict */ - ioport_resource.start |= IO_SWIZ_BASE; - ioport_resource.end |= (HSPEC_SWIZ_BASE-1); +#ifdef SN1_IOPORTS + ioport_resource.start = sn1_ioport_num; + ioport_resource.end = 0xffff; +#else +#if defined(CONFIG_IA64_SGI_SN1) + if ( IS_RUNNING_ON_SIMULATOR() ) { + /* + * IDE legacy IO PORTs are supported in Medusa. + * Just open up IO PORTs from 0 .. ioport_resource.end. + */ + ioport_resource.start = 0; + } else { + /* + * We do not support Legacy IO PORT numbers. + */ + ioport_resource.start |= IO_SWIZ_BASE | __IA64_UNCACHED_OFFSET; + } + ioport_resource.end |= (HSPEC_SWIZ_BASE-1) | __IA64_UNCACHED_OFFSET; +#else + // Need something here for sn2.... ZXZXZX +#endif +#endif + + /* + * Set the root start and end for Mem Resource. + */ + iomem_resource.start = 0; + iomem_resource.end = 0xffffffffffffffff; + /* * Initialize the device vertex in the pci_dev struct. 
*/ @@ -307,6 +387,7 @@ u16 cmd; devfs_handle_t vhdl; unsigned long size; + extern int bit_pos_to_irq(int); if (device_dev->vendor == PCI_VENDOR_ID_SGI && device_dev->device == PCI_DEVICE_ID_SGI_IOC3) { @@ -320,6 +401,12 @@ GFP_KERNEL); device_sysdata->vhdl = devfn_to_vertex(device_dev->bus->number, device_dev->devfn); device_sysdata->isa64 = 0; + /* + * Set the xbridge Device(X) Write Buffer Flush and Xbow Flush + * register addresses. + */ + (void) set_flush_addresses(device_dev, device_sysdata); + device_dev->sysdata = (void *) device_sysdata; set_sn1_pci64(device_dev); pci_read_config_word(device_dev, PCI_COMMAND, &cmd); @@ -336,11 +423,8 @@ size = device_dev->resource[idx].end - device_dev->resource[idx].start; if (size) { - res = 0; - res = pciio_config_get(vhdl, (unsigned) PCI_BASE_ADDRESS_0 + idx, 4); device_dev->resource[idx].start = (unsigned long)pciio_pio_addr(vhdl, 0, PCIIO_SPACE_WIN(idx), 0, size, 0, PCIIO_BYTE_STREAM); - -/* printk("sn1_pci_fixup: Mapped Address = 0x%p size = 0x%x\n", device_dev->resource[idx].start, size); */ + device_dev->resource[idx].start |= __IA64_UNCACHED_OFFSET; } else continue; @@ -348,6 +432,7 @@ device_dev->resource[idx].end = device_dev->resource[idx].start + size; +#ifdef CONFIG_IA64_SGI_SN1 /* * Adjust the addresses to go to the SWIZZLE .. */ @@ -355,15 +440,25 @@ device_dev->resource[idx].start & 0xfffff7ffffffffff; device_dev->resource[idx].end = device_dev->resource[idx].end & 0xfffff7ffffffffff; - res = 0; - res = pciio_config_get(vhdl, (unsigned) PCI_BASE_ADDRESS_0 + idx, 4); +#endif + if (device_dev->resource[idx].flags & IORESOURCE_IO) { cmd |= PCI_COMMAND_IO; +#ifdef SN1_IOPORTS ioport = sn1_allocate_ioports(device_dev->resource[idx].start); - /* device_dev->resource[idx].start = ioport; */ - /* device_dev->resource[idx].end = ioport + SN1_IOPORTS_UNIT */ + if (ioport < 0) { + printk("sn1_pci_fixup: PCI Device 0x%x on PCI Bus %d not mapped to IO PORTs .. IO PORTs exhausted\n", device_dev->devfn, device_dev->bus->number); + continue; + } + pciio_config_set(vhdl, (unsigned) PCI_BASE_ADDRESS_0 + (idx * 4), 4, (res + (ioport & 0xfff))); + +printk("sn1_pci_fixup: ioport number %d mapped to pci address 0x%lx\n", ioport, (res + (ioport & 0xfff))); + + device_dev->resource[idx].start = ioport; + device_dev->resource[idx].end = ioport + SN1_IOPORTS_UNIT; +#endif } - else if (device_dev->resource[idx].flags & IORESOURCE_MEM) + if (device_dev->resource[idx].flags & IORESOURCE_MEM) cmd |= PCI_COMMAND_MEMORY; } /* @@ -371,17 +466,24 @@ */ size = device_dev->resource[PCI_ROM_RESOURCE].end - device_dev->resource[PCI_ROM_RESOURCE].start; - device_dev->resource[PCI_ROM_RESOURCE].start = + + if (size) { + device_dev->resource[PCI_ROM_RESOURCE].start = (unsigned long) pciio_pio_addr(vhdl, 0, PCIIO_SPACE_ROM, 0, size, 0, PCIIO_BYTE_STREAM); - device_dev->resource[PCI_ROM_RESOURCE].end = + device_dev->resource[PCI_ROM_RESOURCE].start |= __IA64_UNCACHED_OFFSET; + device_dev->resource[PCI_ROM_RESOURCE].end = device_dev->resource[PCI_ROM_RESOURCE].start + size; - /* - * go through synergy swizzled space - */ - device_dev->resource[PCI_ROM_RESOURCE].start &= 0xfffff7ffffffffffUL; - device_dev->resource[PCI_ROM_RESOURCE].end &= 0xfffff7ffffffffffUL; +#ifdef CONFIG_IA64_SGI_SN1 + /* + * go through synergy swizzled space + */ + device_dev->resource[PCI_ROM_RESOURCE].start &= 0xfffff7ffffffffffUL; + device_dev->resource[PCI_ROM_RESOURCE].end &= 0xfffff7ffffffffffUL; +#endif + + } /* * Update the Command Word on the Card. @@ -390,14 +492,11 @@ /* bit gets dropped .. 
no harm */ pci_write_config_word(device_dev, PCI_COMMAND, cmd); - pci_read_config_byte(device_dev, PCI_INTERRUPT_PIN, &lines); -#ifdef BRINGUP + pci_read_config_byte(device_dev, PCI_INTERRUPT_PIN, (unsigned char *)&lines); if (device_dev->vendor == PCI_VENDOR_ID_SGI && device_dev->device == PCI_DEVICE_ID_SGI_IOC3 ) { lines = 1; } - -#endif device_sysdata = (struct sn1_device_sysdata *)device_dev->sysdata; device_vertex = device_sysdata->vhdl; @@ -406,13 +505,33 @@ bit = intr_handle->pi_irq; cpuid = intr_handle->pi_cpu; +#ifdef CONFIG_IA64_SGI_SN1 irq = bit_pos_to_irq(bit); +#else /* SN2 */ + irq = bit; +#endif irq = irq + (cpuid << 8); - pciio_intr_connect(intr_handle, NULL, NULL, NULL); + pciio_intr_connect(intr_handle); device_dev->irq = irq; +#ifdef ajmtestintr + { + int slot = PCI_SLOT(device_dev->devfn); + static int timer_set = 0; + pcibr_intr_t pcibr_intr = (pcibr_intr_t)intr_handle; + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + extern void intr_test_handle_intr(int, void*, struct pt_regs *); + + if (!timer_set) { + intr_test_set_timer(); + timer_set = 1; + } + intr_test_register_irq(irq, pcibr_soft, slot); + request_irq(irq, intr_test_handle_intr,0,NULL, NULL); + } +#endif } -#endif /* REAL_HARDWARE */ + #if 0 { @@ -430,6 +549,10 @@ printk("pci_fixup_ioc3: Devreg 6 0x%x\n", bridge->b_device[6].reg); printk("pci_fixup_ioc3: Devreg 7 0x%x\n", bridge->b_device[7].reg); } + +printk("testing Big Window: 0xC0000200c0000000 %p\n", *( (volatile uint64_t *)0xc0000200a0000000)); +printk("testing Big Window: 0xC0000200c0000008 %p\n", *( (volatile uint64_t *)0xc0000200a0000008)); + #endif } @@ -472,12 +595,14 @@ * Loop throught this vertex and get the Xwidgets .. */ for (widgetnum = HUB_WIDGET_ID_MAX; widgetnum >= HUB_WIDGET_ID_MIN; widgetnum--) { +#if 0 { int pos; char dname[256]; pos = devfs_generate_path(xtalk, dname, 256); printk("%s : path= %s\n", __FUNCTION__, &dname[pos]); } +#endif sprintf(pathname, "%d", widgetnum); xwidget = NULL; @@ -512,12 +637,12 @@ */ master_node_vertex = device_master_get(xwidget); if (!master_node_vertex) { - printk("WARNING: pci_bus_map_create: Unable to get .master for vertex 0x%p\n", xwidget); + printk("WARNING: pci_bus_map_create: Unable to get .master for vertex 0x%p\n", (void *)xwidget); } hubinfo_get(master_node_vertex, &hubinfo); if (!hubinfo) { - printk("WARNING: pci_bus_map_create: Unable to get hubinfo for master node vertex 0x%p\n", master_node_vertex); + printk("WARNING: pci_bus_map_create: Unable to get hubinfo for master node vertex 0x%p\n", (void *)master_node_vertex); return(1); } else { busnum_to_nid[num_bridges - 1] = hubinfo->h_nasid; @@ -527,12 +652,12 @@ * Pre assign DMA maps needed for 32 Bits Page Map DMA. 
*/ busnum_to_atedmamaps[num_bridges - 1] = (void *) kmalloc( - sizeof(struct sn1_dma_maps_s) * 512, GFP_KERNEL); + sizeof(struct sn1_dma_maps_s) * MAX_ATE_MAPS, GFP_KERNEL); if (!busnum_to_atedmamaps[num_bridges - 1]) - printk("WARNING: pci_bus_map_create: Unable to precreate ATE DMA Maps for busnum %d vertex 0x%p\n", num_bridges - 1, xwidget); + printk("WARNING: pci_bus_map_create: Unable to precreate ATE DMA Maps for busnum %d vertex 0x%p\n", num_bridges - 1, (void *)xwidget); memset(busnum_to_atedmamaps[num_bridges - 1], 0x0, - sizeof(struct sn1_dma_maps_s) * 512); + sizeof(struct sn1_dma_maps_s) * MAX_ATE_MAPS); } @@ -552,14 +677,10 @@ { devfs_handle_t devfs_hdl = NULL; - devfs_handle_t module_comp = NULL; - devfs_handle_t node = NULL; devfs_handle_t xtalk = NULL; - graph_vertex_place_t placeptr = EDGE_PLACE_WANT_REAL_EDGES; int rv = 0; char name[256]; int master_iobrick; - moduleid_t iobrick_id; int i; /* @@ -619,66 +740,4 @@ } return(0); -} - -/* - * sgi_pci_intr_support - - */ -int -sgi_pci_intr_support (unsigned int requested_irq, device_desc_t *dev_desc, - devfs_handle_t *bus_vertex, pciio_intr_line_t *lines, - devfs_handle_t *device_vertex) - -{ - - unsigned int bus; - unsigned int devfn; - struct pci_dev *pci_dev; - unsigned char intr_pin = 0; - struct sn1_widget_sysdata *widget_sysdata; - struct sn1_device_sysdata *device_sysdata; - - if (!dev_desc || !bus_vertex || !device_vertex) { - printk("WARNING: sgi_pci_intr_support: Invalid parameter dev_desc 0x%p, bus_vertex 0x%p, device_vertex 0x%p\n", dev_desc, bus_vertex, device_vertex); - return(-1); - } - - devfn = (requested_irq >> 8) & 0xff; - bus = (requested_irq >> 16) & 0xffff; - pci_dev = pci_find_slot(bus, devfn); - widget_sysdata = (struct sn1_widget_sysdata *)pci_dev->bus->sysdata; - *bus_vertex = widget_sysdata->vhdl; - device_sysdata = (struct sn1_device_sysdata *)pci_dev->sysdata; - *device_vertex = device_sysdata->vhdl; -#if 0 - { - int pos; - char dname[256]; - pos = devfs_generate_path(*device_vertex, dname, 256); - printk("%s : path= %s pos %d\n", __FUNCTION__, &dname[pos], pos); - } -#endif /* BRINGUP */ - - - /* - * Get the Interrupt PIN. - */ - pci_read_config_byte(pci_dev, PCI_INTERRUPT_PIN, &intr_pin); - *lines = (pciio_intr_line_t)intr_pin; - -#ifdef BRINGUP - /* - * ioc3 can't decode the PCI_INTERRUPT_PIN field of its config - * space so we have to set it here - */ - if (pci_dev->vendor == PCI_VENDOR_ID_SGI && - pci_dev->device == PCI_DEVICE_ID_SGI_IOC3 ) { - *lines = 1; - } -#endif /* BRINGUP */ - - /* Not supported currently */ - *dev_desc = NULL; - return(0); - } diff -Nru a/arch/ia64/sn/io/pci_dma.c b/arch/ia64/sn/io/pci_dma.c --- a/arch/ia64/sn/io/pci_dma.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/pci_dma.c Tue Mar 12 13:58:15 2002 @@ -3,11 +3,9 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Leo Dagum + * Copyright (C) 2000,2002 Silicon Graphics, Inc. All rights reserved. 
*/ -#include #include #include #include @@ -15,36 +13,22 @@ #include #include -#ifndef LANGUAGE_C -#define LANGUAGE_C 99 -#endif -#ifndef _LANGUAGE_C -#define _LANGUAGE_C 99 -#endif - +#include #include #include +#include #include #include #include #include -#include +#include #include #include #include - -/* - * this is REALLY ugly, blame it on gcc's lame inlining that we - * have to put procedures in header files - */ -#if LANGUAGE_C == 99 -#undef LANGUAGE_C -#endif -#if CONFIG_IA64_SGI_IO == 99 -#undef CONFIG_IA64_SGI_IO -#endif +#include pciio_dmamap_t get_free_pciio_dmamap(devfs_handle_t); +void free_pciio_dmamap(pcibr_dmamap_t); struct sn1_dma_maps_s *find_sn1_dma_map(dma_addr_t, unsigned char); extern devfs_handle_t busnum_to_pcibr_vhdl[]; extern nasid_t busnum_to_nid[]; @@ -62,7 +46,7 @@ /* * Darn, we need to get the maps allocated for this bus. */ - for (i=0; i<512; i++) { + for (i=0; idma_addr) { sn1_dma_map->dma_addr = -1; return( (pciio_dmamap_t) sn1_dma_map ); } } -printk("get_pciio_dmamap: Unable to find a free dmamap\n"); return(NULL); } +/* + * Free pciio_dmamap_t entry. + */ +void +free_pciio_dmamap(pcibr_dmamap_t dma_map) +{ + struct sn1_dma_maps_s *sn1_dma_map; + + sn1_dma_map = (struct sn1_dma_maps_s *) dma_map; + sn1_dma_map->dma_addr = 0; + +} + +/* + * sn_dma_sync: This routine flushes all DMA buffers for the device into the II. + * This does not mean that the data is in the "Coherence Domain". But it + * is very close. + */ +void +sn_dma_sync( struct pci_dev *hwdev ) +{ + + struct sn1_device_sysdata *device_sysdata; + volatile unsigned long dummy; + + /* + * It is expected that on IA64 platform, a DMA sync ensures that all + * the DMA (dma_handle) are complete and coherent. + * 1. Flush Write Buffers from Bridge. + * 2. Flush Xbow Port. + */ + device_sysdata = (struct sn1_device_sysdata *)hwdev->sysdata; + dummy = (volatile unsigned long ) *device_sysdata->dma_buf_sync; + + /* + * For the Xbow Port flush, we maybe denied the request because + * someone else may be flushing the Port .. try again. + */ + while((volatile unsigned long ) *device_sysdata->xbow_buf_sync) { + udelay(2); + } +} + + struct sn1_dma_maps_s * find_sn1_dma_map(dma_addr_t dma_addr, unsigned char busnum) { @@ -92,13 +119,14 @@ sn1_dma_map = busnum_to_atedmamaps[busnum]; - for (i=0; i<512; i++, sn1_dma_map++) { + for (i=0; idma_addr == dma_addr) { return( sn1_dma_map ); } } printk("find_pciio_dmamap: Unable find the corresponding dma map\n"); + return(NULL); } @@ -139,9 +167,15 @@ /* * This device supports 64bits DMA addresses. */ +#ifdef CONFIG_IA64_SGI_SN1 *dma_handle = pciio_dmatrans_addr(vhdl, NULL, temp_ptr, size, PCIBR_BARRIER | PCIIO_BYTE_STREAM | PCIIO_DMA_CMD | PCIIO_DMA_A64 ); +#else /* SN2 */ + *dma_handle = pciio_dmatrans_addr(vhdl, NULL, temp_ptr, size, + PCIBR_BARRIER | PCIIO_DMA_CMD | PCIIO_DMA_A64 ); +#endif + return (ret); } @@ -152,8 +186,14 @@ * First try to get 32 Bit Direct Map Support. 
*/ if (IS_PCI32G(hwdev)) { +#ifdef CONFIG_IA64_SGI_SN1 *dma_handle = pciio_dmatrans_addr(vhdl, NULL, temp_ptr, size, PCIBR_BARRIER | PCIIO_BYTE_STREAM | PCIIO_DMA_CMD); +#else /* SN2 */ + *dma_handle = pciio_dmatrans_addr(vhdl, NULL, temp_ptr, size, + PCIBR_BARRIER | PCIIO_DMA_CMD); +#endif + if (dma_handle) { return (ret); } else { @@ -182,7 +222,7 @@ } /* - * On sn1 we use the orig_address entry of the scatterlist to store + * On sn1 we use the page entry of the scatterlist to store * the physical address corresponding to the given virtual address */ int @@ -208,18 +248,48 @@ device_sysdata = (struct sn1_device_sysdata *) hwdev->sysdata; vhdl = device_sysdata->vhdl; for (i = 0; i < nents; i++, sg++) { - sg->orig_address = (char *)NULL; + /* this catches incorrectly written drivers that + attempt to map scatterlists that they have + previously mapped. we print a warning and + continue, but the driver should be fixed */ + switch (((u64)sg->address) >> 60) { + case 0xa: + case 0xb: +#ifdef DEBUG +/* This needs to be cleaned up at some point. */ + NAG("A PCI driver (for device at%8s) has attempted to " + "map a scatterlist that was previously mapped at " + "%p - this is currently being worked around.\n", + hwdev->slot_name, (void *)sg->address); +#endif + temp_ptr = (u64)sg->address & TO_PHYS_MASK; + break; + case 0xe: /* a good address, we now map it. */ + temp_ptr = (paddr_t) __pa(sg->address); + break; + default: + printk(KERN_ERR + "Very bad address (%p) passed to sn1_pci_map_sg\n", + (void *)sg->address); + BUG(); + } + sg->page = (char *)NULL; dma_addr = 0; - temp_ptr = (paddr_t) __pa(sg->address); /* * Handle the most common case 64Bit cards. */ if (IS_PCIA64(hwdev)) { +#ifdef CONFIG_IA64_SGI_SN1 dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL, temp_ptr, sg->length, - PCIBR_BARRIER | PCIIO_BYTE_STREAM | - PCIIO_DMA_CMD | PCIIO_DMA_A64 ); + PCIIO_BYTE_STREAM | PCIIO_DMA_DATA | + PCIIO_DMA_A64 ); +#else + dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL, + temp_ptr, sg->length, + PCIIO_DMA_DATA | PCIIO_DMA_A64 ); +#endif sg->address = (char *)dma_addr; continue; } @@ -228,10 +298,14 @@ * Handle 32Bits and greater cards. */ if (IS_PCI32G(hwdev)) { +#ifdef CONFIG_IA64_SGI_SN1 dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL, temp_ptr, sg->length, - PCIBR_BARRIER | PCIIO_BYTE_STREAM | - PCIIO_DMA_CMD); + PCIIO_BYTE_STREAM | PCIIO_DMA_DATA); +#else + dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL, + temp_ptr, sg->length, PCIIO_DMA_DATA); +#endif if (dma_addr) { sg->address = (char *)dma_addr; continue; @@ -244,9 +318,12 @@ * Let's 32Bit Page map the request. */ dma_map = NULL; +#ifdef CONFIG_IA64_SGI_SN1 dma_map = pciio_dmamap_alloc(vhdl, NULL, sg->length, - PCIBR_BARRIER | PCIIO_BYTE_STREAM | - PCIIO_DMA_CMD); + PCIIO_BYTE_STREAM | PCIIO_DMA_DATA); +#else + dma_map = pciio_dmamap_alloc(vhdl, NULL, sg->length, PCIIO_DMA_DATA); +#endif if (!dma_map) { printk("pci_map_sg: Unable to allocate anymore 32Bits Page Map entries.\n"); BUG(); @@ -254,7 +331,7 @@ dma_addr = (dma_addr_t)pciio_dmamap_addr(dma_map, temp_ptr, sg->length); /* printk("pci_map_sg: dma_map 0x%p Phys Addr 0x%p dma_addr 0x%p\n", dma_map, temp_ptr, dma_addr); */ sg->address = (char *)dma_addr; - sg->orig_address = (char *)dma_map; + sg->page = (char *)dma_map; } @@ -278,20 +355,21 @@ BUG(); for (i = 0; i < nelems; i++, sg++) - if (sg->orig_address) { + if (sg->page) { /* - * We maintain the DMA Map pointer in sg->orig_address if + * We maintain the DMA Map pointer in sg->page if * it is ever allocated. 
*/ /* phys_to_virt((dma_addr_t)sg->address | ~0x80000000); */ - /* sg->address = sg->orig_address; */ + /* sg->address = sg->page; */ sg->address = (char *)-1; - sn1_dma_map = (struct sn1_dma_maps_s *)sg->orig_address; + sn1_dma_map = (struct sn1_dma_maps_s *)sg->page; pciio_dmamap_done((pciio_dmamap_t)sn1_dma_map); pciio_dmamap_free((pciio_dmamap_t)sn1_dma_map); sn1_dma_map->dma_addr = 0; - sg->orig_address = 0; + sg->page = 0; } + } /* @@ -335,10 +413,14 @@ /* * This device supports 64bits DMA addresses. */ +#ifdef CONFIG_IA64_SGI_SN1 dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL, temp_ptr, size, - PCIBR_BARRIER | PCIIO_BYTE_STREAM | PCIIO_DMA_CMD - | PCIIO_DMA_A64 ); + PCIIO_BYTE_STREAM | PCIIO_DMA_DATA | PCIIO_DMA_A64 ); +#else + dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL, + temp_ptr, size, PCIIO_DMA_DATA | PCIIO_DMA_A64 ); +#endif return (dma_addr); } @@ -349,9 +431,14 @@ * First try to get 32 Bit Direct Map Support. */ if (IS_PCI32G(hwdev)) { +#ifdef CONFIG_IA64_SGI_SN1 dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL, temp_ptr, size, - PCIBR_BARRIER | PCIIO_BYTE_STREAM | PCIIO_DMA_CMD); + PCIIO_BYTE_STREAM | PCIIO_DMA_DATA); +#else + dma_addr = (dma_addr_t) pciio_dmatrans_addr(vhdl, NULL, + temp_ptr, size, PCIIO_DMA_DATA); +#endif if (dma_addr) { return (dma_addr); } @@ -369,15 +456,19 @@ * Let's 32Bit Page map the request. */ dma_map = NULL; - dma_map = pciio_dmamap_alloc(vhdl, NULL, size, PCIBR_BARRIER | - PCIIO_BYTE_STREAM | PCIIO_DMA_CMD); +#ifdef CONFIG_IA64_SGI_SN1 + dma_map = pciio_dmamap_alloc(vhdl, NULL, size, PCIIO_BYTE_STREAM | + PCIIO_DMA_DATA); +#else + dma_map = pciio_dmamap_alloc(vhdl, NULL, size, PCIIO_DMA_DATA); +#endif if (!dma_map) { printk("pci_map_single: Unable to allocate anymore 32Bits Page Map entries.\n"); BUG(); } dma_addr = (dma_addr_t) pciio_dmamap_addr(dma_map, temp_ptr, size); - /* printk("pci_map_single: dma_map 0x%p Phys Addr 0x%p dma_addr 0x%p\n", dma_map, + /* printk("pci_map_single: dma_map 0x%p Phys Addr 0x%p dma_addr 0x%p\n", dma_map, temp_ptr, dma_addr); */ sn1_dma_map = (struct sn1_dma_maps_s *)dma_map; sn1_dma_map->dma_addr = dma_addr; @@ -414,7 +505,9 @@ if (direction == PCI_DMA_NONE) BUG(); - /* Nothing to do */ + + sn_dma_sync(hwdev); + } void @@ -422,7 +515,9 @@ { if (direction == PCI_DMA_NONE) BUG(); - /* Nothing to do */ + + sn_dma_sync(hwdev); + } unsigned long diff -Nru a/arch/ia64/sn/io/pciba.c b/arch/ia64/sn/io/pciba.c --- a/arch/ia64/sn/io/pciba.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/io/pciba.c Tue Mar 12 13:58:14 2002 @@ -1,1716 +1,958 @@ -/* $Id$ +/* + * arch/ia64/sn/io/pciba.c * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. + * IRIX PCIBA-inspired user mode PCI interface * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * requires: devfs + * + * device nodes show up in /dev/pci/BB/SS.F (where BB is the bus the + * device is on, SS is the slot the device is in, and F is the + * device's function on a multi-function card). + * + * when compiled into the kernel, it will only be initialized by the + * sgi sn1 specific initialization code. in this case, device nodes + * are under /dev/hw/..../ + * + * This file is subject to the terms and conditions of the GNU General + * Public License. See the file "COPYING" in the main directory of + * this archive for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. 
All rights reserved. + * + * 03262001 - Initial version by Chad Talbott */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include -#include -#endif -#define copyin(_a, _b, _c) copy_from_user(_b, _a, _c) +/* jesse's beefs: + + register_pci_device should be documented + + grossness with do_swap should be documented + + big, gross union'ized node_data should be replaced with independent + structures + + replace global list of nodes with global lists of resources. could + use object oriented approach of allocating and cleaning up + resources. + +*/ + -#ifndef DEBUG_PCIBA -#define DEBUG_PCIBA 0 +#include +#ifndef CONFIG_DEVFS_FS +# error PCIBA requires devfs #endif -/* v_mapphys does not percolate page offset back. */ -#define PCIBA_ALIGN_CHECK 1 +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include #include -/* grab an unused space code for "User DMA" space */ -#ifndef PCIBA_SPACE_UDMA -#define PCIBA_SPACE_UDMA (14) + +MODULE_DESCRIPTION("User mode PCI interface"); +MODULE_AUTHOR("Chad Talbott"); + + +#undef DEBUG_PCIBA +/* #define DEBUG_PCIBA */ + +#undef TRACE_PCIBA +/* #define TRACE_PCIBA */ + +#if defined(DEBUG_PCIBA) +# define DPRINTF(x...) printk(KERN_DEBUG x) +#else +# define DPRINTF(x...) #endif -#if DEBUG_REFCT -extern int hwgraph_vertex_refct(vertex_hdl_t); +#if defined(TRACE_PCIBA) +# if defined(__GNUC__) +# define TRACE() printk(KERN_DEBUG "%s:%d:%s\n", \ + __FILE__, __LINE__, __FUNCTION__) +# else +# define TRACE() printk(KERN_DEBUG "%s:%d\n", __LINE__, __FILE__) +# endif +#else +# define TRACE() #endif -extern int pci_user_dma_max_pages; -#define NEW(ptr) (ptr = kmem_zalloc(sizeof (*(ptr)), KM_SLEEP)) -#define DEL(ptr) (kfree(ptr)) -/* Oops -- no standard "pci address" type! */ -typedef uint64_t pciaddr_t; +typedef enum { failure, success } status; +typedef enum { false, true } boolean; -/* ================================================================ - * driver types - */ -typedef struct pciba_slot_s *pciba_slot_t; -typedef struct pciba_comm_s *pciba_comm_t; -typedef struct pciba_soft_s *pciba_soft_t; -typedef struct pciba_map_s *pciba_map_t, **pciba_map_h; -typedef struct pciba_dma_s *pciba_dma_t, **pciba_dma_h; -typedef struct pciba_bus_s *pciba_bus_t; - -#define TRACKED_SPACES 16 -struct pciba_comm_s { - devfs_handle_t conn; - pciba_bus_t bus; - int refct; - pciba_soft_t soft[TRACKED_SPACES][2]; - struct semaphore lock; - pciba_dma_t dmap; + +/* major data structures: + + struct node_data - + + one for each file registered with devfs. contains everything + that any file's fops would need to know about. + + struct dma_allocation - + + a single DMA allocation. only the 'dma' nodes care about + these. they are there primarily to allow the driver to look + up the kernel virtual address of dma buffers allocated by + pci_alloc_consistent, as the application is only given the + physical address (to program the device's dma, presumably) and + cannot supply the kernel virtual address when freeing the + buffer. + + it's also useful to maintain a list of buffers allocated + through a specific node to allow some sanity checking by this + driver. 
this prevents (for example) a broken application from + freeing buffers that it didn't allocate, or buffers allocated + on another node. + + global_node_list - + + a list of all nodes allocated. this allows the driver to free + all the memory it has 'kmalloc'd in case of an error, or on + module removal. + + global_dma_list - + + a list of all dma buffers allocated by this driver. this + allows the driver to 'pci_free_consistent' all buffers on + module removal or error. + +*/ + + +struct node_data { + /* flat list of all the device nodes. makes it easy to free + them all when we're unregistered */ + struct list_head global_node_list; + devfs_handle_t devfs_handle; + + void (* cleanup)(struct node_data *); + + union { + struct { + struct pci_dev * dev; + struct list_head dma_allocs; + boolean mmapped; + } dma; + struct { + struct pci_dev * dev; + u32 saved_rom_base_reg; + boolean mmapped; + } rom; + struct { + struct resource * res; + } base; + struct { + struct pci_dev * dev; + } config; + } u; }; -/* pciba_soft: device_info() for all openables */ -struct pciba_soft_s { - pciba_comm_t comm; - devfs_handle_t vhdl; - int refct; - pciio_space_t space; - size_t size; - pciio_space_t iomem; - pciaddr_t base; - unsigned flags; +struct dma_allocation { + struct list_head list; + + dma_addr_t handle; + void * va; + size_t size; }; -#define pciba_soft_get(v) (pciba_soft_t)hwgraph_fastinfo_get(v) -#define pciba_soft_set(v,i) hwgraph_fastinfo_set(v,(arbitrary_info_t)(i)) -#define pciba_soft_lock(soft) down(&soft->comm->lock) -#define pciba_soft_unlock(soft) up(&soft->comm->lock) +static LIST_HEAD(global_node_list); +static LIST_HEAD(global_dma_list); -/* pciba_map: data describing a mapping. - * (ie. a user mmap request) - */ -struct pciba_map_s { - pciba_map_t next; -#ifdef LATER - uthread_t *uthread; -#endif - __psunsigned_t handle; - uvaddr_t uvaddr; - size_t size; - pciio_piomap_t map; - pciio_space_t space; - pciaddr_t base; - unsigned flags; -}; -/* pciba_dma: data describing a DMA mapping. - */ -struct pciba_dma_s { - pciba_dma_t next; - iopaddr_t paddr; /* starting phys addr */ - caddr_t kaddr; /* starting kern addr */ - pciio_dmamap_t map; /* mapping resources (ugh!) 
*/ - pciaddr_t daddr; /* starting pci addr */ - size_t pages; /* size of block in pages */ - size_t bytes; /* size of block in bytes */ - __psunsigned_t handle; /* mapping handle */ +/* module entry points */ +int __init pciba_init(void); +void __exit pciba_exit(void); + +static status __init register_with_devfs(void); +static void __exit unregister_with_devfs(void); + +static status __init register_pci_device(devfs_handle_t device_dir_handle, + struct pci_dev * dev); + +/* file operations */ +static int generic_open(struct inode * inode, struct file * file); +static int rom_mmap(struct file * file, struct vm_area_struct * vma); +static int rom_release(struct inode * inode, struct file * file); +static int base_mmap(struct file * file, struct vm_area_struct * vma); +static int config_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, + unsigned long arg); +static int dma_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, + unsigned long arg); +static int dma_mmap(struct file * file, struct vm_area_struct * vma); + +/* support routines */ +static int mmap_pci_address(struct vm_area_struct * vma, unsigned long pci_va); +static int mmap_kernel_address(struct vm_area_struct * vma, void * kernel_va); + +#ifdef DEBUG_PCIBA +static void dump_nodes(struct list_head * nodes); +static void dump_allocations(struct list_head * dalp); +#endif + +/* file operations for each type of node */ +static struct file_operations rom_fops = { + owner: THIS_MODULE, + mmap: rom_mmap, + open: generic_open, + release: rom_release }; + -/* pciba_bus: common bus info for all openables - * descended from the same master vertex. - */ -struct pciba_bus_s { - struct semaphore lock; - pciba_map_t maps; /* stack of mappings */ - int refct; +static struct file_operations base_fops = { + owner: THIS_MODULE, + mmap: base_mmap, + open: generic_open }; -#define pciba_bus_lock(bus) down(&bus->lock) -#define pciba_bus_unlock(bus) up(&bus->lock) -typedef union ioctl_arg_buffer_u { - char data[IOCPARM_MASK + 1]; - uint8_t uc; - uint16_t us; - uint32_t ui; - uint64_t ud; - caddr_t ca; -#if ULI - struct uliargs uli; - struct uliargs32 uli32; -#endif -} ioctl_arg_buffer_t; +static struct file_operations config_fops = { + owner: THIS_MODULE, + ioctl: config_ioctl, + open: generic_open +}; -/* ================================================================ - * driver variables - */ -char *pciba_mversion = "mload version 7.0"; -int pciba_devflag = 0x1 | - 0x200 | - 0x400; +static struct file_operations dma_fops = { + owner: THIS_MODULE, + ioctl: dma_ioctl, + mmap: dma_mmap, + open: generic_open +}; -/* this counts the reasons why we can not - * currently unload this driver. 
- */ -atomic_t pciba_prevent_unload = ATOMIC_INIT(0); -#if DEBUG_PCIBA -static struct reg_values space_v[] = -{ - {PCIIO_SPACE_NONE, "none"}, - {PCIIO_SPACE_ROM, "ROM"}, - {PCIIO_SPACE_IO, "I/O"}, - {PCIIO_SPACE_MEM, "MEM"}, - {PCIIO_SPACE_MEM32, "MEM(32)"}, - {PCIIO_SPACE_MEM64, "MEM(64)"}, - {PCIIO_SPACE_CFG, "CFG"}, - {PCIIO_SPACE_WIN(0), "WIN(0)"}, - {PCIIO_SPACE_WIN(1), "WIN(1)"}, - {PCIIO_SPACE_WIN(2), "WIN(2)"}, - {PCIIO_SPACE_WIN(3), "WIN(3)"}, - {PCIIO_SPACE_WIN(4), "WIN(4)"}, - {PCIIO_SPACE_WIN(5), "WIN(5)"}, - {PCIBA_SPACE_UDMA, "UDMA"}, - {PCIIO_SPACE_BAD, "BAD"}, - {0} -}; +module_init(pciba_init); +module_exit(pciba_exit); + -static struct reg_desc space_desc[] = +int __init +pciba_init(void) { - {0xFF, 0, "space", 0, space_v}, - {0} -}; -#endif + TRACE(); -char pciba_edge_lbl_base[] = "base"; -char pciba_edge_lbl_cfg[] = "config"; -char pciba_edge_lbl_dma[] = "dma"; -char pciba_edge_lbl_intr[] = "intr"; -char pciba_edge_lbl_io[] = "io"; -char pciba_edge_lbl_mem[] = "mem"; -char pciba_edge_lbl_rom[] = "rom"; -char *pciba_edge_lbl_win[6] = -{"0", "1", "2", "3", "4", "5"}; - -#define PCIBA_EDGE_LBL_BASE pciba_edge_lbl_base -#define PCIBA_EDGE_LBL_CFG pciba_edge_lbl_cfg -#define PCIBA_EDGE_LBL_DMA pciba_edge_lbl_dma -#define PCIBA_EDGE_LBL_INTR pciba_edge_lbl_intr -#define PCIBA_EDGE_LBL_IO pciba_edge_lbl_io -#define PCIBA_EDGE_LBL_MEM pciba_edge_lbl_mem -#define PCIBA_EDGE_LBL_ROM pciba_edge_lbl_rom -#define PCIBA_EDGE_LBL_WIN(n) pciba_edge_lbl_win[n] - -#define PCIBA_EDGE_LBL_FLIP pciba_edge_lbl_flip - -static char pciba_info_lbl_bus[] = "pciba_bus"; - -#define PCIBA_INFO_LBL_BUS pciba_info_lbl_bus - -struct file_operations pciba_fops = { - owner: THIS_MODULE, - llseek: NULL, - read: NULL, - write: NULL, - readdir: NULL, - poll: NULL, - ioctl: NULL, - mmap: NULL, - open: NULL, - flush: NULL, - release: NULL, - fsync: NULL, - fasync: NULL, - lock: NULL, - readv: NULL, - writev: NULL -}; - -/* ================================================================ - * function table of contents - */ + if (register_with_devfs() == failure) + return 1; /* failure */ -void pciba_init(void); -int pciba_attach(devfs_handle_t); + printk("PCIBA (a user mode PCI interface) initialized.\n"); -static void pciba_sub_attach(pciba_comm_t, - pciio_space_t, pciio_space_t, pciaddr_t, - devfs_handle_t, devfs_handle_t, char *); - -static pciba_bus_t pciba_find_bus(devfs_handle_t, int); -#ifdef LATER -static void pciba_map_push(pciba_bus_t, pciba_map_t); -static pciba_map_t pciba_map_pop_hdl(pciba_bus_t, __psunsigned_t); -static void pciba_sub_detach(devfs_handle_t, char *); -static pciio_iter_f pciba_unload_me; -#endif + return 0; /* success */ +} -int pciba_unload(void); -int pciba_unreg(void); -int pciba_detach(devfs_handle_t); - -int pciba_open(dev_t *, int, int, struct cred *); -int pciba_close(dev_t); -int pciba_read(dev_t, cred_t *); -int pciba_write(dev_t, cred_t *); -int pciba_ioctl(dev_t, int, void *, int, cred_t *, int *); - -int pciba_map(dev_t, vhandl_t *, off_t, size_t, uint32_t); -int pciba_unmap(dev_t, vhandl_t *); - -#if ULI -void pciba_clearuli(struct uli *); -static intr_func_f pciba_intr; -#endif /* Undef as it gets implemented */ -/* ================================================================ - * driver load, register, and setup - */ -void -pciba_init(void) +void __exit +pciba_exit(void) { + TRACE(); - /* - * What do we need to do here? 
- */ -#if DEBUG_PCIBA - printk("pciba_init()\n"); -#endif + /* FIXME: should also free all that memory that we allocated + ;) */ + unregister_with_devfs(); } -#ifdef LATER -#if HWG_PERF_CHECK && IP30 && !DEBUG -void -pciba_timeout(void *arg1, void *arg2) -{ - struct semaphore *semap = (sema_t *) arg1; - unsigned long *cvalp = (unsigned long *) arg2; - - if (cvalp) - cvalp[0] = RAW_COUNT(); - if (semap) - up(semap); -} - -volatile unsigned long cNval[1]; -struct semaphore tsema; - -void -pciba_timeout_test(void) -{ - unsigned long c0val, cval; - toid_t tid; - - extern void hwg_hprint(unsigned long, char *); - - sema_init(&tsema, 0); - - cNval[0] = 0; - c0val = RAW_COUNT(); - tid = timeout((void (*)()) pciba_timeout, (void *) 0, 1, (void *) cNval); - DELAY(1000000); - cval = cNval[0]; - if (cval == 0) { - untimeout(tid); - PRINT_ALERT("pciba: one-tick timeout did not happen in a second\n"); - return; - } - cval = cval - c0val; - hwg_hprint(cval, "timeout(1)"); - - cNval[0] = 0; - c0val = RAW_COUNT(); - tid = timeout((void (*)()) pciba_timeout, (void *) &tsema, 2, (void *) cNval); - - /* FIXME : this probably needs to be down_interruptible() */ - - if (down(&tsema) < 0) { /* wait for the pciba_timeout */ - untimeout(tid); - PRINT_WARNING("pciba: timeout(2) time check aborted\n"); - return; - } - cval = cNval[0]; - if (cval == 0) { - untimeout(tid); - PRINT_WARNING("pciba: timeout(2) time not logged\n"); - return; - } - cval = cval - c0val; - hwg_hprint(cval, "timeout(2)"); - - cNval[0] = 0; - c0val = RAW_COUNT(); - tid = timeout((void (*)()) pciba_timeout, (void *) &tsema, HZ, (void *) cNval); - - /* FIXME : this probably needs to be down_interruptible() */ - - if (down(&tsema) < 0) { /* wait for the pciba_timeout */ - untimeout(tid); - PRINT_WARNING("pciba: timeout(HZ) time check aborted\n"); - return; - } - cval = cNval[0]; - if (cval == 0) { - untimeout(tid); - PRINT_WARNING("pciba: timeout(HZ) time not logged\n"); - return; - } - cval = cval - c0val; - hwg_hprint(cval, "timeout(HZ)"); - - printk("verifying untimeout() cancells ...\n"); - cNval[0] = 0; - tid = timeout((void (*)()) pciba_timeout, (void *) 0, 2, (void *) cNval); - untimeout(tid); - DELAY(1000000); - cval = cNval[0]; - if (cval != 0) { - PRINT_ALERT("pciba: unable to cancel two-tick timeout\n"); - cval -= c0val; - hwg_hprint(cval, "CANCELLED timeout(2)"); - } -} -#endif -int -pciba_reg(void) +# if 0 +static void __exit +free_nodes(void) { -#if DEBUG_PCIBA - printk("pciba_reg()\n"); -#endif - pciio_driver_register(-1, -1, "pciba_", 0); - -#if HWG_PERF_CHECK && IP30 && !DEBUG - printk("%s %d\n", __FUNCTION__, __LINE__); -pciba_timeout_test(); -#endif + struct node_data * nd; + + TRACE(); -#if DEBUG_REFCT - { - char *cname = "pciba"; - char *dname = "ptv"; - char *cpath0 = "node/xtalk/15"; - char *uname0 = "0"; - char *cpath1 = "node/xtalk/13"; - char *uname1 = "1"; - devfs_handle_t conn; - devfs_handle_t conv; - devfs_handle_t vhdl; - int ret; - - printk("pciba refct tests:\n"); - -#define SHOWREF(vhdl,func) printk("ref=%d\t%s\t(%d) %v\n", hwgraph_vertex_refct(vhdl), #func, vhdl, vhdl); - - if (GRAPH_SUCCESS != (ret = hwgraph_path_add(hwgraph_root, cname, &conv))) - printk("\tunable to create conv (ret=%d)\n", ret); - else { SHOWREF(conv, hwgraph_path_add); - if (GRAPH_SUCCESS != (ret = hwgraph_traverse(hwgraph_root, cpath0, &conn))) - printk("\tunable to find %s (ret=%d)\n", cpath0, ret); - else { SHOWREF(conn, hwgraph_traverse); - if (GRAPH_SUCCESS != (ret = hwgraph_char_device_add(conn, dname, "pciba_", &vhdl))) - 
printk("unable to create %v/%s (ret=%d)\n", conn, dname, ret); - else { SHOWREF(vhdl, hwgraph_char_device_add); - hwgraph_chmod(vhdl, 0666); SHOWREF(vhdl, hwgraph_chmod); - if (GRAPH_SUCCESS != (ret = hwgraph_edge_add(conv, vhdl, uname0))) - printk("unable to create %v/%s (ret=%d)\n", conn, uname0, vhdl, ret); - else SHOWREF(vhdl, hwgraph_edge_add); - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(vhdl))) - printk("unable to unref %v\n", vhdl); - else SHOWREF(vhdl, hwgraph_vertex_unref); - } - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(conn))) - printk("unable to unref %v\n", conn); - else SHOWREF(conn, hwgraph_vertex_unref); - } - - if (GRAPH_SUCCESS != (ret = hwgraph_traverse(hwgraph_root, cpath1, &conn))) - printk("\tunable to find %s (ret=%d)\n", cpath1, ret); - else { SHOWREF(conn, hwgraph_traverse); - if (GRAPH_SUCCESS != (ret = hwgraph_char_device_add(conn, dname, "pciba_", &vhdl))) - printk("unable to create %v/%s (ret=%d)\n", conn, dname, ret); - else { SHOWREF(vhdl, hwgraph_char_device_add); - hwgraph_chmod(vhdl, 0666); SHOWREF(vhdl, hwgraph_chmod); - if (GRAPH_SUCCESS != (ret = hwgraph_edge_add(conv, vhdl, uname1))) - printk("unable to create %v/%s (ret=%d)\n", conn, uname1, vhdl, ret); - else SHOWREF(vhdl, hwgraph_edge_add); - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(vhdl))) - printk("unable to unref %v\n", vhdl); - else SHOWREF(vhdl, hwgraph_vertex_unref); - } - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(conn))) - printk("unable to unref %v\n", conn); - else SHOWREF(conn, hwgraph_vertex_unref); - } - - if (GRAPH_SUCCESS != (ret = hwgraph_traverse(hwgraph_root, cpath0, &conn))) - printk("\tunable to find %s (ret=%d)\n", cpath0, ret); - else { SHOWREF(conn, hwgraph_traverse); - if (GRAPH_SUCCESS != (ret = hwgraph_traverse(conn, dname, &vhdl))) - printk("\tunable to find %v/%s (ret=%d)\n", conn, dname, ret); - else { SHOWREF(vhdl, hwgraph_traverse); - if (GRAPH_SUCCESS != (ret = hwgraph_edge_remove(conv, uname0, NULL))) - printk("\tunable to remove edge %v/%s (ret=%d)\n", conv, uname0, ret); - else SHOWREF(vhdl, hwgraph_edge_remove); - if (GRAPH_SUCCESS != (ret = hwgraph_edge_remove(conn, dname, NULL))) - printk("\tunable to remove edge %v/%s (ret=%d)\n", conn, dname, ret); - else SHOWREF(vhdl, hwgraph_edge_remove); - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(vhdl))) - printk("unable to unref %v\n", vhdl); - else SHOWREF(vhdl, hwgraph_vertex_unref); - if (GRAPH_SUCCESS == (ret = hwgraph_vertex_destroy(vhdl))) - printk("\tvertex %d destroyed OK\n", vhdl); - else SHOWREF(vhdl, hwgraph_vertex_destroy); - } - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(conn))) - printk("unable to unref %v\n", conn); - else SHOWREF(conn, hwgraph_vertex_unref); - } - - if (GRAPH_SUCCESS != (ret = hwgraph_traverse(hwgraph_root, cpath1, &conn))) - printk("\tunable to find %s (ret=%d)\n", cpath1, ret); - else { SHOWREF(conn, hwgraph_traverse); - if (GRAPH_SUCCESS != (ret = hwgraph_traverse(conn, dname, &vhdl))) - printk("\tunable to find %v/%s (ret=%d)\n", conn, dname, ret); - else { SHOWREF(vhdl, hwgraph_traverse); - if (GRAPH_SUCCESS != (ret = hwgraph_edge_remove(conv, uname1, NULL))) - printk("\tunable to remove edge %v/%s (ret=%d)\n", conv, uname1, ret); - else SHOWREF(vhdl, hwgraph_edge_remove); - if (GRAPH_SUCCESS != (ret = hwgraph_edge_remove(conn, dname, NULL))) - printk("\tunable to remove edge %v/%s (ret=%d)\n", conn, dname, ret); - else SHOWREF(vhdl, hwgraph_edge_remove); - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(vhdl))) - printk("unable to unref 
%v\n", vhdl); - else SHOWREF(vhdl, hwgraph_vertex_unref); - if (GRAPH_SUCCESS == (ret = hwgraph_vertex_destroy(vhdl))) - printk("\tvertex %d destroyed OK\n", vhdl); - else SHOWREF(vhdl, hwgraph_vertex_destroy); - } - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(conn))) - printk("unable to unref %v\n", conn); - else SHOWREF(conn, hwgraph_vertex_unref); - } - - if (GRAPH_SUCCESS != (ret = hwgraph_edge_remove(hwgraph_root, cname, NULL))) - printk("\tunable to remove edge %v/%s (ret=%d)\n", hwgraph_root, cname, ret); - else SHOWREF(conv, hwgraph_edge_remove); - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_unref(conv))) - printk("unable to unref %v\n", conv); - else SHOWREF(conv, hwgraph_vertex_unref); - if (GRAPH_SUCCESS == (ret = hwgraph_vertex_destroy(conv))) - printk("\tvertex %d destroyed OK\n", conv); - else SHOWREF(conv, hwgraph_vertex_destroy); + list_for_each(nd, &node_list) { + kfree(list_entry(nd, struct nd, node_list)); } - } -#endif - - return 0; } - -#endif -int -pciba_attach(devfs_handle_t hconn) -{ -#if defined(PCIIO_SLOT_NONE) - pciio_info_t info = pciio_info_get(hconn); - pciio_slot_t slot = pciio_info_slot_get(info); #endif - pciba_comm_t comm; - pciba_bus_t bus; - int ht; - devfs_handle_t hbase; - devfs_handle_t gconn; - devfs_handle_t gbase; - int win; - int wins; - pciio_space_t space; - pciaddr_t base; - int iwins; - int mwins; -#if DEBUG_PCIBA - printk("pciba_attach(%p)\n", hconn); -#endif +static devfs_handle_t pciba_devfs_handle; - /* Pick up "dualslot guest" vertex, - * which gets all functionality except - * config space access. - */ - if ((GRAPH_SUCCESS != - hwgraph_traverse(hconn, ".guest", &gconn)) || - (hconn == gconn)) - gconn = GRAPH_VERTEX_NONE; - - bus = pciba_find_bus(hconn, 1); - bus->refct ++; - - /* set up data common to all pciba openables - * on this connection point. - */ - NEW(comm); - comm->conn = hconn; - comm->bus = bus; - comm->refct = 0; - sema_init(&comm->lock, 1); -#if !defined(PCIIO_SLOT_NONE) - if (bus->refct == 1) -#else - if (slot == PCIIO_SLOT_NONE) -#endif - { - pciio_info_t pciio_info; - devfs_handle_t master; - - pciio_info = pciio_info_get(hconn); - master = pciio_info_master_get(pciio_info); - - pciba_sub_attach(comm, PCIIO_SPACE_IO, PCIIO_SPACE_IO, 0, master, master, PCIBA_EDGE_LBL_IO); - pciba_sub_attach(comm, PCIIO_SPACE_MEM, PCIIO_SPACE_MEM, 0, master, master, PCIBA_EDGE_LBL_MEM); -#if defined(PCIIO_SLOT_NONE) - return 0; -#endif - } +#if !defined(CONFIG_IA64_SGI_SN1) - ht = 0x7F & pciio_config_get(hconn, PCI_CFG_HEADER_TYPE, 1); +static status __init +register_with_devfs(void) +{ + struct pci_dev * dev; + devfs_handle_t device_dir_handle; + char devfs_path[40]; - wins = ((ht == 0x00) ? 6 : - (ht == 0x01) ? 
2 : - 0); - - mwins = iwins = 0; - - hbase = GRAPH_VERTEX_NONE; - gbase = GRAPH_VERTEX_NONE; - - for (win = 0; win < wins; win++) { - - base = pciio_config_get(hconn, PCI_CFG_BASE_ADDR(win), 4); - if (base & 1) { - space = PCIIO_SPACE_IO; - base &= 0xFFFFFFFC; - } else if ((base & 7) == 4) { - space = PCIIO_SPACE_MEM; - base &= 0xFFFFFFF0; - base |= ((pciaddr_t) pciio_config_get(hconn, PCI_CFG_BASE_ADDR(win + 1), 4)) << 32; - } else { - space = PCIIO_SPACE_MEM; - base &= 0xFFFFFFF0; - } + TRACE(); - if (!base) - break; + pciba_devfs_handle = devfs_mk_dir(NULL, "pci", NULL); + if (pciba_devfs_handle == NULL) + return failure; -#if PCIBA_ALIGN_CHECK - if (base & (_PAGESZ - 1)) { -#if DEBUG_PCIBA - PRINT_WARNING("%p pciba: BASE%d not page aligned!\n" - "\tmmap this window at offset 0x%x via \".../pci/%s\"\n", - hconn, win, base, - (space == PCIIO_SPACE_IO) ? "io" : "mem"); -#endif - continue; /* next window */ - } -#endif + /* FIXME: don't forget /dev/pci/mem & /dev/pci/io */ + + pci_for_each_dev(dev) { + sprintf(devfs_path, "%02x/%02x.%x", + dev->bus->number, + PCI_SLOT(dev->devfn), + PCI_FUNC(dev->devfn)); + + device_dir_handle = + devfs_mk_dir(pciba_devfs_handle, devfs_path, NULL); + if (device_dir_handle == NULL) + return failure; - if ((hbase == GRAPH_VERTEX_NONE) && - ((GRAPH_SUCCESS != - hwgraph_path_add(hconn, PCIBA_EDGE_LBL_BASE, &hbase)) || - (hbase == GRAPH_VERTEX_NONE))) - break; /* no base vertex, no more windows. */ - - if ((gconn != GRAPH_VERTEX_NONE) && - (gbase == GRAPH_VERTEX_NONE) && - ((GRAPH_SUCCESS != - hwgraph_path_add(gconn, PCIBA_EDGE_LBL_BASE, &gbase)) || - (gbase == GRAPH_VERTEX_NONE))) - break; /* no base vertex, no more windows. */ - - pciba_sub_attach(comm, PCIIO_SPACE_WIN(win), space, base, hbase, gbase, PCIBA_EDGE_LBL_WIN(win)); - - if (space == PCIIO_SPACE_IO) { - if (!iwins++) { - pciba_sub_attach(comm, PCIIO_SPACE_WIN(win), space, base, hconn, gconn, PCIBA_EDGE_LBL_IO); - } - } else { - if (!mwins++) { - pciba_sub_attach(comm, PCIIO_SPACE_WIN(win), space, base, hconn, gconn, PCIBA_EDGE_LBL_MEM); - } + if (register_pci_device(device_dir_handle, dev) == failure) { + devfs_unregister(pciba_devfs_handle); + return failure; + } } - if ((base & 7) == 4) - win++; - } - - pciba_sub_attach(comm, PCIIO_SPACE_CFG, PCIIO_SPACE_NONE, 0, hconn, gconn, PCIBA_EDGE_LBL_CFG); - pciba_sub_attach(comm, PCIBA_SPACE_UDMA, PCIIO_SPACE_NONE, 0, hconn, gconn, PCIBA_EDGE_LBL_DMA); -#if ULI - pciba_sub_attach(comm, PCIIO_SPACE_NONE, PCIIO_SPACE_NONE, 0, hconn, gconn, PCIBA_EDGE_LBL_INTR); -#endif + return success; +} - /* XXX should ignore if device is an IOC3 */ - if (ht == 0x01) - base = pciio_config_get(hconn, PCI_EXPANSION_ROM+8, 4); - else - base = pciio_config_get(hconn, PCI_EXPANSION_ROM, 4); - - base &= 0xFFFFF000; - - if (base) { - if (base & (_PAGESZ - 1)) -#if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("%v pciba: ROM is 0x%x\n" - "\tnot page aligned, mmap will be difficult\n", - hconn, base); #else - PRINT_WARNING("0x%x pciba: ROM is 0x%x\n" - "\tnot page aligned, mmap will be difficult\n", - hconn, base); -#endif - pciba_sub_attach(comm, PCIIO_SPACE_ROM, PCIIO_SPACE_MEM, base, hconn, gconn, PCIBA_EDGE_LBL_ROM); - } -#if !FICUS /* FICUS shorts the refct by one on path_add */ - if (hbase != GRAPH_VERTEX_NONE) - hwgraph_vertex_unref(hbase); +extern devfs_handle_t +devfn_to_vertex(unsigned char busnum, unsigned int devfn); - if (gbase != GRAPH_VERTEX_NONE) - hwgraph_vertex_unref(gbase); -#endif +static status __init +register_with_devfs(void) +{ + struct pci_dev * dev; 
+ devfs_handle_t device_dir_handle; - return 0; -} + TRACE(); -static void -pciba_sub_attach2(pciba_comm_t comm, - pciio_space_t space, - pciio_space_t iomem, - pciaddr_t base, - devfs_handle_t from, - char *name, - char *suf, - unsigned bigend) -{ - char nbuf[128]; - pciba_soft_t soft; - devfs_handle_t handle = NULL; - - if (suf && *suf) { - strcpy(nbuf, name); - name = nbuf; - strcat(name, suf); - } - -#if DEBUG_PCIBA - printk("pciba_sub_attach2 %p/%s %p at %p[%x]\n", - from, name, space, space_desc, iomem, space_desc, base, from, name); -#endif + /* FIXME: don't forget /dev/.../pci/mem & /dev/.../pci/io */ - if (space < TRACKED_SPACES) - if ((soft = comm->soft[space][bigend]) != NULL) { - soft->refct ++; - hwgraph_edge_add(from, soft->vhdl, name); - return; + pci_for_each_dev(dev) { + device_dir_handle = devfn_to_vertex(dev->bus->number, + dev->devfn); + if (device_dir_handle == NULL) + return failure; + + if (register_pci_device(device_dir_handle, dev) == failure) { + devfs_unregister(pciba_devfs_handle); + return failure; + } } - NEW(soft); - if (!soft) - return; - - soft->comm = comm; - soft->space = space; - soft->size = 0; - soft->iomem = iomem; - soft->base = base; - soft->refct = 1; - - if (space == PCIIO_SPACE_NONE) - soft->flags = 0; - else if (bigend) - soft->flags = PCIIO_BYTE_STREAM; - else - soft->flags = PCIIO_WORD_VALUES; - - handle = hwgraph_register(from, name, - 0, DEVFS_FL_AUTO_DEVNUM, - 0, 0, - S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0, - &pciba_fops, NULL); - soft->vhdl = handle; - pciba_soft_set(soft->vhdl, soft); - if (space < TRACKED_SPACES) - comm->soft[space][bigend] = soft; - comm->refct ++; + return success; } -static void -pciba_sub_attach1(pciba_comm_t comm, - pciio_space_t space, - pciio_space_t iomem, - pciaddr_t base, - devfs_handle_t hfrom, - devfs_handle_t gfrom, - char *name, - char *suf, - unsigned bigend) -{ - pciba_sub_attach2(comm, space, iomem, base, hfrom, name, suf, bigend); - if ((gfrom != GRAPH_VERTEX_NONE) && (gfrom != hfrom)) - pciba_sub_attach2(comm, space, iomem, base, gfrom, name, suf, bigend); -} +#endif /* CONFIG_IA64_SGI_SN1 */ + + +static void __exit +unregister_with_devfs(void) +{ + struct list_head * lhp; + struct node_data * nd; + + TRACE(); + + list_for_each(lhp, &global_node_list) { + nd = list_entry(lhp, struct node_data, global_node_list); + devfs_unregister(nd->devfs_handle); + } -static void -pciba_sub_attach(pciba_comm_t comm, - pciio_space_t space, - pciio_space_t iomem, - pciaddr_t base, - devfs_handle_t hfrom, - devfs_handle_t gfrom, - char *name) -{ - pciba_sub_attach1(comm, space, iomem, base, hfrom, gfrom, name, NULL, 0); - if (iomem != PCIIO_SPACE_NONE) { - pciba_sub_attach1(comm, space, iomem, base, hfrom, gfrom, name, "_le", 0); - pciba_sub_attach1(comm, space, iomem, base, hfrom, gfrom, name, "_be", 1); - } } -#ifdef LATER -static void -pciba_reload_me(devfs_handle_t pconn_vhdl) + +struct node_data * new_node(void) { - devfs_handle_t vhdl; + struct node_data * node; + + TRACE(); + + node = kmalloc(sizeof(struct node_data), GFP_KERNEL); + if (node == NULL) + return NULL; + list_add(&node->global_node_list, &global_node_list); + return node; +} -#if DEBUG_PCIBA - printf("pciba_reload_me(%v)\n", pconn_vhdl); -#endif - if (GRAPH_SUCCESS != - hwgraph_traverse(pconn_vhdl, PCIBA_EDGE_LBL_CFG, &vhdl)) - return; +void dma_cleanup(struct node_data * dma_node) +{ + TRACE(); - hwgraph_vertex_unref(vhdl); + /* FIXME: should free these allocations */ +#ifdef DEBUG_PCIBA + dump_allocations(&dma_node->u.dma.dma_allocs); +#endif 
+ devfs_unregister(dma_node->devfs_handle); } -#endif /* LATER */ -static pciba_bus_t -pciba_find_bus(devfs_handle_t pconn, int cflag) + +void init_dma_node(struct node_data * node, + struct pci_dev * dev, devfs_handle_t dh) { - pciio_info_t pciio_info; - devfs_handle_t master; - arbitrary_info_t ainfo; - pciba_bus_t bus; + TRACE(); - pciio_info = pciio_info_get(pconn); - master = pciio_info_master_get(pciio_info); + node->devfs_handle = dh; + node->u.dma.dev = dev; + node->cleanup = dma_cleanup; + INIT_LIST_HEAD(&node->u.dma.dma_allocs); +} - if (GRAPH_SUCCESS == - hwgraph_info_get_LBL(master, PCIBA_INFO_LBL_BUS, &ainfo)) - return (pciba_bus_t) ainfo; - if (!cflag) - return 0; +void rom_cleanup(struct node_data * rom_node) +{ + TRACE(); - NEW(bus); - if (!bus) - return 0; + if (rom_node->u.rom.mmapped) + pci_write_config_dword(rom_node->u.rom.dev, + PCI_ROM_ADDRESS, + rom_node->u.rom.saved_rom_base_reg); + devfs_unregister(rom_node->devfs_handle); +} - sema_init(&bus->lock, 1); - ainfo = (arbitrary_info_t) bus; - hwgraph_info_add_LBL(master, PCIBA_INFO_LBL_BUS, ainfo); - hwgraph_info_get_LBL(master, PCIBA_INFO_LBL_BUS, &ainfo); - if ((pciba_bus_t) ainfo != bus) - DEL(bus); -#if DEBUG_PCIBA - else - printk("pcbia_find_bus: new bus at %p\n", master); -#endif +void init_rom_node(struct node_data * node, + struct pci_dev * dev, devfs_handle_t dh) +{ + TRACE(); - return (pciba_bus_t) ainfo; + node->devfs_handle = dh; + node->u.rom.dev = dev; + node->cleanup = rom_cleanup; + node->u.rom.mmapped = false; } -#ifdef LATER -static void -pciba_map_push(pciba_bus_t bus, pciba_map_t map) + +static status __init +register_pci_device(devfs_handle_t device_dir_handle, struct pci_dev * dev) { -#if DEBUG_PCIBA - printk("pciba_map_push(bus=0x%x, map=0x%x, hdl=0x%x\n", - bus, map, map->handle); -#endif - pciba_bus_lock(bus); - map->next = bus->maps; - bus->maps = map; - pciba_bus_unlock(bus); -} - -static pciba_map_t -pciba_map_pop_hdl(pciba_bus_t bus, __psunsigned_t handle) -{ - pciba_map_h hdl; - pciba_map_t map; - - pciba_bus_lock(bus); - for (hdl = &bus->maps; map = *hdl; hdl = &map->next) - if (map->handle == handle) { - *hdl = map->next; - break; + struct node_data * nd; + char devfs_path[20]; + devfs_handle_t node_devfs_handle; + int ri; + + TRACE(); + + + /* register nodes for all the device's base address registers */ + for (ri = 0; ri < PCI_ROM_RESOURCE; ri++) { + if (pci_resource_len(dev, ri) != 0) { + sprintf(devfs_path, "base/%d", ri); + if (devfs_register(device_dir_handle, devfs_path, + DEVFS_FL_NONE, + 0, 0, + S_IFREG | S_IRUSR | S_IWUSR, + &base_fops, + &dev->resource[ri]) == NULL) + return failure; + } } - pciba_bus_unlock(bus); -#if DEBUG_PCIBA - printk("pciba_map_pop_va(bus=0x%x, handle=0x%x) returns map=0x%x\n", - bus, handle, map); + + /* register a node corresponding to the first MEM resource on + the device */ + for (ri = 0; ri < PCI_ROM_RESOURCE; ri++) { + if (dev->resource[ri].flags & IORESOURCE_MEM && + pci_resource_len(dev, ri) != 0) { + if (devfs_register(device_dir_handle, "mem", + DEVFS_FL_NONE, 0, 0, + S_IFREG | S_IRUSR | S_IWUSR, + &base_fops, + &dev->resource[ri]) == NULL) + return failure; + break; + } + } + + /* also register a node corresponding to the first IO resource + on the device */ + for (ri = 0; ri < PCI_ROM_RESOURCE; ri++) { + if (dev->resource[ri].flags & IORESOURCE_IO && + pci_resource_len(dev, ri) != 0) { + if (devfs_register(device_dir_handle, "io", + DEVFS_FL_NONE, 0, 0, + S_IFREG | S_IRUSR | S_IWUSR, + &base_fops, + &dev->resource[ri]) == NULL) + return 
failure; + break; + } + } + + /* register a node corresponding to the device's ROM resource, + if present */ + if (pci_resource_len(dev, PCI_ROM_RESOURCE) != 0) { + nd = new_node(); + if (nd == NULL) + return failure; + node_devfs_handle = devfs_register(device_dir_handle, "rom", + DEVFS_FL_NONE, 0, 0, + S_IFREG | S_IRUSR, + &rom_fops, nd); + if (node_devfs_handle == NULL) + return failure; + init_rom_node(nd, dev, node_devfs_handle); + } + + /* register a node that allows ioctl's to read and write to + the device's config space */ + if (devfs_register(device_dir_handle, "config", DEVFS_FL_NONE, + 0, 0, S_IFREG | S_IRUSR | S_IWUSR, + &config_fops, dev) == NULL) + return failure; + + + /* finally, register a node that allows ioctl's to allocate + and free DMA buffers, as well as memory map those + buffers. */ + nd = new_node(); + if (nd == NULL) + return failure; + node_devfs_handle = + devfs_register(device_dir_handle, "dma", DEVFS_FL_NONE, + 0, 0, S_IFREG | S_IRUSR | S_IWUSR, + &dma_fops, nd); + if (node_devfs_handle == NULL) + return failure; + init_dma_node(nd, dev, node_devfs_handle); + +#ifdef DEBUG_PCIBA + dump_nodes(&global_node_list); #endif - return map; + + return success; } -/* ================================================================ - * driver teardown, unregister and unload - */ -int -pciba_unload(void) -{ -#if DEBUG_PCIBA - printk("pciba_unload()\n"); -#endif - if (atomic_read(&pciba_prevent_unload)) - return -1; +static int +generic_open(struct inode * inode, struct file * file) +{ + TRACE(); - pciio_iterate("pciba_", pciba_unload_me); + /* FIXME: should check that they're not trying to open the ROM + writable */ - return 0; + return 0; /* success */ } -int -pciba_unreg(void) + +static int +rom_mmap(struct file * file, struct vm_area_struct * vma) { + unsigned long pci_pa; + struct node_data * nd; -#if DEBUG_PCIBA - printf("pciba_unreg()\n"); -#endif + TRACE(); - if (atomic_read(&pciba_prevent_unload)) - return -1; + nd = (struct node_data * )file->private_data; - pciio_driver_unregister("pciba_"); - return 0; + pci_pa = pci_resource_start(nd->u.rom.dev, PCI_ROM_RESOURCE); + + if (!nd->u.rom.mmapped) { + nd->u.rom.mmapped = true; + DPRINTF("Enabling ROM address decoder.\n"); + DPRINTF( +"rom_mmap: FIXME: some cards do not allow both ROM and memory addresses to\n" +"rom_mmap: FIXME: be enabled simultaneously, as they share a decoder.\n"); + pci_read_config_dword(nd->u.rom.dev, PCI_ROM_ADDRESS, + &nd->u.rom.saved_rom_base_reg); + DPRINTF("ROM base address contains %x\n", + nd->u.rom.saved_rom_base_reg); + pci_write_config_dword(nd->u.rom.dev, PCI_ROM_ADDRESS, + nd->u.rom.saved_rom_base_reg | + PCI_ROM_ADDRESS_ENABLE); + } + + return mmap_pci_address(vma, pci_pa); } -int -pciba_detach(devfs_handle_t conn) + +static int +rom_release(struct inode * inode, struct file * file) { - devfs_handle_t base; - pciba_bus_t bus; - devfs_handle_t gconn; - devfs_handle_t gbase; + struct node_data * nd; - pciio_info_t pciio_info; - devfs_handle_t master; - arbitrary_info_t ainfo; - int ret; + TRACE(); -#if DEBUG_PCIBA - printf("pciba_detach(%v)\n", conn); -#endif + nd = (struct node_data * )file->private_data; - if ((GRAPH_SUCCESS != - hwgraph_traverse(conn, ".guest", &gconn)) || - (conn == gconn)) - gconn = GRAPH_VERTEX_NONE; - - if (gconn != GRAPH_VERTEX_NONE) { - pciba_sub_detach(gconn, PCIBA_EDGE_LBL_CFG); - pciba_sub_detach(gconn, PCIBA_EDGE_LBL_DMA); - pciba_sub_detach(gconn, PCIBA_EDGE_LBL_ROM); -#if ULI - pciba_sub_detach(gconn, PCIBA_EDGE_LBL_INTR); -#endif - if (GRAPH_SUCCESS 
== hwgraph_edge_remove(conn, PCIBA_EDGE_LBL_BASE, &gbase)) { - pciba_sub_detach(gconn, PCIBA_EDGE_LBL_MEM); - pciba_sub_detach(gconn, PCIBA_EDGE_LBL_IO); - pciba_sub_detach(gbase, "0"); - pciba_sub_detach(gbase, "1"); - pciba_sub_detach(gbase, "2"); - pciba_sub_detach(gbase, "3"); - pciba_sub_detach(gbase, "4"); - pciba_sub_detach(gbase, "5"); - hwgraph_vertex_unref(gbase); - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_destroy(gbase))) { -#if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("pciba: hwgraph_vertex_destroy(%v/base) failed (%d)", - conn, ret); -#else - PRINT_WARNING("pciba: hwgraph_vertex_destroy(0x%x/base) failed (%d)", - conn, ret); -#endif -#if DEBUG_REFCT - printk("\tretained refct %d\n", hwgraph_vertex_refct(gbase)); -#endif - } + if (nd->u.rom.mmapped) { + nd->u.rom.mmapped = false; + DPRINTF("Disabling ROM address decoder.\n"); + pci_write_config_dword(nd->u.rom.dev, PCI_ROM_ADDRESS, + nd->u.rom.saved_rom_base_reg); } - } + return 0; /* indicate success */ +} - pciba_sub_detach(conn, PCIBA_EDGE_LBL_CFG); - pciba_sub_detach(conn, PCIBA_EDGE_LBL_DMA); - pciba_sub_detach(conn, PCIBA_EDGE_LBL_ROM); -#if ULI - pciba_sub_detach(conn, PCIBA_EDGE_LBL_INTR); -#endif - if (GRAPH_SUCCESS == hwgraph_edge_remove(conn, PCIBA_EDGE_LBL_BASE, &base)) { - pciba_sub_detach(conn, PCIBA_EDGE_LBL_MEM); - pciba_sub_detach(conn, PCIBA_EDGE_LBL_IO); - pciba_sub_detach(base, "0"); - pciba_sub_detach(base, "1"); - pciba_sub_detach(base, "2"); - pciba_sub_detach(base, "3"); - pciba_sub_detach(base, "4"); - pciba_sub_detach(base, "5"); - hwgraph_vertex_unref(base); - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_destroy(base))) { -#if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING(CE_WARN, "pciba: hwgraph_vertex_destroy(%v/base) failed (%d)", - conn, ret); -#else - PRINT_WARNING(CE_WARN, "pciba: hwgraph_vertex_destroy(0x%x/base) failed (%d)", - conn, ret); -#endif -#if DEBUG_REFCT - printk("\tretained refct %d\n", hwgraph_vertex_refct(base)); -#endif - } - } +static int +base_mmap(struct file * file, struct vm_area_struct * vma) +{ + struct resource * resource; - bus = pciba_find_bus(conn, 0); - if (bus && !--(bus->refct)) { + TRACE(); - pciio_info = pciio_info_get(conn); + resource = (struct resource *)file->private_data; - master = pciio_info_master_get(pciio_info); + return mmap_pci_address(vma, resource->start); +} - pciba_sub_detach(master, PCIBA_EDGE_LBL_IO); - pciba_sub_detach(master, PCIBA_EDGE_LBL_MEM); - pciba_sub_detach(master, PCIBA_EDGE_LBL_CFG); - hwgraph_info_remove_LBL(master, PCIBA_INFO_LBL_BUS, &ainfo); -#if DEBUG_PCIBA - printf("pcbia_detach: DEL(bus) at %v\n", master); -#endif - DEL(bus); - } +static int +config_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, + unsigned long arg) +{ + struct pci_dev * dev; - return 0; -} + union cfg_data { + uint8_t byte; + uint16_t word; + uint32_t dword; + } read_data, write_data; -static void -pciba_sub_detach1(devfs_handle_t conn, - char *name, - char *suf) -{ - devfs_handle_t vhdl; - pciba_soft_t soft; - pciba_comm_t comm; - int ret; - char nbuf[128]; - - if (suf && *suf) { - strcpy(nbuf, name); - name = nbuf; - strcat(name, suf); - } - - if ((GRAPH_SUCCESS == hwgraph_edge_remove(conn, name, &vhdl)) && - ((soft = pciba_soft_get(vhdl)) != NULL)) { -#if DEBUG_PCIBA -#if defined(SUPPORT_PRINTING_V_FORMAT) - prink("pciba_sub_detach(%v,%s)\n", conn, name); -#else - prink("pciba_sub_detach(0x%x,%s)\n", conn, name); -#endif -#endif + int dir, size, offset; - hwgraph_vertex_unref(soft->vhdl); -#if DEBUG_REFCT - 
printk("\tadjusted refct %d (soft ref: %d)\n", - hwgraph_vertex_refct(vhdl), - soft->refct); -#endif - if (!--(soft->refct)) { - comm = soft->comm; - if (!--(comm->refct)) { - DEL(comm); - } - pciba_soft_set(vhdl, 0); - DEL(soft); - - hwgraph_vertex_unref(vhdl); - if (GRAPH_SUCCESS != (ret = hwgraph_vertex_destroy(vhdl))) { -#if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("pciba: hwgraph_vertex_destroy(0x%x/%s) failed (%d)", - conn, name, ret); -#else - PRINT_WARNING("pciba: hwgraph_vertex_destroy(%v/%s) failed (%d)", - conn, name, ret); -#endif -#if DEBUG_REFCT - printk("\tretained refct %d\n", hwgraph_vertex_refct(vhdl)); -#endif - } - } - } -} + TRACE(); -static void -pciba_sub_detach(devfs_handle_t conn, - char *name) -{ - pciba_sub_detach1(conn, name, ""); - pciba_sub_detach1(conn, name, "_le"); - pciba_sub_detach1(conn, name, "_be"); -} + DPRINTF("cmd = %x (DIR = %x, TYPE = %x, NR = %x, SIZE = %x)\n", + cmd, + _IOC_DIR(cmd), _IOC_TYPE(cmd), _IOC_NR(cmd), _IOC_SIZE(cmd)); + DPRINTF("arg = %lx\n", arg); -static void -pciba_unload_me(devfs_handle_t pconn_vhdl) -{ - devfs_handle_t c_vhdl; + dev = (struct pci_dev *)file->private_data; -#if DEBUG_PCIBA - printf("pciba_unload_me(%v)\n", pconn_vhdl); -#endif + /* PCIIOCCFG{RD,WR}: read and/or write PCI configuration + space. If both, the read happens first (this becomes a swap + operation, atomic with respect to other updates through + this path). */ - if (GRAPH_SUCCESS != - hwgraph_traverse(pconn_vhdl, PCIBA_EDGE_LBL_CFG, &c_vhdl)) - return; + dir = _IOC_DIR(cmd); - hwgraph_vertex_unref(c_vhdl); -} +#define do_swap(suffix, type) \ + do { \ + if (dir & _IOC_READ) { \ + pci_read_config_##suffix(dev, _IOC_NR(cmd), \ + &read_data.suffix); \ + } \ + if (dir & _IOC_WRITE) { \ + get_user(write_data.suffix, (type)arg); \ + pci_write_config_##suffix(dev, _IOC_NR(cmd), \ + write_data.suffix); \ + } \ + if (dir & _IOC_READ) { \ + put_user(read_data.suffix, (type)arg); \ + } \ + } while (0) -/* ================================================================ - * standard unix entry points - */ + size = _IOC_SIZE(cmd); + offset = _IOC_NR(cmd); -/*ARGSUSED */ -int -pciba_open(dev_t *devp, int flag, int otyp, struct cred *crp) -{ + DPRINTF("sanity check\n"); + if (((size > 0) || (size <= 4)) && + ((offset + size) <= 256) && + (dir & (_IOC_READ | _IOC_WRITE))) { -#if DEBUG_PCIBA - printf("pciba_open(%V)\n", *devp); -#endif - return 0; + switch (size) + { + case 1: + do_swap(byte, uint8_t *); + break; + case 2: + do_swap(word, uint16_t *); + break; + case 4: + do_swap(dword, uint32_t *); + break; + default: + DPRINTF("invalid ioctl\n"); + return -EINVAL; + } + } else + return -EINVAL; + + return 0; } -/*ARGSUSED */ -int -pciba_close(dev_t dev) -{ - devfs_handle_t vhdl = dev_to_vhdl(dev); - pciba_soft_t soft = pciba_soft_get(vhdl); - -#if DEBUG_PCIBA - printf("pciba_close(%V)\n", dev); -#endif - /* if there is pending DMA for this device, hit the - * device over the head with a baseball bat and - * release the system memory resources. 
- */ - if (soft && soft->comm->dmap) { - pciba_dma_t next; - pciba_dma_t dmap; - - pciba_soft_lock(soft); - if (dmap = soft->comm->dmap) { - soft->comm->dmap = 0; - - pciio_reset(soft->comm->conn); - - do { - if (!dmap->kaddr) - break; - if (!dmap->paddr) - break; - if (dmap->bytes < NBPP) - break; - next = dmap->next; - kvpfree(dmap->kaddr, dmap->bytes / NBPP); - dmap->paddr = 0; - dmap->bytes = 0; - DEL(dmap); - } while (dmap = next); +#ifdef DEBUG_PCIBA +static void +dump_allocations(struct list_head * dalp) +{ + struct dma_allocation * dap; + struct list_head * p; + + printk("{\n"); + list_for_each(p, dalp) { + dap = list_entry(p, struct dma_allocation, + list); + printk(" handle = %lx, va = %p\n", + dap->handle, dap->va); } - pciba_soft_unlock(soft); - } - return 0; + printk("}\n"); } -/* ARGSUSED */ -int -pciba_read(dev_t dev, cred_t *crp) +static void +dump_nodes(struct list_head * nodes) { -#if DEBUG_PCIBA - printf("pciba_read(%V)\n", dev); -#endif - - return EINVAL; + struct node_data * ndp; + struct list_head * p; + + printk("{\n"); + list_for_each(p, nodes) { + ndp = list_entry(p, struct node_data, + global_node_list); + printk(" %p\n", (void *)ndp); + } + printk("}\n"); } -/* ARGSUSED */ -int -pciba_write(dev_t dev, cred_t *crp) + +#if 0 +#define NEW(ptr) (ptr = kmalloc(sizeof (*(ptr)), GFP_KERNEL)) + +static void +test_list(void) { -#if DEBUG_PCIBA - printf("pciba_write(%V)\n", dev); -#endif + u64 i; + LIST_HEAD(the_list); - return EINVAL; + for (i = 0; i < 5; i++) { + struct dma_allocation * new_alloc; + NEW(new_alloc); + new_alloc->va = (void *)i; + new_alloc->handle = 5*i; + printk("%d - the_list->next = %lx\n", i, the_list.next); + list_add(&new_alloc->list, &the_list); + } + dump_allocations(&the_list); } - -/*ARGSUSED */ -int -pciba_ioctl(dev_t dev, int cmd, void *uarg, int mode, cred_t *crp, int *rvalp) -{ - devfs_handle_t vhdl; - pciba_soft_t soft; - pciio_space_t space; - ioctl_arg_buffer_t arg; - int psize; - int err = 0; - -#if ULI - char abi = get_current_abi(); - pciio_intr_t intr=0; - device_desc_t desc; - cpuid_t intrcpu; - unsigned lines; - struct uli *uli = 0; #endif - unsigned flags; - void *kaddr = 0; - iopaddr_t paddr; - pciba_dma_h dmah; - pciba_dma_t dmap = 0; - pciio_dmamap_t dmamap = 0; - size_t bytes; - int pages; - pciaddr_t daddr; - -#if DEBUG_PCIBA - printf("pciba_ioctl(%V,0x%x)\n", dev, cmd); #endif - psize = (cmd >> 16) & IOCPARM_MASK; - -#if ULI - ASSERT(sizeof(struct uliargs) > 8); /* prevent CFG access conflict */ - ASSERT(sizeof(struct uliargs) <= IOCPARM_MASK); -#endif - arg.ca = uarg; +static LIST_HEAD(dma_buffer_list); - if ((psize > 0) && (cmd & (IOC_OUT | IOC_IN))) { - if (psize > sizeof(arg)) - err = EINVAL; /* "bad parameter size */ - else { - if (cmd & IOC_OUT) - bzero(arg.data, psize); - if ((cmd & IOC_IN) && - (copyin(uarg, arg.data, psize) < 0)) - err = EFAULT; /* "parameter copyin failed" */ - } - } - vhdl = dev_to_vhdl(dev); - soft = pciba_soft_get(vhdl); - space = soft->space; - - if (err == 0) { - err = EINVAL; /* "invalid ioctl for this vertex" */ - switch (space) { -#if ULI - case PCIIO_SPACE_NONE: /* the "intr" vertex */ - /* PCIIOCSETULI: set up user interrupts. 
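A minimal user-space sketch of the config_ioctl() path above. Only the direction, size and offset fields of the ioctl command are interpreted by the driver; the _IOC type value 'P' and the device path here are assumptions for illustration, not something this patch defines.

#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

int main(void)
{
	uint32_t id;
	/* read 4 bytes of config space at offset 0 (vendor/device ID);
	 * _IOC_NR carries the config offset, _IOC_SIZE the access width */
	unsigned int cmd = _IOC(_IOC_READ, 'P', 0x00, sizeof(uint32_t));
	int fd = open("/dev/pci/00/01.0/config", O_RDWR);	/* illustrative path */

	if (fd < 0)
		return 1;
	if (ioctl(fd, cmd, &id) == 0)
		printf("vendor/device: %08x\n", id);
	close(fd);
	return 0;
}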
- */ - lines = cmd & 15; - if (ABI_IS_64BIT(abi)) { - if (cmd != PCIIOCSETULI(lines)) { - err = EINVAL; /* "invalid ioctl for this vertex" */ - break; - } - } - else { - struct uliargs uliargs; - - if (cmd != PCIIOCSETULI32(lines)) { - err = EINVAL; /* "invalid ioctl for this vertex" */ - break; - } - - uliargs32_to_uliargs(&arg.uli32, &uliargs); - arg.uli = uliargs; - } - desc = device_desc_dup(soft->comm->conn); - device_desc_flags_set(desc, (device_desc_flags_get(desc) | - D_INTR_NOTHREAD)); - device_desc_intr_swlevel_set(desc, INTR_SWLEVEL_NOTHREAD_DEFAULT); - device_desc_intr_name_set(desc, "PCIBA"); - device_desc_default_set(soft->comm->conn, desc); - - /* When designating interrupts, the slot number - * is taken from the connection point. - * Bits 0..3 are used to select INTA..INTD; more - * than one bit can be specified. These should - * be constructed using PCIIO_INTR_LINE_[ABCD]. - */ - intr = pciio_intr_alloc - (soft->comm->conn, desc, lines, soft->vhdl); - if (intr == 0) { - err = ENOMEM; /* "insufficient resources" */ - break; - } - intrcpu = cpuvertex_to_cpuid(pciio_intr_cpu_get(intr)); - if (err = new_uli(&arg.uli, &uli, intrcpu)) { - break; /* "unable to set up ULI" */ - } - atomic_inc(&pciba_prevent_unload); - - pciio_intr_connect(intr, pciba_intr, uli, (void *) 0); - - /* NOTE: don't set the teardown function - * until the interrupt is connected. - */ - uli->teardownarg1 = (__psint_t) intr; - uli->teardown = pciba_clearuli; - - arg.uli.id = uli->index; - - if (!ABI_IS_64BIT(abi)) { - struct uliargs32 uliargs32; - uliargs_to_uliargs32(&arg.uli, &uliargs32); - arg.uli32 = uliargs32; - } +static int +dma_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, + unsigned long arg) +{ + struct node_data * nd; + uint64_t argv; + int result; + struct dma_allocation * dma_alloc; + struct list_head * iterp; - err = 0; - break; -#endif + TRACE(); - case PCIBA_SPACE_UDMA: /* the "dma" vertex */ + DPRINTF("cmd = %x\n", cmd); + DPRINTF("arg = %lx\n", arg); - switch (cmd) { + nd = (struct node_data *)file->private_data; - case PCIIOCDMAALLOC: - /* PCIIOCDMAALLOC: allocate a chunk of physical - * memory and set it up for DMA. Return the - * PCI address that gets to it. - * NOTE: this allocates memory local to the - * CPU doing the ioctl, not local to the - * device that will be doing the DMA. - */ - - if (!_CAP_ABLE(CAP_DEVICE_MGT)) { - err = EPERM; - break; - } - /* separate the halves of the incoming parameter */ - flags = arg.ud >> 32; - bytes = arg.ud & 0xFFFFFFFF; - -#if DEBUG_PCIBA - printf("pciba: user wants 0x%x bytes of DMA, flags 0x%x\n", - bytes, flags); +#ifdef DEBUG_PCIBA + DPRINTF("at dma_ioctl entry\n"); + dump_allocations(&nd->u.dma.dma_allocs); #endif - /* round up the requested size to the next highest page */ - pages = (bytes + NBPP - 1) / NBPP; - - /* make sure the requested size is something reasonable */ - if (pages > pci_user_dma_max_pages) { -#if DEBUG_PCIBA - printf("pciba: request for too much buffer space\n"); + switch (cmd) { + case PCIIOCDMAALLOC: + /* PCIIOCDMAALLOC: allocate a chunk of physical memory + and set it up for DMA. Return the PCI address that + gets to it. 
*/ + DPRINTF("case PCIIOCDMAALLOC (%lx)\n", PCIIOCDMAALLOC); + + if ( (result = get_user(argv, (uint64_t *)arg)) ) + return result; + DPRINTF("argv (size of buffer) = %lx\n", argv); + + dma_alloc = (struct dma_allocation *) + kmalloc(sizeof(struct dma_allocation), GFP_KERNEL); + if (dma_alloc == NULL) + return -ENOMEM; + + dma_alloc->size = (size_t)argv; + dma_alloc->va = pci_alloc_consistent(nd->u.dma.dev, + dma_alloc->size, + &dma_alloc->handle); + DPRINTF("dma_alloc->va = %p, dma_alloc->handle = %lx\n", + dma_alloc->va, dma_alloc->handle); + if (dma_alloc->va == NULL) { + kfree(dma_alloc); + return -ENOMEM; + } + + list_add(&dma_alloc->list, &nd->u.dma.dma_allocs); + if ( (result = put_user((uint64_t)dma_alloc->handle, + (uint64_t *)arg)) ) { + DPRINTF("put_user failed\n"); + pci_free_consistent(nd->u.dma.dev, (size_t)argv, + dma_alloc->va, dma_alloc->handle); + kfree(dma_alloc); + return result; + } + +#ifdef DEBUG_PCIBA + DPRINTF("after insertion\n"); + dump_allocations(&nd->u.dma.dma_allocs); #endif - err = EINVAL; - break; /* "request for too much buffer space" */ - } - /* "correct" number of bytes */ - bytes = pages * NBPP; + break; - /* allocate the space */ - /* XXX- force to same node as the device? */ - /* XXX- someday, we want to handle user buffers, - * and noncontiguous pages, but this will - * require either fancy mapping or handing - * a list of blocks back to the user. For - * now, just tell users to allocate a lot of - * individual single-pages and manage their - * scatter-gather manually. - */ - kaddr = kvpalloc(pages, VM_DIRECT | KM_NOSLEEP, 0); - if (kaddr == 0) { -#if DEBUG_PCIBA - printf("pciba: unable to get %d contiguous pages\n", pages); -#endif - err = EAGAIN; /* "insufficient resources, try again later" */ - break; - } -#if DEBUG_PCIBA - printf("pciba: kaddr is 0x%x\n", kaddr); -#endif - paddr = kvtophys(kaddr); + case PCIIOCDMAFREE: + DPRINTF("case PCIIOCDMAFREE (%lx)\n", PCIIOCDMAFREE); - daddr = pciio_dmatrans_addr - (soft->comm->conn, 0, paddr, bytes, flags); - if (daddr == 0) { /* "no direct path available" */ -#if DEBUG_PCIBA - printf("pciba: dmatrans failed, trying dmamap\n"); -#endif - dmamap = pciio_dmamap_alloc - (soft->comm->conn, 0, bytes, flags); - if (dmamap == 0) { -#if DEBUG_PCIBA - printf("pciba: unable to allocate dmamap\n"); -#endif - err = ENOMEM; - break; /* "out of mapping resources" */ - } - daddr = pciio_dmamap_addr - (dmamap, paddr, bytes); - if (daddr == 0) { -#if DEBUG_PCIBA - printf("pciba: dmamap_addr failed\n"); -#endif - err = EINVAL; - break; /* "can't get there from here" */ - } - } -#if DEBUG_PCIBA - printf("pciba: daddr is 0x%x\n", daddr); + if ( (result = get_user(argv, (uint64_t *)arg)) ) { + DPRINTF("get_user failed\n"); + return result; + } + + DPRINTF("argv (physical address of DMA buffer) = %lx\n", argv); + list_for_each(iterp, &nd->u.dma.dma_allocs) { + struct dma_allocation * da = + list_entry(iterp, struct dma_allocation, list); + if (da->handle == argv) { + pci_free_consistent(nd->u.dma.dev, da->size, + da->va, da->handle); + list_del(&da->list); + kfree(da); +#ifdef DEBUG_PCIBA + DPRINTF("after deletion\n"); + dump_allocations(&nd->u.dma.dma_allocs); #endif - NEW(dmap); - if (!dmap) { - err = ENOMEM; - break; /* "no memory available" */ + return 0; /* success */ + } } - dmap->bytes = bytes; - dmap->pages = pages; - dmap->paddr = paddr; - dmap->kaddr = kaddr; - dmap->map = dmamap; - dmap->daddr = daddr; - dmap->handle = 0; - -#if DEBUG_PCIBA - printf("pciba: dmap 0x%x contains va 0x%x bytes 0x%x pa 0x%x pages 0x%x 
daddr 0x%x\n", - dmap, kaddr, bytes, paddr, pages, daddr); -#endif + /* previously allocated dma buffer wasn't found */ + DPRINTF("attempt to free invalid dma handle\n"); + return -EINVAL; - arg.ud = dmap->daddr; + default: + DPRINTF("undefined ioctl\n"); + return -EINVAL; + } - err = 0; - break; + DPRINTF("success\n"); + return 0; +} + - case PCIIOCDMAFREE: - /* PCIIOCDMAFREE: Find the chunk of - * User DMA memory, and release its - * resources back to the system. - */ - - if (!_CAP_ABLE(CAP_DEVICE_MGT)) { - err = EPERM; /* "you can't do that" */ - break; - } - if (soft->comm->dmap == NULL) { - err = EINVAL; /* "no User DMA to free" */ - break; - } - /* find the request. */ - daddr = arg.ud; - err = EINVAL; /* "block not found" */ - pciba_soft_lock(soft); - for (dmah = &soft->comm->dmap; dmap = *dmah; dmah = &dmap->next) { - if (dmap->daddr == daddr) { - if (dmap->handle != 0) { - dmap = 0; /* don't DEL this dmap! */ - err = EINVAL; /* "please unmap first" */ - break; /* break outa for loop. */ - } - *dmah = dmap->next; +static int +dma_mmap(struct file * file, struct vm_area_struct * vma) +{ + struct node_data * nd; + struct list_head * iterp; + int result; + + TRACE(); - if (dmamap = dmap->map) { - pciio_dmamap_free(dmamap); - dmamap = 0; /* don't free it twice! */ + nd = (struct node_data *)file->private_data; + + DPRINTF("vma->vm_start is %lx\n", vma->vm_start); + DPRINTF("vma->vm_end is %lx\n", vma->vm_end); + DPRINTF("offset = %lx\n", vma->vm_pgoff); + + /* get kernel virtual address for the dma buffer (necessary + * for the mmap). */ + list_for_each(iterp, &nd->u.dma.dma_allocs) { + struct dma_allocation * da = + list_entry(iterp, struct dma_allocation, list); + /* why does mmap shift its offset argument? */ + if (da->handle == vma->vm_pgoff << PAGE_SHIFT) { + DPRINTF("found dma handle\n"); + if ( (result = mmap_kernel_address(vma, + da->va)) ) { + return result; /* failure */ + } else { + /* it seems like at least one of these + should show up in user land.... + I'm missing something */ + *(char *)da->va = 0xaa; + strncpy(da->va, " Toastie!", da->size); + if (put_user(0x18badbeeful, + (u64 *)vma->vm_start)) + DPRINTF("put_user failed?!\n"); + return 0; /* success */ } - kvpfree(dmap->kaddr, dmap->bytes / NBPP); - DEL(dmap); - dmap = 0; /* don't link this back into the list! */ - err = 0; /* "all done" */ - break; /* break outa for loop. */ - } - } - pciba_soft_unlock(soft); - break; /* break outa case PCIIOCDMAFREE: */ - } - break; /* break outa case PCIBA_SPACE_UDMA: */ - - case PCIIO_SPACE_CFG: - - /* PCIIOCCFG{RD,WR}: read and/or write - * PCI configuration space. If both, - * the read happens first (this becomes - * a swap operation, atomic with respect - * to other updates through this path). - * - * Should be *last* IOCTl command checked, - * so other patterns can nip useless codes - * out of the space this decodes. - */ - err = EINVAL; - if ((psize > 0) || (psize <= 8) && - (((cmd & 0xFF) + psize) <= 256) && - (cmd & (IOC_IN | IOC_OUT))) { - - uint64_t rdata; - uint64_t wdata; - int shft; - - shft = 64 - (8 * psize); - - wdata = arg.ud >> shft; - - pciba_soft_lock(soft); - - if (cmd & IOC_OUT) - rdata = pciio_config_get(soft->comm->conn, cmd & 0xFFFF, psize); - if (cmd & IOC_IN) - pciio_config_set(soft->comm->conn, cmd & 0xFFFF, psize, wdata); - - pciba_soft_unlock(soft); - arg.ud = rdata << shft; - err = 0; - break; - } - break; - } - } - /* done: come here if all went OK. 
- */ - if ((err == 0) && - ((cmd & IOC_OUT) && (psize > 0)) && - copyout(arg.data, uarg, psize)) - err = EFAULT; - - /* This gets delayed until after the copyout so we - * do not free the dmap on a copyout error, or - * alternately end up with a dangling allocated - * buffer that the user never got back. - */ - if ((err == 0) && dmap) { - pciba_soft_lock(soft); - dmap->next = soft->comm->dmap; - soft->comm->dmap = dmap; - pciba_soft_unlock(soft); - } - if (err) { - /* Things went badly. Clean up. - */ -#if ULI - if (intr) { - pciio_intr_disconnect(intr); - pciio_intr_free(intr); - } - if (uli) - free_uli(uli); -#endif - if (dmap) { - if (dmap->map && (dmap->map != dmamap)) - pciio_dmamap_free(dmap->map); - DEL(dmap); + } } - if (dmamap) - pciio_dmamap_free(dmamap); - if (kaddr) - kvpfree(kaddr, pages); - } - return *rvalp = err; + DPRINTF("attempt to mmap an invalid dma handle\n"); + return -EINVAL; } -/* ================================================================ - * mapping support - */ - -/*ARGSUSED */ -int -pciba_map(dev_t dev, vhandl_t *vt, - off_t off, size_t len, uint32_t prot) -{ - devfs_handle_t vhdl = dev_to_vhdl(dev); - pciba_soft_t soft = pciba_soft_get(vhdl); - devfs_handle_t conn = soft->comm->conn; - pciio_space_t space = soft->space; - size_t pages = (len + NBPP - 1) / NBPP; - pciio_piomap_t pciio_piomap = 0; - caddr_t kaddr; - pciba_map_t map; - pciba_dma_t dmap; -#if DEBUG_PCIBA - printf("pciba_map(%V,vt=0x%x)\n", dev, vt); -#endif +static int +mmap_pci_address(struct vm_area_struct * vma, unsigned long pci_va) +{ + unsigned long pci_pa; - if (space == PCIBA_SPACE_UDMA) { - pciba_soft_lock(soft); + TRACE(); - for (dmap = soft->comm->dmap; dmap != NULL; dmap = dmap->next) { - if (off == dmap->daddr) { - if (pages != dmap->pages) { - pciba_soft_unlock(soft); - return EINVAL; /* "size mismatch" */ - } - v_mapphys(vt, dmap->kaddr, dmap->bytes); - dmap->handle = v_gethandle(vt); - pciba_soft_unlock(soft); -#if DEBUG_PCIBA - printf("pciba: mapped dma at kaddr 0x%x via handle 0x%x\n", - dmap->kaddr, dmap->handle); -#endif - return 0; - } - } - pciba_soft_unlock(soft); - return EINVAL; /* "block not found" */ - } - if (soft->iomem == PCIIO_SPACE_NONE) - return EINVAL; /* "mmap not supported" */ - - kaddr = (caddr_t) pciio_pio_addr - (conn, 0, space, off, len, &pciio_piomap, soft->flags | PCIIO_FIXED ); - -#if DEBUG_PCIBA - printf("pciba: mapped %R[0x%x..0x%x] via map 0x%x to kaddr 0x%x\n", - space, space_desc, off, off + len - 1, pciio_piomap, kaddr); -#endif + DPRINTF("vma->vm_start is %lx\n", vma->vm_start); + DPRINTF("vma->vm_end is %lx\n", vma->vm_end); - if (kaddr == NULL) - return EINVAL; /* "you can't get there from here" */ + /* the size of the vma doesn't necessarily correspond to the + size specified in the mmap call. So we can't really do any + kind of sanity check here. This is a dangerous driver, and + it's very easy for a user process to kill the machine. */ - NEW(map); - if (map == NULL) { - if (pciio_piomap) - pciio_piomap_free(pciio_piomap); - return ENOMEM; /* "unable to get memory resources */ - } -#ifdef LATER - map->uthread = curuthread; -#endif - map->handle = v_gethandle(vt); - map->uvaddr = v_getaddr(vt); - map->map = pciio_piomap; - map->space = soft->iomem; - map->base = soft->base + off; - map->size = len; - pciba_map_push(soft->comm->bus, map); - - /* Inform the system of the correct - * kvaddr corresponding to the thing - * that is being mapped. 
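A minimal user-space sketch of the "dma" node round trip implemented by dma_ioctl() and dma_mmap() above: the same u64 carries the requested size in and the PCI (DMA) handle out, and that handle doubles as the mmap offset. The device path is illustrative, and it is assumed that the header defining PCIIOCDMAALLOC/PCIIOCDMAFREE is available to user space.

#include <stdint.h>
#include <stddef.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int dma_round_trip(const char *dma_node_path, size_t size)
{
	uint64_t arg = size;		/* in: buffer size, out: DMA handle */
	void *buf;
	int fd = open(dma_node_path, O_RDWR);	/* e.g. "/dev/pci/00/01.0/dma" (illustrative) */

	if (fd < 0)
		return -1;
	if (ioctl(fd, PCIIOCDMAALLOC, &arg) < 0) {
		close(fd);
		return -1;
	}
	/* the returned handle selects the allocation inside dma_mmap() */
	buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, (off_t)arg);
	if (buf != MAP_FAILED)
		munmap(buf, size);
	ioctl(fd, PCIIOCDMAFREE, &arg);	/* hand the handle back to the driver */
	close(fd);
	return 0;
}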
- */ - v_mapphys(vt, kaddr, len); - - return 0; -} - -/*ARGSUSED */ -int -pciba_unmap(dev_t dev, vhandl_t *vt) -{ - devfs_handle_t vhdl = dev_to_vhdl(dev); - pciba_soft_t soft = pciba_soft_get(vhdl); - pciba_bus_t bus = soft->comm->bus; - pciba_map_t map; - __psunsigned_t handle = v_gethandle(vt); + DPRINTF("PCI base at virtual address %lx\n", pci_va); + /* the __pa macro is intended for region 7 on IA64, so it + doesn't work for region 6 */ + /* pci_pa = __pa(pci_va); */ + /* should be replaced by __tpa or equivalent (preferably a + generic equivalent) */ + pci_pa = pci_va & ~0xe000000000000000ul; + DPRINTF("PCI base at physical address %lx\n", pci_pa); -#if DEBUG_PCIBA - printf("pciba_unmap(%V,vt=%x)\n", dev, vt); -#endif + /* there are various arch-specific versions of this function + defined in linux/drivers/char/mem.c, but it would be nice + if all architectures put it in pgtable.h. it's defined + there for ia64.... */ + vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); - /* If this is a userDMA buffer, - * make a note that it has been unmapped - * so it can be released. - */ - if (soft->comm->dmap) { - pciba_dma_t dmap; - - pciba_soft_lock(soft); - for (dmap = soft->comm->dmap; dmap != NULL; dmap = dmap->next) - if (handle == dmap->handle) { - dmap->handle = 0; - pciba_soft_unlock(soft); -#if DEBUG_PCIBA - printf("pciba: unmapped dma at kaddr 0x%x via handle 0x%x\n", - dmap->kaddr, handle); -#endif - return 0; /* found userPCI */ - } - pciba_soft_unlock(soft); - } - map = pciba_map_pop_hdl(bus, handle); - if (map == NULL) - return EINVAL; /* no match */ + vma->vm_flags |= VM_NONCACHED | VM_RESERVED | VM_IO; - if (map->map) - pciio_piomap_free(map->map); - DEL(map); - - return (0); /* all done OK */ + return io_remap_page_range(vma->vm_start, pci_pa, + vma->vm_end-vma->vm_start, + vma->vm_page_prot); } -#if ULI -void -pciba_clearuli(struct uli *uli) + +static int +mmap_kernel_address(struct vm_area_struct * vma, void * kernel_va) { - pciio_intr_t intr = (pciio_intr_t) uli->teardownarg1; + unsigned long kernel_pa; -#if DEBUG_PCIBA - printf("pciba_clearuli(0x%x)\n", uli); -#endif + TRACE(); - pciio_intr_disconnect(intr); - pciio_intr_free(intr); - atomic_dec(&pciba_prevent_unload); -} + DPRINTF("vma->vm_start is %lx\n", vma->vm_start); + DPRINTF("vma->vm_end is %lx\n", vma->vm_end); -void -pciba_intr(intr_arg_t arg) -{ - struct uli *uli = (struct uli *) arg; - int ulinum = uli->index; + /* the size of the vma doesn't necessarily correspond to the + size specified in the mmap call. So we can't really do any + kind of sanity check here. This is a dangerous driver, and + it's very easy for a user process to kill the machine. */ - extern void frs_handle_uli(void); + DPRINTF("mapping virtual address %p\n", kernel_va); + kernel_pa = __pa(kernel_va); + DPRINTF("mapping physical address %lx\n", kernel_pa); - if (ulinum >= 0 && ulinum < MAX_ULIS) { - uli_callup(ulinum); + vma->vm_flags |= VM_NONCACHED | VM_RESERVED | VM_IO; - if (private.p_frs_flags) - frs_handle_uli(); - } -} -#endif -#endif /* LATER - undef as we implement each routine */ + return remap_page_range(vma->vm_start, kernel_pa, + vma->vm_end-vma->vm_start, + vma->vm_page_prot); +} diff -Nru a/arch/ia64/sn/io/pcibr.c b/arch/ia64/sn/io/pcibr.c --- a/arch/ia64/sn/io/pcibr.c Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,9824 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. 
See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -#ifdef BRINGUP -int NeedXbridgeSwap = 0; -#endif - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include -#include -#endif - -#ifdef __ia64 -#define rmallocmap atemapalloc -#define rmfreemap atemapfree -#define rmfree atefree -#define rmalloc atealloc -#endif - -#undef PCIBR_ATE_DEBUG -#if defined(BRINGUP) -#if 0 -#define DEBUG 1 /* To avoid lots of bad printk() formats leave off */ -#endif -#define PCI_DEBUG 1 -#define ATTACH_DEBUG 1 -#define PCIBR_SOFT_LIST 1 -#endif - -#ifndef LOCAL -#define LOCAL static -#endif - -/* - * Macros related to the Lucent USS 302/312 usb timeout workaround. It - * appears that if the lucent part can get into a retry loop if it sees a - * DAC on the bus during a pio read retry. The loop is broken after about - * 1ms, so we need to set up bridges holding this part to allow at least - * 1ms for pio. - */ - -#define USS302_TIMEOUT_WAR - -#ifdef USS302_TIMEOUT_WAR -#include -#define LUCENT_USBHC_VENDOR_ID_NUM 0x11c1 -#define LUCENT_USBHC302_DEVICE_ID_NUM 0x5801 -#define LUCENT_USBHC312_DEVICE_ID_NUM 0x5802 -#define USS302_BRIDGE_TIMEOUT_HLD 4 -#endif - -#define PCIBR_LLP_CONTROL_WAR -#if defined (PCIBR_LLP_CONTROL_WAR) -int pcibr_llp_control_war_cnt; -#endif /* PCIBR_LLP_CONTROL_WAR */ - -#define NEWAf(ptr,n,f) (ptr = kmem_zalloc((n)*sizeof (*(ptr)), (f&PCIIO_NOSLEEP)?KM_NOSLEEP:KM_SLEEP)) -#define NEWA(ptr,n) (ptr = kmem_zalloc((n)*sizeof (*(ptr)), KM_SLEEP)) -#define DELA(ptr,n) (kfree(ptr)) - -#define NEWf(ptr,f) NEWAf(ptr,1,f) -#define NEW(ptr) NEWA(ptr,1) -#define DEL(ptr) DELA(ptr,1) - -int pcibr_devflag = D_MP; - -#ifdef LATER -#define F(s,n) { 1l<<(s),-(s), n } - -struct reg_desc bridge_int_status_desc[] = -{ - F(31, "MULTI_ERR"), - F(30, "PMU_ESIZE_EFAULT"), - F(29, "UNEXPECTED_RESP"), - F(28, "BAD_XRESP_PACKET"), - F(27, "BAD_XREQ_PACKET"), - F(26, "RESP_XTALK_ERROR"), - F(25, "REQ_XTALK_ERROR"), - F(24, "INVALID_ADDRESS"), - F(23, "UNSUPPORTED_XOP"), - F(22, "XREQ_FIFO_OFLOW"), - F(21, "LLP_REC_SNERROR"), - F(20, "LLP_REC_CBERROR"), - F(19, "LLP_RCTY"), - F(18, "LLP_TX_RETRY"), - F(17, "LLP_TCTY"), - F(16, "SSRAM_PERR"), - F(15, "PCI_ABORT"), - F(14, "PCI_PARITY"), - F(13, "PCI_SERR"), - F(12, "PCI_PERR"), - F(11, "PCI_MASTER_TOUT"), - F(10, "PCI_RETRY_CNT"), - F(9, "XREAD_REQ_TOUT"), - F(8, "GIO_BENABLE_ERR"), - F(7, "INT7"), - F(6, "INT6"), - F(5, "INT5"), - F(4, "INT4"), - F(3, "INT3"), - F(2, "INT2"), - F(1, "INT1"), - F(0, "INT0"), - {0} -}; - -struct reg_values space_v[] = -{ - {PCIIO_SPACE_NONE, "none"}, - {PCIIO_SPACE_ROM, "ROM"}, - {PCIIO_SPACE_IO, "I/O"}, - {PCIIO_SPACE_MEM, "MEM"}, - {PCIIO_SPACE_MEM32, "MEM(32)"}, - {PCIIO_SPACE_MEM64, "MEM(64)"}, - {PCIIO_SPACE_CFG, "CFG"}, - {PCIIO_SPACE_WIN(0), "WIN(0)"}, - {PCIIO_SPACE_WIN(1), "WIN(1)"}, - {PCIIO_SPACE_WIN(2), "WIN(2)"}, - {PCIIO_SPACE_WIN(3), "WIN(3)"}, - {PCIIO_SPACE_WIN(4), "WIN(4)"}, - {PCIIO_SPACE_WIN(5), "WIN(5)"}, - {PCIIO_SPACE_BAD, "BAD"}, - {0} -}; - -struct reg_desc space_desc[] = -{ - {0xFF, 0, "space", 0, space_v}, - {0} -}; - -#if DEBUG -#define device_desc device_bits -LOCAL struct reg_desc 
device_bits[] = -{ - {BRIDGE_DEV_ERR_LOCK_EN, 0, "ERR_LOCK_EN"}, - {BRIDGE_DEV_PAGE_CHK_DIS, 0, "PAGE_CHK_DIS"}, - {BRIDGE_DEV_FORCE_PCI_PAR, 0, "FORCE_PCI_PAR"}, - {BRIDGE_DEV_VIRTUAL_EN, 0, "VIRTUAL_EN"}, - {BRIDGE_DEV_PMU_WRGA_EN, 0, "PMU_WRGA_EN"}, - {BRIDGE_DEV_DIR_WRGA_EN, 0, "DIR_WRGA_EN"}, - {BRIDGE_DEV_DEV_SIZE, 0, "DEV_SIZE"}, - {BRIDGE_DEV_RT, 0, "RT"}, - {BRIDGE_DEV_SWAP_PMU, 0, "SWAP_PMU"}, - {BRIDGE_DEV_SWAP_DIR, 0, "SWAP_DIR"}, - {BRIDGE_DEV_PREF, 0, "PREF"}, - {BRIDGE_DEV_PRECISE, 0, "PRECISE"}, - {BRIDGE_DEV_COH, 0, "COH"}, - {BRIDGE_DEV_BARRIER, 0, "BARRIER"}, - {BRIDGE_DEV_GBR, 0, "GBR"}, - {BRIDGE_DEV_DEV_SWAP, 0, "DEV_SWAP"}, - {BRIDGE_DEV_DEV_IO_MEM, 0, "DEV_IO_MEM"}, - {BRIDGE_DEV_OFF_MASK, BRIDGE_DEV_OFF_ADDR_SHFT, "DEV_OFF", "%x"}, - {0} -}; -#endif /* DEBUG */ - -#ifdef SUPPORT_PRINTING_R_FORMAT -LOCAL struct reg_values xio_cmd_pactyp[] = -{ - {0x0, "RdReq"}, - {0x1, "RdResp"}, - {0x2, "WrReqWithResp"}, - {0x3, "WrResp"}, - {0x4, "WrReqNoResp"}, - {0x5, "Reserved(5)"}, - {0x6, "FetchAndOp"}, - {0x7, "Reserved(7)"}, - {0x8, "StoreAndOp"}, - {0x9, "Reserved(9)"}, - {0xa, "Reserved(a)"}, - {0xb, "Reserved(b)"}, - {0xc, "Reserved(c)"}, - {0xd, "Reserved(d)"}, - {0xe, "SpecialReq"}, - {0xf, "SpecialResp"}, - {0} -}; - -LOCAL struct reg_desc xio_cmd_bits[] = -{ - {WIDGET_DIDN, -28, "DIDN", "%x"}, - {WIDGET_SIDN, -24, "SIDN", "%x"}, - {WIDGET_PACTYP, -20, "PACTYP", 0, xio_cmd_pactyp}, - {WIDGET_TNUM, -15, "TNUM", "%x"}, - {WIDGET_COHERENT, 0, "COHERENT"}, - {WIDGET_DS, 0, "DS"}, - {WIDGET_GBR, 0, "GBR"}, - {WIDGET_VBPM, 0, "VBPM"}, - {WIDGET_ERROR, 0, "ERROR"}, - {WIDGET_BARRIER, 0, "BARRIER"}, - {0} -}; -#endif /* SUPPORT_PRINTING_R_FORMAT */ - -#if PCIBR_FREEZE_TIME || PCIBR_ATE_DEBUG -LOCAL struct reg_desc ate_bits[] = -{ - {0xFFFF000000000000ull, -48, "RMF", "%x"}, - {~(IOPGSIZE - 1) & /* may trim off some low bits */ - 0x0000FFFFFFFFF000ull, 0, "XIO", "%x"}, - {0x0000000000000F00ull, -8, "port", "%x"}, - {0x0000000000000010ull, 0, "Barrier"}, - {0x0000000000000008ull, 0, "Prefetch"}, - {0x0000000000000004ull, 0, "Precise"}, - {0x0000000000000002ull, 0, "Coherent"}, - {0x0000000000000001ull, 0, "Valid"}, - {0} -}; -#endif - -#if PCIBR_ATE_DEBUG -LOCAL struct reg_values ssram_sizes[] = -{ - {BRIDGE_CTRL_SSRAM_512K, "512k"}, - {BRIDGE_CTRL_SSRAM_128K, "128k"}, - {BRIDGE_CTRL_SSRAM_64K, "64k"}, - {BRIDGE_CTRL_SSRAM_1K, "1k"}, - {0} -}; - -LOCAL struct reg_desc control_bits[] = -{ - {BRIDGE_CTRL_FLASH_WR_EN, 0, "FLASH_WR_EN"}, - {BRIDGE_CTRL_EN_CLK50, 0, "EN_CLK50"}, - {BRIDGE_CTRL_EN_CLK40, 0, "EN_CLK40"}, - {BRIDGE_CTRL_EN_CLK33, 0, "EN_CLK33"}, - {BRIDGE_CTRL_RST_MASK, -24, "RST", "%x"}, - {BRIDGE_CTRL_IO_SWAP, 0, "IO_SWAP"}, - {BRIDGE_CTRL_MEM_SWAP, 0, "MEM_SWAP"}, - {BRIDGE_CTRL_PAGE_SIZE, 0, "PAGE_SIZE"}, - {BRIDGE_CTRL_SS_PAR_BAD, 0, "SS_PAR_BAD"}, - {BRIDGE_CTRL_SS_PAR_EN, 0, "SS_PAR_EN"}, - {BRIDGE_CTRL_SSRAM_SIZE_MASK, 0, "SSRAM_SIZE", 0, ssram_sizes}, - {BRIDGE_CTRL_F_BAD_PKT, 0, "F_BAD_PKT"}, - {BRIDGE_CTRL_LLP_XBAR_CRD_MASK, -12, "LLP_XBAR_CRD", "%d"}, - {BRIDGE_CTRL_CLR_RLLP_CNT, 0, "CLR_RLLP_CNT"}, - {BRIDGE_CTRL_CLR_TLLP_CNT, 0, "CLR_TLLP_CNT"}, - {BRIDGE_CTRL_SYS_END, 0, "SYS_END"}, - {BRIDGE_CTRL_MAX_TRANS_MASK, -4, "MAX_TRANS", "%d"}, - {BRIDGE_CTRL_WIDGET_ID_MASK, 0, "WIDGET_ID", "%x"}, - {0} -}; -#endif -#endif /* LATER */ - -/* kbrick widgetnum-to-bus layout */ -int p_busnum[MAX_PORT_NUM] = { /* widget# */ - 0, 0, 0, 0, 0, 0, 0, 0, /* 0x0 - 0x7 */ - 2, /* 0x8 */ - 1, /* 0x9 */ - 0, 0, /* 0xa - 0xb */ - 5, /* 0xc */ - 6, /* 0xd */ - 4, /* 0xe */ - 3, 
/* 0xf */ -}; - -/* - * Additional PIO spaces per slot are - * recorded in this structure. - */ -struct pciio_piospace_s { - pciio_piospace_t next; /* another space for this device */ - char free; /* 1 if free, 0 if in use */ - pciio_space_t space; /* Which space is in use */ - iopaddr_t start; /* Starting address of the PIO space */ - size_t count; /* size of PIO space */ -}; - -/* Use io spin locks. This ensures that all the PIO writes from a particular - * CPU to a particular IO device are synched before the start of the next - * set of PIO operations to the same device. - */ -#define pcibr_lock(pcibr_soft) io_splock(&pcibr_soft->bs_lock) -#define pcibr_unlock(pcibr_soft,s) io_spunlock(&pcibr_soft->bs_lock,s) - -#if PCIBR_SOFT_LIST -typedef struct pcibr_list_s *pcibr_list_p; -struct pcibr_list_s { - pcibr_list_p bl_next; - pcibr_soft_t bl_soft; - devfs_handle_t bl_vhdl; -}; -pcibr_list_p pcibr_list = 0; -#endif - -typedef volatile unsigned *cfg_p; -typedef volatile bridgereg_t *reg_p; - -#define INFO_LBL_PCIBR_ASIC_REV "_pcibr_asic_rev" - -#define PCIBR_D64_BASE_UNSET (0xFFFFFFFFFFFFFFFF) -#define PCIBR_D32_BASE_UNSET (0xFFFFFFFF) - -#define PCIBR_VALID_SLOT(s) (s < 8) - -#ifdef SN_XXX -extern int hub_device_flags_set(devfs_handle_t widget_dev, - hub_widget_flags_t flags); -#endif -extern pciio_dmamap_t get_free_pciio_dmamap(devfs_handle_t); - -/* - * This is the file operation table for the pcibr driver. - * As each of the functions are implemented, put the - * appropriate function name below. - */ -struct file_operations pcibr_fops = { - owner: THIS_MODULE, - llseek: NULL, - read: NULL, - write: NULL, - readdir: NULL, - poll: NULL, - ioctl: NULL, - mmap: NULL, - open: NULL, - flush: NULL, - release: NULL, - fsync: NULL, - fasync: NULL, - lock: NULL, - readv: NULL, - writev: NULL -}; - -extern devfs_handle_t hwgraph_root; -extern graph_error_t hwgraph_vertex_unref(devfs_handle_t vhdl); -extern int cap_able(uint64_t x); -extern uint64_t rmalloc(struct map *mp, size_t size); -extern void rmfree(struct map *mp, size_t size, uint64_t a); -extern int hwgraph_vertex_name_get(devfs_handle_t vhdl, char *buf, uint buflen); -extern long atoi(register char *p); -extern void *swap_ptr(void **loc, void *new); -extern char *dev_to_name(devfs_handle_t dev, char *buf, uint buflen); -extern cnodeid_t nodevertex_to_cnodeid(devfs_handle_t vhdl); -extern graph_error_t hwgraph_edge_remove(devfs_handle_t from, char *name, devfs_handle_t *toptr); -extern struct map *rmallocmap(uint64_t mapsiz); -extern void rmfreemap(struct map *mp); -extern int compare_and_swap_ptr(void **location, void *old_ptr, void *new_ptr); -extern int io_path_map_widget(devfs_handle_t vertex); - - - -/* ===================================================================== - * Function Table of Contents - * - * The order of functions in this file has stopped - * making much sense. We might want to take a look - * at it some time and bring back some sanity, or - * perhaps bust this file into smaller chunks. 
- */ - -LOCAL void do_pcibr_rrb_clear(bridge_t *, int); -LOCAL void do_pcibr_rrb_flush(bridge_t *, int); -LOCAL int do_pcibr_rrb_count_valid(bridge_t *, pciio_slot_t); -LOCAL int do_pcibr_rrb_count_avail(bridge_t *, pciio_slot_t); -LOCAL int do_pcibr_rrb_alloc(bridge_t *, pciio_slot_t, int); -LOCAL int do_pcibr_rrb_free(bridge_t *, pciio_slot_t, int); - -LOCAL void do_pcibr_rrb_autoalloc(pcibr_soft_t, int, int); - -int pcibr_wrb_flush(devfs_handle_t); -int pcibr_rrb_alloc(devfs_handle_t, int *, int *); -int pcibr_rrb_check(devfs_handle_t, int *, int *, int *, int *); -int pcibr_alloc_all_rrbs(devfs_handle_t, int, int, int, int, int, int, int, int, int); -void pcibr_rrb_flush(devfs_handle_t); - -LOCAL int pcibr_try_set_device(pcibr_soft_t, pciio_slot_t, unsigned, bridgereg_t); -void pcibr_release_device(pcibr_soft_t, pciio_slot_t, bridgereg_t); - -LOCAL void pcibr_clearwidint(bridge_t *); -LOCAL void pcibr_setwidint(xtalk_intr_t); -LOCAL int pcibr_probe_slot(bridge_t *, cfg_p, unsigned *); - -void pcibr_init(void); -int pcibr_attach(devfs_handle_t); -int pcibr_detach(devfs_handle_t); -int pcibr_open(devfs_handle_t *, int, int, cred_t *); -int pcibr_close(devfs_handle_t, int, int, cred_t *); -int pcibr_map(devfs_handle_t, vhandl_t *, off_t, size_t, uint); -int pcibr_unmap(devfs_handle_t, vhandl_t *); -int pcibr_ioctl(devfs_handle_t, int, void *, int, struct cred *, int *); - -void pcibr_freeblock_sub(iopaddr_t *, iopaddr_t *, iopaddr_t, size_t); - -LOCAL int pcibr_init_ext_ate_ram(bridge_t *); -LOCAL int pcibr_ate_alloc(pcibr_soft_t, int); -LOCAL void pcibr_ate_free(pcibr_soft_t, int, int); - -LOCAL pcibr_info_t pcibr_info_get(devfs_handle_t); -LOCAL pcibr_info_t pcibr_device_info_new(pcibr_soft_t, pciio_slot_t, pciio_function_t, pciio_vendor_id_t, pciio_device_id_t); -LOCAL void pcibr_device_info_free(devfs_handle_t, pciio_slot_t); -LOCAL iopaddr_t pcibr_addr_pci_to_xio(devfs_handle_t, pciio_slot_t, pciio_space_t, iopaddr_t, size_t, unsigned); - -pcibr_piomap_t pcibr_piomap_alloc(devfs_handle_t, device_desc_t, pciio_space_t, iopaddr_t, size_t, size_t, unsigned); -void pcibr_piomap_free(pcibr_piomap_t); -caddr_t pcibr_piomap_addr(pcibr_piomap_t, iopaddr_t, size_t); -void pcibr_piomap_done(pcibr_piomap_t); -caddr_t pcibr_piotrans_addr(devfs_handle_t, device_desc_t, pciio_space_t, iopaddr_t, size_t, unsigned); -iopaddr_t pcibr_piospace_alloc(devfs_handle_t, device_desc_t, pciio_space_t, size_t, size_t); -void pcibr_piospace_free(devfs_handle_t, pciio_space_t, iopaddr_t, size_t); - -LOCAL iopaddr_t pcibr_flags_to_d64(unsigned, pcibr_soft_t); -LOCAL bridge_ate_t pcibr_flags_to_ate(unsigned); - -pcibr_dmamap_t pcibr_dmamap_alloc(devfs_handle_t, device_desc_t, size_t, unsigned); -void pcibr_dmamap_free(pcibr_dmamap_t); -LOCAL bridge_ate_p pcibr_ate_addr(pcibr_soft_t, int); -LOCAL iopaddr_t pcibr_addr_xio_to_pci(pcibr_soft_t, iopaddr_t, size_t); -iopaddr_t pcibr_dmamap_addr(pcibr_dmamap_t, paddr_t, size_t); -alenlist_t pcibr_dmamap_list(pcibr_dmamap_t, alenlist_t, unsigned); -void pcibr_dmamap_done(pcibr_dmamap_t); -cnodeid_t pcibr_get_dmatrans_node(devfs_handle_t); -iopaddr_t pcibr_dmatrans_addr(devfs_handle_t, device_desc_t, paddr_t, size_t, unsigned); -alenlist_t pcibr_dmatrans_list(devfs_handle_t, device_desc_t, alenlist_t, unsigned); -void pcibr_dmamap_drain(pcibr_dmamap_t); -void pcibr_dmaaddr_drain(devfs_handle_t, paddr_t, size_t); -void pcibr_dmalist_drain(devfs_handle_t, alenlist_t); -iopaddr_t pcibr_dmamap_pciaddr_get(pcibr_dmamap_t); - -static unsigned pcibr_intr_bits(pciio_info_t info, 
pciio_intr_line_t lines); -pcibr_intr_t pcibr_intr_alloc(devfs_handle_t, device_desc_t, pciio_intr_line_t, devfs_handle_t); -void pcibr_intr_free(pcibr_intr_t); -LOCAL void pcibr_setpciint(xtalk_intr_t); -int pcibr_intr_connect(pcibr_intr_t, intr_func_t, intr_arg_t, void *); -void pcibr_intr_disconnect(pcibr_intr_t); - -devfs_handle_t pcibr_intr_cpu_get(pcibr_intr_t); -void pcibr_xintr_preset(void *, int, xwidgetnum_t, iopaddr_t, xtalk_intr_vector_t); -void pcibr_intr_func(intr_arg_t); - -LOCAL void print_bridge_errcmd(uint32_t, char *); - -void pcibr_error_dump(pcibr_soft_t); -uint32_t pcibr_errintr_group(uint32_t); -LOCAL void pcibr_pioerr_check(pcibr_soft_t); -LOCAL void pcibr_error_intr_handler(intr_arg_t); - -LOCAL int pcibr_addr_toslot(pcibr_soft_t, iopaddr_t, pciio_space_t *, iopaddr_t *, pciio_function_t *); -LOCAL void pcibr_error_cleanup(pcibr_soft_t, int); -void pcibr_device_disable(pcibr_soft_t, int); -LOCAL int pcibr_pioerror(pcibr_soft_t, int, ioerror_mode_t, ioerror_t *); -int pcibr_dmard_error(pcibr_soft_t, int, ioerror_mode_t, ioerror_t *); -int pcibr_dmawr_error(pcibr_soft_t, int, ioerror_mode_t, ioerror_t *); -LOCAL int pcibr_error_handler(error_handler_arg_t, int, ioerror_mode_t, ioerror_t *); -int pcibr_error_devenable(devfs_handle_t, int); - -void pcibr_provider_startup(devfs_handle_t); -void pcibr_provider_shutdown(devfs_handle_t); - -int pcibr_reset(devfs_handle_t); -pciio_endian_t pcibr_endian_set(devfs_handle_t, pciio_endian_t, pciio_endian_t); -int pcibr_priority_bits_set(pcibr_soft_t, pciio_slot_t, pciio_priority_t); -pciio_priority_t pcibr_priority_set(devfs_handle_t, pciio_priority_t); -int pcibr_device_flags_set(devfs_handle_t, pcibr_device_flags_t); - -LOCAL cfg_p pcibr_config_addr(devfs_handle_t, unsigned); -uint64_t pcibr_config_get(devfs_handle_t, unsigned, unsigned); -LOCAL uint64_t do_pcibr_config_get(cfg_p, unsigned, unsigned); -void pcibr_config_set(devfs_handle_t, unsigned, unsigned, uint64_t); -LOCAL void do_pcibr_config_set(cfg_p, unsigned, unsigned, uint64_t); - -LOCAL pcibr_hints_t pcibr_hints_get(devfs_handle_t, int); -void pcibr_hints_fix_rrbs(devfs_handle_t); -void pcibr_hints_dualslot(devfs_handle_t, pciio_slot_t, pciio_slot_t); -void pcibr_hints_intr_bits(devfs_handle_t, pcibr_intr_bits_f *); -void pcibr_set_rrb_callback(devfs_handle_t, rrb_alloc_funct_t); -void pcibr_hints_handsoff(devfs_handle_t); -void pcibr_hints_subdevs(devfs_handle_t, pciio_slot_t, ulong); - -LOCAL int pcibr_slot_info_init(devfs_handle_t,pciio_slot_t); -LOCAL int pcibr_slot_info_free(devfs_handle_t,pciio_slot_t); - -#ifdef LATER -LOCAL int pcibr_slot_info_return(pcibr_soft_t, pciio_slot_t, - pcibr_slot_info_resp_t); -LOCAL void pcibr_slot_func_info_return(pcibr_info_h, int, - pcibr_slot_func_info_resp_t); -#endif /* LATER */ - -LOCAL int pcibr_slot_addr_space_init(devfs_handle_t,pciio_slot_t); -LOCAL int pcibr_slot_device_init(devfs_handle_t, pciio_slot_t); -LOCAL int pcibr_slot_guest_info_init(devfs_handle_t,pciio_slot_t); -LOCAL int pcibr_slot_initial_rrb_alloc(devfs_handle_t,pciio_slot_t); -LOCAL int pcibr_slot_call_device_attach(devfs_handle_t, - pciio_slot_t, int); -LOCAL int pcibr_slot_call_device_detach(devfs_handle_t, - pciio_slot_t, int); - -LOCAL int pcibr_slot_detach(devfs_handle_t, pciio_slot_t, int); -LOCAL int pcibr_is_slot_sys_critical(devfs_handle_t, pciio_slot_t); -#ifdef LATER -LOCAL int pcibr_slot_query(devfs_handle_t, pcibr_slot_info_req_t); -#endif - -/* ===================================================================== - * RRB management - */ - 
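The routines below all operate on the two bridge->b_rrb_map registers, which pack one 4-bit field per RRB (eight RRBs per register). The layout sketched here is inferred from the masks used in this file (BRIDGE_RRB_EN, BRIDGE_RRB_PDEV and the literal 0x8 / 0xc / "8 + slot / 2" values); the bridge header is the authoritative definition, and the helper name is illustrative only:

        // decode one nibble of b_rrb_map[odd/even].reg, as the code below
        // appears to interpret it
        static void rrb_nibble_decode(unsigned int nibble,
                                      int *valid, int *vchan, int *pdev)
        {
                *valid = (nibble >> 3) & 1;     // 0x8: RRB is assigned
                *vchan = (nibble >> 2) & 1;     // 0x4: assigned to the virtual channel
                *pdev  = nibble & 3;            // low bits: owning device, i.e. slot / 2
        }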
-#define LSBIT(word) ((word) &~ ((word)-1)) - -#define PCIBR_RRB_SLOT_VIRTUAL 8 - -LOCAL void -do_pcibr_rrb_clear(bridge_t *bridge, int rrb) -{ - bridgereg_t status; - - /* bridge_lock must be held; - * this RRB must be disabled. - */ - - /* wait until RRB has no outstanduing XIO packets. */ - while ((status = bridge->b_resp_status) & BRIDGE_RRB_INUSE(rrb)) { - ; /* XXX- beats on bridge. bad idea? */ - } - - /* if the RRB has data, drain it. */ - if (status & BRIDGE_RRB_VALID(rrb)) { - bridge->b_resp_clear = BRIDGE_RRB_CLEAR(rrb); - - /* wait until RRB is no longer valid. */ - while ((status = bridge->b_resp_status) & BRIDGE_RRB_VALID(rrb)) { - ; /* XXX- beats on bridge. bad idea? */ - } - } -} - -LOCAL void -do_pcibr_rrb_flush(bridge_t *bridge, int rrbn) -{ - reg_p rrbp = &bridge->b_rrb_map[rrbn & 1].reg; - bridgereg_t rrbv; - int shft = 4 * (rrbn >> 1); - unsigned ebit = BRIDGE_RRB_EN << shft; - - rrbv = *rrbp; - if (rrbv & ebit) - *rrbp = rrbv & ~ebit; - - do_pcibr_rrb_clear(bridge, rrbn); - - if (rrbv & ebit) - *rrbp = rrbv; -} - -/* - * pcibr_rrb_count_valid: count how many RRBs are - * marked valid for the specified PCI slot on this - * bridge. - * - * NOTE: The "slot" parameter for all pcibr_rrb - * management routines must include the "virtual" - * bit; when manageing both the normal and the - * virtual channel, separate calls to these - * routines must be made. To denote the virtual - * channel, add PCIBR_RRB_SLOT_VIRTUAL to the slot - * number. - * - * IMPL NOTE: The obvious algorithm is to iterate - * through the RRB fields, incrementing a count if - * the RRB is valid and matches the slot. However, - * it is much simpler to use an algorithm derived - * from the "partitioned add" idea. First, XOR in a - * pattern such that the fields that match this - * slot come up "all ones" and all other fields - * have zeros in the mismatching bits. Then AND - * together the bits in the field, so we end up - * with one bit turned on for each field that - * matched. Now we need to count these bits. This - * can be done either with a series of shift/add - * instructions or by using "tmp % 15"; I expect - * that the cascaded shift/add will be faster. - */ - -LOCAL int -do_pcibr_rrb_count_valid(bridge_t *bridge, - pciio_slot_t slot) -{ - bridgereg_t tmp; - - tmp = bridge->b_rrb_map[slot & 1].reg; - tmp ^= 0x11111111 * (7 - slot / 2); - tmp &= (0xCCCCCCCC & tmp) >> 2; - tmp &= (0x22222222 & tmp) >> 1; - tmp += tmp >> 4; - tmp += tmp >> 8; - tmp += tmp >> 16; - return tmp & 15; -} - -/* - * do_pcibr_rrb_count_avail: count how many RRBs are - * available to be allocated for the specified slot. - * - * IMPL NOTE: similar to the above, except we are - * just counting how many fields have the valid bit - * turned off. - */ -LOCAL int -do_pcibr_rrb_count_avail(bridge_t *bridge, - pciio_slot_t slot) -{ - bridgereg_t tmp; - - tmp = bridge->b_rrb_map[slot & 1].reg; - tmp = (0x88888888 & ~tmp) >> 3; - tmp += tmp >> 4; - tmp += tmp >> 8; - tmp += tmp >> 16; - return tmp & 15; -} - -/* - * do_pcibr_rrb_alloc: allocate some additional RRBs - * for the specified slot. Returns -1 if there were - * insufficient free RRBs to satisfy the request, - * or 0 if the request was fulfilled. - * - * Note that if a request can be partially filled, - * it will be, even if we return failure. - * - * IMPL NOTE: again we avoid iterating across all - * the RRBs; instead, we form up a word containing - * one bit for each free RRB, then peel the bits - * off from the low end. 
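The same partitioned-add idea also drives do_pcibr_rrb_count_valid() above; restated as a stand-alone helper it looks like the sketch below. The function name is made up, and "owner" is the three-bit code stored in a valid nibble (slot / 2, plus 4 when counting the virtual channel):

        static int rrb_count_owned(unsigned int reg, int owner)
        {
                unsigned int tmp = reg;

                tmp ^= 0x11111111 * (7 - owner); // matching nibbles become 0xF
                tmp &= (0xCCCCCCCC & tmp) >> 2;  // AND each nibble's bits together...
                tmp &= (0x22222222 & tmp) >> 1;  // ...down to one bit per match
                tmp += tmp >> 4;                 // cascaded shift/add instead of % 15
                tmp += tmp >> 8;
                tmp += tmp >> 16;
                return tmp & 15;                 // RRBs currently owned by 'owner'
        }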
- */ -LOCAL int -do_pcibr_rrb_alloc(bridge_t *bridge, - pciio_slot_t slot, - int more) -{ - int rv = 0; - bridgereg_t reg, tmp, bit; - - reg = bridge->b_rrb_map[slot & 1].reg; - tmp = (0x88888888 & ~reg) >> 3; - while (more-- > 0) { - bit = LSBIT(tmp); - if (!bit) { - rv = -1; - break; - } - tmp &= ~bit; - reg = ((reg & ~(bit * 15)) | (bit * (8 + slot / 2))); - } - bridge->b_rrb_map[slot & 1].reg = reg; - return rv; -} - -/* - * do_pcibr_rrb_free: release some of the RRBs that - * have been allocated for the specified - * slot. Returns zero for success, or negative if - * it was unable to free that many RRBs. - * - * IMPL NOTE: We form up a bit for each RRB - * allocated to the slot, aligned with the VALID - * bitfield this time; then we peel bits off one at - * a time, releasing the corresponding RRB. - */ -LOCAL int -do_pcibr_rrb_free(bridge_t *bridge, - pciio_slot_t slot, - int less) -{ - int rv = 0; - bridgereg_t reg, tmp, clr, bit; - int i; - - clr = 0; - reg = bridge->b_rrb_map[slot & 1].reg; - - /* This needs to be done otherwise the rrb's on the virtual channel - * for this slot won't be freed !! - */ - tmp = reg & 0xbbbbbbbb; - - tmp ^= (0x11111111 * (7 - slot / 2)); - tmp &= (0x33333333 & tmp) << 2; - tmp &= (0x44444444 & tmp) << 1; - while (less-- > 0) { - bit = LSBIT(tmp); - if (!bit) { - rv = -1; - break; - } - tmp &= ~bit; - reg &= ~bit; - clr |= bit; - } - bridge->b_rrb_map[slot & 1].reg = reg; - - for (i = 0; i < 8; i++) - if (clr & (8 << (4 * i))) - do_pcibr_rrb_clear(bridge, (2 * i) + (slot & 1)); - - return rv; -} - -LOCAL void -do_pcibr_rrb_autoalloc(pcibr_soft_t pcibr_soft, - int slot, - int more_rrbs) -{ - bridge_t *bridge = pcibr_soft->bs_base; - int got; - - for (got = 0; got < more_rrbs; ++got) { - if (pcibr_soft->bs_rrb_res[slot & 7] > 0) - pcibr_soft->bs_rrb_res[slot & 7]--; - else if (pcibr_soft->bs_rrb_avail[slot & 1] > 0) - pcibr_soft->bs_rrb_avail[slot & 1]--; - else - break; - if (do_pcibr_rrb_alloc(bridge, slot, 1) < 0) - break; -#if PCIBR_RRB_DEBUG - printk( "do_pcibr_rrb_autoalloc: add one to slot %d%s\n", - slot & 7, slot & 8 ? "v" : ""); -#endif - pcibr_soft->bs_rrb_valid[slot]++; - } -#if PCIBR_RRB_DEBUG - printk("%s: %d+%d free RRBs. Allocation list:\n", pcibr_soft->bs_name, - pcibr_soft->bs_rrb_avail[0], - pcibr_soft->bs_rrb_avail[1]); - for (slot = 0; slot < 8; ++slot) - printk("\t%d+%d+%d", - 0xFFF & pcibr_soft->bs_rrb_valid[slot], - 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], - pcibr_soft->bs_rrb_res[slot]); - printk("\n"); -#endif -} - -/* - * Device driver interface to flush the write buffers for a specified - * device hanging off the bridge. - */ -int -pcibr_wrb_flush(devfs_handle_t pconn_vhdl) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - bridge_t *bridge = pcibr_soft->bs_base; - volatile bridgereg_t *wrb_flush; - - wrb_flush = &(bridge->b_wr_req_buf[pciio_slot].reg); - while (*wrb_flush); - - return(0); -} -/* - * Device driver interface to request RRBs for a specified device - * hanging off a Bridge. The driver requests the total number of - * RRBs it would like for the normal channel (vchan0) and for the - * "virtual channel" (vchan1). The actual number allocated to each - * channel is returned. - * - * If we cannot allocate at least one RRB to a channel that needs - * at least one, return -1 (failure). Otherwise, satisfy the request - * as best we can and return 0. 
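As a usage sketch of this interface: a driver that wants four RRBs for ordinary DMA and none for the virtual channel would do something like the following at attach time (function name and counts are illustrative; error handling is the caller's business):

        static int example_attach_rrbs(devfs_handle_t pconn_vhdl)
        {
                int vchan0 = 4;         // ordinary DMA responses
                int vchan1 = 0;         // nothing planned for the virtual channel

                if (pcibr_rrb_alloc(pconn_vhdl, &vchan0, &vchan1) < 0)
                        return -1;      // a needed channel could not get even one RRB
                // vchan0/vchan1 now hold what was actually granted
                return 0;
        }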
- */ -int -pcibr_rrb_alloc(devfs_handle_t pconn_vhdl, - int *count_vchan0, - int *count_vchan1) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - bridge_t *bridge = pcibr_soft->bs_base; - int desired_vchan0; - int desired_vchan1; - int orig_vchan0; - int orig_vchan1; - int delta_vchan0; - int delta_vchan1; - int final_vchan0; - int final_vchan1; - int avail_rrbs; - unsigned long s; - int error; - - /* - * TBD: temper request with admin info about RRB allocation, - * and according to demand from other devices on this Bridge. - * - * One way of doing this would be to allocate two RRBs - * for each device on the bus, before any drivers start - * asking for extras. This has the weakness that one - * driver might not give back an "extra" RRB until after - * another driver has already failed to get one that - * it wanted. - */ - - s = pcibr_lock(pcibr_soft); - - /* How many RRBs do we own? */ - orig_vchan0 = pcibr_soft->bs_rrb_valid[pciio_slot]; - orig_vchan1 = pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL]; - - /* How many RRBs do we want? */ - desired_vchan0 = count_vchan0 ? *count_vchan0 : orig_vchan0; - desired_vchan1 = count_vchan1 ? *count_vchan1 : orig_vchan1; - - /* How many RRBs are free? */ - avail_rrbs = pcibr_soft->bs_rrb_avail[pciio_slot & 1] - + pcibr_soft->bs_rrb_res[pciio_slot]; - - /* Figure desired deltas */ - delta_vchan0 = desired_vchan0 - orig_vchan0; - delta_vchan1 = desired_vchan1 - orig_vchan1; - - /* Trim back deltas to something - * that we can actually meet, by - * decreasing the ending allocation - * for whichever channel wants - * more RRBs. If both want the same - * number, cut the second channel. - * NOTE: do not change the allocation for - * a channel that was passed as NULL. - */ - while ((delta_vchan0 + delta_vchan1) > avail_rrbs) { - if (count_vchan0 && - (!count_vchan1 || - ((orig_vchan0 + delta_vchan0) > - (orig_vchan1 + delta_vchan1)))) - delta_vchan0--; - else - delta_vchan1--; - } - - /* Figure final RRB allocations - */ - final_vchan0 = orig_vchan0 + delta_vchan0; - final_vchan1 = orig_vchan1 + delta_vchan1; - - /* If either channel wants RRBs but our actions - * would leave it with none, declare an error, - * but DO NOT change any RRB allocations. - */ - if ((desired_vchan0 && !final_vchan0) || - (desired_vchan1 && !final_vchan1)) { - - error = -1; - - } else { - - /* Commit the allocations: free, then alloc. - */ - if (delta_vchan0 < 0) - (void) do_pcibr_rrb_free(bridge, pciio_slot, -delta_vchan0); - if (delta_vchan1 < 0) - (void) do_pcibr_rrb_free(bridge, PCIBR_RRB_SLOT_VIRTUAL + pciio_slot, -delta_vchan1); - - if (delta_vchan0 > 0) - (void) do_pcibr_rrb_alloc(bridge, pciio_slot, delta_vchan0); - if (delta_vchan1 > 0) - (void) do_pcibr_rrb_alloc(bridge, PCIBR_RRB_SLOT_VIRTUAL + pciio_slot, delta_vchan1); - - /* Return final values to caller. - */ - if (count_vchan0) - *count_vchan0 = final_vchan0; - if (count_vchan1) - *count_vchan1 = final_vchan1; - - /* prevent automatic changes to this slot's RRBs - */ - pcibr_soft->bs_rrb_fixed |= 1 << pciio_slot; - - /* Track the actual allocations, release - * any further reservations, and update the - * number of available RRBs. 
- */ - - pcibr_soft->bs_rrb_valid[pciio_slot] = final_vchan0; - pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL] = final_vchan1; - pcibr_soft->bs_rrb_avail[pciio_slot & 1] = - pcibr_soft->bs_rrb_avail[pciio_slot & 1] - + pcibr_soft->bs_rrb_res[pciio_slot] - - delta_vchan0 - - delta_vchan1; - pcibr_soft->bs_rrb_res[pciio_slot] = 0; - -#if PCIBR_RRB_DEBUG - printk("pcibr_rrb_alloc: slot %d set to %d+%d; %d+%d free\n", - pciio_slot, final_vchan0, final_vchan1, - pcibr_soft->bs_rrb_avail[0], - pcibr_soft->bs_rrb_avail[1]); - for (pciio_slot = 0; pciio_slot < 8; ++pciio_slot) - printk("\t%d+%d+%d", - 0xFFF & pcibr_soft->bs_rrb_valid[pciio_slot], - 0xFFF & pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL], - pcibr_soft->bs_rrb_res[pciio_slot]); - printk("\n"); -#endif - - error = 0; - } - - pcibr_unlock(pcibr_soft, s); - return error; -} - -/* - * Device driver interface to check the current state - * of the RRB allocations. - * - * pconn_vhdl is your PCI connection point (specifies which - * PCI bus and which slot). - * - * count_vchan0 points to where to return the number of RRBs - * assigned to the primary DMA channel, used by all DMA - * that does not explicitly ask for the alternate virtual - * channel. - * - * count_vchan1 points to where to return the number of RRBs - * assigned to the secondary DMA channel, used when - * PCIBR_VCHAN1 and PCIIO_DMA_A64 are specified. - * - * count_reserved points to where to return the number of RRBs - * that have been automatically reserved for your device at - * startup, but which have not been assigned to a - * channel. RRBs must be assigned to a channel to be used; - * this can be done either with an explicit pcibr_rrb_alloc - * call, or automatically by the infrastructure when a DMA - * translation is constructed. Any call to pcibr_rrb_alloc - * will release any unassigned reserved RRBs back to the - * free pool. - * - * count_pool points to where to return the number of RRBs - * that are currently unassigned and unreserved. This - * number can (and will) change as other drivers make calls - * to pcibr_rrb_alloc, or automatically allocate RRBs for - * DMA beyond their initial reservation. - * - * NULL may be passed for any of the return value pointers - * the caller is not interested in. - * - * The return value is "0" if all went well, or "-1" if - * there is a problem. Additionally, if the wrong vertex - * is passed in, one of the subsidiary support functions - * could panic with a "bad pciio fingerprint." - */ - -int -pcibr_rrb_check(devfs_handle_t pconn_vhdl, - int *count_vchan0, - int *count_vchan1, - int *count_reserved, - int *count_pool) -{ - pciio_info_t pciio_info; - pciio_slot_t pciio_slot; - pcibr_soft_t pcibr_soft; - unsigned long s; - int error = -1; - - if ((pciio_info = pciio_info_get(pconn_vhdl)) && - (pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info)) && - ((pciio_slot = pciio_info_slot_get(pciio_info)) < 8)) { - - s = pcibr_lock(pcibr_soft); - - if (count_vchan0) - *count_vchan0 = - pcibr_soft->bs_rrb_valid[pciio_slot]; - - if (count_vchan1) - *count_vchan1 = - pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL]; - - if (count_reserved) - *count_reserved = - pcibr_soft->bs_rrb_res[pciio_slot]; - - if (count_pool) - *count_pool = - pcibr_soft->bs_rrb_avail[pciio_slot & 1]; - - error = 0; - - pcibr_unlock(pcibr_soft, s); - } - return error; -} - -/* pcibr_alloc_all_rrbs allocates all the rrbs available in the quantities - * requested for each of the devies. 
The evn_odd argument indicates whether - * allcoation for the odd or even rrbs is requested and next group of four pairse - * are the amount to assign to each device (they should sum to <= 8) and - * whether to set the viritual bit for that device (1 indictaes yes, 0 indicates no) - * the devices in order are either 0, 2, 4, 6 or 1, 3, 5, 7 - * if even_odd is even we alloc even rrbs else we allocate odd rrbs - * returns 0 if no errors else returns -1 - */ - -int -pcibr_alloc_all_rrbs(devfs_handle_t vhdl, int even_odd, - int dev_1_rrbs, int virt1, int dev_2_rrbs, int virt2, - int dev_3_rrbs, int virt3, int dev_4_rrbs, int virt4) -{ - devfs_handle_t pcibr_vhdl; - pcibr_soft_t pcibr_soft = NULL; - bridge_t *bridge = NULL; - - uint32_t rrb_setting = 0; - int rrb_shift = 7; - uint32_t cur_rrb; - int dev_rrbs[4]; - int virt[4]; - int i, j; - unsigned long s; - - if (GRAPH_SUCCESS == - hwgraph_traverse(vhdl, EDGE_LBL_PCI, &pcibr_vhdl)) { - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - if (pcibr_soft) - bridge = pcibr_soft->bs_base; - hwgraph_vertex_unref(pcibr_vhdl); - } - if (bridge == NULL) - bridge = (bridge_t *) xtalk_piotrans_addr - (vhdl, NULL, 0, sizeof(bridge_t), 0); - - even_odd &= 1; - - dev_rrbs[0] = dev_1_rrbs; - dev_rrbs[1] = dev_2_rrbs; - dev_rrbs[2] = dev_3_rrbs; - dev_rrbs[3] = dev_4_rrbs; - - virt[0] = virt1; - virt[1] = virt2; - virt[2] = virt3; - virt[3] = virt4; - - if ((dev_1_rrbs + dev_2_rrbs + dev_3_rrbs + dev_4_rrbs) > 8) { - return -1; - } - if ((dev_1_rrbs < 0) || (dev_2_rrbs < 0) || (dev_3_rrbs < 0) || (dev_4_rrbs < 0)) { - return -1; - } - /* walk through rrbs */ - for (i = 0; i < 4; i++) { - if (virt[i]) { - cur_rrb = i | 0xc; - cur_rrb = cur_rrb << (rrb_shift * 4); - rrb_shift--; - rrb_setting = rrb_setting | cur_rrb; - dev_rrbs[i] = dev_rrbs[i] - 1; - } - for (j = 0; j < dev_rrbs[i]; j++) { - cur_rrb = i | 0x8; - cur_rrb = cur_rrb << (rrb_shift * 4); - rrb_shift--; - rrb_setting = rrb_setting | cur_rrb; - } - } - - if (pcibr_soft) - s = pcibr_lock(pcibr_soft); - - bridge->b_rrb_map[even_odd].reg = rrb_setting; - - if (pcibr_soft) { - - pcibr_soft->bs_rrb_fixed |= 0x55 << even_odd; - - /* since we've "FIXED" the allocations - * for these slots, we probably can dispense - * with tracking avail/res/valid data, but - * keeping it up to date helps debugging. - */ - - pcibr_soft->bs_rrb_avail[even_odd] = - 8 - (dev_1_rrbs + dev_2_rrbs + dev_3_rrbs + dev_4_rrbs); - - pcibr_soft->bs_rrb_res[even_odd + 0] = 0; - pcibr_soft->bs_rrb_res[even_odd + 2] = 0; - pcibr_soft->bs_rrb_res[even_odd + 4] = 0; - pcibr_soft->bs_rrb_res[even_odd + 6] = 0; - - pcibr_soft->bs_rrb_valid[even_odd + 0] = dev_1_rrbs - virt1; - pcibr_soft->bs_rrb_valid[even_odd + 2] = dev_2_rrbs - virt2; - pcibr_soft->bs_rrb_valid[even_odd + 4] = dev_3_rrbs - virt3; - pcibr_soft->bs_rrb_valid[even_odd + 6] = dev_4_rrbs - virt4; - - pcibr_soft->bs_rrb_valid[even_odd + 0 + PCIBR_RRB_SLOT_VIRTUAL] = virt1; - pcibr_soft->bs_rrb_valid[even_odd + 2 + PCIBR_RRB_SLOT_VIRTUAL] = virt2; - pcibr_soft->bs_rrb_valid[even_odd + 4 + PCIBR_RRB_SLOT_VIRTUAL] = virt3; - pcibr_soft->bs_rrb_valid[even_odd + 6 + PCIBR_RRB_SLOT_VIRTUAL] = virt4; - - pcibr_unlock(pcibr_soft, s); - } - return 0; -} - -/* - * pcibr_rrb_flush: chase down all the RRBs assigned - * to the specified connection point, and flush - * them. 
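Because the argument convention of pcibr_alloc_all_rrbs() above is easy to misread, here is a calling sketch (vertex name and counts are made up; the four counts must sum to at most 8, and a set virtual flag takes one RRB out of that device's count):

        static void example_fixed_even_rrbs(devfs_handle_t xconn_vhdl)
        {
                // even slots 0/2/4/6 get 4, 2, 1 and 1 RRBs; one of
                // slot 0's four goes to its virtual channel
                if (pcibr_alloc_all_rrbs(xconn_vhdl, 0,
                                         4, 1,          // slot 0: count, virtual flag
                                         2, 0,          // slot 2
                                         1, 0,          // slot 4
                                         1, 0) < 0)     // slot 6
                        printk(KERN_WARNING "pcibr: fixed RRB layout rejected\n");
        }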
- */ -void -pcibr_rrb_flush(devfs_handle_t pconn_vhdl) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - bridge_t *bridge = pcibr_soft->bs_base; - unsigned long s; - reg_p rrbp; - unsigned rrbm; - int i; - int rrbn; - unsigned sval; - unsigned mask; - - sval = BRIDGE_RRB_EN | (pciio_slot >> 1); - mask = BRIDGE_RRB_EN | BRIDGE_RRB_PDEV; - rrbn = pciio_slot & 1; - rrbp = &bridge->b_rrb_map[rrbn].reg; - - s = pcibr_lock(pcibr_soft); - rrbm = *rrbp; - for (i = 0; i < 8; ++i) { - if ((rrbm & mask) == sval) - do_pcibr_rrb_flush(bridge, rrbn); - rrbm >>= 4; - rrbn += 2; - } - pcibr_unlock(pcibr_soft, s); -} - -/* ===================================================================== - * Device(x) register management - */ - -/* pcibr_try_set_device: attempt to modify Device(x) - * for the specified slot on the specified bridge - * as requested in flags, limited to the specified - * bits. Returns which BRIDGE bits were in conflict, - * or ZERO if everything went OK. - * - * Caller MUST hold pcibr_lock when calling this function. - */ -LOCAL int -pcibr_try_set_device(pcibr_soft_t pcibr_soft, - pciio_slot_t slot, - unsigned flags, - bridgereg_t mask) -{ - bridge_t *bridge; - pcibr_soft_slot_t slotp; - bridgereg_t old; - bridgereg_t new; - bridgereg_t chg; - bridgereg_t bad; - bridgereg_t badpmu; - bridgereg_t badd32; - bridgereg_t badd64; - bridgereg_t fix; - unsigned long s; - bridgereg_t xmask; - - xmask = mask; - if (pcibr_soft->bs_xbridge) { - if (mask == BRIDGE_DEV_PMU_BITS) - xmask = XBRIDGE_DEV_PMU_BITS; - if (mask == BRIDGE_DEV_D64_BITS) - xmask = XBRIDGE_DEV_D64_BITS; - } - - slotp = &pcibr_soft->bs_slot[slot]; - - s = pcibr_lock(pcibr_soft); - - bridge = pcibr_soft->bs_base; - - old = slotp->bss_device; - - /* figure out what the desired - * Device(x) bits are based on - * the flags specified. - */ - - new = old; - - /* Currently, we inherit anything that - * the new caller has not specified in - * one way or another, unless we take - * action here to not inherit. - * - * This is needed for the "swap" stuff, - * since it could have been set via - * pcibr_endian_set -- altho note that - * any explicit PCIBR_BYTE_STREAM or - * PCIBR_WORD_VALUES will freely override - * the effect of that call (and vice - * versa, no protection either way). - * - * I want to get rid of pcibr_endian_set - * in favor of tracking DMA endianness - * using the flags specified when DMA - * channels are created. - */ - -#define BRIDGE_DEV_WRGA_BITS (BRIDGE_DEV_PMU_WRGA_EN | BRIDGE_DEV_DIR_WRGA_EN) -#define BRIDGE_DEV_SWAP_BITS (BRIDGE_DEV_SWAP_PMU | BRIDGE_DEV_SWAP_DIR) - - /* Do not use Barrier, Write Gather, - * or Prefetch unless asked. - * Leave everything else as it - * was from the last time. 
- */ - new = new - & ~BRIDGE_DEV_BARRIER - & ~BRIDGE_DEV_WRGA_BITS - & ~BRIDGE_DEV_PREF - ; - - /* Generic macro flags - */ - if (flags & PCIIO_DMA_DATA) { - new = (new - & ~BRIDGE_DEV_BARRIER) /* barrier off */ - | BRIDGE_DEV_PREF; /* prefetch on */ - - } - if (flags & PCIIO_DMA_CMD) { - new = ((new - & ~BRIDGE_DEV_PREF) /* prefetch off */ - & ~BRIDGE_DEV_WRGA_BITS) /* write gather off */ - | BRIDGE_DEV_BARRIER; /* barrier on */ - } - /* Generic detail flags - */ - if (flags & PCIIO_WRITE_GATHER) - new |= BRIDGE_DEV_WRGA_BITS; - if (flags & PCIIO_NOWRITE_GATHER) - new &= ~BRIDGE_DEV_WRGA_BITS; - - if (flags & PCIIO_PREFETCH) - new |= BRIDGE_DEV_PREF; - if (flags & PCIIO_NOPREFETCH) - new &= ~BRIDGE_DEV_PREF; - - if (flags & PCIBR_WRITE_GATHER) - new |= BRIDGE_DEV_WRGA_BITS; - if (flags & PCIBR_NOWRITE_GATHER) - new &= ~BRIDGE_DEV_WRGA_BITS; - - if (flags & PCIIO_BYTE_STREAM) - new |= (pcibr_soft->bs_xbridge) ? - BRIDGE_DEV_SWAP_DIR : BRIDGE_DEV_SWAP_BITS; - if (flags & PCIIO_WORD_VALUES) - new &= (pcibr_soft->bs_xbridge) ? - ~BRIDGE_DEV_SWAP_DIR : ~BRIDGE_DEV_SWAP_BITS; - - /* Provider-specific flags - */ - if (flags & PCIBR_PREFETCH) - new |= BRIDGE_DEV_PREF; - if (flags & PCIBR_NOPREFETCH) - new &= ~BRIDGE_DEV_PREF; - - if (flags & PCIBR_PRECISE) - new |= BRIDGE_DEV_PRECISE; - if (flags & PCIBR_NOPRECISE) - new &= ~BRIDGE_DEV_PRECISE; - - if (flags & PCIBR_BARRIER) - new |= BRIDGE_DEV_BARRIER; - if (flags & PCIBR_NOBARRIER) - new &= ~BRIDGE_DEV_BARRIER; - - if (flags & PCIBR_64BIT) - new |= BRIDGE_DEV_DEV_SIZE; - if (flags & PCIBR_NO64BIT) - new &= ~BRIDGE_DEV_DEV_SIZE; - - chg = old ^ new; /* what are we changing, */ - chg &= xmask; /* of the interesting bits */ - - if (chg) { - - badd32 = slotp->bss_d32_uctr ? (BRIDGE_DEV_D32_BITS & chg) : 0; - if (pcibr_soft->bs_xbridge) { - badpmu = slotp->bss_pmu_uctr ? (XBRIDGE_DEV_PMU_BITS & chg) : 0; - badd64 = slotp->bss_d64_uctr ? (XBRIDGE_DEV_D64_BITS & chg) : 0; - } else { - badpmu = slotp->bss_pmu_uctr ? (BRIDGE_DEV_PMU_BITS & chg) : 0; - badd64 = slotp->bss_d64_uctr ? (BRIDGE_DEV_D64_BITS & chg) : 0; - } - bad = badpmu | badd32 | badd64; - - if (bad) { - - /* some conflicts can be resolved by - * forcing the bit on. this may cause - * some performance degredation in - * the stream(s) that want the bit off, - * but the alternative is not allowing - * the new stream at all. - */ - if ( (fix = bad & (BRIDGE_DEV_PRECISE | - BRIDGE_DEV_BARRIER)) ){ - bad &= ~fix; - /* don't change these bits if - * they are already set in "old" - */ - chg &= ~(fix & old); - } - /* some conflicts can be resolved by - * forcing the bit off. this may cause - * some performance degredation in - * the stream(s) that want the bit on, - * but the alternative is not allowing - * the new stream at all. - */ - if ( (fix = bad & (BRIDGE_DEV_WRGA_BITS | - BRIDGE_DEV_PREF)) ) { - bad &= ~fix; - /* don't change these bits if - * we wanted to turn them on. - */ - chg &= ~(fix & new); - } - /* conflicts in other bits mean - * we can not establish this DMA - * channel while the other(s) are - * still present. 
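Stripped of the usage-counter bookkeeping, the resolution strategy implemented just below comes down to the following recap (same identifiers as the surrounding code, shown only to make the bit algebra easier to follow):

        fix = bad & (BRIDGE_DEV_PRECISE | BRIDGE_DEV_BARRIER);
        bad &= ~fix;            // these conflicts: force the bit on...
        chg &= ~(fix & old);    // ...but do not touch one that is already set

        fix = bad & (BRIDGE_DEV_WRGA_BITS | BRIDGE_DEV_PREF);
        bad &= ~fix;            // these conflicts: force the bit off...
        chg &= ~(fix & new);    // ...i.e. drop our own request to set it

        // anything still left in 'bad' cannot be reconciled; the call fails
        // and returns those bits to the caller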
- */ - if (bad) { - pcibr_unlock(pcibr_soft, s); -#if (DEBUG && PCIBR_DEV_DEBUG) - printk("pcibr_try_set_device: mod blocked by %R\n", bad, device_bits); -#endif - return bad; - } - } - } - if (mask == BRIDGE_DEV_PMU_BITS) - slotp->bss_pmu_uctr++; - if (mask == BRIDGE_DEV_D32_BITS) - slotp->bss_d32_uctr++; - if (mask == BRIDGE_DEV_D64_BITS) - slotp->bss_d64_uctr++; - - /* the value we want to write is the - * original value, with the bits for - * our selected changes flipped, and - * with any disabled features turned off. - */ - new = old ^ chg; /* only change what we want to change */ - - if (slotp->bss_device == new) { - pcibr_unlock(pcibr_soft, s); - return 0; - } - bridge->b_device[slot].reg = new; - slotp->bss_device = new; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - pcibr_unlock(pcibr_soft, s); -#if DEBUG && PCIBR_DEV_DEBUG - printk("pcibr Device(%d): 0x%p\n", slot, bridge->b_device[slot].reg); -#endif - - return 0; -} - -void -pcibr_release_device(pcibr_soft_t pcibr_soft, - pciio_slot_t slot, - bridgereg_t mask) -{ - pcibr_soft_slot_t slotp; - unsigned long s; - - slotp = &pcibr_soft->bs_slot[slot]; - - s = pcibr_lock(pcibr_soft); - - if (mask == BRIDGE_DEV_PMU_BITS) - slotp->bss_pmu_uctr--; - if (mask == BRIDGE_DEV_D32_BITS) - slotp->bss_d32_uctr--; - if (mask == BRIDGE_DEV_D64_BITS) - slotp->bss_d64_uctr--; - - pcibr_unlock(pcibr_soft, s); -} - -/* - * flush write gather buffer for slot - */ -LOCAL void -pcibr_device_write_gather_flush(pcibr_soft_t pcibr_soft, - pciio_slot_t slot) -{ - bridge_t *bridge; - unsigned long s; - volatile uint32_t wrf; - s = pcibr_lock(pcibr_soft); - bridge = pcibr_soft->bs_base; - wrf = bridge->b_wr_req_buf[slot].reg; - pcibr_unlock(pcibr_soft, s); -} - -/* ===================================================================== - * Bridge (pcibr) "Device Driver" entry points - */ - -/* - * pcibr_probe_slot: read a config space word - * while trapping any errors; reutrn zero if - * all went OK, or nonzero if there was an error. - * The value read, if any, is passed back - * through the valp parameter. - */ -LOCAL int -pcibr_probe_slot(bridge_t *bridge, - cfg_p cfg, - unsigned *valp) -{ - int rv; - bridgereg_t old_enable, new_enable; - int badaddr_val(volatile void *, int, volatile void *); - - - old_enable = bridge->b_int_enable; - new_enable = old_enable & ~BRIDGE_IMR_PCI_MST_TIMEOUT; - - bridge->b_int_enable = new_enable; - - /* - * The xbridge doesn't clear b_err_int_view unless - * multi-err is cleared... - */ - if (is_xbridge(bridge)) - if (bridge->b_err_int_view & BRIDGE_ISR_PCI_MST_TIMEOUT) { - bridge->b_int_rst_stat = BRIDGE_IRR_MULTI_CLR; - } - - if (bridge->b_int_status & BRIDGE_IRR_PCI_GRP) { - bridge->b_int_rst_stat = BRIDGE_IRR_PCI_GRP_CLR; - (void) bridge->b_wid_tflush; /* flushbus */ - } - rv = badaddr_val((void *) cfg, 4, valp); - - /* - * The xbridge doesn't set master timeout in b_int_status - * here. Fortunately it's in error_interrupt_view. - */ - if (is_xbridge(bridge)) - if (bridge->b_err_int_view & BRIDGE_ISR_PCI_MST_TIMEOUT) { - bridge->b_int_rst_stat = BRIDGE_IRR_MULTI_CLR; - rv = 1; /* unoccupied slot */ - } - - bridge->b_int_enable = old_enable; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - - return rv; -} - -/* - * pcibr_init: called once during system startup or - * when a loadable driver is loaded. - * - * The driver_register function should normally - * be in _reg, not _init. But the pcibr driver is - * required by devinit before the _reg routines - * are called, so this is an exception. 
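For reference, a sketch of how pcibr_probe_slot() above is intended to be used while scanning the bus; the helper name is made up, and "cfg" is assumed to point at the slot's first config dword (vendor/device ID) in the bridge's type 0 config space:

        static int example_slot_present(bridge_t *bridge, cfg_p cfg,
                                        pciio_vendor_id_t *vend,
                                        pciio_device_id_t *dev)
        {
                unsigned idword;

                if (pcibr_probe_slot(bridge, cfg, &idword))
                        return 0;                       // read faulted: nobody home
                *vend = idword & 0xFFFF;                // low half: vendor ID
                *dev = (idword >> 16) & 0xFFFF;         // high half: device ID
                return (*vend != 0xFFFF);               // all-ones also means empty
        }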
- */ -void -pcibr_init(void) -{ -#if DEBUG && ATTACH_DEBUG - printk("pcibr_init\n"); -#endif - - xwidget_driver_register(XBRIDGE_WIDGET_PART_NUM, - XBRIDGE_WIDGET_MFGR_NUM, - "pcibr_", - 0); - xwidget_driver_register(BRIDGE_WIDGET_PART_NUM, - BRIDGE_WIDGET_MFGR_NUM, - "pcibr_", - 0); -} - -/* - * open/close mmap/munmap interface would be used by processes - * that plan to map the PCI bridge, and muck around with the - * registers. This is dangerous to do, and will be allowed - * to a select brand of programs. Typically these are - * diagnostics programs, or some user level commands we may - * write to do some weird things. - * To start with expect them to have root priveleges. - * We will ask for more later. - */ -/* ARGSUSED */ -int -pcibr_open(devfs_handle_t *devp, int oflag, int otyp, cred_t *credp) -{ - return 0; -} - -/*ARGSUSED */ -int -pcibr_close(devfs_handle_t dev, int oflag, int otyp, cred_t *crp) -{ - return 0; -} - -/*ARGSUSED */ -int -pcibr_map(devfs_handle_t dev, vhandl_t *vt, off_t off, size_t len, uint prot) -{ - int error; - devfs_handle_t vhdl = dev_to_vhdl(dev); - devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get(vhdl); - pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); - bridge_t *bridge = pcibr_soft->bs_base; - - hwgraph_vertex_unref(pcibr_vhdl); - - ASSERT(pcibr_soft); - len = ctob(btoc(len)); /* Make len page aligned */ - error = v_mapphys(vt, (void *) ((__psunsigned_t) bridge + off), len); - - /* - * If the offset being mapped corresponds to the flash prom - * base, and if the mapping succeeds, and if the user - * has requested the protections to be WRITE, enable the - * flash prom to be written. - * - * XXX- deprecate this in favor of using the - * real flash driver ... - */ - if (!error && - ((off == BRIDGE_EXTERNAL_FLASH) || - (len > BRIDGE_EXTERNAL_FLASH))) { - int s; - - /* - * ensure that we write and read without any interruption. - * The read following the write is required for the Bridge war - */ - s = splhi(); - bridge->b_wid_control |= BRIDGE_CTRL_FLASH_WR_EN; - bridge->b_wid_control; /* inval addr bug war */ - splx(s); - } - return error; -} - -/*ARGSUSED */ -int -pcibr_unmap(devfs_handle_t dev, vhandl_t *vt) -{ - devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get((devfs_handle_t) dev); - pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); - bridge_t *bridge = pcibr_soft->bs_base; - - hwgraph_vertex_unref(pcibr_vhdl); - - /* - * If flashprom write was enabled, disable it, as - * this is the last unmap. - */ - if (bridge->b_wid_control & BRIDGE_CTRL_FLASH_WR_EN) { - int s; - - /* - * ensure that we write and read without any interruption. - * The read following the write is required for the Bridge war - */ - s = splhi(); - bridge->b_wid_control &= ~BRIDGE_CTRL_FLASH_WR_EN; - bridge->b_wid_control; /* inval addr bug war */ - splx(s); - } - return 0; -} - -/* This is special case code used by grio. There are plans to make - * this a bit more general in the future, but till then this should - * be sufficient. - */ -pciio_slot_t -pcibr_device_slot_get(devfs_handle_t dev_vhdl) -{ - char devname[MAXDEVNAME]; - devfs_handle_t tdev; - pciio_info_t pciio_info; - pciio_slot_t slot = PCIIO_SLOT_NONE; - - vertex_to_name(dev_vhdl, devname, MAXDEVNAME); - - /* run back along the canonical path - * until we find a PCI connection point. 
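The flash write-enable paths in pcibr_map() and pcibr_unmap() above both lean on the same Bridge workaround; reduced to its essentials the sequence is just the following sketch:

        s = splhi();                    // keep the write/read pair uninterrupted
        bridge->b_wid_control |= BRIDGE_CTRL_FLASH_WR_EN;
        bridge->b_wid_control;          // read back: Bridge invalid-address war
        splx(s);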
- */ - tdev = hwgraph_connectpt_get(dev_vhdl); - while (tdev != GRAPH_VERTEX_NONE) { - pciio_info = pciio_info_chk(tdev); - if (pciio_info) { - slot = pciio_info_slot_get(pciio_info); - break; - } - hwgraph_vertex_unref(tdev); - tdev = hwgraph_connectpt_get(tdev); - } - hwgraph_vertex_unref(tdev); - - return slot; -} - -/*========================================================================== - * BRIDGE PCI SLOT RELATED IOCTLs - */ -char *pci_space_name[] = {"NONE", - "ROM", - "IO", - "", - "MEM", - "MEM32", - "MEM64", - "CFG", - "WIN0", - "WIN1", - "WIN2", - "WIN3", - "WIN4", - "WIN5", - "", - "BAD"}; - - -#ifdef LATER - -void -pcibr_slot_func_info_return(pcibr_info_h pcibr_infoh, - int func, - pcibr_slot_func_info_resp_t funcp) -{ - pcibr_info_t pcibr_info = pcibr_infoh[func]; - int win; - - funcp->resp_f_status = 0; - - if (!pcibr_info) { - return; - } - - funcp->resp_f_status |= FUNC_IS_VALID; -#ifdef SUPPORT_PRINTING_V_FORMAT - sprintf(funcp->resp_f_slot_name, "%v", pcibr_info->f_vertex); -#else - sprintf(funcp->resp_f_slot_name, "%x", pcibr_info->f_vertex); -#endif - - if(is_sys_critical_vertex(pcibr_info->f_vertex)) { - funcp->resp_f_status |= FUNC_IS_SYS_CRITICAL; - } - - funcp->resp_f_bus = pcibr_info->f_bus; - funcp->resp_f_slot = pcibr_info->f_slot; - funcp->resp_f_func = pcibr_info->f_func; -#ifdef SUPPORT_PRINTING_V_FORMAT - sprintf(funcp->resp_f_master_name, "%v", pcibr_info->f_master); -#else - sprintf(funcp->resp_f_master_name, "%x", pcibr_info->f_master); -#endif - funcp->resp_f_pops = pcibr_info->f_pops; - funcp->resp_f_efunc = pcibr_info->f_efunc; - funcp->resp_f_einfo = pcibr_info->f_einfo; - - funcp->resp_f_vendor = pcibr_info->f_vendor; - funcp->resp_f_device = pcibr_info->f_device; - - for(win = 0 ; win < 6 ; win++) { - funcp->resp_f_window[win].resp_w_base = - pcibr_info->f_window[win].w_base; - funcp->resp_f_window[win].resp_w_size = - pcibr_info->f_window[win].w_size; - sprintf(funcp->resp_f_window[win].resp_w_space, - "%s", - pci_space_name[pcibr_info->f_window[win].w_space]); - } - - funcp->resp_f_rbase = pcibr_info->f_rbase; - funcp->resp_f_rsize = pcibr_info->f_rsize; - - for (win = 0 ; win < 4; win++) { - funcp->resp_f_ibit[win] = pcibr_info->f_ibit[win]; - } - - funcp->resp_f_att_det_error = pcibr_info->f_att_det_error; - -} - -int -pcibr_slot_info_return(pcibr_soft_t pcibr_soft, - pciio_slot_t slot, - pcibr_slot_info_resp_t respp) -{ - pcibr_soft_slot_t pss; - int func; - bridge_t *bridge = pcibr_soft->bs_base; - reg_p b_respp; - pcibr_slot_info_resp_t slotp; - pcibr_slot_func_info_resp_t funcp; - - slotp = kmem_zalloc(sizeof(*slotp), KM_SLEEP); - if (slotp == NULL) { - return(ENOMEM); - } - - pss = &pcibr_soft->bs_slot[slot]; - - printk("\nPCI INFRASTRUCTURAL INFO FOR SLOT %d\n\n", slot); - - slotp->resp_has_host = pss->has_host; - slotp->resp_host_slot = pss->host_slot; -#ifdef SUPPORT_PRINTING_V_FORMAT - sprintf(slotp->resp_slot_conn_name, "%v", pss->slot_conn); -#else - sprintf(slotp->resp_slot_conn_name, "%x", pss->slot_conn); -#endif - slotp->resp_slot_status = pss->slot_status; - slotp->resp_l1_bus_num = io_path_map_widget(pcibr_soft->bs_vhdl); - - if (is_sys_critical_vertex(pss->slot_conn)) { - slotp->resp_slot_status |= SLOT_IS_SYS_CRITICAL; - } - - slotp->resp_bss_ninfo = pss->bss_ninfo; - - for (func = 0; func < pss->bss_ninfo; func++) { - funcp = &(slotp->resp_func[func]); - pcibr_slot_func_info_return(pss->bss_infos, func, funcp); - } - - sprintf(slotp->resp_bss_devio_bssd_space, "%s", - pci_space_name[pss->bss_devio.bssd_space]); - 
slotp->resp_bss_devio_bssd_base = pss->bss_devio.bssd_base; - slotp->resp_bss_device = pss->bss_device; - - slotp->resp_bss_pmu_uctr = pss->bss_pmu_uctr; - slotp->resp_bss_d32_uctr = pss->bss_d32_uctr; - slotp->resp_bss_d64_uctr = pss->bss_d64_uctr; - - slotp->resp_bss_d64_base = pss->bss_d64_base; - slotp->resp_bss_d64_flags = pss->bss_d64_flags; - slotp->resp_bss_d32_base = pss->bss_d32_base; - slotp->resp_bss_d32_flags = pss->bss_d32_flags; - - slotp->resp_bss_ext_ates_active = atomic_read(&pss->bss_ext_ates_active); - - slotp->resp_bss_cmd_pointer = pss->bss_cmd_pointer; - slotp->resp_bss_cmd_shadow = pss->bss_cmd_shadow; - - slotp->resp_bs_rrb_valid = pcibr_soft->bs_rrb_valid[slot]; - slotp->resp_bs_rrb_valid_v = pcibr_soft->bs_rrb_valid[slot + - PCIBR_RRB_SLOT_VIRTUAL]; - slotp->resp_bs_rrb_res = pcibr_soft->bs_rrb_res[slot]; - - if (slot & 1) { - b_respp = &bridge->b_odd_resp; - } else { - b_respp = &bridge->b_even_resp; - } - - slotp->resp_b_resp = *b_respp; - - slotp->resp_b_int_device = bridge->b_int_device; - slotp->resp_b_int_enable = bridge->b_int_enable; - slotp->resp_b_int_host = bridge->b_int_addr[slot].addr; - - if (COPYOUT(slotp, respp, sizeof(*respp))) { - return(EFAULT); - } - - kmem_free(slotp, sizeof(*slotp)); - - return(0); -} - -/* - * pcibr_slot_query - * Return information about the PCI slot maintained by the infrastructure. - * Information is requested in the request structure. - * - * Information returned in the response structure: - * Slot hwgraph name - * Vendor/Device info - * Base register info - * Interrupt mapping from device pins to the bridge pins - * Devio register - * Software RRB info - * RRB register info - * Host/Gues info - * PCI Bus #,slot #, function # - * Slot provider hwgraph name - * Provider Functions - * Error handler - * DMA mapping usage counters - * DMA direct translation info - * External SSRAM workaround info - */ -int -pcibr_slot_query(devfs_handle_t pcibr_vhdl, pcibr_slot_info_req_t reqp) -{ - pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); - pciio_slot_t slot = reqp->req_slot; - pciio_slot_t tmp_slot; - pcibr_slot_info_resp_t respp = (pcibr_slot_info_resp_t) reqp->req_respp; - int size = reqp->req_size; - int error; - - /* Make sure that we are dealing with a bridge device vertex */ - if (!pcibr_soft) { - return(EINVAL); - } - - /* Make sure that we have a valid PCI slot number or PCIIO_SLOT_NONE */ - if ((!PCIBR_VALID_SLOT(slot)) && (slot != PCIIO_SLOT_NONE)) { - return(EINVAL); - } - -#ifdef LATER - /* Do not allow a query of a slot in a shoehorn */ - if(nic_vertex_info_match(pcibr_soft->bs_conn, XTALK_PCI_PART_NUM)) { - return(EPERM); - } -#endif - - /* Return information for the requested PCI slot */ - if (slot != PCIIO_SLOT_NONE) { - if (size < sizeof(*respp)) { - return(EINVAL); - } - - /* Acquire read access to the slot */ - mrlock(pcibr_soft->bs_slot[slot].slot_lock, MR_ACCESS, PZERO); - - error = pcibr_slot_info_return(pcibr_soft, slot, respp); - - /* Release the slot lock */ - mrunlock(pcibr_soft->bs_slot[slot].slot_lock); - - return(error); - } - - /* Return information for all the slots */ - for (tmp_slot = 0; tmp_slot < 8; tmp_slot++) { - - if (size < sizeof(*respp)) { - return(EINVAL); - } - - /* Acquire read access to the slot */ - mrlock(pcibr_soft->bs_slot[tmp_slot].slot_lock, MR_ACCESS, PZERO); - - error = pcibr_slot_info_return(pcibr_soft, tmp_slot, respp); - - /* Release the slot lock */ - mrunlock(pcibr_soft->bs_slot[tmp_slot].slot_lock); - - if (error) { - return(error); - } - - ++respp; - size -= 
sizeof(*respp); - } - - return(error); -} -#endif /* LATER */ - - -/*ARGSUSED */ -int -pcibr_ioctl(devfs_handle_t dev, - int cmd, - void *arg, - int flag, - struct cred *cr, - int *rvalp) -{ - devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get((devfs_handle_t)dev); -#ifdef LATER - pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); -#endif - int error = 0; - - hwgraph_vertex_unref(pcibr_vhdl); - - switch (cmd) { -#ifdef LATER - case GIOCSETBW: - { - grio_ioctl_info_t info; - pciio_slot_t slot = 0; - - if (!cap_able((uint64_t)CAP_DEVICE_MGT)) { - error = EPERM; - break; - } - if (COPYIN(arg, &info, sizeof(grio_ioctl_info_t))) { - error = EFAULT; - break; - } -#ifdef GRIO_DEBUG - printk("pcibr:: prev_vhdl: %d reqbw: %lld\n", - info.prev_vhdl, info.reqbw); -#endif /* GRIO_DEBUG */ - - if ((slot = pcibr_device_slot_get(info.prev_vhdl)) == - PCIIO_SLOT_NONE) { - error = EIO; - break; - } - if (info.reqbw) - pcibr_priority_bits_set(pcibr_soft, slot, PCI_PRIO_HIGH); - break; - } - - case GIOCRELEASEBW: - { - grio_ioctl_info_t info; - pciio_slot_t slot = 0; - - if (!cap_able(CAP_DEVICE_MGT)) { - error = EPERM; - break; - } - if (COPYIN(arg, &info, sizeof(grio_ioctl_info_t))) { - error = EFAULT; - break; - } -#ifdef GRIO_DEBUG - printk("pcibr:: prev_vhdl: %d reqbw: %lld\n", - info.prev_vhdl, info.reqbw); -#endif /* GRIO_DEBUG */ - - if ((slot = pcibr_device_slot_get(info.prev_vhdl)) == - PCIIO_SLOT_NONE) { - error = EIO; - break; - } - if (info.reqbw) - pcibr_priority_bits_set(pcibr_soft, slot, PCI_PRIO_LOW); - break; - } - - case PCIBR_SLOT_POWERUP: - { - pciio_slot_t slot; - - if (!cap_able(CAP_DEVICE_MGT)) { - error = EPERM; - break; - } - - slot = (pciio_slot_t)(uint64_t)arg; - error = pcibr_slot_powerup(pcibr_vhdl,slot); - break; - } - case PCIBR_SLOT_SHUTDOWN: - if (!cap_able(CAP_DEVICE_MGT)) { - error = EPERM; - break; - } - - slot = (pciio_slot_t)(uint64_t)arg; - error = pcibr_slot_powerup(pcibr_vhdl,slot); - break; - } - case PCIBR_SLOT_QUERY: - { - struct pcibr_slot_info_req_s req; - - if (!cap_able(CAP_DEVICE_MGT)) { - error = EPERM; - break; - } - - if (COPYIN(arg, &req, sizeof(req))) { - error = EFAULT; - break; - } - - error = pcibr_slot_query(pcibr_vhdl, &req); - break; - } -#endif /* LATER */ - default: - break; - - } - - return error; -} - -void -pcibr_freeblock_sub(iopaddr_t *free_basep, - iopaddr_t *free_lastp, - iopaddr_t base, - size_t size) -{ - iopaddr_t free_base = *free_basep; - iopaddr_t free_last = *free_lastp; - iopaddr_t last = base + size - 1; - - if ((last < free_base) || (base > free_last)); /* free block outside arena */ - - else if ((base <= free_base) && (last >= free_last)) - /* free block contains entire arena */ - *free_basep = *free_lastp = 0; - - else if (base <= free_base) - /* free block is head of arena */ - *free_basep = last + 1; - - else if (last >= free_last) - /* free block is tail of arena */ - *free_lastp = base - 1; - - /* - * We are left with two regions: the free area - * in the arena "below" the block, and the free - * area in the arena "above" the block. Keep - * the one that is bigger. 
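A worked example of that rule, with made-up numbers: start with a free arena of 0x1000..0x7fff and subtract a block at 0x2000 of size 0x1000 (so 0x2000..0x2fff). Only 4 KB would survive below the block but 20 KB above it, so the upper chunk is kept:

        iopaddr_t fb = 0x1000, fl = 0x7fff;

        pcibr_freeblock_sub(&fb, &fl, 0x2000, 0x1000);
        // afterwards fb == 0x3000 and fl is still 0x7fff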
- */ - - else if ((base - free_base) > (free_last - last)) - *free_lastp = base - 1; /* keep lower chunk */ - else - *free_basep = last + 1; /* keep upper chunk */ -} - -/* Convert from ssram_bits in control register to number of SSRAM entries */ -#define ATE_NUM_ENTRIES(n) _ate_info[n] - -/* Possible choices for number of ATE entries in Bridge's SSRAM */ -LOCAL int _ate_info[] = -{ - 0, /* 0 entries */ - 8 * 1024, /* 8K entries */ - 16 * 1024, /* 16K entries */ - 64 * 1024 /* 64K entries */ -}; - -#define ATE_NUM_SIZES (sizeof(_ate_info) / sizeof(int)) -#define ATE_PROBE_VALUE 0x0123456789abcdefULL - -/* - * Determine the size of this bridge's external mapping SSRAM, and set - * the control register appropriately to reflect this size, and initialize - * the external SSRAM. - */ -LOCAL int -pcibr_init_ext_ate_ram(bridge_t *bridge) -{ - int largest_working_size = 0; - int num_entries, entry; - int i, j; - bridgereg_t old_enable, new_enable; - int s; - - /* Probe SSRAM to determine its size. */ - old_enable = bridge->b_int_enable; - new_enable = old_enable & ~BRIDGE_IMR_PCI_MST_TIMEOUT; - bridge->b_int_enable = new_enable; - - for (i = 1; i < ATE_NUM_SIZES; i++) { - /* Try writing a value */ - bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(i) - 1] = ATE_PROBE_VALUE; - - /* Guard against wrap */ - for (j = 1; j < i; j++) - bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(j) - 1] = 0; - - /* See if value was written */ - if (bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(i) - 1] == ATE_PROBE_VALUE) - largest_working_size = i; - } - bridge->b_int_enable = old_enable; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - - /* - * ensure that we write and read without any interruption. - * The read following the write is required for the Bridge war - */ - - s = splhi(); - bridge->b_wid_control = (bridge->b_wid_control - & ~BRIDGE_CTRL_SSRAM_SIZE_MASK) - | BRIDGE_CTRL_SSRAM_SIZE(largest_working_size); - bridge->b_wid_control; /* inval addr bug war */ - splx(s); - - num_entries = ATE_NUM_ENTRIES(largest_working_size); - -#if PCIBR_ATE_DEBUG - if (num_entries) - printk("bridge at 0x%x: clearing %d external ATEs\n", bridge, num_entries); - else - printk("bridge at 0x%x: no external ATE RAM found\n", bridge); -#endif - - /* Initialize external mapping entries */ - for (entry = 0; entry < num_entries; entry++) - bridge->b_ext_ate_ram[entry] = 0; - - return (num_entries); -} - -/* - * Allocate "count" contiguous Bridge Address Translation Entries - * on the specified bridge to be used for PCI to XTALK mappings. - * Indices in rm map range from 1..num_entries. Indicies returned - * to caller range from 0..num_entries-1. - * - * Return the start index on success, -1 on failure. - */ -LOCAL int -pcibr_ate_alloc(pcibr_soft_t pcibr_soft, int count) -{ - int index = 0; - - index = (int) rmalloc(pcibr_soft->bs_int_ate_map, (size_t) count); -/* printk("Colin: pcibr_ate_alloc - index %d count %d \n", index, count); */ - - if (!index && pcibr_soft->bs_ext_ate_map) - index = (int) rmalloc(pcibr_soft->bs_ext_ate_map, (size_t) count); - - /* rmalloc manages resources in the 1..n - * range, with 0 being failure. - * pcibr_ate_alloc manages resources - * in the 0..n-1 range, with -1 being failure. - */ - return index - 1; -} - -LOCAL void -pcibr_ate_free(pcibr_soft_t pcibr_soft, int index, int count) -/* Who says there's no such thing as a free meal? :-) */ -{ - /* note the "+1" since rmalloc handles 1..n but - * we start counting ATEs at zero.
- */ -/* printk("Colin: pcibr_ate_free - index %d count %d\n", index, count); */ - - rmfree((index < pcibr_soft->bs_int_ate_size) - ? pcibr_soft->bs_int_ate_map - : pcibr_soft->bs_ext_ate_map, - count, index + 1); -} - -LOCAL pcibr_info_t -pcibr_info_get(devfs_handle_t vhdl) -{ - return (pcibr_info_t) pciio_info_get(vhdl); -} - -pcibr_info_t -pcibr_device_info_new( - pcibr_soft_t pcibr_soft, - pciio_slot_t slot, - pciio_function_t rfunc, - pciio_vendor_id_t vendor, - pciio_device_id_t device) -{ - pcibr_info_t pcibr_info; - pciio_function_t func; - int ibit; - - func = (rfunc == PCIIO_FUNC_NONE) ? 0 : rfunc; - - NEW(pcibr_info); - pciio_device_info_new(&pcibr_info->f_c, - pcibr_soft->bs_vhdl, - slot, rfunc, - vendor, device); - - if (slot != PCIIO_SLOT_NONE) { - - /* - * Currently favored mapping from PCI - * slot number and INTA/B/C/D to Bridge - * PCI Interrupt Bit Number: - * - * SLOT A B C D - * 0 0 4 0 4 - * 1 1 5 1 5 - * 2 2 6 2 6 - * 3 3 7 3 7 - * 4 4 0 4 0 - * 5 5 1 5 1 - * 6 6 2 6 2 - * 7 7 3 7 3 - * - * XXX- allow pcibr_hints to override default - * XXX- allow ADMIN to override pcibr_hints - */ - for (ibit = 0; ibit < 4; ++ibit) - pcibr_info->f_ibit[ibit] = - (slot + 4 * ibit) & 7; - - /* - * Record the info in the sparse func info space. - */ - if (func < pcibr_soft->bs_slot[slot].bss_ninfo) - pcibr_soft->bs_slot[slot].bss_infos[func] = pcibr_info; - } - return pcibr_info; -} - -void -pcibr_device_info_free(devfs_handle_t pcibr_vhdl, pciio_slot_t slot) -{ - pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); - pcibr_info_t pcibr_info; - pciio_function_t func; - pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[slot]; - int nfunc = slotp->bss_ninfo; - - - for (func = 0; func < nfunc; func++) { - pcibr_info = slotp->bss_infos[func]; - - if (!pcibr_info) - continue; - - slotp->bss_infos[func] = 0; - pciio_device_info_unregister(pcibr_vhdl, &pcibr_info->f_c); - pciio_device_info_free(&pcibr_info->f_c); - DEL(pcibr_info); - } - - /* Clear the DEVIO(x) for this slot */ - slotp->bss_devio.bssd_space = PCIIO_SPACE_NONE; - slotp->bss_devio.bssd_base = PCIBR_D32_BASE_UNSET; - slotp->bss_device = 0; - - - /* Reset the mapping usage counters */ - slotp->bss_pmu_uctr = 0; - slotp->bss_d32_uctr = 0; - slotp->bss_d64_uctr = 0; - - /* Clear the Direct translation info */ - slotp->bss_d64_base = PCIBR_D64_BASE_UNSET; - slotp->bss_d64_flags = 0; - slotp->bss_d32_base = PCIBR_D32_BASE_UNSET; - slotp->bss_d32_flags = 0; - - /* Clear out shadow info necessary for the external SSRAM workaround */ - slotp->bss_ext_ates_active = ATOMIC_INIT(0); - slotp->bss_cmd_pointer = 0; - slotp->bss_cmd_shadow = 0; - -} - -/* - * PCI_ADDR_SPACE_LIMITS_LOAD - * Gets the current values of - * pci io base, - * pci io last, - * pci low memory base, - * pci low memory last, - * pci high memory base, - * pci high memory last - */ -#define PCI_ADDR_SPACE_LIMITS_LOAD() \ - pci_io_fb = pcibr_soft->bs_spinfo.pci_io_base; \ - pci_io_fl = pcibr_soft->bs_spinfo.pci_io_last; \ - pci_lo_fb = pcibr_soft->bs_spinfo.pci_swin_base; \ - pci_lo_fl = pcibr_soft->bs_spinfo.pci_swin_last; \ - pci_hi_fb = pcibr_soft->bs_spinfo.pci_mem_base; \ - pci_hi_fl = pcibr_soft->bs_spinfo.pci_mem_last; -/* - * PCI_ADDR_SPACE_LIMITS_STORE - * Sets the current values of - * pci io base, - * pci io last, - * pci low memory base, - * pci low memory last, - * pci high memory base, - * pci high memory last - */ -#define PCI_ADDR_SPACE_LIMITS_STORE() \ - pcibr_soft->bs_spinfo.pci_io_base = pci_io_fb; \ - pcibr_soft->bs_spinfo.pci_io_last = pci_io_fl; \ - 
pcibr_soft->bs_spinfo.pci_swin_base = pci_lo_fb; \ - pcibr_soft->bs_spinfo.pci_swin_last = pci_lo_fl; \ - pcibr_soft->bs_spinfo.pci_mem_base = pci_hi_fb; \ - pcibr_soft->bs_spinfo.pci_mem_last = pci_hi_fl; - -#define PCI_ADDR_SPACE_LIMITS_PRINT() \ - printf("+++++++++++++++++++++++\n" \ - "IO base 0x%x last 0x%x\n" \ - "SWIN base 0x%x last 0x%x\n" \ - "MEM base 0x%x last 0x%x\n" \ - "+++++++++++++++++++++++\n", \ - pcibr_soft->bs_spinfo.pci_io_base, \ - pcibr_soft->bs_spinfo.pci_io_last, \ - pcibr_soft->bs_spinfo.pci_swin_base, \ - pcibr_soft->bs_spinfo.pci_swin_last, \ - pcibr_soft->bs_spinfo.pci_mem_base, \ - pcibr_soft->bs_spinfo.pci_mem_last); - -/* - * pcibr_slot_info_init - * Probe for this slot and see if it is populated. - * If it is populated initialize the generic PCI infrastructural - * information associated with this particular PCI device. - */ -int -pcibr_slot_info_init(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot) -{ - pcibr_soft_t pcibr_soft; - pcibr_info_h pcibr_infoh; - pcibr_info_t pcibr_info; - bridge_t *bridge; - cfg_p cfgw; - unsigned idword; - unsigned pfail; - unsigned idwords[8]; - pciio_vendor_id_t vendor; - pciio_device_id_t device; - unsigned htype; -#if !defined(CONFIG_IA64_SGI_SN1) - int nbars; -#endif - cfg_p wptr; - int win; - pciio_space_t space; - iopaddr_t pci_io_fb, pci_io_fl; - iopaddr_t pci_lo_fb, pci_lo_fl; - iopaddr_t pci_hi_fb, pci_hi_fl; - int nfunc; - pciio_function_t rfunc; - int func; - devfs_handle_t conn_vhdl; - pcibr_soft_slot_t slotp; - - /* Get the basic software information required to proceed */ - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - if (!pcibr_soft) - return(EINVAL); - - bridge = pcibr_soft->bs_base; - if (!PCIBR_VALID_SLOT(slot)) - return(EINVAL); - - /* If we have a host slot (eg:- IOC3 has 2 PCI slots and the initialization - * is done by the host slot then we are done. - */ - if (pcibr_soft->bs_slot[slot].has_host) { - return(0); - } - - /* Check for a slot with any system critical functions */ - if (pcibr_is_slot_sys_critical(pcibr_vhdl, slot)) - return(EPERM); - - /* Load the current values of allocated PCI address spaces */ - PCI_ADDR_SPACE_LIMITS_LOAD(); - - /* Try to read the device-id/vendor-id from the config space */ - cfgw = bridge->b_type0_cfg_dev[slot].l; - - if (pcibr_probe_slot(bridge, cfgw, &idword)) - return(ENODEV); - - slotp = &pcibr_soft->bs_slot[slot]; - slotp->slot_status |= SLOT_POWER_UP; - - vendor = 0xFFFF & idword; - /* If the vendor id is not valid then the slot is not populated - * and we are done. - */ - if (vendor == 0xFFFF) - return(ENODEV); - - device = 0xFFFF & (idword >> 16); - htype = do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1); - - nfunc = 1; - rfunc = PCIIO_FUNC_NONE; - pfail = 0; - - /* NOTE: if a card claims to be multifunction - * but only responds to config space 0, treat - * it as a unifunction card. 
- */ - - if (htype & 0x80) { /* MULTIFUNCTION */ - for (func = 1; func < 8; ++func) { - cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; - if (pcibr_probe_slot(bridge, cfgw, &idwords[func])) { - pfail |= 1 << func; - continue; - } - vendor = 0xFFFF & idwords[func]; - if (vendor == 0xFFFF) { - pfail |= 1 << func; - continue; - } - nfunc = func + 1; - rfunc = 0; - } - cfgw = bridge->b_type0_cfg_dev[slot].l; - } - NEWA(pcibr_infoh, nfunc); - - pcibr_soft->bs_slot[slot].bss_ninfo = nfunc; - pcibr_soft->bs_slot[slot].bss_infos = pcibr_infoh; - - for (func = 0; func < nfunc; ++func) { - unsigned cmd_reg; - - if (func) { - if (pfail & (1 << func)) - continue; - - idword = idwords[func]; - cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; - - device = 0xFFFF & (idword >> 16); - htype = do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1); - rfunc = func; - } - htype &= 0x7f; - if (htype != 0x00) { - PRINT_WARNING("%s pcibr: pci slot %d func %d has strange header type 0x%x\n", - pcibr_soft->bs_name, slot, func, htype); -#if defined(CONFIG_IA64_SGI_SN1) - continue; -#else - nbars = 2; - } else { - nbars = PCI_CFG_BASE_ADDRS; -#endif - } -#if DEBUG && ATTACH_DEBUG - PRINT_NOTICE( - "%s pcibr: pci slot %d func %d: vendor 0x%x device 0x%x", - pcibr_soft->bs_name, slot, func, vendor, device); -#endif - - pcibr_info = pcibr_device_info_new - (pcibr_soft, slot, rfunc, vendor, device); - conn_vhdl = pciio_device_info_register(pcibr_vhdl, &pcibr_info->f_c); - if (func == 0) - slotp->slot_conn = conn_vhdl; - - cmd_reg = cfgw[PCI_CFG_COMMAND / 4]; - - wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4; - -#if defined(CONFIG_IA64_SGI_SN1) - for (win = 0; win < PCI_CFG_BASE_ADDRS; ++win) -#else - for (win = 0; win < nbars; ++win) -#endif - { - iopaddr_t base, mask, code; - size_t size; - - /* - * GET THE BASE & SIZE OF THIS WINDOW: - * - * The low two or four bits of the BASE register - * determines which address space we are in; the - * rest is a base address. BASE registers - * determine windows that are power-of-two sized - * and naturally aligned, so we can get the size - * of a window by writing all-ones to the - * register, reading it back, and seeing which - * bits are used for decode; the least - * significant nonzero bit is also the size of - * the window. - * - * WARNING: someone may already have allocated - * some PCI space to this window, and in fact - * PIO may be in process at this very moment - * from another processor (or even from this - * one, if we get interrupted)! So, if the BASE - * already has a nonzero address, be generous - * and use the LSBit of that address as the - * size; this could overstate the window size. - * Usually, when one card is set up, all are set - * up; so, since we don't bitch about - * overlapping windows, we are ok. - * - * UNFORTUNATELY, some cards do not clear their - * BASE registers on reset. I have two heuristics - * that can detect such cards: first, if the - * decode enable is turned off for the space - * that the window uses, we can disregard the - * initial value. second, if the address is - * outside the range that we use, we can disregard - * it as well. - * - * This is looking very PCI generic. Except for - * knowing how many slots and where their config - * spaces are, this window loop and the next one - * could probably be shared with other PCI host - * adapters. It would be interesting to see if - * this could be pushed up into pciio, when we - * start supporting more PCI providers. 
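[The long comment above describes the standard PCI BASE-register sizing probe: write all-ones, read back, and take the least significant decode bit as the window size. A minimal sketch of that trick follows; read_bar()/write_bar() are hypothetical stand-ins for the config-space accesses the driver performs through wptr[], and the flag-bit mask is the memory-space case only.]

    #include <stdint.h>

    /* Hypothetical config-space accessors, not part of the driver. */
    extern uint32_t read_bar(int win);
    extern void write_bar(int win, uint32_t val);

    /* Size decoded by a 32-bit memory BAR, or 0 if the BAR is unimplemented. */
    uint32_t bar_size(int win)
    {
        uint32_t saved = read_bar(win);
        uint32_t mask;

        write_bar(win, ~0U);            /* turn on all bits */
        mask = read_bar(win) & ~0xFU;   /* keep address bits, drop the type bits */
        write_bar(win, saved);          /* restore whatever was there before */

        return mask ? (mask & -mask) : 0;   /* least significant decode bit == window size */
    }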
- */ -#ifdef LITTLE_ENDIAN - base = wptr[((win*4)^4)/4]; -#else - base = wptr[win]; -#endif - - if (base & PCI_BA_IO_SPACE) { - /* BASE is in I/O space. */ - space = PCIIO_SPACE_IO; - mask = -4; - code = base & 3; - base = base & mask; - if (base == 0) { - ; /* not assigned */ - } else if (!(cmd_reg & PCI_CMD_IO_SPACE)) { - base = 0; /* decode not enabled */ - } - } else { - /* BASE is in MEM space. */ - space = PCIIO_SPACE_MEM; - mask = -16; - code = base & PCI_BA_MEM_LOCATION; /* extract BAR type */ - base = base & mask; - if (base == 0) { - ; /* not assigned */ - } else if (!(cmd_reg & PCI_CMD_MEM_SPACE)) { - base = 0; /* decode not enabled */ - } else if (base & 0xC0000000) { - base = 0; /* outside permissable range */ - } else if ((code == PCI_BA_MEM_64BIT) && -#ifdef LITTLE_ENDIAN - (wptr[(((win + 1)*4)^4)/4] != 0)) { -#else - (wptr[win + 1] != 0)) { -#endif /* LITTLE_ENDIAN */ - base = 0; /* outside permissable range */ - } - } - - if (base != 0) { /* estimate size */ - size = base & -base; - } else { /* calculate size */ -#ifdef LITTLE_ENDIAN - wptr[((win*4)^4)/4] = ~0; /* turn on all bits */ - size = wptr[((win*4)^4)/4]; /* get stored bits */ -#else - wptr[win] = ~0; /* turn on all bits */ - size = wptr[win]; /* get stored bits */ -#endif /* LITTLE_ENDIAN */ - size &= mask; /* keep addr */ - size &= -size; /* keep lsbit */ - if (size == 0) - continue; - } - - pcibr_info->f_window[win].w_space = space; - pcibr_info->f_window[win].w_base = base; - pcibr_info->f_window[win].w_size = size; - - /* - * If this window already has PCI space - * allocated for it, "subtract" that space from - * our running freeblocks. Don't worry about - * overlaps in existing allocated windows; we - * may be overstating their sizes anyway. - */ - - if (base && size) { - if (space == PCIIO_SPACE_IO) { - pcibr_freeblock_sub(&pci_io_fb, - &pci_io_fl, - base, size); - } else { - pcibr_freeblock_sub(&pci_lo_fb, - &pci_lo_fl, - base, size); - pcibr_freeblock_sub(&pci_hi_fb, - &pci_hi_fl, - base, size); - } - } -#if defined(IOC3_VENDOR_ID_NUM) && defined(IOC3_DEVICE_ID_NUM) - /* - * IOC3 BASE_ADDR* BUG WORKAROUND - * - - * If we write to BASE1 on the IOC3, the - * data in BASE0 is replaced. The - * original workaround was to remember - * the value of BASE0 and restore it - * when we ran off the end of the BASE - * registers; however, a later - * workaround was added (I think it was - * rev 1.44) to avoid setting up - * anything but BASE0, with the comment - * that writing all ones to BASE1 set - * the enable-parity-error test feature - * in IOC3's SCR bit 14. - * - * So, unless we defer doing any PCI - * space allocation until drivers - * attach, and set up a way for drivers - * (the IOC3 in paricular) to tell us - * generically to keep our hands off - * BASE registers, we gotta "know" about - * the IOC3 here. - * - * Too bad the PCI folks didn't reserve the - * all-zero value for 'no BASE here' (it is a - * valid code for an uninitialized BASE in - * 32-bit PCI memory space). 
- */ - - if ((vendor == IOC3_VENDOR_ID_NUM) && - (device == IOC3_DEVICE_ID_NUM)) - break; -#endif - if (code == PCI_BA_MEM_64BIT) { - win++; /* skip upper half */ -#ifdef LITTLE_ENDIAN - wptr[((win*4)^4)/4] = 0; /* which must be zero */ -#else - wptr[win] = 0; /* which must be zero */ -#endif /* LITTLE_ENDIAN */ - } - } /* next win */ - } /* next func */ - - /* Store back the values for allocated PCI address spaces */ - PCI_ADDR_SPACE_LIMITS_STORE(); - return(0); -} - -/* - * pcibr_slot_info_free - * Remove all the PCI infrastructural information associated - * with a particular PCI device. - */ -int -pcibr_slot_info_free(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot) -{ - pcibr_soft_t pcibr_soft; - pcibr_info_h pcibr_infoh; - int nfunc; - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - - if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) - return(EINVAL); - -#if !defined(CONFIG_IA64_SGI_SN1) - /* Clean out all the base registers */ - bridge = pcibr_soft->bs_base; - cfgw = bridge->b_type0_cfg_dev[slot].l; - wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4; - - for (win = 0; win < PCI_CFG_BASE_ADDRS; ++win) -#ifdef LITTLE_ENDIAN - wptr[((win*4)^4)/4] = 0; -#else - wptr[win] = 0; -#endif /* LITTLE_ENDIAN */ -#endif /* !CONFIG_IA64_SGI_SN1 */ - - nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; - - pcibr_device_info_free(pcibr_vhdl, slot); - - pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; - DELA(pcibr_infoh,nfunc); - pcibr_soft->bs_slot[slot].bss_ninfo = 0; - - return(0); -} - -int as_debug = 0; -/* - * pcibr_slot_addr_space_init - * Reserve chunks of PCI address space as required by - * the base registers in the card. - */ -int -pcibr_slot_addr_space_init(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot) -{ - pcibr_soft_t pcibr_soft; - pcibr_info_h pcibr_infoh; - pcibr_info_t pcibr_info; - bridge_t *bridge; - iopaddr_t pci_io_fb, pci_io_fl; - iopaddr_t pci_lo_fb, pci_lo_fl; - iopaddr_t pci_hi_fb, pci_hi_fl; - size_t align; - iopaddr_t mask; - int nbars; - int nfunc; - int func; - int win; - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - - if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) - return(EINVAL); - - bridge = pcibr_soft->bs_base; - - /* Get the current values for the allocated PCI address spaces */ - PCI_ADDR_SPACE_LIMITS_LOAD(); - - if (as_debug) -#ifdef LATER - PCI_ADDR_SPACE_LIMITS_PRINT(); -#endif - /* allocate address space, - * for windows that have not been - * previously assigned. - */ - if (pcibr_soft->bs_slot[slot].has_host) { - return(0); - } - - nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; - if (nfunc < 1) - return(EINVAL); - - pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; - if (!pcibr_infoh) - return(EINVAL); - - /* - * Try to make the DevIO windows not - * overlap by pushing the "io" and "hi" - * allocation areas up to the next one - * or two megabyte bound. This also - * keeps them from being zero. - * - * DO NOT do this with "pci_lo" since - * the entire "lo" area is only a - * megabyte, total ... - */ - align = (slot < 2) ? 
0x200000 : 0x100000; - mask = -align; - pci_io_fb = (pci_io_fb + align - 1) & mask; - pci_hi_fb = (pci_hi_fb + align - 1) & mask; - - for (func = 0; func < nfunc; ++func) { - cfg_p cfgw; - cfg_p wptr; - pciio_space_t space; - iopaddr_t base; - size_t size; - cfg_p pci_cfg_cmd_reg_p; - unsigned pci_cfg_cmd_reg; - unsigned pci_cfg_cmd_reg_add = 0; - - pcibr_info = pcibr_infoh[func]; - - if (!pcibr_info) - continue; - - if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) - continue; - - cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; - wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4; - -#if defined(CONFIG_IA64_SGI_SN1) - nbars = PCI_CFG_BASE_ADDRS; -#else - if ((do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1) & 0x7f) != 0) - nbars = 2; - else - nbars = PCI_CFG_BASE_ADDRS; -#endif - - for (win = 0; win < nbars; ++win) { - - space = pcibr_info->f_window[win].w_space; - base = pcibr_info->f_window[win].w_base; - size = pcibr_info->f_window[win].w_size; - - if (size < 1) - continue; - - if (base >= size) { -#if DEBUG && PCI_DEBUG - printk("pcibr: slot %d func %d window %d is in %d[0x%x..0x%x], alloc by prom\n", - slot, func, win, space, base, base + size - 1); -#endif - continue; /* already allocated */ - } - align = size; /* ie. 0x00001000 */ - if (align < _PAGESZ) - align = _PAGESZ; /* ie. 0x00004000 */ - mask = -align; /* ie. 0xFFFFC000 */ - - switch (space) { - case PCIIO_SPACE_IO: - base = (pci_io_fb + align - 1) & mask; - if ((base + size) > pci_io_fl) { - base = 0; - break; - } - pci_io_fb = base + size; - break; - - case PCIIO_SPACE_MEM: -#ifdef LITTLE_ENDIAN - if ((wptr[((win*4)^4)/4] & PCI_BA_MEM_LOCATION) == -#else - if ((wptr[win] & PCI_BA_MEM_LOCATION) == -#endif /* LITTLE_ENDIAN */ - PCI_BA_MEM_1MEG) { - /* allocate from 20-bit PCI space */ - base = (pci_lo_fb + align - 1) & mask; - if ((base + size) > pci_lo_fl) { - base = 0; - break; - } - pci_lo_fb = base + size; - } else { - /* allocate from 32-bit or 64-bit PCI space */ - base = (pci_hi_fb + align - 1) & mask; - if ((base + size) > pci_hi_fl) { - base = 0; - break; - } - pci_hi_fb = base + size; - } - break; - - default: - base = 0; -#if DEBUG && PCI_DEBUG - printk("pcibr: slot %d window %d had bad space code %d\n", - slot, win, space); -#endif - } - pcibr_info->f_window[win].w_base = base; -#ifdef LITTLE_ENDIAN - wptr[((win*4)^4)/4] = base; -#if DEBUG && PCI_DEBUG - printk("Setting base address 0x%p base 0x%x\n", &(wptr[((win*4)^4)/4]), base); -#endif -#else - wptr[win] = base; -#endif /* LITTLE_ENDIAN */ - -#if DEBUG && PCI_DEBUG - if (base >= size) - printk("pcibr: slot %d func %d window %d is in %d [0x%x..0x%x], alloc by pcibr\n", - slot, func, win, space, base, base + size - 1); - else - printk("pcibr: slot %d func %d window %d, unable to alloc 0x%x in 0x%p\n", - slot, func, win, size, space); -#endif - } /* next base */ - - /* - * Allocate space for the EXPANSION ROM - * NOTE: DO NOT DO THIS ON AN IOC3, - * as it blows the system away. 
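[The allocation loop above repeatedly rounds a free-space cursor up to a power-of-two boundary with the idiom base = (base + align - 1) & mask, where mask = -align. A self-contained sketch of that arithmetic, under the assumption that align is a power of two; the helper name is illustrative only.]

    #include <stdint.h>
    #include <assert.h>

    /* Round addr up to the next multiple of align (align must be a power of two). */
    static uint64_t align_up(uint64_t addr, uint64_t align)
    {
        return (addr + align - 1) & ~(align - 1);   /* ~(align - 1) == -align here */
    }

    int main(void)
    {
        assert(align_up(0x1234, 0x1000) == 0x2000);  /* rounds up to the next page */
        assert(align_up(0x2000, 0x1000) == 0x2000);  /* already aligned: unchanged */
        return 0;
    }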
- */ - base = size = 0; - if ((pcibr_soft->bs_slot[slot].bss_vendor_id != IOC3_VENDOR_ID_NUM) || - (pcibr_soft->bs_slot[slot].bss_device_id != IOC3_DEVICE_ID_NUM)) { - - wptr = cfgw + PCI_EXPANSION_ROM / 4; -#ifdef LITTLE_ENDIAN - wptr[1] = 0xFFFFF000; - mask = wptr[1]; -#else - *wptr = 0xFFFFF000; - mask = *wptr; -#endif /* LITTLE_ENDIAN */ - if (mask & 0xFFFFF000) { - size = mask & -mask; - align = size; - if (align < _PAGESZ) - align = _PAGESZ; - mask = -align; - base = (pci_hi_fb + align - 1) & mask; - if ((base + size) > pci_hi_fl) - base = size = 0; - else { - pci_hi_fb = base + size; -#ifdef LITTLE_ENDIAN - wptr[1] = base; -#else - *wptr = base; -#endif /* LITTLE_ENDIAN */ -#if DEBUG && PCI_DEBUG - printk("%s/%d ROM in 0x%lx..0x%lx (alloc by pcibr)\n", - pcibr_soft->bs_name, slot, - base, base + size - 1); -#endif - } - } - } - pcibr_info->f_rbase = base; - pcibr_info->f_rsize = size; - - /* - * if necessary, update the board's - * command register to enable decoding - * in the windows we added. - * - * There are some bits we always want to - * be sure are set. - */ - pci_cfg_cmd_reg_add |= PCI_CMD_IO_SPACE; - - /* - * The Adaptec 1160 FC Controller WAR #767995: - * The part incorrectly ignores the upper 32 bits of a 64 bit - * address when decoding references to it's registers so to - * keep it from responding to a bus cycle that it shouldn't - * we only use I/O space to get at it's registers. Don't - * enable memory space accesses on that PCI device. - */ - #define FCADP_VENDID 0x9004 /* Adaptec Vendor ID from fcadp.h */ - #define FCADP_DEVID 0x1160 /* Adaptec 1160 Device ID from fcadp.h */ - - if ((pcibr_info->f_vendor != FCADP_VENDID) || - (pcibr_info->f_device != FCADP_DEVID)) - pci_cfg_cmd_reg_add |= PCI_CMD_MEM_SPACE; - - pci_cfg_cmd_reg_add |= PCI_CMD_BUS_MASTER; - - pci_cfg_cmd_reg_p = cfgw + PCI_CFG_COMMAND / 4; - pci_cfg_cmd_reg = *pci_cfg_cmd_reg_p; -#if PCI_FBBE /* XXX- check here to see if dev can do fast-back-to-back */ - if (!((pci_cfg_cmd_reg >> 16) & PCI_STAT_F_BK_BK_CAP)) - fast_back_to_back_enable = 0; -#endif - pci_cfg_cmd_reg &= 0xFFFF; - if (pci_cfg_cmd_reg_add & ~pci_cfg_cmd_reg) - *pci_cfg_cmd_reg_p = pci_cfg_cmd_reg | pci_cfg_cmd_reg_add; - - } /* next func */ - - /* Now that we have allocated new chunks of PCI address spaces to this - * card we need to update the bookkeeping values which indicate - * the current PCI address space allocations. - */ - PCI_ADDR_SPACE_LIMITS_STORE(); - return(0); -} - -/* - * pcibr_slot_device_init - * Setup the device register in the bridge for this PCI slot. - */ -int -pcibr_slot_device_init(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot) -{ - pcibr_soft_t pcibr_soft; - bridge_t *bridge; - bridgereg_t devreg; - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - - if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) - return(EINVAL); - - bridge = pcibr_soft->bs_base; - - /* - * Adjustments to Device(x) - * and init of bss_device shadow - */ - devreg = bridge->b_device[slot].reg; - devreg &= ~BRIDGE_DEV_PAGE_CHK_DIS; - devreg |= BRIDGE_DEV_COH | BRIDGE_DEV_VIRTUAL_EN; -#ifdef LITTLE_ENDIAN - devreg |= BRIDGE_DEV_DEV_SWAP; -#endif - pcibr_soft->bs_slot[slot].bss_device = devreg; - bridge->b_device[slot].reg = devreg; - -#if DEBUG && PCI_DEBUG - printk("pcibr Device(%d): 0x%lx\n", slot, bridge->b_device[slot].reg); -#endif - -#if DEBUG && PCI_DEBUG - printk("pcibr: PCI space allocation done.\n"); -#endif - - return(0); -} - -/* - * pcibr_slot_guest_info_init - * Setup the host/guest relations for a PCI slot. 
- */ -int -pcibr_slot_guest_info_init(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot) -{ - pcibr_soft_t pcibr_soft; - pcibr_info_h pcibr_infoh; - pcibr_info_t pcibr_info; - pcibr_soft_slot_t slotp; - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - - if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) - return(EINVAL); - - slotp = &pcibr_soft->bs_slot[slot]; - - /* create info and verticies for guest slots; - * for compatibilitiy macros, create info - * for even unpopulated slots (but do not - * build verticies for them). - */ - if (pcibr_soft->bs_slot[slot].bss_ninfo < 1) { - NEWA(pcibr_infoh, 1); - pcibr_soft->bs_slot[slot].bss_ninfo = 1; - pcibr_soft->bs_slot[slot].bss_infos = pcibr_infoh; - - pcibr_info = pcibr_device_info_new - (pcibr_soft, slot, PCIIO_FUNC_NONE, - PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE); - - if (pcibr_soft->bs_slot[slot].has_host) { - slotp->slot_conn = pciio_device_info_register - (pcibr_vhdl, &pcibr_info->f_c); - } - } - - /* generate host/guest relations - */ - if (pcibr_soft->bs_slot[slot].has_host) { - int host = pcibr_soft->bs_slot[slot].host_slot; - pcibr_soft_slot_t host_slotp = &pcibr_soft->bs_slot[host]; - - hwgraph_edge_add(slotp->slot_conn, - host_slotp->slot_conn, - EDGE_LBL_HOST); - - /* XXX- only gives us one guest edge per - * host. If/when we have a host with more than - * one guest, we will need to figure out how - * the host finds all its guests, and sorts - * out which one is which. - */ - hwgraph_edge_add(host_slotp->slot_conn, - slotp->slot_conn, - EDGE_LBL_GUEST); - } - - return(0); -} - -/* - * pcibr_slot_initial_rrb_alloc - * Allocate a default number of rrbs for this slot on - * the two channels. This is dictated by the rrb allocation - * strategy routine defined per platform. - */ - -int -pcibr_slot_initial_rrb_alloc(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot) -{ - pcibr_soft_t pcibr_soft; - pcibr_info_h pcibr_infoh; - pcibr_info_t pcibr_info; - bridge_t *bridge; - int c0, c1; - int r; - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - - if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) - return(EINVAL); - - bridge = pcibr_soft->bs_base; - - /* How may RRBs are on this slot? - */ - c0 = do_pcibr_rrb_count_valid(bridge, slot); - c1 = do_pcibr_rrb_count_valid(bridge, slot + PCIBR_RRB_SLOT_VIRTUAL); - -#if PCIBR_RRB_DEBUG - printk("pcibr_attach: slot %d started with %d+%d\n", slot, c0, c1); -#endif - - /* Do we really need any? 
- */ - pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; - pcibr_info = pcibr_infoh[0]; - if ((pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) && - !pcibr_soft->bs_slot[slot].has_host) { - if (c0 > 0) - do_pcibr_rrb_free(bridge, slot, c0); - if (c1 > 0) - do_pcibr_rrb_free(bridge, slot + PCIBR_RRB_SLOT_VIRTUAL, c1); - pcibr_soft->bs_rrb_valid[slot] = 0x1000; - pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL] = 0x1000; - return(ENODEV); - } - - pcibr_soft->bs_rrb_avail[slot & 1] -= c0 + c1; - pcibr_soft->bs_rrb_valid[slot] = c0; - pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL] = c1; - - pcibr_soft->bs_rrb_avail[0] = do_pcibr_rrb_count_avail(bridge, 0); - pcibr_soft->bs_rrb_avail[1] = do_pcibr_rrb_count_avail(bridge, 1); - - r = 3 - (c0 + c1); - - if (r > 0) { - pcibr_soft->bs_rrb_res[slot] = r; - pcibr_soft->bs_rrb_avail[slot & 1] -= r; - } - -#if PCIBR_RRB_DEBUG - printk("\t%d+%d+%d", - 0xFFF & pcibr_soft->bs_rrb_valid[slot], - 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], - pcibr_soft->bs_rrb_res[slot]); - printk("\n"); -#endif - - return(0); -} - -/* - * pcibr_slot_call_device_attach - * This calls the associated driver attach routine for the PCI - * card in this slot. - */ -int -pcibr_slot_call_device_attach(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot, - int drv_flags) -{ - pcibr_soft_t pcibr_soft; - pcibr_info_h pcibr_infoh; - pcibr_info_t pcibr_info; - async_attach_t aa = NULL; - int func; - devfs_handle_t xconn_vhdl,conn_vhdl; - int nfunc; - int error_func; - int error_slot = 0; - int error = ENODEV; - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - - if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) - return(EINVAL); - - - if (pcibr_soft->bs_slot[slot].has_host) { - return(EPERM); - } - - xconn_vhdl = pcibr_soft->bs_conn; - aa = async_attach_get_info(xconn_vhdl); - - nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; - pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; - - for (func = 0; func < nfunc; ++func) { - - pcibr_info = pcibr_infoh[func]; - - if (!pcibr_info) - continue; - - if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) - continue; - - conn_vhdl = pcibr_info->f_vertex; - - /* If the PCI device has been disabled in the prom, - * do not set it up for driver attach. NOTE: usrpci - * and pciba will not "see" this connection point! - */ - if (device_admin_info_get(conn_vhdl, ADMIN_LBL_DISABLED)) { -#ifdef SUPPORT_PRINTING_V_FORMAT - PRINT_WARNING("pcibr_slot_call_device_attach: %v disabled\n", - conn_vhdl); -#endif - continue; - } -#ifdef LATER - /* - * Activate if and when we support cdl. - */ - if (aa) - async_attach_add_info(conn_vhdl, aa); -#endif /* LATER */ - - error_func = pciio_device_attach(conn_vhdl, drv_flags); - - pcibr_info->f_att_det_error = error_func; - - if (error_func) - error_slot = error_func; - - error = error_slot; - - } /* next func */ - - if (error) { - if ((error != ENODEV) && (error != EUNATCH)) - pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_INCMPLT; - } else { - pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_CMPLT; - } - - return(error); -} - -/* - * pcibr_slot_call_device_detach - * This calls the associated driver detach routine for the PCI - * card in this slot. 
- */ -int -pcibr_slot_call_device_detach(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot, - int drv_flags) -{ - pcibr_soft_t pcibr_soft; - pcibr_info_h pcibr_infoh; - pcibr_info_t pcibr_info; - int func; - devfs_handle_t conn_vhdl = GRAPH_VERTEX_NONE; - int nfunc; - int error_func; - int error_slot = 0; - int error = ENODEV; - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - - if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) - return(EINVAL); - - if (pcibr_soft->bs_slot[slot].has_host) - return(EPERM); - - /* Make sure that we do not detach a system critical function vertex */ - if(pcibr_is_slot_sys_critical(pcibr_vhdl, slot)) - return(EPERM); - - nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; - pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; - - for (func = 0; func < nfunc; ++func) { - - pcibr_info = pcibr_infoh[func]; - - if (!pcibr_info) - continue; - - if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) - continue; - - conn_vhdl = pcibr_info->f_vertex; - - error_func = pciio_device_detach(conn_vhdl, drv_flags); - - pcibr_info->f_att_det_error = error_func; - - if (error_func) - error_slot = error_func; - - error = error_slot; - - } /* next func */ - - pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK; - - if (error) { - if ((error != ENODEV) && (error != EUNATCH)) - pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_INCMPLT; - } else { - if (conn_vhdl != GRAPH_VERTEX_NONE) - pcibr_device_unregister(conn_vhdl); - pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_CMPLT; - } - - return(error); -} - -/* - * pcibr_slot_detach - * This is a place holder routine to keep track of all the - * slot-specific freeing that needs to be done. - */ -int -pcibr_slot_detach(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot, - int drv_flags) -{ - int error; - - /* Call the device detach function */ - error = (pcibr_slot_call_device_detach(pcibr_vhdl, slot, drv_flags)); - return (error); - -} - -/* - * pcibr_is_slot_sys_critical - * Check slot for any functions that are system critical. - * Return 1 if any are system critical or 0 otherwise. - * - * This function will always return 0 when called by - * pcibr_attach() because the system critical vertices - * have not yet been set in the hwgraph. - */ -int -pcibr_is_slot_sys_critical(devfs_handle_t pcibr_vhdl, - pciio_slot_t slot) -{ - pcibr_soft_t pcibr_soft; - pcibr_info_h pcibr_infoh; - pcibr_info_t pcibr_info; - devfs_handle_t conn_vhdl = GRAPH_VERTEX_NONE; - int nfunc; - int func; - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) - return(0); - - nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; - pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; - - for (func = 0; func < nfunc; ++func) { - - pcibr_info = pcibr_infoh[func]; - if (!pcibr_info) - continue; - - if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) - continue; - - conn_vhdl = pcibr_info->f_vertex; - if (is_sys_critical_vertex(conn_vhdl)) { -#if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("%v is a system critical device vertex\n", conn_vhdl); -#else - PRINT_WARNING("%p is a system critical device vertex\n", conn_vhdl); -#endif - return(1); - } - - } - - return(0); -} - -/* - * pcibr_device_unregister - * This frees up any hardware resources reserved for this PCI device - * and removes any PCI infrastructural information setup for it. - * This is usually used at the time of shutting down of the PCI card. 
- */ -int -pcibr_device_unregister(devfs_handle_t pconn_vhdl) -{ - pciio_info_t pciio_info; - devfs_handle_t pcibr_vhdl; - pciio_slot_t slot; - pcibr_soft_t pcibr_soft; - bridge_t *bridge; - int error_call; - int error = 0; - - pciio_info = pciio_info_get(pconn_vhdl); - - pcibr_vhdl = pciio_info_master_get(pciio_info); - slot = pciio_info_slot_get(pciio_info); - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - bridge = pcibr_soft->bs_base; - - /* Clear all the hardware xtalk resources for this device */ - xtalk_widgetdev_shutdown(pcibr_soft->bs_conn, slot); - - /* Flush all the rrbs */ - pcibr_rrb_flush(pconn_vhdl); - - /* Free the rrbs allocated to this slot */ - error_call = do_pcibr_rrb_free(bridge, slot, - pcibr_soft->bs_rrb_valid[slot] + - pcibr_soft->bs_rrb_valid[slot + - PCIBR_RRB_SLOT_VIRTUAL]); - - if (error_call) - error = ERANGE; - - pcibr_soft->bs_rrb_valid[slot] = 0; - pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL] = 0; - pcibr_soft->bs_rrb_res[slot] = 0; - - /* Flush the write buffers !! */ - error_call = pcibr_wrb_flush(pconn_vhdl); - - if (error_call) - error = error_call; - - /* Clear the information specific to the slot */ - error_call = pcibr_slot_info_free(pcibr_vhdl, slot); - - if (error_call) - error = error_call; - - return(error); - -} - -/* - * build a convenience link path in the - * form of "...//bus/" - * - * returns 1 on success, 0 otherwise - * - * depends on hwgraph separator == '/' - */ -int -pcibr_bus_cnvlink(devfs_handle_t f_c, int slot) -{ - char dst[MAXDEVNAME]; - char *dp = dst; - char *cp, *xp; - int widgetnum; - char pcibus[8]; - devfs_handle_t nvtx, svtx; - int rv; - -#if DEBUG - printk("pcibr_bus_cnvlink: slot= %d f_c= %p\n", - slot, f_c); - { - int pos; - char dname[256]; - pos = devfs_generate_path(f_c, dname, 256); - printk("%s : path= %s\n", __FUNCTION__, &dname[pos]); - } -#endif - - if (GRAPH_SUCCESS != hwgraph_vertex_name_get(f_c, dst, MAXDEVNAME)) - return 0; - - /* dst example == /hw/module/001c02/Pbrick/xtalk/8/pci/direct */ - - /* find the widget number */ - xp = strstr(dst, "/"EDGE_LBL_XTALK"/"); - if (xp == NULL) - return 0; - widgetnum = atoi(xp+7); - if (widgetnum < XBOW_PORT_8 || widgetnum > XBOW_PORT_F) - return 0; - - /* remove "/pci/direct" from path */ - cp = strstr(dst, "/" EDGE_LBL_PCI "/" "direct"); - if (cp == NULL) - return 0; - *cp = (char)NULL; - - /* get the vertex for the widget */ - if (GRAPH_SUCCESS != hwgraph_traverse(NULL, dp, &svtx)) - return 0; - - *xp = (char)NULL; /* remove "/xtalk/..." from path */ - - /* dst example now == /hw/module/001c02/Pbrick */ - - /* get the bus number */ - strcat(dst, "/bus"); - sprintf(pcibus, "%d", p_busnum[widgetnum]); - - /* link to bus to widget */ - rv = hwgraph_path_add(NULL, dp, &nvtx); - if (GRAPH_SUCCESS == rv) - rv = hwgraph_edge_add(nvtx, svtx, pcibus); - - return (rv == GRAPH_SUCCESS); -} - - -/* - * pcibr_attach: called every time the crosstalk - * infrastructure is asked to initialize a widget - * that matches the part number we handed to the - * registration routine above. 
- */ -/*ARGSUSED */ -int -pcibr_attach(devfs_handle_t xconn_vhdl) -{ - /* REFERENCED */ - graph_error_t rc; - devfs_handle_t pcibr_vhdl; - devfs_handle_t ctlr_vhdl; - bridge_t *bridge = NULL; - bridgereg_t id; - int rev; - pcibr_soft_t pcibr_soft; - pcibr_info_t pcibr_info; - xwidget_info_t info; - xtalk_intr_t xtalk_intr; - device_desc_t dev_desc; - int slot; - int ibit; - devfs_handle_t noslot_conn; - char devnm[MAXDEVNAME], *s; - pcibr_hints_t pcibr_hints; - bridgereg_t b_int_enable; - unsigned rrb_fixed = 0; - - iopaddr_t pci_io_fb, pci_io_fl; - iopaddr_t pci_lo_fb, pci_lo_fl; - iopaddr_t pci_hi_fb, pci_hi_fl; - - int spl_level; -#ifdef LATER - char *nicinfo = (char *)0; -#endif - -#if PCI_FBBE - int fast_back_to_back_enable; -#endif - l1sc_t *scp; - nasid_t nasid; - - async_attach_t aa = NULL; - - aa = async_attach_get_info(xconn_vhdl); - -#if DEBUG && ATTACH_DEBUG - printk("pcibr_attach: xconn_vhdl= %p\n", xconn_vhdl); - { - int pos; - char dname[256]; - pos = devfs_generate_path(xconn_vhdl, dname, 256); - printk("%s : path= %s \n", __FUNCTION__, &dname[pos]); - } -#endif - - /* Setup the PRB for the bridge in CONVEYOR BELT - * mode. PRBs are setup in default FIRE-AND-FORGET - * mode during the initialization. - */ - hub_device_flags_set(xconn_vhdl, HUB_PIO_CONVEYOR); - - bridge = (bridge_t *) - xtalk_piotrans_addr(xconn_vhdl, NULL, - 0, sizeof(bridge_t), 0); - -#ifndef MEDUSA_HACK - if ((bridge->b_wid_stat & BRIDGE_STAT_PCI_GIO_N) == 0) - return -1; /* someone else handles GIO bridges. */ -#endif - -#ifdef BRINGUP - if (XWIDGET_PART_REV_NUM(bridge->b_wid_id) == XBRIDGE_PART_REV_A) - NeedXbridgeSwap = 1; -#endif - - /* - * Create the vertex for the PCI bus, which we - * will also use to hold the pcibr_soft and - * which will be the "master" vertex for all the - * pciio connection points we will hang off it. - * This needs to happen before we call nic_bridge_vertex_info - * as we are some of the *_vmc functions need access to the edges. - * - * Opening this vertex will provide access to - * the Bridge registers themselves. - */ - rc = hwgraph_path_add(xconn_vhdl, EDGE_LBL_PCI, &pcibr_vhdl); - ASSERT(rc == GRAPH_SUCCESS); - - ctlr_vhdl = NULL; - ctlr_vhdl = hwgraph_register(pcibr_vhdl, EDGE_LBL_CONTROLLER, - 0, DEVFS_FL_AUTO_DEVNUM, - 0, 0, - S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0, - &pcibr_fops, NULL); - - ASSERT(ctlr_vhdl != NULL); - - /* - * decode the nic, and hang its stuff off our - * connection point where other drivers can get - * at it. - */ -#ifdef LATER - nicinfo = BRIDGE_VERTEX_MFG_INFO(xconn_vhdl, (nic_data_t) & bridge->b_nic); -#endif - - /* - * Get the hint structure; if some NIC callback - * marked this vertex as "hands-off" then we - * just return here, before doing anything else. - */ - pcibr_hints = pcibr_hints_get(xconn_vhdl, 0); - - if (pcibr_hints && pcibr_hints->ph_hands_off) - return -1; /* generic operations disabled */ - - id = bridge->b_wid_id; - rev = XWIDGET_PART_REV_NUM(id); - - hwgraph_info_add_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, (arbitrary_info_t) rev); - - /* - * allocate soft state structure, fill in some - * fields, and hook it up to our vertex. 
- */ - NEW(pcibr_soft); - BZERO(pcibr_soft, sizeof *pcibr_soft); - pcibr_soft_set(pcibr_vhdl, pcibr_soft); - - pcibr_soft->bs_conn = xconn_vhdl; - pcibr_soft->bs_vhdl = pcibr_vhdl; - pcibr_soft->bs_base = bridge; - pcibr_soft->bs_rev_num = rev; - pcibr_soft->bs_intr_bits = pcibr_intr_bits; - if (is_xbridge(bridge)) { - pcibr_soft->bs_int_ate_size = XBRIDGE_INTERNAL_ATES; - pcibr_soft->bs_xbridge = 1; - } else { - pcibr_soft->bs_int_ate_size = BRIDGE_INTERNAL_ATES; - pcibr_soft->bs_xbridge = 0; - } - - nasid = NASID_GET(bridge); - scp = &NODEPDA( NASID_TO_COMPACT_NODEID(nasid) )->module->elsc; - pcibr_soft->bs_l1sc = scp; - pcibr_soft->bs_moduleid = iobrick_module_get(scp); - pcibr_soft->bsi_err_intr = 0; - - /* Bridges up through REV C - * are unable to set the direct - * byteswappers to BYTE_STREAM. - */ - if (pcibr_soft->bs_rev_num <= BRIDGE_PART_REV_C) { - pcibr_soft->bs_pio_end_io = PCIIO_WORD_VALUES; - pcibr_soft->bs_pio_end_mem = PCIIO_WORD_VALUES; - } -#if PCIBR_SOFT_LIST - { - pcibr_list_p self; - - NEW(self); - self->bl_soft = pcibr_soft; - self->bl_vhdl = pcibr_vhdl; - self->bl_next = pcibr_list; - self->bl_next = swap_ptr((void **) &pcibr_list, (void *)self); - } -#endif - - /* - * get the name of this bridge vertex and keep the info. Use this - * only where it is really needed now: like error interrupts. - */ - s = dev_to_name(pcibr_vhdl, devnm, MAXDEVNAME); - pcibr_soft->bs_name = kmalloc(strlen(s) + 1, GFP_KERNEL); - strcpy(pcibr_soft->bs_name, s); - -#if SHOW_REVS || DEBUG -#if !DEBUG - if (kdebug) -#endif - printk("%sBridge ASIC: rev %s (code=0x%x) at %s\n", - is_xbridge(bridge) ? "X" : "", - (rev == BRIDGE_PART_REV_A) ? "A" : - (rev == BRIDGE_PART_REV_B) ? "B" : - (rev == BRIDGE_PART_REV_C) ? "C" : - (rev == BRIDGE_PART_REV_D) ? "D" : - (rev == XBRIDGE_PART_REV_A) ? "A" : - (rev == XBRIDGE_PART_REV_B) ? "B" : - "unknown", - rev, pcibr_soft->bs_name); -#endif - - info = xwidget_info_get(xconn_vhdl); - pcibr_soft->bs_xid = xwidget_info_id_get(info); - pcibr_soft->bs_master = xwidget_info_master_get(info); - pcibr_soft->bs_mxid = xwidget_info_masterid_get(info); - - /* - * Init bridge lock. - */ - spin_lock_init(&pcibr_soft->bs_lock); - - /* - * If we have one, process the hints structure. 
- */ - if (pcibr_hints) { - rrb_fixed = pcibr_hints->ph_rrb_fixed; - - pcibr_soft->bs_rrb_fixed = rrb_fixed; - - if (pcibr_hints->ph_intr_bits) - pcibr_soft->bs_intr_bits = pcibr_hints->ph_intr_bits; - - for (slot = 0; slot < 8; ++slot) { - int hslot = pcibr_hints->ph_host_slot[slot] - 1; - - if (hslot < 0) { - pcibr_soft->bs_slot[slot].host_slot = slot; - } else { - pcibr_soft->bs_slot[slot].has_host = 1; - pcibr_soft->bs_slot[slot].host_slot = hslot; - } - } - } - /* - * set up initial values for state fields - */ - for (slot = 0; slot < 8; ++slot) { - pcibr_soft->bs_slot[slot].bss_devio.bssd_space = PCIIO_SPACE_NONE; - pcibr_soft->bs_slot[slot].bss_d64_base = PCIBR_D64_BASE_UNSET; - pcibr_soft->bs_slot[slot].bss_d32_base = PCIBR_D32_BASE_UNSET; - pcibr_soft->bs_slot[slot].bss_ext_ates_active = ATOMIC_INIT(0); - } - - for (ibit = 0; ibit < 8; ++ibit) { - pcibr_soft->bs_intr[ibit].bsi_xtalk_intr = 0; - pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_soft = pcibr_soft; - pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_list = NULL; - pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_stat = - &(bridge->b_int_status); - pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_hdlrcnt = 0; - pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_shared = 0; - pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_connected = 0; - } - - /* - * connect up our error handler - */ - xwidget_error_register(xconn_vhdl, pcibr_error_handler, pcibr_soft); - - /* - * Initialize various Bridge registers. - */ - - /* - * On pre-Rev.D bridges, set the PCI_RETRY_CNT - * to zero to avoid dropping stores. (#475347) - */ - if (rev < BRIDGE_PART_REV_D) - bridge->b_bus_timeout &= ~BRIDGE_BUS_PCI_RETRY_MASK; - - /* - * Clear all pending interrupts. - */ - bridge->b_int_rst_stat = (BRIDGE_IRR_ALL_CLR); - - /* - * Until otherwise set up, - * assume all interrupts are - * from slot 7. - */ - bridge->b_int_device = (uint32_t) 0xffffffff; - - { - bridgereg_t dirmap; - paddr_t paddr; - iopaddr_t xbase; - xwidgetnum_t xport; - iopaddr_t offset; - int num_entries = 0; - int entry; - cnodeid_t cnodeid; - nasid_t nasid; - char *node_val; - devfs_handle_t node_vhdl; - char vname[MAXDEVNAME]; - - /* Set the Bridge's 32-bit PCI to XTalk - * Direct Map register to the most useful - * value we can determine. Note that we - * must use a single xid for all of: - * direct-mapped 32-bit DMA accesses - * direct-mapped 64-bit DMA accesses - * DMA accesses through the PMU - * interrupts - * This is the only way to guarantee that - * completion interrupts will reach a CPU - * after all DMA data has reached memory. - * (Of course, there may be a few special - * drivers/controllers that explicitly manage - * this ordering problem.) - */ - - cnodeid = 0; /* default node id */ - /* - * Determine the base address node id to be used for all 32-bit - * Direct Mapping I/O. The default is node 0, but this can be changed - * via a DEVICE_ADMIN directive and the PCIBUS_DMATRANS_NODE - * attribute in the irix.sm config file. A device driver can obtain - * this node value via a call to pcibr_get_dmatrans_node(). 
- */ - node_val = device_admin_info_get(pcibr_vhdl, ADMIN_LBL_DMATRANS_NODE); - if (node_val != NULL) { - node_vhdl = hwgraph_path_to_vertex(node_val); - if (node_vhdl != GRAPH_VERTEX_NONE) { - cnodeid = nodevertex_to_cnodeid(node_vhdl); - } - if ((node_vhdl == GRAPH_VERTEX_NONE) || (cnodeid == CNODEID_NONE)) { - cnodeid = 0; - vertex_to_name(pcibr_vhdl, vname, sizeof(vname)); - PRINT_WARNING( "Invalid hwgraph node path specified:\n DEVICE_ADMIN: %s %s=%s\n", - vname, ADMIN_LBL_DMATRANS_NODE, node_val); - } - } - nasid = COMPACT_TO_NASID_NODEID(cnodeid); - paddr = NODE_OFFSET(nasid) + 0; - - /* currently, we just assume that if we ask - * for a DMA mapping to "zero" the XIO - * host will transmute this into a request - * for the lowest hunk of memory. - */ - xbase = xtalk_dmatrans_addr(xconn_vhdl, 0, - paddr, _PAGESZ, 0); - - if (xbase != XIO_NOWHERE) { - if (XIO_PACKED(xbase)) { - xport = XIO_PORT(xbase); - xbase = XIO_ADDR(xbase); - } else - xport = pcibr_soft->bs_mxid; - - offset = xbase & ((1ull << BRIDGE_DIRMAP_OFF_ADDRSHFT) - 1ull); - xbase >>= BRIDGE_DIRMAP_OFF_ADDRSHFT; - - dirmap = xport << BRIDGE_DIRMAP_W_ID_SHFT; - - if (xbase) - dirmap |= BRIDGE_DIRMAP_OFF & xbase; - else if (offset >= (512 << 20)) - dirmap |= BRIDGE_DIRMAP_ADD512; - - bridge->b_dir_map = dirmap; - } - /* - * Set bridge's idea of page size according to the system's - * idea of "IO page size". TBD: The idea of IO page size - * should really go away. - */ - /* - * ensure that we write and read without any interruption. - * The read following the write is required for the Bridge war - */ - spl_level = splhi(); -#if IOPGSIZE == 4096 - bridge->b_wid_control &= ~BRIDGE_CTRL_PAGE_SIZE; -#elif IOPGSIZE == 16384 - bridge->b_wid_control |= BRIDGE_CTRL_PAGE_SIZE; -#else - <<>>; -#endif - bridge->b_wid_control; /* inval addr bug war */ - splx(spl_level); - - /* Initialize internal mapping entries */ - for (entry = 0; entry < pcibr_soft->bs_int_ate_size; entry++) - bridge->b_int_ate_ram[entry].wr = 0; - - /* - * Determine if there's external mapping SSRAM on this - * bridge. Set up Bridge control register appropriately, - * inititlize SSRAM, and set software up to manage RAM - * entries as an allocatable resource. - * - * Currently, we just use the rm* routines to manage ATE - * allocation. We should probably replace this with a - * Best Fit allocator. - * - * For now, if we have external SSRAM, avoid using - * the internal ssram: we can't turn PREFETCH on - * when we use the internal SSRAM; and besides, - * this also guarantees that no allocation will - * straddle the internal/external line, so we - * can increment ATE write addresses rather than - * recomparing against BRIDGE_INTERNAL_ATES every - * time. - */ - if (is_xbridge(bridge)) - num_entries = 0; - else - num_entries = pcibr_init_ext_ate_ram(bridge); - - /* we always have 128 ATEs (512 for Xbridge) inside the chip - * even if disabled for debugging. 
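[Before the rmallocmap() setup that follows, it may help to restate the off-by-one convention used by pcibr_ate_alloc()/pcibr_ate_free() earlier in this file: the rm* allocator hands out indices 1..n with 0 meaning failure, while ATE numbers run 0..n-1 with -1 meaning failure. A minimal sketch of that translation; the prototypes below are simplified stand-ins, not the real declarations, which take a struct map pointer.]

    #include <stddef.h>

    /* Simplified stand-in prototypes for illustration only. */
    extern unsigned long rmalloc(void *map, size_t count);           /* returns 1..n, 0 on failure */
    extern void rmfree(void *map, size_t count, unsigned long where);

    int ate_alloc(void *map, size_t count)
    {
        return (int) rmalloc(map, count) - 1;            /* 0..n-1, or -1 on failure */
    }

    void ate_free(void *map, int index, size_t count)
    {
        rmfree(map, count, (unsigned long) index + 1);   /* undo the -1 bias */
    }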
- */ - pcibr_soft->bs_int_ate_map = rmallocmap(pcibr_soft->bs_int_ate_size); - pcibr_ate_free(pcibr_soft, 0, pcibr_soft->bs_int_ate_size); -#if PCIBR_ATE_DEBUG - printk("pcibr_attach: %d INTERNAL ATEs\n", pcibr_soft->bs_int_ate_size); -#endif - - if (num_entries > pcibr_soft->bs_int_ate_size) { -#if PCIBR_ATE_NOTBOTH /* for debug -- forces us to use external ates */ - printk("pcibr_attach: disabling internal ATEs.\n"); - pcibr_ate_alloc(pcibr_soft, pcibr_soft->bs_int_ate_size); -#endif - pcibr_soft->bs_ext_ate_map = rmallocmap(num_entries); - pcibr_ate_free(pcibr_soft, pcibr_soft->bs_int_ate_size, - num_entries - pcibr_soft->bs_int_ate_size); -#if PCIBR_ATE_DEBUG - printk("pcibr_attach: %d EXTERNAL ATEs\n", - num_entries - pcibr_soft->bs_int_ate_size); -#endif - } - } - - { - bridgereg_t dirmap; - iopaddr_t xbase; - - /* - * now figure the *real* xtalk base address - * that dirmap sends us to. - */ - dirmap = bridge->b_dir_map; - if (dirmap & BRIDGE_DIRMAP_OFF) - xbase = (iopaddr_t)(dirmap & BRIDGE_DIRMAP_OFF) - << BRIDGE_DIRMAP_OFF_ADDRSHFT; - else if (dirmap & BRIDGE_DIRMAP_ADD512) - xbase = 512 << 20; - else - xbase = 0; - - pcibr_soft->bs_dir_xbase = xbase; - - /* it is entirely possible that we may, at this - * point, have our dirmap pointing somewhere - * other than our "master" port. - */ - pcibr_soft->bs_dir_xport = - (dirmap & BRIDGE_DIRMAP_W_ID) >> BRIDGE_DIRMAP_W_ID_SHFT; - } - - /* pcibr sources an error interrupt; - * figure out where to send it. - * - * If any interrupts are enabled in bridge, - * then the prom set us up and our interrupt - * has already been reconnected in mlreset - * above. - * - * Need to set the D_INTR_ISERR flag - * in the dev_desc used for allocating the - * error interrupt, so our interrupt will - * be properly routed and prioritized. - * - * If our crosstalk provider wants to - * fix widget error interrupts to specific - * destinations, D_INTR_ISERR is how it - * knows to do this. - */ - - dev_desc = device_desc_dup(pcibr_vhdl); - device_desc_flags_set(dev_desc, - device_desc_flags_get(dev_desc) | D_INTR_ISERR); - device_desc_intr_name_set(dev_desc, "Bridge error"); - - xtalk_intr = xtalk_intr_alloc(xconn_vhdl, dev_desc, pcibr_vhdl); - ASSERT(xtalk_intr != NULL); - - device_desc_free(dev_desc); - - pcibr_soft->bsi_err_intr = xtalk_intr; - - /* - * On IP35 with XBridge, we do some extra checks in pcibr_setwidint - * in order to work around some addressing limitations. In order - * for that fire wall to work properly, we need to make sure we - * start from a known clean state. - */ - pcibr_clearwidint(bridge); - - xtalk_intr_connect(xtalk_intr, - (intr_func_t) pcibr_error_intr_handler, - (intr_arg_t) pcibr_soft, - (xtalk_intr_setfunc_t) pcibr_setwidint, - (void *) bridge, - (void *) 0); - - /* - * now we can start handling error interrupts; - * enable all of them. - * NOTE: some PCI ints may already be enabled. - */ - b_int_enable = bridge->b_int_enable | BRIDGE_ISR_ERRORS; - - - bridge->b_int_enable = b_int_enable; - bridge->b_int_mode = 0; /* do not send "clear interrupt" packets */ - - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - - /* - * Depending on the rev of bridge, disable certain features. - * Easiest way seems to be to force the PCIBR_NOwhatever - * flag to be on for all DMA calls, which overrides any - * PCIBR_whatever flag or even the setting of whatever - * from the PCIIO_DMA_class flags (or even from the other - * PCIBR flags, since NO overrides YES). 
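[To make the "NO overrides YES" rule above concrete, here is a small, purely illustrative helper showing how a per-bus veto flag can cancel a caller's request; the flag names and the combining step are hypothetical and are not the driver's actual flag arithmetic.]

    #include <stdint.h>

    #define REQ_PREFETCH    0x1   /* caller would like prefetching */
    #define REQ_NOPREFETCH  0x2   /* bus-wide veto, e.g. for an early Bridge rev */

    uint32_t combine_dma_flags(uint32_t caller_flags, uint32_t bus_flags)
    {
        uint32_t flags = caller_flags | bus_flags;

        if (flags & REQ_NOPREFETCH)     /* NO overrides YES */
            flags &= ~REQ_PREFETCH;
        return flags;
    }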
- */ - pcibr_soft->bs_dma_flags = 0; - - /* PREFETCH: - * Always completely disabled for REV.A; - * at "pcibr_prefetch_enable_rev", anyone - * asking for PCIIO_PREFETCH gets it. - * Between these two points, you have to ask - * for PCIBR_PREFETCH, which promises that - * your driver knows about known Bridge WARs. - */ - if (pcibr_soft->bs_rev_num < BRIDGE_PART_REV_B) - pcibr_soft->bs_dma_flags |= PCIBR_NOPREFETCH; - else if (pcibr_soft->bs_rev_num < - (BRIDGE_WIDGET_PART_NUM << 4 | pcibr_prefetch_enable_rev)) - pcibr_soft->bs_dma_flags |= PCIIO_NOPREFETCH; - - /* WRITE_GATHER: - * Disabled up to but not including the - * rev number in pcibr_wg_enable_rev. There - * is no "WAR range" as with prefetch. - */ - if (pcibr_soft->bs_rev_num < - (BRIDGE_WIDGET_PART_NUM << 4 | pcibr_wg_enable_rev)) - pcibr_soft->bs_dma_flags |= PCIBR_NOWRITE_GATHER; - - pciio_provider_register(pcibr_vhdl, &pcibr_provider); - pciio_provider_startup(pcibr_vhdl); - - pci_io_fb = 0x00000004; /* I/O FreeBlock Base */ - pci_io_fl = 0xFFFFFFFF; /* I/O FreeBlock Last */ - - pci_lo_fb = 0x00000010; /* Low Memory FreeBlock Base */ - pci_lo_fl = 0x001FFFFF; /* Low Memory FreeBlock Last */ - - pci_hi_fb = 0x00200000; /* High Memory FreeBlock Base */ - pci_hi_fl = 0x3FFFFFFF; /* High Memory FreeBlock Last */ - - - PCI_ADDR_SPACE_LIMITS_STORE(); - - /* build "no-slot" connection point - */ - pcibr_info = pcibr_device_info_new - (pcibr_soft, PCIIO_SLOT_NONE, PCIIO_FUNC_NONE, - PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE); - noslot_conn = pciio_device_info_register - (pcibr_vhdl, &pcibr_info->f_c); - - /* Remember the no slot connection point info for tearing it - * down during detach. - */ - pcibr_soft->bs_noslot_conn = noslot_conn; - pcibr_soft->bs_noslot_info = pcibr_info; -#if PCI_FBBE - fast_back_to_back_enable = 1; -#endif - -#if PCI_FBBE - if (fast_back_to_back_enable) { - /* - * All devices on the bus are capable of fast back to back, so - * we need to set the fast back to back bit in all devices on - * the bus that are capable of doing such accesses. - */ - } -#endif - -#ifdef LATER - /* If the bridge has been reset then there is no need to reset - * the individual PCI slots. - */ - for (slot = 0; slot < 8; ++slot) - /* Reset all the slots */ - (void)pcibr_slot_reset(pcibr_vhdl, slot); -#endif - - for (slot = 0; slot < 8; ++slot) - /* Find out what is out there */ - (void)pcibr_slot_info_init(pcibr_vhdl,slot); - - for (slot = 0; slot < 8; ++slot) - /* Set up the address space for this slot in the pci land */ - (void)pcibr_slot_addr_space_init(pcibr_vhdl,slot); - - for (slot = 0; slot < 8; ++slot) - /* Setup the device register */ - (void)pcibr_slot_device_init(pcibr_vhdl, slot); - -#ifndef __ia64 -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) - for (slot = 0; slot < 8; ++slot) - /* Set up convenience links */ - if (is_xbridge(bridge)) - if (pcibr_soft->bs_slot[slot].bss_ninfo > 0) /* if occupied */ - pcibr_bus_cnvlink(pcibr_info->f_vertex, slot); -#endif -#endif - - for (slot = 0; slot < 8; ++slot) - /* Setup host/guest relations */ - (void)pcibr_slot_guest_info_init(pcibr_vhdl,slot); - - for (slot = 0; slot < 8; ++slot) - /* Initial RRB management */ - (void)pcibr_slot_initial_rrb_alloc(pcibr_vhdl,slot); - - /* driver attach routines should be called out from generic linux code */ - for (slot = 0; slot < 8; ++slot) - /* Call the device attach */ - (void)pcibr_slot_call_device_attach(pcibr_vhdl, slot, 0); - - /* - * Each Pbrick PCI bus only has slots 1 and 2. 
Similarly for - * widget 0xe on Ibricks. Allocate RRB's accordingly. - */ - if (pcibr_soft->bs_moduleid > 0) { - switch (MODULE_GET_BTCHAR(pcibr_soft->bs_moduleid)) { - case 'p': /* Pbrick */ - do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); - do_pcibr_rrb_autoalloc(pcibr_soft, 2, 8); - break; - case 'i': /* Ibrick */ - /* port 0xe on the Ibrick only has slots 1 and 2 */ - if (pcibr_soft->bs_xid == 0xe) { - do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); - do_pcibr_rrb_autoalloc(pcibr_soft, 2, 8); - } - else { - /* allocate one RRB for the serial port */ - do_pcibr_rrb_autoalloc(pcibr_soft, 0, 1); - } - break; - } /* switch */ - } - -#ifdef LATER - if (strstr(nicinfo, XTALK_PCI_PART_NUM)) { - do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); -#if PCIBR_RRB_DEBUG - printf("\n\nFound XTALK_PCI (030-1275) at %v\n", xconn_vhdl); - - printf("pcibr_attach: %v Shoebox RRB MANAGEMENT: %d+%d free\n", - pcibr_vhdl, - pcibr_soft->bs_rrb_avail[0], - pcibr_soft->bs_rrb_avail[1]); - - for (slot = 0; slot < 8; ++slot) - printf("\t%d+%d+%d", - 0xFFF & pcibr_soft->bs_rrb_valid[slot], - 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], - pcibr_soft->bs_rrb_res[slot]); - - printf("\n"); -#endif - } -#else - FIXME("pcibr_attach: Call do_pcibr_rrb_autoalloc nicinfo\n"); -#endif - - if (aa) - async_attach_add_info(noslot_conn, aa); - - pciio_device_attach(noslot_conn, 0); - - - /* - * Tear down pointer to async attach info -- async threads for - * bridge's descendants may be running but the bridge's work is done. - */ - if (aa) - async_attach_del_info(xconn_vhdl); - - return 0; -} -/* - * pcibr_detach: - * Detach the bridge device from the hwgraph after cleaning out all the - * underlying vertices. - */ -int -pcibr_detach(devfs_handle_t xconn) -{ - pciio_slot_t slot; - devfs_handle_t pcibr_vhdl; - pcibr_soft_t pcibr_soft; - bridge_t *bridge; - - /* Get the bridge vertex from its xtalk connection point */ - if (hwgraph_traverse(xconn, EDGE_LBL_PCI, &pcibr_vhdl) != GRAPH_SUCCESS) - return(1); - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - bridge = pcibr_soft->bs_base; - - /* Disable the interrupts from the bridge */ - bridge->b_int_enable = 0; - - /* Detach all the PCI devices talking to this bridge */ - for(slot = 0; slot < 8; slot++) { -#ifdef DEBUG - printk("pcibr_device_detach called for %p/%d\n", - pcibr_vhdl,slot); -#endif - pcibr_slot_detach(pcibr_vhdl, slot, 0); - } - - /* Unregister the no-slot connection point */ - pciio_device_info_unregister(pcibr_vhdl, - &(pcibr_soft->bs_noslot_info->f_c)); - - spin_lock_destroy(&pcibr_soft->bs_lock); - kfree(pcibr_soft->bs_name); - - /* Error handler gets unregistered when the widget info is - * cleaned - */ - /* Free the soft ATE maps */ - if (pcibr_soft->bs_int_ate_map) - rmfreemap(pcibr_soft->bs_int_ate_map); - if (pcibr_soft->bs_ext_ate_map) - rmfreemap(pcibr_soft->bs_ext_ate_map); - - /* Disconnect the error interrupt and free the xtalk resources - * associated with it. - */ - xtalk_intr_disconnect(pcibr_soft->bsi_err_intr); - xtalk_intr_free(pcibr_soft->bsi_err_intr); - - /* Clear the software state maintained by the bridge driver for this - * bridge. 
- */ - DEL(pcibr_soft); - /* Remove the Bridge revision labelled info */ - (void)hwgraph_info_remove_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, NULL); - /* Remove the character device associated with this bridge */ - (void)hwgraph_edge_remove(pcibr_vhdl, EDGE_LBL_CONTROLLER, NULL); - /* Remove the PCI bridge vertex */ - (void)hwgraph_edge_remove(xconn, EDGE_LBL_PCI, NULL); - - return(0); -} - -int -pcibr_asic_rev(devfs_handle_t pconn_vhdl) -{ - devfs_handle_t pcibr_vhdl; - arbitrary_info_t ainfo; - - if (GRAPH_SUCCESS != - hwgraph_traverse(pconn_vhdl, EDGE_LBL_MASTER, &pcibr_vhdl)) - return -1; - - if (GRAPH_SUCCESS != - hwgraph_info_get_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, &ainfo)) - return -1; - - return (int) ainfo; -} - -int -pcibr_write_gather_flush(devfs_handle_t pconn_vhdl) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - pciio_slot_t slot; - slot = pciio_info_slot_get(pciio_info); - pcibr_device_write_gather_flush(pcibr_soft, slot); - return 0; -} - -/* ===================================================================== - * PIO MANAGEMENT - */ - -LOCAL iopaddr_t -pcibr_addr_pci_to_xio(devfs_handle_t pconn_vhdl, - pciio_slot_t slot, - pciio_space_t space, - iopaddr_t pci_addr, - size_t req_size, - unsigned flags) -{ - pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); - pciio_info_t pciio_info = &pcibr_info->f_c; - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - bridge_t *bridge = pcibr_soft->bs_base; - - unsigned bar; /* which BASE reg on device is decoding */ - iopaddr_t xio_addr = XIO_NOWHERE; - - pciio_space_t wspace; /* which space device is decoding */ - iopaddr_t wbase; /* base of device decode on PCI */ - size_t wsize; /* size of device decode on PCI */ - - int try; /* DevIO(x) window scanning order control */ - int win; /* which DevIO(x) window is being used */ - pciio_space_t mspace; /* target space for devio(x) register */ - iopaddr_t mbase; /* base of devio(x) mapped area on PCI */ - size_t msize; /* size of devio(x) mapped area on PCI */ - size_t mmask; /* addr bits stored in Device(x) */ - - unsigned long s; - - s = pcibr_lock(pcibr_soft); - - if (pcibr_soft->bs_slot[slot].has_host) { - slot = pcibr_soft->bs_slot[slot].host_slot; - pcibr_info = pcibr_soft->bs_slot[slot].bss_infos[0]; - } - if (space == PCIIO_SPACE_NONE) - goto done; - - if (space == PCIIO_SPACE_CFG) { - /* - * Usually, the first mapping - * established to a PCI device - * is to its config space. - * - * In any case, we definitely - * do NOT need to worry about - * PCI BASE registers, and - * MUST NOT attempt to point - * the DevIO(x) window at - * this access ... - */ - if (((flags & PCIIO_BYTE_STREAM) == 0) && - ((pci_addr + req_size) <= BRIDGE_TYPE0_CFG_FUNC_OFF)) - xio_addr = pci_addr + BRIDGE_TYPE0_CFG_DEV(slot); - - goto done; - } - if (space == PCIIO_SPACE_ROM) { - /* PIO to the Expansion Rom. - * Driver is responsible for - * enabling and disabling - * decodes properly. - */ - wbase = pcibr_info->f_rbase; - wsize = pcibr_info->f_rsize; - - /* - * While the driver should know better - * than to attempt to map more space - * than the device is decoding, he might - * do it; better to bail out here. - */ - if ((pci_addr + req_size) > wsize) - goto done; - - pci_addr += wbase; - space = PCIIO_SPACE_MEM; - } - /* - * reduce window mappings to raw - * space mappings (maybe allocating - * windows), and try for DevIO(x) - * usage (setting it if it is available). 
- */ - bar = space - PCIIO_SPACE_WIN0; - if (bar < 6) { - wspace = pcibr_info->f_window[bar].w_space; - if (wspace == PCIIO_SPACE_NONE) - goto done; - - /* get PCI base and size */ - wbase = pcibr_info->f_window[bar].w_base; - wsize = pcibr_info->f_window[bar].w_size; - - /* - * While the driver should know better - * than to attempt to map more space - * than the device is decoding, he might - * do it; better to bail out here. - */ - if ((pci_addr + req_size) > wsize) - goto done; - - /* shift from window relative to - * decoded space relative. - */ - pci_addr += wbase; - space = wspace; - } else - bar = -1; - - /* Scan all the DevIO(x) windows twice looking for one - * that can satisfy our request. The first time through, - * only look at assigned windows; the second time, also - * look at PCIIO_SPACE_NONE windows. Arrange the order - * so we always look at our own window first. - * - * We will not attempt to satisfy a single request - * by concatinating multiple windows. - */ - for (try = 0; try < 16; ++try) { - bridgereg_t devreg; - unsigned offset; - - win = (try + slot) % 8; - - /* If this DevIO(x) mapping area can provide - * a mapping to this address, use it. - */ - msize = (win < 2) ? 0x200000 : 0x100000; - mmask = -msize; - if (space != PCIIO_SPACE_IO) - mmask &= 0x3FFFFFFF; - - offset = pci_addr & (msize - 1); - - /* If this window can't possibly handle that request, - * go on to the next window. - */ - if (((pci_addr & (msize - 1)) + req_size) > msize) - continue; - - devreg = pcibr_soft->bs_slot[win].bss_device; - - /* Is this window "nailed down"? - * If not, maybe we can use it. - * (only check this the second time through) - */ - mspace = pcibr_soft->bs_slot[win].bss_devio.bssd_space; - if ((try > 7) && (mspace == PCIIO_SPACE_NONE)) { - - /* If this is the primary DevIO(x) window - * for some other device, skip it. - */ - if ((win != slot) && - (PCIIO_VENDOR_ID_NONE != - pcibr_soft->bs_slot[win].bss_vendor_id)) - continue; - - /* It's a free window, and we fit in it. - * Set up Device(win) to our taste. - */ - mbase = pci_addr & mmask; - - /* check that we would really get from - * here to there. - */ - if ((mbase | offset) != pci_addr) - continue; - - devreg &= ~BRIDGE_DEV_OFF_MASK; - if (space != PCIIO_SPACE_IO) - devreg |= BRIDGE_DEV_DEV_IO_MEM; - else - devreg &= ~BRIDGE_DEV_DEV_IO_MEM; - devreg |= (mbase >> 20) & BRIDGE_DEV_OFF_MASK; - - /* default is WORD_VALUES. - * if you specify both, - * operation is undefined. 
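/* Editor's sketch, not part of the patch: the DevIO(x) search described
 * above visits the eight windows twice, starting at the requesting slot
 * so that a device's own window is always tried first.  Windows still
 * marked PCIIO_SPACE_NONE may only be claimed on the second pass
 * (try > 7), and even then a free window is skipped if it is the primary
 * window of some other occupied slot.  The helper name devio_scan_order()
 * is invented purely for illustration.
 */
static void devio_scan_order(int slot)
{
        int try, win;

        for (try = 0; try < 16; ++try) {
                win = (try + slot) % 8;         /* own window first */
                if (try <= 7)
                        printk("pass 1: reuse DevIO(%d) only if already assigned\n", win);
                else
                        printk("pass 2: DevIO(%d) may also be claimed if free\n", win);
        }
}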
- */ - if (flags & PCIIO_BYTE_STREAM) - devreg |= BRIDGE_DEV_DEV_SWAP; - else - devreg &= ~BRIDGE_DEV_DEV_SWAP; - - if (pcibr_soft->bs_slot[win].bss_device != devreg) { - bridge->b_device[win].reg = devreg; - pcibr_soft->bs_slot[win].bss_device = devreg; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - -#if DEBUG && PCI_DEBUG - printk("pcibr Device(%d): 0x%lx\n", win, bridge->b_device[win].reg); -#endif - } - pcibr_soft->bs_slot[win].bss_devio.bssd_space = space; - pcibr_soft->bs_slot[win].bss_devio.bssd_base = mbase; - xio_addr = BRIDGE_DEVIO(win) + (pci_addr - mbase); - -#if DEBUG && PCI_DEBUG - printk("%s LINE %d map to space %d space desc 0x%x[%lx..%lx] for slot %d allocates DevIO(%d) devreg 0x%x\n", - __FUNCTION__, __LINE__, space, space_desc, - pci_addr, pci_addr + req_size - 1, - slot, win, devreg); -#endif - - goto done; - } /* endif DevIO(x) not pointed */ - mbase = pcibr_soft->bs_slot[win].bss_devio.bssd_base; - - /* Now check for request incompat with DevIO(x) - */ - if ((mspace != space) || - (pci_addr < mbase) || - ((pci_addr + req_size) > (mbase + msize)) || - ((flags & PCIIO_BYTE_STREAM) && !(devreg & BRIDGE_DEV_DEV_SWAP)) || - (!(flags & PCIIO_BYTE_STREAM) && (devreg & BRIDGE_DEV_DEV_SWAP))) - continue; - - /* DevIO(x) window is pointed at PCI space - * that includes our target. Calculate the - * final XIO address, release the lock and - * return. - */ - xio_addr = BRIDGE_DEVIO(win) + (pci_addr - mbase); - -#if DEBUG && PCI_DEBUG - printk("%s LINE %d map to space %d [0x%p..0x%p] for slot %d uses DevIO(%d)\n", - __FUNCTION__, __LINE__, space, pci_addr, pci_addr + req_size - 1, slot, win); -#endif - goto done; - } - - switch (space) { - /* - * Accesses to device decode - * areas that do a not fit - * within the DevIO(x) space are - * modified to be accesses via - * the direct mapping areas. - * - * If necessary, drivers can - * explicitly ask for mappings - * into these address spaces, - * but this should never be needed. - */ - case PCIIO_SPACE_MEM: /* "mem space" */ - case PCIIO_SPACE_MEM32: /* "mem, use 32-bit-wide bus" */ - if ((pci_addr + BRIDGE_PCI_MEM32_BASE + req_size - 1) <= - BRIDGE_PCI_MEM32_LIMIT) - xio_addr = pci_addr + BRIDGE_PCI_MEM32_BASE; - break; - - case PCIIO_SPACE_MEM64: /* "mem, use 64-bit-wide bus" */ - if ((pci_addr + BRIDGE_PCI_MEM64_BASE + req_size - 1) <= - BRIDGE_PCI_MEM64_LIMIT) - xio_addr = pci_addr + BRIDGE_PCI_MEM64_BASE; - break; - - case PCIIO_SPACE_IO: /* "i/o space" */ - /* Bridge Hardware Bug WAR #482741: - * The 4G area that maps directly from - * XIO space to PCI I/O space is busted - * until Bridge Rev D. - */ - if ((pcibr_soft->bs_rev_num > BRIDGE_PART_REV_C) && - ((pci_addr + BRIDGE_PCI_IO_BASE + req_size - 1) <= - BRIDGE_PCI_IO_LIMIT)) - xio_addr = pci_addr + BRIDGE_PCI_IO_BASE; - break; - } - - /* Check that "Direct PIO" byteswapping matches, - * try to change it if it does not. - */ - if (xio_addr != XIO_NOWHERE) { - unsigned bst; /* nonzero to set bytestream */ - unsigned *bfp; /* addr of record of how swapper is set */ - unsigned swb; /* which control bit to mung */ - unsigned bfo; /* current swapper setting */ - unsigned bfn; /* desired swapper setting */ - - bfp = ((space == PCIIO_SPACE_IO) - ? (&pcibr_soft->bs_pio_end_io) - : (&pcibr_soft->bs_pio_end_mem)); - - bfo = *bfp; - - bst = flags & PCIIO_BYTE_STREAM; - - bfn = bst ? PCIIO_BYTE_STREAM : PCIIO_WORD_VALUES; - - if (bfn == bfo) { /* we already match. */ - ; - } else if (bfo != 0) { /* we have a conflict. 
*/ -#if DEBUG && PCI_DEBUG - printk("pcibr_addr_pci_to_xio: swap conflict in space %d , was%s%s, want%s%s\n", - space, - bfo & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", - bfo & PCIIO_WORD_VALUES ? " WORD_VALUES" : "", - bfn & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", - bfn & PCIIO_WORD_VALUES ? " WORD_VALUES" : ""); -#endif - xio_addr = XIO_NOWHERE; - } else { /* OK to make the change. */ - bridgereg_t octl, nctl; - - swb = (space == PCIIO_SPACE_IO) ? BRIDGE_CTRL_IO_SWAP : BRIDGE_CTRL_MEM_SWAP; - octl = bridge->b_wid_control; - nctl = bst ? octl | swb : octl & ~swb; - - if (octl != nctl) /* make the change if any */ - bridge->b_wid_control = nctl; - - *bfp = bfn; /* record the assignment */ - -#if DEBUG && PCI_DEBUG - printk("pcibr_addr_pci_to_xio: swap for space %d set to%s%s\n", - space, - bfn & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", - bfn & PCIIO_WORD_VALUES ? " WORD_VALUES" : ""); -#endif - } - } - done: - pcibr_unlock(pcibr_soft, s); - return xio_addr; -} - -/*ARGSUSED6 */ -pcibr_piomap_t -pcibr_piomap_alloc(devfs_handle_t pconn_vhdl, - device_desc_t dev_desc, - pciio_space_t space, - iopaddr_t pci_addr, - size_t req_size, - size_t req_size_max, - unsigned flags) -{ - pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); - pciio_info_t pciio_info = &pcibr_info->f_c; - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - - pcibr_piomap_t *mapptr; - pcibr_piomap_t maplist; - pcibr_piomap_t pcibr_piomap; - iopaddr_t xio_addr; - xtalk_piomap_t xtalk_piomap; - unsigned long s; - - /* Make sure that the req sizes are non-zero */ - if ((req_size < 1) || (req_size_max < 1)) - return NULL; - - /* - * Code to translate slot/space/addr - * into xio_addr is common between - * this routine and pcibr_piotrans_addr. - */ - xio_addr = pcibr_addr_pci_to_xio(pconn_vhdl, pciio_slot, space, pci_addr, req_size, flags); - - if (xio_addr == XIO_NOWHERE) - return NULL; - - /* Check the piomap list to see if there is already an allocated - * piomap entry but not in use. If so use that one. 
Otherwise - * allocate a new piomap entry and add it to the piomap list - */ - mapptr = &(pcibr_info->f_piomap); - - s = pcibr_lock(pcibr_soft); - for (pcibr_piomap = *mapptr; - pcibr_piomap != NULL; - pcibr_piomap = pcibr_piomap->bp_next) { - if (pcibr_piomap->bp_mapsz == 0) - break; - } - - if (pcibr_piomap) - mapptr = NULL; - else { - pcibr_unlock(pcibr_soft, s); - NEW(pcibr_piomap); - } - - pcibr_piomap->bp_dev = pconn_vhdl; - pcibr_piomap->bp_slot = pciio_slot; - pcibr_piomap->bp_flags = flags; - pcibr_piomap->bp_space = space; - pcibr_piomap->bp_pciaddr = pci_addr; - pcibr_piomap->bp_mapsz = req_size; - pcibr_piomap->bp_soft = pcibr_soft; - pcibr_piomap->bp_toc[0] = ATOMIC_INIT(0); - - if (mapptr) { - s = pcibr_lock(pcibr_soft); - maplist = *mapptr; - pcibr_piomap->bp_next = maplist; - *mapptr = pcibr_piomap; - } - pcibr_unlock(pcibr_soft, s); - - - if (pcibr_piomap) { - xtalk_piomap = - xtalk_piomap_alloc(xconn_vhdl, 0, - xio_addr, - req_size, req_size_max, - flags & PIOMAP_FLAGS); - if (xtalk_piomap) { - pcibr_piomap->bp_xtalk_addr = xio_addr; - pcibr_piomap->bp_xtalk_pio = xtalk_piomap; - } else { - pcibr_piomap->bp_mapsz = 0; - pcibr_piomap = 0; - } - } - return pcibr_piomap; -} - -/*ARGSUSED */ -void -pcibr_piomap_free(pcibr_piomap_t pcibr_piomap) -{ - xtalk_piomap_free(pcibr_piomap->bp_xtalk_pio); - pcibr_piomap->bp_xtalk_pio = 0; - pcibr_piomap->bp_mapsz = 0; -} - -/*ARGSUSED */ -caddr_t -pcibr_piomap_addr(pcibr_piomap_t pcibr_piomap, - iopaddr_t pci_addr, - size_t req_size) -{ - return xtalk_piomap_addr(pcibr_piomap->bp_xtalk_pio, - pcibr_piomap->bp_xtalk_addr + - pci_addr - pcibr_piomap->bp_pciaddr, - req_size); -} - -/*ARGSUSED */ -void -pcibr_piomap_done(pcibr_piomap_t pcibr_piomap) -{ - xtalk_piomap_done(pcibr_piomap->bp_xtalk_pio); -} - -/*ARGSUSED */ -caddr_t -pcibr_piotrans_addr(devfs_handle_t pconn_vhdl, - device_desc_t dev_desc, - pciio_space_t space, - iopaddr_t pci_addr, - size_t req_size, - unsigned flags) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - - iopaddr_t xio_addr; - - xio_addr = pcibr_addr_pci_to_xio(pconn_vhdl, pciio_slot, space, pci_addr, req_size, flags); - - if (xio_addr == XIO_NOWHERE) - return NULL; - - return xtalk_piotrans_addr(xconn_vhdl, 0, xio_addr, req_size, flags & PIOMAP_FLAGS); -} - -/* - * PIO Space allocation and management. - * Allocate and Manage the PCI PIO space (mem and io space) - * This routine is pretty simplistic at this time, and - * does pretty trivial management of allocation and freeing.. - * The current scheme is prone for fragmentation.. - * Change the scheme to use bitmaps. - */ - -/*ARGSUSED */ -iopaddr_t -pcibr_piospace_alloc(devfs_handle_t pconn_vhdl, - device_desc_t dev_desc, - pciio_space_t space, - size_t req_size, - size_t alignment) -{ - pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); - pciio_info_t pciio_info = &pcibr_info->f_c; - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - - pciio_piospace_t piosp; - unsigned long s; - - iopaddr_t *pciaddr, *pcilast; - iopaddr_t start_addr; - size_t align_mask; - - /* - * Check for proper alignment - */ - ASSERT(alignment >= NBPP); - ASSERT((alignment & (alignment - 1)) == 0); - - align_mask = alignment - 1; - s = pcibr_lock(pcibr_soft); - - /* - * First look if a previously allocated chunk exists. 
- */ - if ((piosp = pcibr_info->f_piospace)) { - /* - * Look through the list for a right sized free chunk. - */ - do { - if (piosp->free && - (piosp->space == space) && - (piosp->count >= req_size) && - !(piosp->start & align_mask)) { - piosp->free = 0; - pcibr_unlock(pcibr_soft, s); - return piosp->start; - } - piosp = piosp->next; - } while (piosp); - } - ASSERT(!piosp); - - switch (space) { - case PCIIO_SPACE_IO: - pciaddr = &pcibr_soft->bs_spinfo.pci_io_base; - pcilast = &pcibr_soft->bs_spinfo.pci_io_last; - break; - case PCIIO_SPACE_MEM: - case PCIIO_SPACE_MEM32: - pciaddr = &pcibr_soft->bs_spinfo.pci_mem_base; - pcilast = &pcibr_soft->bs_spinfo.pci_mem_last; - break; - default: - ASSERT(0); - pcibr_unlock(pcibr_soft, s); - return 0; - } - - start_addr = *pciaddr; - - /* - * Align start_addr. - */ - if (start_addr & align_mask) - start_addr = (start_addr + align_mask) & ~align_mask; - - if ((start_addr + req_size) > *pcilast) { - /* - * If too big a request, reject it. - */ - pcibr_unlock(pcibr_soft, s); - return 0; - } - *pciaddr = (start_addr + req_size); - - NEW(piosp); - piosp->free = 0; - piosp->space = space; - piosp->start = start_addr; - piosp->count = req_size; - piosp->next = pcibr_info->f_piospace; - pcibr_info->f_piospace = piosp; - - pcibr_unlock(pcibr_soft, s); - return start_addr; -} - -/*ARGSUSED */ -void -pcibr_piospace_free(devfs_handle_t pconn_vhdl, - pciio_space_t space, - iopaddr_t pciaddr, - size_t req_size) -{ - pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; - - pciio_piospace_t piosp; - unsigned long s; - char name[1024]; - - /* - * Look through the bridge data structures for the pciio_piospace_t - * structure corresponding to 'pciaddr' - */ - s = pcibr_lock(pcibr_soft); - piosp = pcibr_info->f_piospace; - while (piosp) { - /* - * Piospace free can only be for the complete - * chunk and not parts of it.. - */ - if (piosp->start == pciaddr) { - if (piosp->count == req_size) - break; - /* - * Improper size passed for freeing.. - * Print a message and break; - */ - hwgraph_vertex_name_get(pconn_vhdl, name, 1024); - PRINT_WARNING("pcibr_piospace_free: error"); - PRINT_WARNING("Device %s freeing size (0x%lx) different than allocated (0x%lx)", - name, req_size, piosp->count); - PRINT_WARNING("Freeing 0x%lx instead", piosp->count); - break; - } - piosp = piosp->next; - } - - if (!piosp) { - PRINT_WARNING( - "pcibr_piospace_free: Address 0x%lx size 0x%lx - No match\n", - pciaddr, req_size); - pcibr_unlock(pcibr_soft, s); - return; - } - piosp->free = 1; - pcibr_unlock(pcibr_soft, s); - return; -} - -/* ===================================================================== - * DMA MANAGEMENT - * - * The Bridge ASIC provides three methods of doing - * DMA: via a "direct map" register available in - * 32-bit PCI space (which selects a contiguous 2G - * address space on some other widget), via - * "direct" addressing via 64-bit PCI space (all - * destination information comes from the PCI - * address, including transfer attributes), and via - * a "mapped" region that allows a bunch of - * different small mappings to be established with - * the PMU. - * - * For efficiency, we most prefer to use the 32-bit - * direct mapping facility, since it requires no - * resource allocations. 
The advantage of using the - * PMU over the 64-bit direct is that single-cycle - * PCI addressing can be used; the advantage of - * using 64-bit direct over PMU addressing is that - * we do not have to allocate entries in the PMU. - */ - -/* - * Convert PCI-generic software flags and Bridge-specific software flags - * into Bridge-specific Direct Map attribute bits. - */ -LOCAL iopaddr_t -pcibr_flags_to_d64(unsigned flags, pcibr_soft_t pcibr_soft) -{ - iopaddr_t attributes = 0; - - /* Sanity check: Bridge only allows use of VCHAN1 via 64-bit addrs */ -#ifdef LATER - ASSERT_ALWAYS(!(flags & PCIBR_VCHAN1) || (flags & PCIIO_DMA_A64)); -#endif - - /* Generic macro flags - */ - if (flags & PCIIO_DMA_DATA) { /* standard data channel */ - attributes &= ~PCI64_ATTR_BAR; /* no barrier bit */ - attributes |= PCI64_ATTR_PREF; /* prefetch on */ - } - if (flags & PCIIO_DMA_CMD) { /* standard command channel */ - attributes |= PCI64_ATTR_BAR; /* barrier bit on */ - attributes &= ~PCI64_ATTR_PREF; /* disable prefetch */ - } - /* Generic detail flags - */ - if (flags & PCIIO_PREFETCH) - attributes |= PCI64_ATTR_PREF; - if (flags & PCIIO_NOPREFETCH) - attributes &= ~PCI64_ATTR_PREF; - - /* the swap bit is in the address attributes for xbridge */ - if (pcibr_soft->bs_xbridge) { - if (flags & PCIIO_BYTE_STREAM) - attributes |= PCI64_ATTR_SWAP; - if (flags & PCIIO_WORD_VALUES) - attributes &= ~PCI64_ATTR_SWAP; - } - - /* Provider-specific flags - */ - if (flags & PCIBR_BARRIER) - attributes |= PCI64_ATTR_BAR; - if (flags & PCIBR_NOBARRIER) - attributes &= ~PCI64_ATTR_BAR; - - if (flags & PCIBR_PREFETCH) - attributes |= PCI64_ATTR_PREF; - if (flags & PCIBR_NOPREFETCH) - attributes &= ~PCI64_ATTR_PREF; - - if (flags & PCIBR_PRECISE) - attributes |= PCI64_ATTR_PREC; - if (flags & PCIBR_NOPRECISE) - attributes &= ~PCI64_ATTR_PREC; - - if (flags & PCIBR_VCHAN1) - attributes |= PCI64_ATTR_VIRTUAL; - if (flags & PCIBR_VCHAN0) - attributes &= ~PCI64_ATTR_VIRTUAL; - - return (attributes); -} - -/* - * Convert PCI-generic software flags and Bridge-specific software flags - * into Bridge-specific Address Translation Entry attribute bits. 
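/* Editor's illustration, not part of the patch: how the flag-to-attribute
 * translation in pcibr_flags_to_d64() above combines its inputs.  The
 * generic macro flags are applied first and the detail/provider flags
 * afterwards, so a specific request such as PCIBR_NOPREFETCH overrides
 * whatever the macro implied.  The wrapper name d64_attr_example() is
 * invented here; the same layering applies to the ATE variant below.
 */
static iopaddr_t d64_attr_example(pcibr_soft_t pcibr_soft)
{
        iopaddr_t attr;

        /* data-channel macro: PCI64_ATTR_PREF set, PCI64_ATTR_BAR clear */
        attr = pcibr_flags_to_d64(PCIIO_DMA_DATA, pcibr_soft);

        /* the provider-specific flag is evaluated later, so prefetch
         * ends up disabled despite PCIIO_DMA_DATA having enabled it
         */
        attr = pcibr_flags_to_d64(PCIIO_DMA_DATA | PCIBR_NOPREFETCH, pcibr_soft);

        return attr;
}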
- */ -LOCAL bridge_ate_t -pcibr_flags_to_ate(unsigned flags) -{ - bridge_ate_t attributes; - - /* default if nothing specified: - * NOBARRIER - * NOPREFETCH - * NOPRECISE - * COHERENT - * Plus the valid bit - */ - attributes = ATE_CO | ATE_V; - - /* Generic macro flags - */ - if (flags & PCIIO_DMA_DATA) { /* standard data channel */ - attributes &= ~ATE_BAR; /* no barrier */ - attributes |= ATE_PREF; /* prefetch on */ - } - if (flags & PCIIO_DMA_CMD) { /* standard command channel */ - attributes |= ATE_BAR; /* barrier bit on */ - attributes &= ~ATE_PREF; /* disable prefetch */ - } - /* Generic detail flags - */ - if (flags & PCIIO_PREFETCH) - attributes |= ATE_PREF; - if (flags & PCIIO_NOPREFETCH) - attributes &= ~ATE_PREF; - - /* Provider-specific flags - */ - if (flags & PCIBR_BARRIER) - attributes |= ATE_BAR; - if (flags & PCIBR_NOBARRIER) - attributes &= ~ATE_BAR; - - if (flags & PCIBR_PREFETCH) - attributes |= ATE_PREF; - if (flags & PCIBR_NOPREFETCH) - attributes &= ~ATE_PREF; - - if (flags & PCIBR_PRECISE) - attributes |= ATE_PREC; - if (flags & PCIBR_NOPRECISE) - attributes &= ~ATE_PREC; - - return (attributes); -} - -/*ARGSUSED */ -pcibr_dmamap_t -pcibr_dmamap_alloc(devfs_handle_t pconn_vhdl, - device_desc_t dev_desc, - size_t req_size_max, - unsigned flags) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - pciio_slot_t slot; - xwidgetnum_t xio_port; - - xtalk_dmamap_t xtalk_dmamap; - pcibr_dmamap_t pcibr_dmamap; - int ate_count; - int ate_index; - - /* merge in forced flags */ - flags |= pcibr_soft->bs_dma_flags; - -#ifdef IRIX - NEWf(pcibr_dmamap, flags); -#else - /* - * On SNIA64, these maps are pre-allocated because pcibr_dmamap_alloc() - * can be called within an interrupt thread. - */ - pcibr_dmamap = (pcibr_dmamap_t)get_free_pciio_dmamap(pcibr_soft->bs_vhdl); -#endif - - if (!pcibr_dmamap) - return 0; - - xtalk_dmamap = xtalk_dmamap_alloc(xconn_vhdl, dev_desc, req_size_max, - flags & DMAMAP_FLAGS); - if (!xtalk_dmamap) { -#if PCIBR_ATE_DEBUG - printk("pcibr_attach: xtalk_dmamap_alloc failed\n"); -#endif - DEL(pcibr_dmamap); - return 0; - } - xio_port = pcibr_soft->bs_mxid; - slot = pciio_info_slot_get(pciio_info); - - pcibr_dmamap->bd_dev = pconn_vhdl; - pcibr_dmamap->bd_slot = slot; - pcibr_dmamap->bd_soft = pcibr_soft; - pcibr_dmamap->bd_xtalk = xtalk_dmamap; - pcibr_dmamap->bd_max_size = req_size_max; - pcibr_dmamap->bd_xio_port = xio_port; - - if (flags & PCIIO_DMA_A64) { - if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_D64_BITS)) { - iopaddr_t pci_addr; - int have_rrbs; - int min_rrbs; - - /* Device is capable of A64 operations, - * and the attributes of the DMA are - * consistant with any previous DMA - * mappings using shared resources. - */ - - pci_addr = pcibr_flags_to_d64(flags, pcibr_soft); - - pcibr_dmamap->bd_flags = flags; - pcibr_dmamap->bd_xio_addr = 0; - pcibr_dmamap->bd_pci_addr = pci_addr; - - /* Make sure we have an RRB (or two). 
- */ - if (!(pcibr_soft->bs_rrb_fixed & (1 << slot))) { - if (flags & PCIBR_VCHAN1) - slot += PCIBR_RRB_SLOT_VIRTUAL; - have_rrbs = pcibr_soft->bs_rrb_valid[slot]; - if (have_rrbs < 2) { - if (pci_addr & PCI64_ATTR_PREF) - min_rrbs = 2; - else - min_rrbs = 1; - if (have_rrbs < min_rrbs) - do_pcibr_rrb_autoalloc(pcibr_soft, slot, min_rrbs - have_rrbs); - } - } -#if PCIBR_ATE_DEBUG - printk("pcibr_dmamap_alloc: using direct64\n"); -#endif - return pcibr_dmamap; - } -#if PCIBR_ATE_DEBUG - printk("pcibr_dmamap_alloc: unable to use direct64\n"); -#endif - flags &= ~PCIIO_DMA_A64; - } - if (flags & PCIIO_FIXED) { - /* warning: mappings may fail later, - * if direct32 can't get to the address. - */ - if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_D32_BITS)) { - /* User desires DIRECT A32 operations, - * and the attributes of the DMA are - * consistant with any previous DMA - * mappings using shared resources. - * Mapping calls may fail if target - * is outside the direct32 range. - */ -#if PCIBR_ATE_DEBUG - printk("pcibr_dmamap_alloc: using direct32\n"); -#endif - pcibr_dmamap->bd_flags = flags; - pcibr_dmamap->bd_xio_addr = pcibr_soft->bs_dir_xbase; - pcibr_dmamap->bd_pci_addr = PCI32_DIRECT_BASE; - return pcibr_dmamap; - } -#if PCIBR_ATE_DEBUG - printk("pcibr_dmamap_alloc: unable to use direct32\n"); -#endif - /* If the user demands FIXED and we can't - * give it to him, fail. - */ - xtalk_dmamap_free(xtalk_dmamap); - DEL(pcibr_dmamap); - return 0; - } - /* - * Allocate Address Translation Entries from the mapping RAM. - * Unless the PCIBR_NO_ATE_ROUNDUP flag is specified, - * the maximum number of ATEs is based on the worst-case - * scenario, where the requested target is in the - * last byte of an ATE; thus, mapping IOPGSIZE+2 - * does end up requiring three ATEs. - */ - if (!(flags & PCIBR_NO_ATE_ROUNDUP)) { - ate_count = IOPG((IOPGSIZE - 1) /* worst case start offset */ - +req_size_max /* max mapping bytes */ - - 1) + 1; /* round UP */ - } else { /* assume requested target is page aligned */ - ate_count = IOPG(req_size_max /* max mapping bytes */ - - 1) + 1; /* round UP */ - } - - ate_index = pcibr_ate_alloc(pcibr_soft, ate_count); - - if (ate_index != -1) { - if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_PMU_BITS)) { - bridge_ate_t ate_proto; - int have_rrbs; - int min_rrbs; - -#if PCIBR_ATE_DEBUG - printk("pcibr_dmamap_alloc: using PMU\n"); -#endif - - ate_proto = pcibr_flags_to_ate(flags); - - pcibr_dmamap->bd_flags = flags; - pcibr_dmamap->bd_pci_addr = - PCI32_MAPPED_BASE + IOPGSIZE * ate_index; - /* - * for xbridge the byte-swap bit == bit 29 of PCI address - */ - if (pcibr_soft->bs_xbridge) { - if (flags & PCIIO_BYTE_STREAM) - ATE_SWAP_ON(pcibr_dmamap->bd_pci_addr); - /* - * If swap was set in bss_device in pcibr_endian_set() - * we need to change the address bit. - */ - if (pcibr_soft->bs_slot[slot].bss_device & - BRIDGE_DEV_SWAP_PMU) - ATE_SWAP_ON(pcibr_dmamap->bd_pci_addr); - if (flags & PCIIO_WORD_VALUES) - ATE_SWAP_OFF(pcibr_dmamap->bd_pci_addr); - } - pcibr_dmamap->bd_xio_addr = 0; - pcibr_dmamap->bd_ate_ptr = pcibr_ate_addr(pcibr_soft, ate_index); - pcibr_dmamap->bd_ate_index = ate_index; - pcibr_dmamap->bd_ate_count = ate_count; - pcibr_dmamap->bd_ate_proto = ate_proto; - - /* Make sure we have an RRB (or two). 
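/* Editor's note, not part of the patch: a worked instance of the ATE
 * round-up described above, assuming IOPG(x) yields the I/O page number
 * (x >> IOPFNSHIFT) and IOPGSIZE == 1 << IOPFNSHIFT.  For a request of
 * IOPGSIZE + 2 bytes whose target begins in the last byte of a page:
 *
 *   ate_count = IOPG((IOPGSIZE - 1) + (IOPGSIZE + 2) - 1) + 1
 *             = IOPG(2 * IOPGSIZE) + 1
 *             = 2 + 1 = 3
 *
 * i.e. the transfer can straddle three I/O pages, so three ATEs are
 * reserved even though the payload is barely more than one page.
 */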
- */ - if (!(pcibr_soft->bs_rrb_fixed & (1 << slot))) { - have_rrbs = pcibr_soft->bs_rrb_valid[slot]; - if (have_rrbs < 2) { - if (ate_proto & ATE_PREF) - min_rrbs = 2; - else - min_rrbs = 1; - if (have_rrbs < min_rrbs) - do_pcibr_rrb_autoalloc(pcibr_soft, slot, min_rrbs - have_rrbs); - } - } - if (ate_index >= pcibr_soft->bs_int_ate_size && - !pcibr_soft->bs_xbridge) { - bridge_t *bridge = pcibr_soft->bs_base; - volatile unsigned *cmd_regp; - unsigned cmd_reg; - unsigned long s; - - pcibr_dmamap->bd_flags |= PCIBR_DMAMAP_SSRAM; - - s = pcibr_lock(pcibr_soft); - cmd_regp = &(bridge-> - b_type0_cfg_dev[slot]. - l[PCI_CFG_COMMAND / 4]); - cmd_reg = *cmd_regp; - pcibr_soft->bs_slot[slot].bss_cmd_pointer = cmd_regp; - pcibr_soft->bs_slot[slot].bss_cmd_shadow = cmd_reg; - pcibr_unlock(pcibr_soft, s); - } - return pcibr_dmamap; - } -#if PCIBR_ATE_DEBUG - printk("pcibr_dmamap_alloc: unable to use PMU\n"); -#endif - pcibr_ate_free(pcibr_soft, ate_index, ate_count); - } - /* total failure: sorry, you just can't - * get from here to there that way. - */ -#if PCIBR_ATE_DEBUG - printk("pcibr_dmamap_alloc: complete failure.\n"); -#endif - xtalk_dmamap_free(xtalk_dmamap); - DEL(pcibr_dmamap); - return 0; -} - -/*ARGSUSED */ -void -pcibr_dmamap_free(pcibr_dmamap_t pcibr_dmamap) -{ - pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; - pciio_slot_t slot = pcibr_dmamap->bd_slot; - - unsigned flags = pcibr_dmamap->bd_flags; - - /* Make sure that bss_ext_ates_active - * is properly kept up to date. - */ - - if (PCIBR_DMAMAP_BUSY & flags) - if (PCIBR_DMAMAP_SSRAM & flags) - atomic_dec(&(pcibr_soft->bs_slot[slot]. bss_ext_ates_active)); - - xtalk_dmamap_free(pcibr_dmamap->bd_xtalk); - - if (pcibr_dmamap->bd_flags & PCIIO_DMA_A64) { - pcibr_release_device(pcibr_soft, slot, BRIDGE_DEV_D64_BITS); - } - if (pcibr_dmamap->bd_ate_count) { - pcibr_ate_free(pcibr_dmamap->bd_soft, - pcibr_dmamap->bd_ate_index, - pcibr_dmamap->bd_ate_count); - pcibr_release_device(pcibr_soft, slot, BRIDGE_DEV_PMU_BITS); - } -#ifdef IRIX - DEL(pcibr_dmamap); -#endif -} - -/* - * Setup an Address Translation Entry as specified. Use either the Bridge - * internal maps or the external map RAM, as appropriate. - */ -LOCAL bridge_ate_p -pcibr_ate_addr(pcibr_soft_t pcibr_soft, - int ate_index) -{ - bridge_t *bridge = pcibr_soft->bs_base; - - return (ate_index < pcibr_soft->bs_int_ate_size) - ? &(bridge->b_int_ate_ram[ate_index].wr) - : &(bridge->b_ext_ate_ram[ate_index]); -} - -/* - * pcibr_addr_xio_to_pci: given a PIO range, hand - * back the corresponding base PCI MEM address; - * this is used to short-circuit DMA requests that - * loop back onto this PCI bus. - */ -LOCAL iopaddr_t -pcibr_addr_xio_to_pci(pcibr_soft_t soft, - iopaddr_t xio_addr, - size_t req_size) -{ - iopaddr_t xio_lim = xio_addr + req_size - 1; - iopaddr_t pci_addr; - pciio_slot_t slot; - - if ((xio_addr >= BRIDGE_PCI_MEM32_BASE) && - (xio_lim <= BRIDGE_PCI_MEM32_LIMIT)) { - pci_addr = xio_addr - BRIDGE_PCI_MEM32_BASE; - return pci_addr; - } - if ((xio_addr >= BRIDGE_PCI_MEM64_BASE) && - (xio_lim <= BRIDGE_PCI_MEM64_LIMIT)) { - pci_addr = xio_addr - BRIDGE_PCI_MEM64_BASE; - return pci_addr; - } - for (slot = 0; slot < 8; ++slot) - if ((xio_addr >= BRIDGE_DEVIO(slot)) && - (xio_lim < BRIDGE_DEVIO(slot + 1))) { - bridgereg_t dev; - - dev = soft->bs_slot[slot].bss_device; - pci_addr = dev & BRIDGE_DEV_OFF_MASK; - pci_addr <<= BRIDGE_DEV_OFF_ADDR_SHFT; - pci_addr += xio_addr - BRIDGE_DEVIO(slot); - return (dev & BRIDGE_DEV_DEV_IO_MEM) ? 
pci_addr : PCI_NOWHERE; - } - return 0; -} - -/* We are starting to get more complexity - * surrounding writing ATEs, so pull - * the writing code into this new function. - */ - -#if PCIBR_FREEZE_TIME -#define ATE_FREEZE() s = ate_freeze(pcibr_dmamap, &freeze_time, cmd_regs) -#else -#define ATE_FREEZE() s = ate_freeze(pcibr_dmamap, cmd_regs) -#endif - -LOCAL unsigned -ate_freeze(pcibr_dmamap_t pcibr_dmamap, -#if PCIBR_FREEZE_TIME - unsigned *freeze_time_ptr, -#endif - unsigned *cmd_regs) -{ - pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; -#ifdef LATER - int dma_slot = pcibr_dmamap->bd_slot; -#endif - int ext_ates = pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM; - int slot; - - unsigned long s; - unsigned cmd_reg; - volatile unsigned *cmd_lwa; - unsigned cmd_lwd; - - if (!ext_ates) - return 0; - - /* Bridge Hardware Bug WAR #484930: - * Bridge can't handle updating External ATEs - * while DMA is occuring that uses External ATEs, - * even if the particular ATEs involved are disjoint. - */ - - /* need to prevent anyone else from - * unfreezing the grant while we - * are working; also need to prevent - * this thread from being interrupted - * to keep PCI grant freeze time - * at an absolute minimum. - */ - s = pcibr_lock(pcibr_soft); - -#ifdef LATER - /* just in case pcibr_dmamap_done was not called */ - if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_BUSY) { - pcibr_dmamap->bd_flags &= ~PCIBR_DMAMAP_BUSY; - if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM) - atomic_dec(&(pcibr_soft->bs_slot[dma_slot]. bss_ext_ates_active)); - xtalk_dmamap_done(pcibr_dmamap->bd_xtalk); - } -#endif /* LATER */ -#if PCIBR_FREEZE_TIME - *freeze_time_ptr = get_timestamp(); -#endif - - cmd_lwa = 0; - for (slot = 0; slot < 8; ++slot) - if (atomic_read(&pcibr_soft->bs_slot[slot].bss_ext_ates_active)) { - cmd_reg = pcibr_soft-> - bs_slot[slot]. - bss_cmd_shadow; - if (cmd_reg & PCI_CMD_BUS_MASTER) { - cmd_lwa = pcibr_soft-> - bs_slot[slot]. - bss_cmd_pointer; - cmd_lwd = cmd_reg ^ PCI_CMD_BUS_MASTER; - cmd_lwa[0] = cmd_lwd; - } - cmd_regs[slot] = cmd_reg; - } else - cmd_regs[slot] = 0; - - if (cmd_lwa) { - bridge_t *bridge = pcibr_soft->bs_base; - - /* Read the last master bit that has been cleared. This PIO read - * on the PCI bus is to ensure the completion of any DMAs that - * are due to bus requests issued by PCI devices before the - * clearing of master bits. - */ - cmd_lwa[0]; - - /* Flush all the write buffers in the bridge */ - for (slot = 0; slot < 8; ++slot) - if (atomic_read(&pcibr_soft->bs_slot[slot].bss_ext_ates_active)) { - /* Flush the write buffer associated with this - * PCI device which might be using dma map RAM. 
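/* Editor's summary, not part of the patch: the external-ATE update
 * protocol built from ate_freeze() above and ATE_WRITE()/ATE_THAW()
 * defined just below (WAR #484930) is, in outline:
 *
 *   s = ATE_FREEZE();   // under the bridge lock: clear BUS_MASTER in the
 *                       // command register of every slot with external
 *                       // ATEs active, read it back to flush posted DMA,
 *                       // then drain the bridge write buffers
 *   ATE_WRITE();        // now it is safe to rewrite the external ATEs
 *   ATE_THAW();         // restore the saved command registers, mark the
 *                       // map busy, drop the lock
 *
 * Only maps flagged PCIBR_DMAMAP_SSRAM (external ATE RAM) pay this cost;
 * internal-ATE maps return from ate_freeze() immediately.
 */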
- */ - bridge->b_wr_req_buf[slot].reg; - } - } - return s; -} - -#define ATE_WRITE() ate_write(ate_ptr, ate_count, ate) - -LOCAL void -ate_write(bridge_ate_p ate_ptr, - int ate_count, - bridge_ate_t ate) -{ - while (ate_count-- > 0) { - *ate_ptr++ = ate; - ate += IOPGSIZE; - } -} - - -#if PCIBR_FREEZE_TIME -#define ATE_THAW() ate_thaw(pcibr_dmamap, ate_index, ate, ate_total, freeze_time, cmd_regs, s) -#else -#define ATE_THAW() ate_thaw(pcibr_dmamap, ate_index, cmd_regs, s) -#endif - -LOCAL void -ate_thaw(pcibr_dmamap_t pcibr_dmamap, - int ate_index, -#if PCIBR_FREEZE_TIME - bridge_ate_t ate, - int ate_total, - unsigned freeze_time_start, -#endif - unsigned *cmd_regs, - unsigned s) -{ - pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; - int dma_slot = pcibr_dmamap->bd_slot; - int slot; - bridge_t *bridge = pcibr_soft->bs_base; - int ext_ates = pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM; - - unsigned cmd_reg; - -#if PCIBR_FREEZE_TIME - unsigned freeze_time; - static unsigned max_freeze_time = 0; - static unsigned max_ate_total; -#endif - - if (!ext_ates) - return; - - /* restore cmd regs */ - for (slot = 0; slot < 8; ++slot) - if ((cmd_reg = cmd_regs[slot]) & PCI_CMD_BUS_MASTER) - bridge->b_type0_cfg_dev[slot].l[PCI_CFG_COMMAND / 4] = cmd_reg; - - pcibr_dmamap->bd_flags |= PCIBR_DMAMAP_BUSY; - atomic_inc(&(pcibr_soft->bs_slot[dma_slot]. bss_ext_ates_active)); - -#if PCIBR_FREEZE_TIME - freeze_time = get_timestamp() - freeze_time_start; - - if ((max_freeze_time < freeze_time) || - (max_ate_total < ate_total)) { - if (max_freeze_time < freeze_time) - max_freeze_time = freeze_time; - if (max_ate_total < ate_total) - max_ate_total = ate_total; - pcibr_unlock(pcibr_soft, s); - printk("%s: pci freeze time %d usec for %d ATEs\n" - "\tfirst ate: %R\n", - pcibr_soft->bs_name, - freeze_time * 1000 / 1250, - ate_total, - ate, ate_bits); - } else -#endif - pcibr_unlock(pcibr_soft, s); -} - -/*ARGSUSED */ -iopaddr_t -pcibr_dmamap_addr(pcibr_dmamap_t pcibr_dmamap, - paddr_t paddr, - size_t req_size) -{ - pcibr_soft_t pcibr_soft; - iopaddr_t xio_addr; - xwidgetnum_t xio_port; - iopaddr_t pci_addr; - unsigned flags; - - ASSERT(pcibr_dmamap != NULL); - ASSERT(req_size > 0); - ASSERT(req_size <= pcibr_dmamap->bd_max_size); - - pcibr_soft = pcibr_dmamap->bd_soft; - - flags = pcibr_dmamap->bd_flags; - - xio_addr = xtalk_dmamap_addr(pcibr_dmamap->bd_xtalk, paddr, req_size); - if (XIO_PACKED(xio_addr)) { - xio_port = XIO_PORT(xio_addr); - xio_addr = XIO_ADDR(xio_addr); - } else - xio_port = pcibr_dmamap->bd_xio_port; - - /* If this DMA is to an address that - * refers back to this Bridge chip, - * reduce it back to the correct - * PCI MEM address. - */ - if (xio_port == pcibr_soft->bs_xid) { - pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, req_size); - } else if (flags & PCIIO_DMA_A64) { - /* A64 DMA: - * always use 64-bit direct mapping, - * which always works. - * Device(x) was set up during - * dmamap allocation. - */ - - /* attributes are already bundled up into bd_pci_addr. - */ - pci_addr = pcibr_dmamap->bd_pci_addr - | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT) - | xio_addr; - - /* Bridge Hardware WAR #482836: - * If the transfer is not cache aligned - * and the Bridge Rev is <= B, force - * prefetch to be off. 
- */ - if (flags & PCIBR_NOPREFETCH) - pci_addr &= ~PCI64_ATTR_PREF; - -#if DEBUG && PCIBR_DMA_DEBUG - printk("pcibr_dmamap_addr (direct64):\n" - "\twanted paddr [0x%x..0x%x]\n" - "\tXIO port 0x%x offset 0x%x\n" - "\treturning PCI 0x%x\n", - paddr, paddr + req_size - 1, - xio_port, xio_addr, pci_addr); -#endif - } else if (flags & PCIIO_FIXED) { - /* A32 direct DMA: - * always use 32-bit direct mapping, - * which may fail. - * Device(x) was set up during - * dmamap allocation. - */ - - if (xio_port != pcibr_soft->bs_dir_xport) - pci_addr = 0; /* wrong DIDN */ - else if (xio_addr < pcibr_dmamap->bd_xio_addr) - pci_addr = 0; /* out of range */ - else if ((xio_addr + req_size) > - (pcibr_dmamap->bd_xio_addr + BRIDGE_DMA_DIRECT_SIZE)) - pci_addr = 0; /* out of range */ - else - pci_addr = pcibr_dmamap->bd_pci_addr + - xio_addr - pcibr_dmamap->bd_xio_addr; - -#if DEBUG && PCIBR_DMA_DEBUG - printk("pcibr_dmamap_addr (direct32):\n" - "\twanted paddr [0x%x..0x%x]\n" - "\tXIO port 0x%x offset 0x%x\n" - "\treturning PCI 0x%x\n", - paddr, paddr + req_size - 1, - xio_port, xio_addr, pci_addr); -#endif - } else { - bridge_t *bridge = pcibr_soft->bs_base; - iopaddr_t offset = IOPGOFF(xio_addr); - bridge_ate_t ate_proto = pcibr_dmamap->bd_ate_proto; - int ate_count = IOPG(offset + req_size - 1) + 1; - - int ate_index = pcibr_dmamap->bd_ate_index; - unsigned cmd_regs[8]; - unsigned s; - -#if PCIBR_FREEZE_TIME - int ate_total = ate_count; - unsigned freeze_time; -#endif - -#if PCIBR_ATE_DEBUG - bridge_ate_t ate_cmp; - bridge_ate_p ate_cptr; - unsigned ate_lo, ate_hi; - int ate_bad = 0; - int ate_rbc = 0; -#endif - bridge_ate_p ate_ptr = pcibr_dmamap->bd_ate_ptr; - bridge_ate_t ate; - - /* Bridge Hardware WAR #482836: - * If the transfer is not cache aligned - * and the Bridge Rev is <= B, force - * prefetch to be off. - */ - if (flags & PCIBR_NOPREFETCH) - ate_proto &= ~ATE_PREF; - - ate = ate_proto - | (xio_port << ATE_TIDSHIFT) - | (xio_addr - offset); - - pci_addr = pcibr_dmamap->bd_pci_addr + offset; - - /* Fill in our mapping registers - * with the appropriate xtalk data, - * and hand back the PCI address. - */ - - ASSERT(ate_count > 0); - if (ate_count <= pcibr_dmamap->bd_ate_count) { - ATE_FREEZE(); - ATE_WRITE(); - ATE_THAW(); - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - } else { - /* The number of ATE's required is greater than the number - * allocated for this map. One way this can happen is if - * pcibr_dmamap_alloc() was called with the PCIBR_NO_ATE_ROUNDUP - * flag, and then when that map is used (right now), the - * target address tells us we really did need to roundup. - * The other possibility is that the map is just plain too - * small to handle the requested target area. - */ -#if PCIBR_ATE_DEBUG - PRINT_WARNING( "pcibr_dmamap_addr :\n" - "\twanted paddr [0x%x..0x%x]\n" - "\tate_count 0x%x bd_ate_count 0x%x\n" - "\tATE's required > number allocated\n", - paddr, paddr + req_size - 1, - ate_count, pcibr_dmamap->bd_ate_count); -#endif - pci_addr = 0; - } - - } - return pci_addr; -} - -/*ARGSUSED */ -alenlist_t -pcibr_dmamap_list(pcibr_dmamap_t pcibr_dmamap, - alenlist_t palenlist, - unsigned flags) -{ - pcibr_soft_t pcibr_soft; - bridge_t *bridge=NULL; - - unsigned al_flags = (flags & PCIIO_NOSLEEP) ? 
AL_NOSLEEP : 0; - int inplace = flags & PCIIO_INPLACE; - - alenlist_t pciio_alenlist = 0; - alenlist_t xtalk_alenlist; - size_t length; - iopaddr_t offset; - unsigned direct64; - int ate_index = 0; - int ate_count = 0; - int ate_total = 0; - bridge_ate_p ate_ptr = (bridge_ate_p)0; - bridge_ate_t ate_proto = (bridge_ate_t)0; - bridge_ate_t ate_prev; - bridge_ate_t ate; - alenaddr_t xio_addr; - xwidgetnum_t xio_port; - iopaddr_t pci_addr; - alenaddr_t new_addr; - - unsigned cmd_regs[8]; - unsigned s = 0; - -#if PCIBR_FREEZE_TIME - unsigned freeze_time; -#endif - int ate_freeze_done = 0; /* To pair ATE_THAW - * with an ATE_FREEZE - */ - - pcibr_soft = pcibr_dmamap->bd_soft; - - xtalk_alenlist = xtalk_dmamap_list(pcibr_dmamap->bd_xtalk, palenlist, - flags & DMAMAP_FLAGS); - if (!xtalk_alenlist) - goto fail; - - alenlist_cursor_init(xtalk_alenlist, 0, NULL); - - if (inplace) { - pciio_alenlist = xtalk_alenlist; - } else { - pciio_alenlist = alenlist_create(al_flags); - if (!pciio_alenlist) - goto fail; - } - - direct64 = pcibr_dmamap->bd_flags & PCIIO_DMA_A64; - if (!direct64) { - bridge = pcibr_soft->bs_base; - ate_ptr = pcibr_dmamap->bd_ate_ptr; - ate_index = pcibr_dmamap->bd_ate_index; - ate_proto = pcibr_dmamap->bd_ate_proto; - ATE_FREEZE(); - ate_freeze_done = 1; /* Remember that we need to do an ATE_THAW */ - } - pci_addr = pcibr_dmamap->bd_pci_addr; - - ate_prev = 0; /* matches no valid ATEs */ - while (ALENLIST_SUCCESS == - alenlist_get(xtalk_alenlist, NULL, 0, - &xio_addr, &length, al_flags)) { - if (XIO_PACKED(xio_addr)) { - xio_port = XIO_PORT(xio_addr); - xio_addr = XIO_ADDR(xio_addr); - } else - xio_port = pcibr_dmamap->bd_xio_port; - - if (xio_port == pcibr_soft->bs_xid) { - new_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, length); - if (new_addr == PCI_NOWHERE) - goto fail; - } else if (direct64) { - new_addr = pci_addr | xio_addr - | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); - - /* Bridge Hardware WAR #482836: - * If the transfer is not cache aligned - * and the Bridge Rev is <= B, force - * prefetch to be off. - */ - if (flags & PCIBR_NOPREFETCH) - new_addr &= ~PCI64_ATTR_PREF; - - } else { - /* calculate the ate value for - * the first address. If it - * matches the previous - * ATE written (ie. we had - * multiple blocks in the - * same IOPG), then back up - * and reuse that ATE. - * - * We are NOT going to - * aggressively try to - * reuse any other ATEs. - */ - offset = IOPGOFF(xio_addr); - ate = ate_proto - | (xio_port << ATE_TIDSHIFT) - | (xio_addr - offset); - if (ate == ate_prev) { -#if PCIBR_ATE_DEBUG - printk("pcibr_dmamap_list: ATE share\n"); -#endif - ate_ptr--; - ate_index--; - pci_addr -= IOPGSIZE; - } - new_addr = pci_addr + offset; - - /* Fill in the hardware ATEs - * that contain this block. - */ - ate_count = IOPG(offset + length - 1) + 1; - ate_total += ate_count; - - /* Ensure that this map contains enough ATE's */ - if (ate_total > pcibr_dmamap->bd_ate_count) { -#if PCIBR_ATE_DEBUG - PRINT_WARNING( "pcibr_dmamap_list :\n" - "\twanted xio_addr [0x%x..0x%x]\n" - "\tate_total 0x%x bd_ate_count 0x%x\n" - "\tATE's required > number allocated\n", - xio_addr, xio_addr + length - 1, - ate_total, pcibr_dmamap->bd_ate_count); -#endif - goto fail; - } - - ATE_WRITE(); - - ate_index += ate_count; - ate_ptr += ate_count; - - ate_count <<= IOPFNSHIFT; - ate += ate_count; - pci_addr += ate_count; - } - - /* write the PCI DMA address - * out to the scatter-gather list. 
- */ - if (inplace) { - if (ALENLIST_SUCCESS != - alenlist_replace(pciio_alenlist, NULL, - &new_addr, &length, al_flags)) - goto fail; - } else { - if (ALENLIST_SUCCESS != - alenlist_append(pciio_alenlist, - new_addr, length, al_flags)) - goto fail; - } - } - if (!inplace) - alenlist_done(xtalk_alenlist); - - /* Reset the internal cursor of the alenlist to be returned back - * to the caller. - */ - alenlist_cursor_init(pciio_alenlist, 0, NULL); - - - /* In case an ATE_FREEZE was done do the ATE_THAW to unroll all the - * changes that ATE_FREEZE has done to implement the external SSRAM - * bug workaround. - */ - if (ate_freeze_done) { - ATE_THAW(); - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - } - return pciio_alenlist; - - fail: - /* There are various points of failure after doing an ATE_FREEZE - * We need to do an ATE_THAW. Otherwise the ATEs are locked forever. - * The decision to do an ATE_THAW needs to be based on whether a - * an ATE_FREEZE was done before. - */ - if (ate_freeze_done) { - ATE_THAW(); - bridge->b_wid_tflush; - } - if (pciio_alenlist && !inplace) - alenlist_destroy(pciio_alenlist); - return 0; -} - -/*ARGSUSED */ -void -pcibr_dmamap_done(pcibr_dmamap_t pcibr_dmamap) -{ - /* - * We could go through and invalidate ATEs here; - * for performance reasons, we don't. - * We also don't enforce the strict alternation - * between _addr/_list and _done, but Hub does. - */ - - if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_BUSY) { - pcibr_dmamap->bd_flags &= ~PCIBR_DMAMAP_BUSY; - - if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM) - atomic_dec(&(pcibr_dmamap->bd_soft->bs_slot[pcibr_dmamap->bd_slot]. bss_ext_ates_active)); - } - - xtalk_dmamap_done(pcibr_dmamap->bd_xtalk); -} - - -/* - * For each bridge, the DIR_OFF value in the Direct Mapping Register - * determines the PCI to Crosstalk memory mapping to be used for all - * 32-bit Direct Mapping memory accesses. This mapping can be to any - * node in the system. This function will return that compact node id. - */ - -/*ARGSUSED */ -cnodeid_t -pcibr_get_dmatrans_node(devfs_handle_t pconn_vhdl) -{ - - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - - return(NASID_TO_COMPACT_NODEID(NASID_GET(pcibr_soft->bs_dir_xbase))); -} - -/*ARGSUSED */ -iopaddr_t -pcibr_dmatrans_addr(devfs_handle_t pconn_vhdl, - device_desc_t dev_desc, - paddr_t paddr, - size_t req_size, - unsigned flags) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[pciio_slot]; - - xwidgetnum_t xio_port; - iopaddr_t xio_addr; - iopaddr_t pci_addr; - - int have_rrbs; - int min_rrbs; - - /* merge in forced flags */ - flags |= pcibr_soft->bs_dma_flags; - - xio_addr = xtalk_dmatrans_addr(xconn_vhdl, 0, paddr, req_size, - flags & DMAMAP_FLAGS); - - if (!xio_addr) { -#if PCIBR_DMA_DEBUG - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr); -#endif - return 0; - } - /* - * find which XIO port this goes to. 
- */ - if (XIO_PACKED(xio_addr)) { - if (xio_addr == XIO_NOWHERE) { -#if PCIBR_DMA_DEBUG - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr); -#endif - return 0; - } - xio_port = XIO_PORT(xio_addr); - xio_addr = XIO_ADDR(xio_addr); - - } else - xio_port = pcibr_soft->bs_mxid; - - /* - * If this DMA comes back to us, - * return the PCI MEM address on - * which it would land, or NULL - * if the target is something - * on bridge other than PCI MEM. - */ - if (xio_port == pcibr_soft->bs_xid) { - pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, req_size); - return pci_addr; - } - /* If the caller can use A64, try to - * satisfy the request with the 64-bit - * direct map. This can fail if the - * configuration bits in Device(x) - * conflict with our flags. - */ - - if (flags & PCIIO_DMA_A64) { - pci_addr = slotp->bss_d64_base; - if (!(flags & PCIBR_VCHAN1)) - flags |= PCIBR_VCHAN0; - if ((pci_addr != PCIBR_D64_BASE_UNSET) && - (flags == slotp->bss_d64_flags)) { - - pci_addr |= xio_addr - | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); - -#if DEBUG && PCIBR_DMA_DEBUG -#if HWG_PERF_CHECK - if (xio_addr != 0x20000000) -#endif - printk("pcibr_dmatrans_addr: [reuse]\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n" - "\tdirect 64bit address is 0x%x\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr, pci_addr); -#endif - return (pci_addr); - } - if (!pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D64_BITS)) { - pci_addr = pcibr_flags_to_d64(flags, pcibr_soft); - slotp->bss_d64_flags = flags; - slotp->bss_d64_base = pci_addr; - pci_addr |= xio_addr - | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); - - /* Make sure we have an RRB (or two). - */ - if (!(pcibr_soft->bs_rrb_fixed & (1 << pciio_slot))) { - if (flags & PCIBR_VCHAN1) - pciio_slot += PCIBR_RRB_SLOT_VIRTUAL; - have_rrbs = pcibr_soft->bs_rrb_valid[pciio_slot]; - if (have_rrbs < 2) { - if (pci_addr & PCI64_ATTR_PREF) - min_rrbs = 2; - else - min_rrbs = 1; - if (have_rrbs < min_rrbs) - do_pcibr_rrb_autoalloc(pcibr_soft, pciio_slot, min_rrbs - have_rrbs); - } - } -#if PCIBR_DMA_DEBUG -#if HWG_PERF_CHECK - if (xio_addr != 0x20000000) -#endif - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n" - "\tdirect 64bit address is 0x%x\n" - "\tnew flags: 0x%x\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr, pci_addr, (uint64_t) flags); -#endif - return (pci_addr); - } - /* our flags conflict with Device(x). - */ - flags = flags - & ~PCIIO_DMA_A64 - & ~PCIBR_VCHAN0 - ; - -#if PCIBR_DMA_DEBUG - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n" - "\tUnable to set Device(x) bits for Direct-64\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr); -#endif - } - /* Try to satisfy the request with the 32-bit direct - * map. This can fail if the configuration bits in - * Device(x) conflict with our flags, or if the - * target address is outside where DIR_OFF points. 
- */ - { - size_t map_size = 1ULL << 31; - iopaddr_t xio_base = pcibr_soft->bs_dir_xbase; - iopaddr_t offset = xio_addr - xio_base; - iopaddr_t endoff = req_size + offset; - - if ((req_size > map_size) || - (xio_addr < xio_base) || - (xio_port != pcibr_soft->bs_dir_xport) || - (endoff > map_size)) { -#if PCIBR_DMA_DEBUG - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n" - "\txio region outside direct32 target\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr); -#endif - } else { - pci_addr = slotp->bss_d32_base; - if ((pci_addr != PCIBR_D32_BASE_UNSET) && - (flags == slotp->bss_d32_flags)) { - - pci_addr |= offset; - -#if DEBUG && PCIBR_DMA_DEBUG - printk("pcibr_dmatrans_addr: [reuse]\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n" - "\tmapped via direct32 offset 0x%x\n" - "\twill DMA via pci addr 0x%x\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr, offset, pci_addr); -#endif - return (pci_addr); - } - if (!pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D32_BITS)) { - - pci_addr = PCI32_DIRECT_BASE; - slotp->bss_d32_flags = flags; - slotp->bss_d32_base = pci_addr; - pci_addr |= offset; - - /* Make sure we have an RRB (or two). - */ - if (!(pcibr_soft->bs_rrb_fixed & (1 << pciio_slot))) { - have_rrbs = pcibr_soft->bs_rrb_valid[pciio_slot]; - if (have_rrbs < 2) { - if (slotp->bss_device & BRIDGE_DEV_PREF) - min_rrbs = 2; - else - min_rrbs = 1; - if (have_rrbs < min_rrbs) - do_pcibr_rrb_autoalloc(pcibr_soft, pciio_slot, min_rrbs - have_rrbs); - } - } -#if PCIBR_DMA_DEBUG -#if HWG_PERF_CHECK - if (xio_addr != 0x20000000) -#endif - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n" - "\tmapped via direct32 offset 0x%x\n" - "\twill DMA via pci addr 0x%x\n" - "\tnew flags: 0x%x\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr, offset, pci_addr, (uint64_t) flags); -#endif - return (pci_addr); - } - /* our flags conflict with Device(x). 
- */ -#if PCIBR_DMA_DEBUG - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n" - "\tUnable to set Device(x) bits for Direct-32\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr); -#endif - } - } - -#if PCIBR_DMA_DEBUG - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n" - "\tno acceptable PCI address found or constructable\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr); -#endif - - return 0; -} - -/*ARGSUSED */ -alenlist_t -pcibr_dmatrans_list(devfs_handle_t pconn_vhdl, - device_desc_t dev_desc, - alenlist_t palenlist, - unsigned flags) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[pciio_slot]; - xwidgetnum_t xio_port; - - alenlist_t pciio_alenlist = 0; - alenlist_t xtalk_alenlist = 0; - - int inplace; - unsigned direct64; - unsigned al_flags; - - iopaddr_t xio_base; - alenaddr_t xio_addr; - size_t xio_size; - - size_t map_size; - iopaddr_t pci_base; - alenaddr_t pci_addr; - - unsigned relbits = 0; - - /* merge in forced flags */ - flags |= pcibr_soft->bs_dma_flags; - - inplace = flags & PCIIO_INPLACE; - direct64 = flags & PCIIO_DMA_A64; - al_flags = (flags & PCIIO_NOSLEEP) ? AL_NOSLEEP : 0; - - if (direct64) { - map_size = 1ull << 48; - xio_base = 0; - pci_base = slotp->bss_d64_base; - if ((pci_base != PCIBR_D64_BASE_UNSET) && - (flags == slotp->bss_d64_flags)) { - /* reuse previous base info */ - } else if (pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D64_BITS) < 0) { - /* DMA configuration conflict */ - goto fail; - } else { - relbits = BRIDGE_DEV_D64_BITS; - pci_base = - pcibr_flags_to_d64(flags, pcibr_soft); - } - } else { - xio_base = pcibr_soft->bs_dir_xbase; - map_size = 1ull << 31; - pci_base = slotp->bss_d32_base; - if ((pci_base != PCIBR_D32_BASE_UNSET) && - (flags == slotp->bss_d32_flags)) { - /* reuse previous base info */ - } else if (pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D32_BITS) < 0) { - /* DMA configuration conflict */ - goto fail; - } else { - relbits = BRIDGE_DEV_D32_BITS; - pci_base = PCI32_DIRECT_BASE; - } - } - - xtalk_alenlist = xtalk_dmatrans_list(xconn_vhdl, 0, palenlist, - flags & DMAMAP_FLAGS); - if (!xtalk_alenlist) - goto fail; - - alenlist_cursor_init(xtalk_alenlist, 0, NULL); - - if (inplace) { - pciio_alenlist = xtalk_alenlist; - } else { - pciio_alenlist = alenlist_create(al_flags); - if (!pciio_alenlist) - goto fail; - } - - while (ALENLIST_SUCCESS == - alenlist_get(xtalk_alenlist, NULL, 0, - &xio_addr, &xio_size, al_flags)) { - - /* - * find which XIO port this goes to. 
- */ - if (XIO_PACKED(xio_addr)) { - if (xio_addr == XIO_NOWHERE) { -#if PCIBR_DMA_DEBUG - printk("pcibr_dmatrans_addr:\n" - "\tpciio connection point %v\n" - "\txtalk connection point %v\n" - "\twanted paddr [0x%x..0x%x]\n" - "\txtalk_dmatrans_addr returned 0x%x\n", - pconn_vhdl, xconn_vhdl, - paddr, paddr + req_size - 1, - xio_addr); -#endif - return 0; - } - xio_port = XIO_PORT(xio_addr); - xio_addr = XIO_ADDR(xio_addr); - } else - xio_port = pcibr_soft->bs_mxid; - - /* - * If this DMA comes back to us, - * return the PCI MEM address on - * which it would land, or NULL - * if the target is something - * on bridge other than PCI MEM. - */ - if (xio_port == pcibr_soft->bs_xid) { - pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, xio_size); - if ( (pci_addr == (alenaddr_t)NULL) ) - goto fail; - } else if (direct64) { - ASSERT(xio_port != 0); - pci_addr = pci_base | xio_addr - | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); - } else { - iopaddr_t offset = xio_addr - xio_base; - iopaddr_t endoff = xio_size + offset; - - if ((xio_size > map_size) || - (xio_addr < xio_base) || - (xio_port != pcibr_soft->bs_dir_xport) || - (endoff > map_size)) - goto fail; - - pci_addr = pci_base + (xio_addr - xio_base); - } - - /* write the PCI DMA address - * out to the scatter-gather list. - */ - if (inplace) { - if (ALENLIST_SUCCESS != - alenlist_replace(pciio_alenlist, NULL, - &pci_addr, &xio_size, al_flags)) - goto fail; - } else { - if (ALENLIST_SUCCESS != - alenlist_append(pciio_alenlist, - pci_addr, xio_size, al_flags)) - goto fail; - } - } - - if (relbits) { - if (direct64) { - slotp->bss_d64_flags = flags; - slotp->bss_d64_base = pci_base; - } else { - slotp->bss_d32_flags = flags; - slotp->bss_d32_base = pci_base; - } - } - if (!inplace) - alenlist_done(xtalk_alenlist); - - /* Reset the internal cursor of the alenlist to be returned back - * to the caller. - */ - alenlist_cursor_init(pciio_alenlist, 0, NULL); - return pciio_alenlist; - - fail: - if (relbits) - pcibr_release_device(pcibr_soft, pciio_slot, relbits); - if (pciio_alenlist && !inplace) - alenlist_destroy(pciio_alenlist); - return 0; -} - -void -pcibr_dmamap_drain(pcibr_dmamap_t map) -{ - xtalk_dmamap_drain(map->bd_xtalk); -} - -void -pcibr_dmaaddr_drain(devfs_handle_t pconn_vhdl, - paddr_t paddr, - size_t bytes) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - - xtalk_dmaaddr_drain(xconn_vhdl, paddr, bytes); -} - -void -pcibr_dmalist_drain(devfs_handle_t pconn_vhdl, - alenlist_t list) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - - xtalk_dmalist_drain(xconn_vhdl, list); -} - -/* - * Get the starting PCIbus address out of the given DMA map. - * This function is supposed to be used by a close friend of PCI bridge - * since it relies on the fact that the starting address of the map is fixed at - * the allocation time in the current implementation of PCI bridge. 
- */ -iopaddr_t -pcibr_dmamap_pciaddr_get(pcibr_dmamap_t pcibr_dmamap) -{ - return (pcibr_dmamap->bd_pci_addr); -} - -/* ===================================================================== - * INTERRUPT MANAGEMENT - */ - -static unsigned -pcibr_intr_bits(pciio_info_t info, - pciio_intr_line_t lines) -{ - pciio_slot_t slot = pciio_info_slot_get(info); - unsigned bbits = 0; - - /* - * Currently favored mapping from PCI - * slot number and INTA/B/C/D to Bridge - * PCI Interrupt Bit Number: - * - * SLOT A B C D - * 0 0 4 0 4 - * 1 1 5 1 5 - * 2 2 6 2 6 - * 3 3 7 3 7 - * 4 4 0 4 0 - * 5 5 1 5 1 - * 6 6 2 6 2 - * 7 7 3 7 3 - */ - - if (slot < 8) { - if (lines & (PCIIO_INTR_LINE_A| PCIIO_INTR_LINE_C)) - bbits |= 1 << slot; - if (lines & (PCIIO_INTR_LINE_B| PCIIO_INTR_LINE_D)) - bbits |= 1 << (slot ^ 4); - } - return bbits; -} - - -/* - * Get the next wrapper pointer queued in the interrupt circular buffer. - */ -#ifdef KERNEL_THREADS -pcibr_intr_wrap_t -pcibr_wrap_get(pcibr_intr_cbuf_t cbuf) -{ - pcibr_intr_wrap_t wrap; - - if (cbuf->ib_in == cbuf->ib_out) - PRINT_PANIC("pcibr intr circular buffer empty, cbuf=0x%x, ib_in=ib_out=%d\n", - cbuf, cbuf->ib_out); - - wrap = cbuf->ib_cbuf[cbuf->ib_out++]; - cbuf->ib_out = cbuf->ib_out % IBUFSIZE; - return(wrap); -} - -/* - * Queue a wrapper pointer in the interrupt circular buffer. - */ -void -pcibr_wrap_put(pcibr_intr_wrap_t wrap, pcibr_intr_cbuf_t cbuf) -{ - int in; - unsigned long s; - - /* - * Multiple CPUs could be executing this code simultaneously - * if a handler has registered multiple interrupt lines and - * the interrupts are directed to different CPUs. - */ - s = mutex_spinlock(&cbuf->ib_lock); - in = (cbuf->ib_in + 1) % IBUFSIZE; - if (in == cbuf->ib_out) - PRINT_PANIC("pcibr intr circular buffer full, cbuf=0x%x, ib_in=%d\n", - cbuf, cbuf->ib_in); - - cbuf->ib_cbuf[cbuf->ib_in] = wrap; - cbuf->ib_in = in; - mutex_spinunlock(&cbuf->ib_lock, s); - return; -} -#endif /* KERNEL_THREADS */ - -/* - * There are end cases where a deadlock can occur if interrupt - * processing completes and the Bridge b_int_status bit is still set. - * - * One scenerio is if a second PCI interrupt occurs within 60ns of - * the previous interrupt being cleared. In this case the Bridge - * does not detect the transition, the Bridge b_int_status bit - * remains set, and because no transition was detected no interrupt - * packet is sent to the Hub/Heart. - * - * A second scenerio is possible when a b_int_status bit is being - * shared by multiple devices: - * Device #1 generates interrupt - * Bridge b_int_status bit set - * Device #2 generates interrupt - * interrupt processing begins - * ISR for device #1 runs and - * clears interrupt - * Device #1 generates interrupt - * ISR for device #2 runs and - * clears interrupt - * (b_int_status bit still set) - * interrupt processing completes - * - * Interrupt processing is now complete, but an interrupt is still - * outstanding for Device #1. But because there was no transition of - * the b_int_status bit, no interrupt packet will be generated and - * a deadlock will occur. - * - * To avoid these deadlock situations, this function is used - * to check if a specific Bridge b_int_status bit is set, and if so, - * cause the setting of the corresponding interrupt bit. - * - * On a XBridge (IP35), we do this by writing the appropriate Bridge Force - * Interrupt register. 
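The two scenarios described above reduce to one rule: when the last handler for a Bridge interrupt bit finishes, re-check that bit and force a fresh interrupt if it is still set, because without a 0-to-1 transition no new interrupt packet will ever be sent. The sketch below is only an illustration of that rule, with made-up names and a plain counter standing in for the hardware register; the driver's real versions are pcibr_force_interrupt() and pcibr_intrd() below.

#include <stdio.h>

/* Toy model: one level-latched status bit shared by several handlers. */
struct fake_bridge {
        unsigned int int_status;        /* latched level; no edge is remembered */
        unsigned int forced;            /* how often we had to force an interrupt */
};

static void force_interrupt(struct fake_bridge *b, unsigned int bit)
{
        (void)bit;
        b->forced++;                    /* real code writes b_force_pin[bit].intr */
}

/* Called as each handler sharing 'bit' completes. */
static void handler_done(struct fake_bridge *b, unsigned int bit,
                         int *handlers_running)
{
        (*handlers_running)--;
        if (*handlers_running == 0 && (b->int_status & (1u << bit)))
                force_interrupt(b, bit);        /* still set: no edge will come */
}

int main(void)
{
        struct fake_bridge b = { .int_status = 1u << 3, .forced = 0 };
        int running = 2;                        /* two handlers share bit 3 */

        handler_done(&b, 3, &running);          /* first handler finishes */
        handler_done(&b, 3, &running);          /* last one forces a new interrupt */
        printf("forced=%u\n", b.forced);
        return 0;
}

Built as an ordinary user-space program this prints forced=1: only the last completing handler triggers the forced interrupt.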
- */ -void -pcibr_force_interrupt(pcibr_intr_wrap_t wrap) -{ - unsigned bit; - pcibr_soft_t pcibr_soft = wrap->iw_soft; - bridge_t *bridge = pcibr_soft->bs_base; - cpuid_t cpuvertex_to_cpuid(devfs_handle_t vhdl); - - bit = wrap->iw_intr; - - if (pcibr_soft->bs_xbridge) { - bridge->b_force_pin[bit].intr = 1; - } else if ((1 << bit) & *wrap->iw_stat) { - cpuid_t cpu; - unsigned intr_bit; - xtalk_intr_t xtalk_intr = - pcibr_soft->bs_intr[bit].bsi_xtalk_intr; - - intr_bit = (short) xtalk_intr_vector_get(xtalk_intr); - cpu = cpuvertex_to_cpuid(xtalk_intr_cpu_get(xtalk_intr)); - REMOTE_CPU_SEND_INTR(cpu, intr_bit); - } -} - -/* Wrapper for pcibr interrupt threads. */ -#ifdef KERNEL_THREADS -static void -pcibr_intrd(pcibr_intr_t intr) -{ - pcibr_intr_wrap_t wrap; - - /* Called on each restart */ - ASSERT(cpuid() == intr->bi_mustruncpu); - -#ifdef ITHREAD_LATENCY - xthread_update_latstats(intr->bi_tinfo.thd_latstats); -#endif /* ITHREAD_LATENCY */ - - ASSERT(intr->bi_func != NULL); - intr->bi_func(intr->bi_arg); /* Invoke the interrupt handler */ - - /* - * The pcibr_intrd thread needs access to the wrapper struct - * specific to the current interrupt it is processing. Because - * multiple calls/wakeups to the thread could be queued, each - * potentially from a different interrupt line (PCIIO_INTR_LINE_A, - * etc), multiple wrapper struct pointers need to be queued. This - * is done via a circular buffer of wrapper struct pointers. - */ - wrap = pcibr_wrap_get(&intr->bi_ibuf); - - /* - * The interrupt handler has completed. Now decrement the running - * count tracking the number of handlers still running for this line. - * If this was the last handler to complete (i.e., iw_hdlrcnt == 0), - * avoid a potential deadlock condition and ensure that another - * interrupt will occur if the Bridge b_int_status bit is still - * set. 
- */ - atomicAddInt(&(wrap->iw_hdlrcnt), -1); - if (wrap->iw_hdlrcnt == 0) - pcibr_force_interrupt(wrap); - - ipsema(&intr->bi_tinfo.thd_isync); /* Sleep 'till next interrupt */ - /* NOTREACHED */ -} - -static void -pcibr_intrd_start(pcibr_intr_t intr) -{ - ASSERT(intr->bi_mustruncpu >= 0); - setmustrun(intr->bi_mustruncpu); - - xthread_set_func(KT_TO_XT(curthreadp), (xt_func_t *)pcibr_intrd, (void *)intr); - atomicSetInt(&intr->bi_tinfo.thd_flags, THD_INIT); - ipsema(&intr->bi_tinfo.thd_isync); /* Comes out in pcibr_intrd */ - /* NOTREACHED */ -} - - -static void -pcibr_thread_setup(pcibr_intr_t intr, int bridge_levels, ilvl_t intr_swlevel) -{ - char thread_name[32]; - - sprintf(thread_name, "pcibr_intrd[0x%x]", bridge_levels); - thread_name[IT_NAMELEN-1] = '\0'; - - /* XXX need to adjust priority whenever an interrupt is connected */ - intr->bi_tinfo.thd_pri = intr_swlevel; - atomicSetInt(&intr->bi_tinfo.thd_flags, THD_ISTHREAD | THD_REG); - xthread_setup(thread_name, intr_swlevel, &intr->bi_tinfo, - (xt_func_t *)pcibr_intrd_start, - (void *)intr); -} -#endif /* KERNEL_THREADS */ - - - -/*ARGSUSED */ -pcibr_intr_t -pcibr_intr_alloc(devfs_handle_t pconn_vhdl, - device_desc_t dev_desc, - pciio_intr_line_t lines, - devfs_handle_t owner_dev) -{ - pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); - pciio_slot_t pciio_slot = pcibr_info->f_slot; - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - bridge_t *bridge = pcibr_soft->bs_base; - int is_threaded = 0; -#ifdef KERNEL_THREADS - cpuid_t mustruncpu = CPU_NONE; - cpuid_t old_intrcpu = CPU_NONE; -#endif - int thread_swlevel; - - xtalk_intr_t *xtalk_intr_p; - pcibr_intr_t *pcibr_intr_p; - pcibr_intr_list_t *intr_list_p; - - unsigned pcibr_int_bits; - unsigned pcibr_int_bit; - xtalk_intr_t xtalk_intr = (xtalk_intr_t)0; - hub_intr_t hub_intr; - pcibr_intr_t pcibr_intr; - pcibr_intr_list_t intr_entry; - pcibr_intr_list_t intr_list; - bridgereg_t int_dev; - -#if DEBUG && INTR_DEBUG - printk("%v: pcibr_intr_alloc\n" - "%v:%s%s%s%s%s\n", - owner_dev, pconn_vhdl, - !(lines & 15) ? " No INTs?" : "", - lines & 1 ? " INTA" : "", - lines & 2 ? " INTB" : "", - lines & 4 ? " INTC" : "", - lines & 8 ? " INTD" : ""); -#endif - - NEW(pcibr_intr); - if (!pcibr_intr) - return NULL; - - if (dev_desc) { - cpuid_t intr_target_from_desc(device_desc_t, int); - -#ifdef KERNEL_THREADS - is_threaded = !(device_desc_flags_get(dev_desc) & D_INTR_NOTHREAD); - if (is_threaded) { - /* - * If the device descriptor contains interrupt target info, - * save the CPU requested. This is the CPU the pcibr_intrd - * thread will be set to run on. - * - * We need to get the interrupt target info at this time, because - * the original intr_target value can be overwritten, as part of - * the xtalk_intr_alloc_nothd() call, with the actual interrupt CPU. - * This can be different than the requested CPU if the lower layers - * could not direct the hardware interrupt to the requested CPU. - * Regardless of which CPU processes the hardware interrupt, the - * ISR thread will still be setup to run on the CPU originally - * requested. 
- */ - mustruncpu = intr_target_from_desc(dev_desc, SUBNODE_ANY); - thread_swlevel = device_desc_intr_swlevel_get(dev_desc); - } -#endif /* KERNEL_THREADS */ - } else { - extern int default_intr_pri; - - is_threaded = 1; /* PCI interrupts are threaded, by default */ - thread_swlevel = default_intr_pri; - } - - pcibr_intr->bi_dev = pconn_vhdl; - pcibr_intr->bi_lines = lines; - pcibr_intr->bi_soft = pcibr_soft; - pcibr_intr->bi_ibits = 0; /* bits will be added below */ - pcibr_intr->bi_func = 0; /* unset until connect */ - pcibr_intr->bi_arg = 0; /* unset until connect */ - pcibr_intr->bi_flags = is_threaded ? 0 : PCIIO_INTR_NOTHREAD; - pcibr_intr->bi_mustruncpu = CPU_NONE; -#ifdef KERNEL_THREADS - pcibr_intr->bi_ibuf.ib_in = 0; - pcibr_intr->bi_ibuf.ib_out = 0; -#endif - mutex_spinlock_init(&pcibr_intr->bi_ibuf.ib_lock); - - pcibr_int_bits = pcibr_soft->bs_intr_bits((pciio_info_t)pcibr_info, lines); - - - /* - * For each PCI interrupt line requested, figure - * out which Bridge PCI Interrupt Line it maps - * to, and make sure there are xtalk resources - * allocated for it. - */ -#if DEBUG && INTR_DEBUG - printk("pcibr_int_bits: 0x%X\n", pcibr_int_bits); -#endif - for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit ++) { - if (pcibr_int_bits & (1 << pcibr_int_bit)) { - xtalk_intr_p = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; - - xtalk_intr = *xtalk_intr_p; - - if (xtalk_intr == NULL) { - /* - * This xtalk_intr_alloc is constrained for two reasons: - * 1) Normal interrupts and error interrupts need to be delivered - * through a single xtalk target widget so that there aren't any - * ordering problems with DMA, completion interrupts, and error - * interrupts. (Use of xconn_vhdl forces this.) - * - * 2) On IP35, addressing constraints on IP35 and Bridge force - * us to use a single PI number for all interrupts from a - * single Bridge. (IP35-specific code forces this, and we - * verify in pcibr_setwidint.) - */ - - /* - * All code dealing with threaded PCI interrupt handlers - * is located at the pcibr level. Because of this, - * we always want the lower layers (hub/heart_intr_alloc, - * intr_level_connect) to treat us as non-threaded so we - * don't set up a duplicate threaded environment. We make - * this happen by calling a special xtalk interface. - */ - xtalk_intr = xtalk_intr_alloc_nothd(xconn_vhdl, dev_desc, - owner_dev); -#if DEBUG && INTR_DEBUG - printk("%v: xtalk_intr=0x%X\n", xconn_vhdl, xtalk_intr); -#endif - - /* both an assert and a runtime check on this: - * we need to check in non-DEBUG kernels, and - * the ASSERT gets us more information when - * we use DEBUG kernels. - */ - ASSERT(xtalk_intr != NULL); - if (xtalk_intr == NULL) { - /* it is quite possible that our - * xtalk_intr_alloc failed because - * someone else got there first, - * and we can find their results - * in xtalk_intr_p. - */ - if (!*xtalk_intr_p) { -#ifdef SUPPORT_PRINTING_V_FORMAT - PRINT_ALERT( - "pcibr_intr_alloc %v: unable to get xtalk interrupt resources", - xconn_vhdl); -#endif - /* yes, we leak resources here. */ - return 0; - } - } else if (compare_and_swap_ptr((void **) xtalk_intr_p, NULL, xtalk_intr)) { - /* - * now tell the bridge which slot is - * using this interrupt line. 
- */ - int_dev = bridge->b_int_device; - int_dev &= ~BRIDGE_INT_DEV_MASK(pcibr_int_bit); - int_dev |= pciio_slot << BRIDGE_INT_DEV_SHFT(pcibr_int_bit); - bridge->b_int_device = int_dev; /* XXXMP */ - -#if DEBUG && INTR_DEBUG - printk("%v: bridge intr bit %d clears my wrb\n", - pconn_vhdl, pcibr_int_bit); -#endif - } else { - /* someone else got one allocated first; - * free the one we just created, and - * retrieve the one they allocated. - */ - xtalk_intr_free(xtalk_intr); - xtalk_intr = *xtalk_intr_p; -#if PARANOID - /* once xtalk_intr is set, we never clear it, - * so if the CAS fails above, this condition - * can "never happen" ... - */ - if (!xtalk_intr) { - PRINT_ALERT( - "pcibr_intr_alloc %v: unable to set xtalk interrupt resources", - xconn_vhdl); - /* yes, we leak resources here. */ - return 0; - } -#endif - } - } - -#ifdef KERNEL_THREADS - if (is_threaded) { - cpuid_t intrcpu = cpuvertex_to_cpuid(xtalk_intr_cpu_get(xtalk_intr)); - - /* - * It is possible that 2 (or more) interrupts originating on a - * single Bridge and used by a single device were assigned to - * different CPUs. If this occurs issue a warning message for - * this sub-optimal configuration. There are two ways this - * could happen: - * - * - There were insufficient xtalk interrupt resources to - * allow all interrupts to be assigned to the same CPU. - * This is an unlikely case, but could happen if someone - * tries to target a lot of interrupts to a single CPU. - * - * - If there is no device descriptor associated with this - * device, the xtalk/hub/heart layers will not know to - * assign the same CPU to any additional interrupts this - * driver has specified, and will perform the normal load - * leveling of interrupts across CPUs. - * (The lower layers store the CPU assigned to the first - * interrupt in the device desc, if present, and then when - * called again for additional interrupts for the same device, - * use this information to assign the same CPU to these - * interrupts.) - */ - if ((old_intrcpu != CPU_NONE) && (old_intrcpu != intrcpu)) { -#if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_WARNING("Conflict on where to schedule interrupts for %v\n", pconn_vhdl); -#else - PRINT_WARNING("Conflict on where to schedule interrupts for 0x%x\n", pconn_vhdl); -#endif - PRINT_WARNING("(on cpu %d or on cpu %d), cpu %d used\n", old_intrcpu, intrcpu, intrcpu); - } - if (old_intrcpu == CPU_NONE) - old_intrcpu = intrcpu; - /* - * For threaded drivers, set the interrupt thread to run wherever - * the interrupt is targeted, or where requested in the dev_desc. - */ - if (mustruncpu != CPU_NONE) { - pcibr_intr->bi_mustruncpu = mustruncpu; - if (mustruncpu != intrcpu) { - PRINT_WARNING("Request to target PCI interrupts to CPU %d could not\n" - " be satisfied, CPU %d used. 
However, interrupt thread\n" - " pcibr_intrd will run on CPU %d as requested.\n" - " %v (0x%x)\n", - mustruncpu, intrcpu, mustruncpu, owner_dev, - owner_dev); - } - } else { - pcibr_intr->bi_mustruncpu = intrcpu; - } - ASSERT(pcibr_intr->bi_mustruncpu >= 0); - - } -#endif /* KERNEL_THREADS */ - - pcibr_intr->bi_ibits |= 1 << pcibr_int_bit; - - NEW(intr_entry); - intr_entry->il_next = NULL; - intr_entry->il_intr = pcibr_intr; - intr_entry->il_wrbf = &(bridge->b_wr_req_buf[pciio_slot].reg); - intr_list_p = - &pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_list; -#if DEBUG && INTR_DEBUG -#if defined(SUPPORT_PRINTING_V_FORMAT) - printk("0x%x: Bridge bit %d wrap=0x%x\n", - pconn_vhdl, pcibr_int_bit, - pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap); -#else - printk("%v: Bridge bit %d wrap=0x%x\n", - pconn_vhdl, pcibr_int_bit, - pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap); -#endif -#endif - - if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { - /* we are the first interrupt on this bridge bit. - */ -#if DEBUG && INTR_DEBUG - printk("%v INT 0x%x (bridge bit %d) allocated [FIRST]\n", - pconn_vhdl, pcibr_int_bits, pcibr_int_bit); -#endif - continue; - } - intr_list = *intr_list_p; - pcibr_intr_p = &intr_list->il_intr; - if (compare_and_swap_ptr((void **) pcibr_intr_p, NULL, pcibr_intr)) { - /* first entry on list was erased, - * and we replaced it, so we - * don't need our intr_entry. - */ - DEL(intr_entry); -#if DEBUG && INTR_DEBUG - printk("%v INT 0x%x (bridge bit %d) replaces erased first\n", - pconn_vhdl, pcibr_int_bits, pcibr_int_bit); -#endif - continue; - } - intr_list_p = &intr_list->il_next; - if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { - /* we are the new second interrupt on this bit. - */ - pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared = 1; -#if DEBUG && INTR_DEBUG - printk("%v INT 0x%x (bridge bit %d) is new SECOND\n", - pconn_vhdl, pcibr_int_bits, pcibr_int_bit); -#endif - continue; - } - while (1) { - pcibr_intr_p = &intr_list->il_intr; - if (compare_and_swap_ptr((void **) pcibr_intr_p, NULL, pcibr_intr)) { - /* an entry on list was erased, - * and we replaced it, so we - * don't need our intr_entry. 
- */ - DEL(intr_entry); -#if DEBUG && INTR_DEBUG - printk("%v INT 0x%x (bridge bit %d) replaces erased Nth\n", - pconn_vhdl, pcibr_int_bits, pcibr_int_bit); -#endif - break; - } - intr_list_p = &intr_list->il_next; - if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { - /* entry appended to share list - */ -#if DEBUG && INTR_DEBUG - printk("%v INT 0x%x (bridge bit %d) is new Nth\n", - pconn_vhdl, pcibr_int_bits, pcibr_int_bit); -#endif - break; - } - /* step to next record in chain - */ - intr_list = *intr_list_p; - } - } - } - -#ifdef KERNEL_THREADS - if (is_threaded) { - /* Set pcibr_intr->bi_tinfo */ - pcibr_thread_setup(pcibr_intr, pcibr_int_bits, thread_swlevel); - ASSERT(!(pcibr_intr->bi_flags & PCIIO_INTR_CONNECTED)); - } -#endif /* KERNEL_THREADS */ - -#if DEBUG && INTR_DEBUG - printk("%v pcibr_intr_alloc complete\n", pconn_vhdl); -#endif - hub_intr = (hub_intr_t)xtalk_intr; - pcibr_intr->bi_irq = hub_intr->i_bit; - pcibr_intr->bi_cpu = hub_intr->i_cpuid; - return pcibr_intr; -} - -/*ARGSUSED */ -void -pcibr_intr_free(pcibr_intr_t pcibr_intr) -{ - unsigned pcibr_int_bits = pcibr_intr->bi_ibits; - pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; - unsigned pcibr_int_bit; - pcibr_intr_list_t intr_list; - int intr_shared; - xtalk_intr_t *xtalk_intrp; - - for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) { - if (pcibr_int_bits & (1 << pcibr_int_bit)) { - for (intr_list = - pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_list; - intr_list != NULL; - intr_list = intr_list->il_next) - if (compare_and_swap_ptr((void **) &intr_list->il_intr, - pcibr_intr, - NULL)) { -#if DEBUG && INTR_DEBUG - printk("%s: cleared a handler from bit %d\n", - pcibr_soft->bs_name, pcibr_int_bit); -#endif - } - /* If this interrupt line is not being shared between multiple - * devices release the xtalk interrupt resources. - */ - intr_shared = - pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared; - xtalk_intrp = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; - - if ((!intr_shared) && (*xtalk_intrp)) { - - bridge_t *bridge = pcibr_soft->bs_base; - bridgereg_t int_dev; - - xtalk_intr_free(*xtalk_intrp); - *xtalk_intrp = 0; - - /* Clear the PCI device interrupt to bridge interrupt pin - * mapping. 
- */ - int_dev = bridge->b_int_device; - int_dev &= ~BRIDGE_INT_DEV_MASK(pcibr_int_bit); - bridge->b_int_device = int_dev; - - } - } - } - DEL(pcibr_intr); -} - -LOCAL void -pcibr_setpciint(xtalk_intr_t xtalk_intr) -{ - iopaddr_t addr = xtalk_intr_addr_get(xtalk_intr); - xtalk_intr_vector_t vect = xtalk_intr_vector_get(xtalk_intr); - bridgereg_t *int_addr = (bridgereg_t *) - xtalk_intr_sfarg_get(xtalk_intr); - - *int_addr = ((BRIDGE_INT_ADDR_HOST & (addr >> 30)) | - (BRIDGE_INT_ADDR_FLD & vect)); -} - -/*ARGSUSED */ -int -pcibr_intr_connect(pcibr_intr_t pcibr_intr, - intr_func_t intr_func, - intr_arg_t intr_arg, - void *thread) -{ - pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; - bridge_t *bridge = pcibr_soft->bs_base; - unsigned pcibr_int_bits = pcibr_intr->bi_ibits; - unsigned pcibr_int_bit; - bridgereg_t b_int_enable; - unsigned long s; - - if (pcibr_intr == NULL) - return -1; - -#if DEBUG && INTR_DEBUG - printk("%v: pcibr_intr_connect 0x%X(0x%X)\n", - pcibr_intr->bi_dev, intr_func, intr_arg); -#endif - - pcibr_intr->bi_func = intr_func; - pcibr_intr->bi_arg = intr_arg; - *((volatile unsigned *)&pcibr_intr->bi_flags) |= PCIIO_INTR_CONNECTED; - - /* - * For each PCI interrupt line requested, figure - * out which Bridge PCI Interrupt Line it maps - * to, and make sure there are xtalk resources - * allocated for it. - */ - for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) - if (pcibr_int_bits & (1 << pcibr_int_bit)) { - pcibr_intr_wrap_t intr_wrap; - xtalk_intr_t xtalk_intr; - - xtalk_intr = pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; - - intr_wrap = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap; - /* - * If this interrupt line is being shared and the connect has - * already been done, no need to do it again. - */ - if (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected) - continue; - - - /* - * Use the pcibr wrapper function to handle all Bridge interrupts - * regardless of whether the interrupt line is shared or not. - */ - xtalk_intr_connect(xtalk_intr, - pcibr_intr_func, - (intr_arg_t) intr_wrap, - (xtalk_intr_setfunc_t) pcibr_setpciint, - (void *) &(bridge->b_int_addr[pcibr_int_bit].addr), - 0); - pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected = 1; - -#if DEBUG && INTR_DEBUG - printk("%v bridge bit %d wrapper connected\n", - pcibr_intr->bi_dev, pcibr_int_bit); -#endif - } - s = pcibr_lock(pcibr_soft); - b_int_enable = bridge->b_int_enable; - b_int_enable |= pcibr_int_bits; - bridge->b_int_enable = b_int_enable; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - pcibr_unlock(pcibr_soft, s); - - return 0; -} - -/*ARGSUSED */ -void -pcibr_intr_disconnect(pcibr_intr_t pcibr_intr) -{ - pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; - bridge_t *bridge = pcibr_soft->bs_base; - unsigned pcibr_int_bits = pcibr_intr->bi_ibits; - unsigned pcibr_int_bit; - pcibr_intr_wrap_t intr_wrap; - bridgereg_t b_int_enable; - unsigned long s; - - /* Stop calling the function. Now. - */ - *((volatile unsigned *)&pcibr_intr->bi_flags) &= ~PCIIO_INTR_CONNECTED; - pcibr_intr->bi_func = 0; - pcibr_intr->bi_arg = 0; - /* - * For each PCI interrupt line requested, figure - * out which Bridge PCI Interrupt Line it maps - * to, and disconnect the interrupt. - */ - - /* don't disable interrupts for lines that - * are shared between devices. 
- */ - for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) - if ((pcibr_int_bits & (1 << pcibr_int_bit)) && - (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared)) - pcibr_int_bits &= ~(1 << pcibr_int_bit); - if (!pcibr_int_bits) - return; - - s = pcibr_lock(pcibr_soft); - b_int_enable = bridge->b_int_enable; - b_int_enable &= ~pcibr_int_bits; - bridge->b_int_enable = b_int_enable; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - pcibr_unlock(pcibr_soft, s); - - for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) - if (pcibr_int_bits & (1 << pcibr_int_bit)) { - /* if the interrupt line is now shared, - * do not disconnect it. - */ - if (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared) - continue; - - xtalk_intr_disconnect(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr); - pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected = 0; - -#if DEBUG && INTR_DEBUG - printk("%s: xtalk disconnect done for Bridge bit %d\n", - pcibr_soft->bs_name, pcibr_int_bit); -#endif - - /* if we are sharing the interrupt line, - * connect us up; this closes the hole - * where the another pcibr_intr_alloc() - * was in progress as we disconnected. - */ - intr_wrap = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap; - if (!pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared) - continue; - - - xtalk_intr_connect(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr, - pcibr_intr_func, - (intr_arg_t) intr_wrap, - (xtalk_intr_setfunc_t) pcibr_setpciint, - (void *) &(bridge->b_int_addr[pcibr_int_bit].addr), - 0); - } -} - -/*ARGSUSED */ -devfs_handle_t -pcibr_intr_cpu_get(pcibr_intr_t pcibr_intr) -{ - pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; - unsigned pcibr_int_bits = pcibr_intr->bi_ibits; - unsigned pcibr_int_bit; - - for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) - if (pcibr_int_bits & (1 << pcibr_int_bit)) - return xtalk_intr_cpu_get(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr); - return 0; -} - -/* ===================================================================== - * INTERRUPT HANDLING - */ -LOCAL void -pcibr_clearwidint(bridge_t *bridge) -{ - bridge->b_wid_int_upper = 0; - bridge->b_wid_int_lower = 0; -} - - -LOCAL void -pcibr_setwidint(xtalk_intr_t intr) -{ - xwidgetnum_t targ = xtalk_intr_target_get(intr); - iopaddr_t addr = xtalk_intr_addr_get(intr); - xtalk_intr_vector_t vect = xtalk_intr_vector_get(intr); - widgetreg_t NEW_b_wid_int_upper, NEW_b_wid_int_lower; - widgetreg_t OLD_b_wid_int_upper, OLD_b_wid_int_lower; - - bridge_t *bridge = (bridge_t *)xtalk_intr_sfarg_get(intr); - - NEW_b_wid_int_upper = ( (0x000F0000 & (targ << 16)) | - XTALK_ADDR_TO_UPPER(addr)); - NEW_b_wid_int_lower = XTALK_ADDR_TO_LOWER(addr); - - OLD_b_wid_int_upper = bridge->b_wid_int_upper; - OLD_b_wid_int_lower = bridge->b_wid_int_lower; - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) - /* Verify that all interrupts from this Bridge are using a single PI */ - if ((OLD_b_wid_int_upper != 0) && (OLD_b_wid_int_lower != 0)) { - /* - * Once set, these registers shouldn't change; they should - * be set multiple times with the same values. - * - * If we're attempting to change these registers, it means - * that our heuristics for allocating interrupts in a way - * appropriate for IP35 have failed, and the admin needs to - * explicitly direct some interrupts (or we need to make the - * heuristics more clever). 
- * - * In practice, we hope this doesn't happen very often, if - * at all. - */ - if ((OLD_b_wid_int_upper != NEW_b_wid_int_upper) || - (OLD_b_wid_int_lower != NEW_b_wid_int_lower)) { - PRINT_WARNING("Interrupt allocation is too complex.\n"); - PRINT_WARNING("Use explicit administrative interrupt targetting.\n"); - PRINT_WARNING("bridge=0x%lx targ=0x%x\n", (unsigned long)bridge, targ); - PRINT_WARNING("NEW=0x%x/0x%x OLD=0x%x/0x%x\n", - NEW_b_wid_int_upper, NEW_b_wid_int_lower, - OLD_b_wid_int_upper, OLD_b_wid_int_lower); - PRINT_PANIC("PCI Bridge interrupt targetting error\n"); - } - } -#endif /* CONFIG_SGI_IP35 */ - - bridge->b_wid_int_upper = NEW_b_wid_int_upper; - bridge->b_wid_int_lower = NEW_b_wid_int_lower; - bridge->b_int_host_err = vect; -} - -/* - * pcibr_intr_preset: called during mlreset time - * if the platform specific code needs to route - * one of the Bridge's xtalk interrupts before the - * xtalk infrastructure is available. - */ -void -pcibr_xintr_preset(void *which_widget, - int which_widget_intr, - xwidgetnum_t targ, - iopaddr_t addr, - xtalk_intr_vector_t vect) -{ - bridge_t *bridge = (bridge_t *) which_widget; - - if (which_widget_intr == -1) { - /* bridge widget error interrupt */ - bridge->b_wid_int_upper = ( (0x000F0000 & (targ << 16)) | - XTALK_ADDR_TO_UPPER(addr)); - bridge->b_wid_int_lower = XTALK_ADDR_TO_LOWER(addr); - bridge->b_int_host_err = vect; - - /* turn on all interrupts except - * the PCI interrupt requests, - * at least at heart. - */ - bridge->b_int_enable |= ~BRIDGE_IMR_INT_MSK; - - } else { - /* routing a PCI device interrupt. - * targ and low 38 bits of addr must - * be the same as the already set - * value for the widget error interrupt. - */ - bridge->b_int_addr[which_widget_intr].addr = - ((BRIDGE_INT_ADDR_HOST & (addr >> 30)) | - (BRIDGE_INT_ADDR_FLD & vect)); - /* - * now bridge can let it through; - * NB: still should be blocked at - * xtalk provider end, until the service - * function is set. - */ - bridge->b_int_enable |= 1 << vect; - } - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ -} - - -/* - * pcibr_intr_func() - * - * This is the pcibr interrupt "wrapper" function that is called, - * in interrupt context, to initiate the interrupt handler(s) registered - * (via pcibr_intr_alloc/connect) for the occuring interrupt. Non-threaded - * handlers will be called directly, and threaded handlers will have their - * thread woken up. - */ -void -pcibr_intr_func(intr_arg_t arg) -{ - pcibr_intr_wrap_t wrap = (pcibr_intr_wrap_t) arg; - reg_p wrbf; - intr_func_t func; - pcibr_intr_t intr; - pcibr_intr_list_t list; - int clearit; -#ifdef KERNEL_THREADS - int do_nonthreaded = 0; - int do_threaded = 1; - int is_threaded = 0; -#else - int do_nonthreaded = 1; - int do_threaded = 0; - int is_threaded = 0; -#endif - int nonthreaded_count = 0; - int x = 0; - - /* - * If any handler is still running from a previous interrupt - * just return. If there's a need to call the handler(s) again, - * another interrupt will be generated either by the device or by - * pcibr_force_interrupt(). - */ - - if (wrap->iw_hdlrcnt) { - return; - } - - /* - * Call all interrupt handlers registered. - * First, the pcibr_intrd threads for any threaded handlers will be - * awoken, then any non-threaded handlers will be called sequentially. 
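Before the full loop below, the ordering described in the comment above can be reduced to a small model: walk the shared handler list twice, first waking any threaded handlers, then calling the non-threaded ones directly, and remember whether any connected handler was seen at all so the caller can mask a bit that nobody services. The types and names here are illustrative only; the real list nodes are pcibr_intr_list_t entries, and waking a thread involves the wrapper circular buffer and a semaphore, both omitted.

#include <stdio.h>

struct fake_handler {
        struct fake_handler *next;
        int connected;
        int threaded;
        void (*func)(void *arg);
        void *arg;
};

static void wake_thread(struct fake_handler *h)
{
        (void)h;
        printf("wake threaded handler's thread\n");     /* real code: queue wrapper, up() */
}

static void direct_handler(void *arg)
{
        printf("direct handler: %s\n", (const char *)arg);
}

/* Returns 1 if no connected handler was found; the caller would then
 * mask this interrupt bit so it does not spin forever. */
static int dispatch(struct fake_handler *list)
{
        struct fake_handler *h;
        int nothing_connected = 1;
        int phase;

        for (phase = 0; phase < 2; phase++) {   /* 0: wake threaded, 1: call direct */
                for (h = list; h != NULL; h = h->next) {
                        if (!h->connected)
                                continue;
                        nothing_connected = 0;
                        if (phase == 0 && h->threaded)
                                wake_thread(h);
                        else if (phase == 1 && !h->threaded)
                                h->func(h->arg);
                }
        }
        return nothing_connected;
}

int main(void)
{
        struct fake_handler direct   = { NULL, 1, 0, direct_handler, "shared line" };
        struct fake_handler threaded = { &direct, 1, 1, NULL, NULL };

        if (dispatch(&threaded))
                printf("no handlers connected: disable this interrupt bit\n");
        return 0;
}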
- */ - - clearit = 1; - while (do_threaded || do_nonthreaded) { - for (list = wrap->iw_list; list != NULL; list = list->il_next) { - if ((intr = list->il_intr) && - (intr->bi_flags & PCIIO_INTR_CONNECTED)) { - - ASSERT(intr->bi_func); - - /* - * This device may have initiated write - * requests since the bridge last saw - * an edge on this interrupt input; flushing - * the buffer prior to invoking the handler - * should help but may not be sufficient if we - * get more requests after the flush, followed - * by the card deciding it wants service, before - * the interrupt handler checks to see if things need - * to be done. - * - * There is a similar race condition if - * an interrupt handler loops around and - * notices further service is requred. - * Perhaps we need to have an explicit - * call that interrupt handlers need to - * do between noticing that DMA to memory - * has completed, but before observing the - * contents of memory? - */ - -#ifdef KERNEL_THREADS - is_threaded = !(intr->bi_flags & PCIIO_INTR_NOTHREAD); - if (!is_threaded) { - nonthreaded_count++; - } - - if ((do_threaded) && (is_threaded)) { - /* Only need to flush write buffers if sharing */ - - if ((wrap->iw_shared) && (wrbf = list->il_wrbf)) { - if (x = *wrbf) /* write request buffer flush */ -#ifdef SUPPORT_PRINTING_V_FORMAT - PRINT_ALERT("pcibr_intr_func %v: \n" - "write buffer flush failed, wrbf=0x%x\n", - list->il_intr->bi_dev, wrbf); -#else - PRINT_ALERT("pcibr_intr_func 0x%x: \n" - "write buffer flush failed, wrbf=0x%x\n", - list->il_intr->bi_dev, wrbf); -#endif - } - - /* - * Keep a running count of the number of interrupt - * handlers that have yet to complete. - */ - atomicAddInt(&(wrap->iw_hdlrcnt), 1); - - /* - * Prior to waking up pcibr_intrd, a pointer to the - * wrapper struct corresponding to the interrupt taken - * needs to be queued in the interrupt circular buffer. - * The pcibr_intrd thread needs the wrapper pointer in - * order to decrement the handler count (iw_hdlrcnt). - */ - pcibr_wrap_put(wrap, &intr->bi_ibuf); -#ifdef ITHREAD_LATENCY - xthread_set_istamp(intr->bi_tinfo.thd_latstats); -#endif /* ITHREAD_LATENCY */ - up(&intr->bi_tinfo.thd_isync); - } else -#endif /* KERNEL_THREADS */ - if ((do_nonthreaded) && (!is_threaded)) { - /* Non-threaded. - * Call the interrupt handler at interrupt level - */ - - /* Only need to flush write buffers if sharing */ - - if ((wrap->iw_shared) && (wrbf = list->il_wrbf)) { - if ((x = *wrbf)) /* write request buffer flush */ -#ifdef SUPPORT_PRINTING_V_FORMAT - PRINT_ALERT("pcibr_intr_func %v: \n" - "write buffer flush failed, wrbf=0x%x\n", - list->il_intr->bi_dev, wrbf); -#else - PRINT_ALERT("pcibr_intr_func %p: \n" - "write buffer flush failed, wrbf=0x%x\n", - list->il_intr->bi_dev, wrbf); -#endif - } - - func = intr->bi_func; - func(intr->bi_arg); - } - - clearit = 0; - } - } - - if (do_threaded) { - /* - * All threaded handlers have been called; - * next do non-threaded, if any. - */ - do_threaded = 0; - - if (nonthreaded_count) - do_nonthreaded = 1; - } else { - do_nonthreaded = 0; - /* - * If the non-threaded handler was the last to complete, - * (i.e., no threaded handlers still running) force an - * interrupt to avoid a potential deadlock situation. - */ - if (wrap->iw_hdlrcnt == 0) { - pcibr_force_interrupt(wrap); - } - } - } - - /* If there were no handlers, - * disable the interrupt and return. - * It will get enabled again after - * a handler is connected. - * If we don't do this, we would - * sit here and spin through the - * list forever. 
- */ - if (clearit) { - pcibr_soft_t pcibr_soft = wrap->iw_soft; - bridge_t *bridge = pcibr_soft->bs_base; - bridgereg_t b_int_enable; - bridgereg_t mask = 1 << wrap->iw_intr; - unsigned long s; - - s = pcibr_lock(pcibr_soft); - b_int_enable = bridge->b_int_enable; - b_int_enable &= ~mask; - bridge->b_int_enable = b_int_enable; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - pcibr_unlock(pcibr_soft, s); - return; - } -} - -/* ===================================================================== - * ERROR HANDLING - */ - -#ifdef DEBUG -#ifdef ERROR_DEBUG -#define BRIDGE_PIOERR_TIMEOUT 100 /* Timeout with ERROR_DEBUG defined */ -#else -#define BRIDGE_PIOERR_TIMEOUT 40 /* Timeout in debug mode */ -#endif -#else -#define BRIDGE_PIOERR_TIMEOUT 1 /* Timeout in non-debug mode */ -#endif - -LOCAL void -print_bridge_errcmd(uint32_t cmdword, char *errtype) -{ -#ifdef SUPPORT_PRINTING_R_FORMAT - PRINT_WARNING( - " Bridge %s error command word register %R", - errtype, cmdword, xio_cmd_bits); -#else - PRINT_WARNING( - " Bridge %s error command word register 0x%x", - errtype, cmdword); -#endif -} - -LOCAL char *pcibr_isr_errs[] = -{ - "", "", "", "", "", "", "", "", - "08: GIO non-contiguous byte enable in crosstalk packet", - "09: PCI to Crosstalk read request timeout", - "10: PCI retry operation count exhausted.", - "11: PCI bus device select timeout", - "12: PCI device reported parity error", - "13: PCI Address/Cmd parity error ", - "14: PCI Bridge detected parity error", - "15: PCI abort condition", - "16: SSRAM parity error", - "17: LLP Transmitter Retry count wrapped", - "18: LLP Transmitter side required Retry", - "19: LLP Receiver retry count wrapped", - "20: LLP Receiver check bit error", - "21: LLP Receiver sequence number error", - "22: Request packet overflow", - "23: Request operation not supported by bridge", - "24: Request packet has invalid address for bridge widget", - "25: Incoming request xtalk command word error bit set or invalid sideband", - "26: Incoming response xtalk command word error bit set or invalid sideband", - "27: Framing error, request cmd data size does not match actual", - "28: Framing error, response cmd data size does not match actual", - "29: Unexpected response arrived", - "30: Access to SSRAM beyond device limits", - "31: Multiple errors occurred", -}; - -/* - * PCI Bridge Error interrupt handling. - * This routine gets invoked from system interrupt dispatcher - * and is responsible for invoking appropriate error handler, - * depending on the type of error. - * This IS a duplicate of bridge_errintr defined specfic to IP30. - * There are some minor differences in terms of the return value and - * parameters passed. One of these two should be removed at some point - * of time. 
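Given the pcibr_isr_errs[] table above, reporting an error status word is just a walk over its set bits, which is what pcibr_error_dump() below does alongside dumping the error-address registers. A standalone rendition of only the table walk, with a shortened caller-supplied table and generic names rather than the driver's:

#include <stdio.h>

/* Walk the set bits of an error status word and print the matching message.
 * Bits 0-7 are the ordinary device interrupts, so start at bit 8. */
static void dump_errors(unsigned int int_status,
                        const char *const *messages, unsigned int nmessages)
{
        unsigned int i;

        for (i = 8; i < nmessages && i < 32; i++)
                if (int_status & (1u << i))
                        printf("%s\n", messages[i]);
}

int main(void)
{
        static const char *const msgs[10] = {
                [8] = "08: GIO non-contiguous byte enable in crosstalk packet",
                [9] = "09: PCI to Crosstalk read request timeout",
        };      /* the real table continues up to bit 31 */

        dump_errors((1u << 8) | (1u << 9), msgs, 10);
        return 0;
}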
- */ -/*ARGSUSED */ -void -pcibr_error_dump(pcibr_soft_t pcibr_soft) -{ - bridge_t *bridge = pcibr_soft->bs_base; - bridgereg_t int_status; - int i; - - int_status = (bridge->b_int_status & ~BRIDGE_ISR_INT_MSK); - - PRINT_ALERT( "%s PCI BRIDGE ERROR: int_status is 0x%X", - pcibr_soft->bs_name, int_status); - - for (i = PCIBR_ISR_ERR_START; i < PCIBR_ISR_MAX_ERRS; i++) { - if (int_status & (1 << i)) { - PRINT_WARNING( "%s", pcibr_isr_errs[i]); - } - } - - if (int_status & BRIDGE_ISR_XTALK_ERROR) { - print_bridge_errcmd(bridge->b_wid_err_cmdword, ""); - - PRINT_WARNING(" Bridge error address 0x%lx", - (((uint64_t) bridge->b_wid_err_upper << 32) | - bridge->b_wid_err_lower)); - - print_bridge_errcmd(bridge->b_wid_aux_err, "Aux"); - - if (int_status & (BRIDGE_ISR_BAD_XRESP_PKT | BRIDGE_ISR_RESP_XTLK_ERR)) { - PRINT_WARNING(" Bridge response buffer: dev-num %d buff-num %d addr 0x%lx\n", - ((bridge->b_wid_resp_upper >> 20) & 0x3), - ((bridge->b_wid_resp_upper >> 16) & 0xF), - (((uint64_t) (bridge->b_wid_resp_upper & 0xFFFF) << 32) | - bridge->b_wid_resp_lower)); - } - } - if (int_status & BRIDGE_ISR_SSRAM_PERR) - PRINT_WARNING(" Bridge SSRAM parity error register 0x%x", - bridge->b_ram_perr); - - if (int_status & BRIDGE_ISR_PCIBUS_ERROR) { - PRINT_WARNING(" PCI/GIO error upper address register 0x%x", - bridge->b_pci_err_upper); - - PRINT_WARNING(" PCI/GIO error lower address register 0x%x", - bridge->b_pci_err_lower); - } - if (int_status & BRIDGE_ISR_ERROR_FATAL) { - PRINT_PANIC("PCI Bridge Error interrupt killed the system"); - /*NOTREACHED */ - } else { - PRINT_ALERT( "Non-fatal Error in Bridge.."); - } -} - -#define PCIBR_ERRINTR_GROUP(error) \ - (( error & (BRIDGE_IRR_PCI_GRP|BRIDGE_IRR_GIO_GRP) - -uint32_t -pcibr_errintr_group(uint32_t error) -{ - uint32_t group = BRIDGE_IRR_MULTI_CLR; - - if (error & BRIDGE_IRR_PCI_GRP) - group |= BRIDGE_IRR_PCI_GRP_CLR; - if (error & BRIDGE_IRR_SSRAM_GRP) - group |= BRIDGE_IRR_SSRAM_GRP_CLR; - if (error & BRIDGE_IRR_LLP_GRP) - group |= BRIDGE_IRR_LLP_GRP_CLR; - if (error & BRIDGE_IRR_REQ_DSP_GRP) - group |= BRIDGE_IRR_REQ_DSP_GRP_CLR; - if (error & BRIDGE_IRR_RESP_BUF_GRP) - group |= BRIDGE_IRR_RESP_BUF_GRP_CLR; - if (error & BRIDGE_IRR_CRP_GRP) - group |= BRIDGE_IRR_CRP_GRP_CLR; - - return group; - -} - - -/* pcibr_pioerr_check(): - * Check to see if this pcibr has a PCI PIO - * TIMEOUT error; if so, clear it and bump - * the timeout-count on any piomaps that - * could cover the address. 
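The check described above has three steps: confirm and clear the latched PIO error, reassemble the faulting PCI address from the split upper/lower error-address registers, and bump a timeout counter on every PIO mapping whose range covers that address. The sketch below shows only the last two steps, with a hypothetical upper-bits mask and a flat array standing in for the per-slot, per-function piomap lists:

#include <stdint.h>
#include <stdio.h>

#define ERR_UPPER_ADDRMASK 0xFFFFu      /* assumed width of the upper address bits */

struct fake_piomap {
        uint64_t base;                  /* PCI base address covered by the mapping */
        uint64_t size;
        unsigned int timeout_count;     /* plays the role of bp_toc */
};

static uint64_t err_addr(uint32_t err_upper, uint32_t err_lower)
{
        return ((uint64_t)(err_upper & ERR_UPPER_ADDRMASK) << 32) | err_lower;
}

static void bump_covering_maps(struct fake_piomap *maps, int nmaps,
                               uint64_t pci_addr)
{
        int i;

        for (i = 0; i < nmaps; i++)
                if (pci_addr >= maps[i].base &&
                    pci_addr < maps[i].base + maps[i].size)
                        maps[i].timeout_count++;
}

int main(void)
{
        struct fake_piomap maps[2] = {
                { 0x40000000, 0x1000, 0 },
                { 0x40000800, 0x1000, 0 },      /* overlaps the first map */
        };
        uint64_t addr = err_addr(0x0, 0x40000900);

        bump_covering_maps(maps, 2, addr);
        printf("%u %u\n", maps[0].timeout_count, maps[1].timeout_count);
        return 0;
}

With the sample data both overlapping maps get their counter bumped, which is fine because the counter is only consulted as a flag, not as an exact tally.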
- */ -static void -pcibr_pioerr_check(pcibr_soft_t soft) -{ - bridge_t *bridge; - bridgereg_t b_int_status; - bridgereg_t b_pci_err_lower; - bridgereg_t b_pci_err_upper; - iopaddr_t pci_addr; - pciio_slot_t slot; - pcibr_piomap_t map; - iopaddr_t base; - size_t size; - unsigned win; - int func; - - bridge = soft->bs_base; - b_int_status = bridge->b_int_status; - if (b_int_status & BRIDGE_ISR_PCIBUS_PIOERR) { - b_pci_err_lower = bridge->b_pci_err_lower; - b_pci_err_upper = bridge->b_pci_err_upper; - b_int_status = bridge->b_int_status; - if (b_int_status & BRIDGE_ISR_PCIBUS_PIOERR) { - bridge->b_int_rst_stat = (BRIDGE_IRR_PCI_GRP_CLR| - BRIDGE_IRR_MULTI_CLR); - - pci_addr = b_pci_err_upper & BRIDGE_ERRUPPR_ADDRMASK; - pci_addr = (pci_addr << 32) | b_pci_err_lower; - - slot = 8; - while (slot-- > 0) { - int nfunc = soft->bs_slot[slot].bss_ninfo; - pcibr_info_h pcibr_infoh = soft->bs_slot[slot].bss_infos; - - for (func = 0; func < nfunc; func++) { - pcibr_info_t pcibr_info = pcibr_infoh[func]; - - if (!pcibr_info) - continue; - - for (map = pcibr_info->f_piomap; - map != NULL; map = map->bp_next) { - base = map->bp_pciaddr; - size = map->bp_mapsz; - win = map->bp_space - PCIIO_SPACE_WIN(0); - if (win < 6) - base += - soft->bs_slot[slot].bss_window[win].bssw_base; - else if (map->bp_space == PCIIO_SPACE_ROM) - base += pcibr_info->f_rbase; - if ((pci_addr >= base) && (pci_addr < (base + size))) - atomic_inc(map->bp_toc); - } - } - } - } - } -} - -/* - * PCI Bridge Error interrupt handler. - * This gets invoked, whenever a PCI bridge sends an error interrupt. - * Primarily this servers two purposes. - * - If an error can be handled (typically a PIO read/write - * error, we try to do it silently. - * - If an error cannot be handled, we die violently. - * Interrupt due to PIO errors: - * - Bridge sends an interrupt, whenever a PCI operation - * done by the bridge as the master fails. Operations could - * be either a PIO read or a PIO write. - * PIO Read operation also triggers a bus error, and it's - * We primarily ignore this interrupt in that context.. - * For PIO write errors, this is the only indication. - * and we have to handle with the info from here. - * - * So, there is no way to distinguish if an interrupt is - * due to read or write error!. - */ - - -LOCAL void -pcibr_error_intr_handler(intr_arg_t arg) -{ - pcibr_soft_t pcibr_soft; - bridge_t *bridge; - bridgereg_t int_status; - bridgereg_t err_status; - int i; - - /* REFERENCED */ - bridgereg_t disable_errintr_mask = 0; - -#if PCIBR_SOFT_LIST - /* IP27 seems to be handing us junk. - */ - { - pcibr_list_p entry; - - entry = pcibr_list; - while (1) { - if (entry == NULL) { - printk("pcibr_error_intr_handler:\n" - "\tparameter (0x%p) is not a pcibr_soft!", - arg); - PRINT_PANIC("Invalid parameter to pcibr_error_intr_handler"); - } - if ((intr_arg_t) entry->bl_soft == arg) - break; - entry = entry->bl_next; - } - } -#endif - pcibr_soft = (pcibr_soft_t) arg; - bridge = pcibr_soft->bs_base; - - /* - * pcibr_error_intr_handler gets invoked whenever bridge encounters - * an error situation, and the interrupt for that error is enabled. - * This routine decides if the error is fatal or not, and takes - * action accordingly. - * - * In one case there is a need for special action. - * In case of PIO read/write timeouts due to user level, we do - * get an error interrupt. In this case, way to handle would - * be to start a timeout. If the error was due to "read", bus - * error handling code takes care of it. 
If error is due to write, - * it's handled at timeout - */ - - /* int_status is which bits we have to clear; - * err_status is the bits we haven't handled yet. - */ - - int_status = bridge->b_int_status & ~BRIDGE_ISR_INT_MSK; - err_status = int_status & ~BRIDGE_ISR_MULTI_ERR; - - if (!(int_status & ~BRIDGE_ISR_INT_MSK)) { - /* - * No error bit set!!. - */ - return; - } - /* If we have a PCIBUS_PIOERR, - * hand it to the logger but otherwise - * ignore the event. - */ - if (int_status & BRIDGE_ISR_PCIBUS_PIOERR) { - pcibr_pioerr_check(pcibr_soft); - err_status &= ~BRIDGE_ISR_PCIBUS_PIOERR; - int_status &= ~BRIDGE_ISR_PCIBUS_PIOERR; - } - - - if (err_status) { - struct bs_errintr_stat_s *bs_estat = pcibr_soft->bs_errintr_stat; - - for (i = PCIBR_ISR_ERR_START; i < PCIBR_ISR_MAX_ERRS; i++, bs_estat++) { - if (err_status & (1 << i)) { - uint32_t errrate = 0; - uint32_t errcount = 0; - uint32_t errinterval = 0, current_tick = 0; - int panic_on_llp_tx_retry = 0; - int is_llp_tx_retry_intr = 0; - - bs_estat->bs_errcount_total++; - -#ifdef LATER - current_tick = lbolt; -#else - current_tick = 0; -#endif - errinterval = (current_tick - bs_estat->bs_lasterr_timestamp); - errcount = (bs_estat->bs_errcount_total - - bs_estat->bs_lasterr_snapshot); - - is_llp_tx_retry_intr = (BRIDGE_ISR_LLP_TX_RETRY == (1 << i)); - - /* On a non-zero error rate (which is equivalent to - * to 100 errors /sec at least) for the LLP transmitter - * retry interrupt we need to panic the system - * to prevent potential data corruption . - * NOTE : errcount is being compared to PCIBR_ERRTIME_THRESHOLD - * to make sure that we are not seing cases like x error - * interrupts per y ticks for very low x ,y (x > y ) which - * makes error rate be > 100 /sec. - */ - - /* Check for the divide by zero condition while - * calculating the error rates. - */ - - if (errinterval) { - errrate = errcount / errinterval; - /* If able to calculate error rate - * on a LLP transmitter retry interrupt check - * if the error rate is nonzero and we have seen - * a certain minimum number of errors. - */ - if (is_llp_tx_retry_intr && - errrate && - (errcount >= PCIBR_ERRTIME_THRESHOLD)) { - panic_on_llp_tx_retry = 1; - } - } else { - errrate = 0; - /* Since we are not able to calculate the - * error rate check if we exceeded a certain - * minimum number of errors for LLP transmitter - * retries. Note that this can only happen - * within the first tick after the last snapshot. - */ - if (is_llp_tx_retry_intr && - (errcount >= PCIBR_ERRINTR_DISABLE_LEVEL)) { - panic_on_llp_tx_retry = 1; - } - } - if (panic_on_llp_tx_retry) { - static uint32_t last_printed_rate; - - if (errrate > last_printed_rate) { - last_printed_rate = errrate; - /* Print the warning only if the error rate - * for the transmitter retry interrupt - * exceeded the previously printed rate. - */ - PRINT_WARNING( - "%s: %s, Excessive error interrupts : %d/tick\n", - pcibr_soft->bs_name, - pcibr_isr_errs[i], - errrate); - - } - /* - * Update snapshot, and time - */ - bs_estat->bs_lasterr_timestamp = current_tick; - bs_estat->bs_lasterr_snapshot = - bs_estat->bs_errcount_total; - - } - /* - * If the error rate is high enough, print the error rate. 
- */ - if (errinterval > PCIBR_ERRTIME_THRESHOLD) { - - if (errrate > PCIBR_ERRRATE_THRESHOLD) { - PRINT_NOTICE( "%s: %s, Error rate %d/tick", - pcibr_soft->bs_name, - pcibr_isr_errs[i], - errrate); - /* - * Update snapshot, and time - */ - bs_estat->bs_lasterr_timestamp = current_tick; - bs_estat->bs_lasterr_snapshot = - bs_estat->bs_errcount_total; - } - } - if (bs_estat->bs_errcount_total > PCIBR_ERRINTR_DISABLE_LEVEL) { - /* - * We have seen a fairly large number of errors of - * this type. Let's disable the interrupt. But flash - * a message about the interrupt being disabled. - */ - PRINT_NOTICE( - "%s Disabling error interrupt type %s. Error count %d", - pcibr_soft->bs_name, - pcibr_isr_errs[i], - bs_estat->bs_errcount_total); - disable_errintr_mask |= (1 << i); - } - } - } - } - - if (disable_errintr_mask) { - /* - * Disable some high frequency errors as they - * could eat up too much cpu time. - */ - bridge->b_int_enable &= ~disable_errintr_mask; - } - /* - * If we leave the PROM cacheable, T5 might - * try to do a cache line sized writeback to it, - * which will cause a BRIDGE_ISR_INVLD_ADDR. - */ - if ((err_status & BRIDGE_ISR_INVLD_ADDR) && - (0x00000000 == bridge->b_wid_err_upper) && - (0x00C00000 == (0xFFC00000 & bridge->b_wid_err_lower)) && - (0x00402000 == (0x00F07F00 & bridge->b_wid_err_cmdword))) { - err_status &= ~BRIDGE_ISR_INVLD_ADDR; - } -#if defined (PCIBR_LLP_CONTROL_WAR) - /* - * The bridge bug, where the llp_config or control registers - * need to be read back after being written, affects an MP - * system since there could be small windows between writing - * the register and reading it back on one cpu while another - * cpu is fielding an interrupt. If we run into this scenario, - * workaround the problem by ignoring the error. (bug 454474) - * pcibr_llp_control_war_cnt keeps an approximate number of - * times we saw this problem on a system. - */ - - if ((err_status & BRIDGE_ISR_INVLD_ADDR) && - ((((uint64_t) bridge->b_wid_err_upper << 32) | (bridge->b_wid_err_lower)) - == (BRIDGE_INT_RST_STAT & 0xff0))) { -#ifdef LATER - if (kdebug) - PRINT_NOTICE( "%s bridge: ignoring llp/control address interrupt", - pcibr_soft->bs_name); -#endif - pcibr_llp_control_war_cnt++; - err_status &= ~BRIDGE_ISR_INVLD_ADDR; - } -#endif /* PCIBR_LLP_CONTROL_WAR */ - -#ifdef DEBUG - if (err_status & BRIDGE_ISR_ERROR_DUMP) - pcibr_error_dump(pcibr_soft); -#else - if (err_status & BRIDGE_ISR_ERROR_FATAL) { - printk("BRIDGE ERR STATUS 0x%x\n", err_status); - pcibr_error_dump(pcibr_soft); - } -#endif - - /* - * We can't return without re-enabling the interrupt, since - * it would cause problems for devices like IOC3 (Lost - * interrupts ?.). So, just cleanup the interrupt, and - * use saved values later.. - */ - bridge->b_int_rst_stat = pcibr_errintr_group(int_status); -} - -/* - * pcibr_addr_toslot - * Given the 'pciaddr' find out which slot this address is - * allocated to, and return the slot number. - * While we have the info handy, construct the - * function number, space code and offset as well. - * - * NOTE: if this routine is called, we don't know whether - * the address is in CFG, MEM, or I/O space. We have to guess. - * This will be the case on PIO stores, where the only way - * we have of getting the address is to check the Bridge, which - * stores the PCI address but not the space and not the xtalk - * address (from which we could get it). 
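The routine below has to guess, as the comment says; its two mechanical pieces are easy to state in isolation: type-0 config space decodes as one fixed-size block per slot with 0x100 bytes per function, and an address that matches no known window falls back to the guess that the low 1G is MEM, the next 3G is I/O, and anything above 4G is MEM. The constants and names below are illustrative stand-ins (the per-slot block size in particular is an assumption), and the window and piospace searches are omitted:

#include <stdint.h>
#include <stdio.h>

#define CFG_SLOT_SIZE 0x1000u           /* assumed per-slot config block size */

enum fake_space { SPACE_CFG, SPACE_MEM, SPACE_IO };

/* Type-0 config space: one block per slot, 0x100 bytes per function. */
static void decode_cfg(uint64_t off, int *slot, int *func, uint64_t *reg)
{
        *slot = off / CFG_SLOT_SIZE;
        off  %= CFG_SLOT_SIZE;
        *func = off / 0x100;
        *reg  = off % 0x100;
}

/* Fallback when no window matched: guess the space from the raw PCI address. */
static enum fake_space guess_space(uint64_t pciaddr)
{
        if (pciaddr < (1ull << 30))
                return SPACE_MEM;       /* low 1G: memory space */
        if (pciaddr < (4ull << 30))
                return SPACE_IO;        /* next 3G: I/O space */
        return SPACE_MEM;               /* above 4G: can only be memory */
}

int main(void)
{
        int slot, func;
        uint64_t reg;

        decode_cfg(0x2140, &slot, &func, &reg);
        printf("slot=%d func=%d reg=0x%llx\n",
               slot, func, (unsigned long long)reg);    /* slot=2 func=1 reg=0x40 */
        printf("space=%d\n", guess_space(3ull << 30));  /* 3G lands in I/O space */
        return 0;
}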
- */ -LOCAL int -pcibr_addr_toslot(pcibr_soft_t pcibr_soft, - iopaddr_t pciaddr, - pciio_space_t *spacep, - iopaddr_t *offsetp, - pciio_function_t *funcp) -{ - int s, f=0, w; - iopaddr_t base; - size_t size; - pciio_piospace_t piosp; - - /* - * Check if the address is in config space - */ - - if ((pciaddr >= BRIDGE_CONFIG_BASE) && (pciaddr < BRIDGE_CONFIG_END)) { - - if (pciaddr >= BRIDGE_CONFIG1_BASE) - pciaddr -= BRIDGE_CONFIG1_BASE; - else - pciaddr -= BRIDGE_CONFIG_BASE; - - s = pciaddr / BRIDGE_CONFIG_SLOT_SIZE; - pciaddr %= BRIDGE_CONFIG_SLOT_SIZE; - - if (funcp) { - f = pciaddr / 0x100; - pciaddr %= 0x100; - } - if (spacep) - *spacep = PCIIO_SPACE_CFG; - if (offsetp) - *offsetp = pciaddr; - if (funcp) - *funcp = f; - - return s; - } - for (s = 0; s < 8; s++) { - int nf = pcibr_soft->bs_slot[s].bss_ninfo; - pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[s].bss_infos; - - for (f = 0; f < nf; f++) { - pcibr_info_t pcibr_info = pcibr_infoh[f]; - - if (!pcibr_info) - continue; - for (w = 0; w < 6; w++) { - if (pcibr_info->f_window[w].w_space - == PCIIO_SPACE_NONE) { - continue; - } - base = pcibr_info->f_window[w].w_base; - size = pcibr_info->f_window[w].w_size; - - if ((pciaddr >= base) && (pciaddr < (base + size))) { - if (spacep) - *spacep = PCIIO_SPACE_WIN(w); - if (offsetp) - *offsetp = pciaddr - base; - if (funcp) - *funcp = f; - return s; - } /* endif match */ - } /* next window */ - } /* next func */ - } /* next slot */ - - /* - * Check if the address was allocated as part of the - * pcibr_piospace_alloc calls. - */ - for (s = 0; s < 8; s++) { - int nf = pcibr_soft->bs_slot[s].bss_ninfo; - pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[s].bss_infos; - - for (f = 0; f < nf; f++) { - pcibr_info_t pcibr_info = pcibr_infoh[f]; - - if (!pcibr_info) - continue; - piosp = pcibr_info->f_piospace; - while (piosp) { - if ((piosp->start <= pciaddr) && - ((piosp->count + piosp->start) > pciaddr)) { - if (spacep) - *spacep = piosp->space; - if (offsetp) - *offsetp = pciaddr - piosp->start; - return s; - } /* endif match */ - piosp = piosp->next; - } /* next piosp */ - } /* next func */ - } /* next slot */ - - /* - * Some other random address on the PCI bus ... - * we have no way of knowing whether this was - * a MEM or I/O access; so, for now, we just - * assume that the low 1G is MEM, the next - * 3G is I/O, and anything above the 4G limit - * is obviously MEM. - */ - - if (spacep) - *spacep = ((pciaddr < (1ul << 30)) ? PCIIO_SPACE_MEM : - (pciaddr < (4ul << 30)) ? PCIIO_SPACE_IO : - PCIIO_SPACE_MEM); - if (offsetp) - *offsetp = pciaddr; - - return PCIIO_SLOT_NONE; - -} - -LOCAL void -pcibr_error_cleanup(pcibr_soft_t pcibr_soft, int error_code) -{ - bridge_t *bridge = pcibr_soft->bs_base; - - ASSERT(error_code & IOECODE_PIO); - error_code = error_code; - - bridge->b_int_rst_stat = - (BRIDGE_IRR_PCI_GRP_CLR | BRIDGE_IRR_MULTI_CLR); - (void) bridge->b_wid_tflush; /* flushbus */ -} - -/* - * pcibr_error_extract - * Given the 'pcibr vertex handle' find out which slot - * the bridge status error address (from pcibr_soft info - * hanging off the vertex) - * allocated to, and return the slot number. - * While we have the info handy, construct the - * space code and offset as well. - * - * NOTE: if this routine is called, we don't know whether - * the address is in CFG, MEM, or I/O space. We have to guess. 
- * This will be the case on PIO stores, where the only way - * we have of getting the address is to check the Bridge, which - * stores the PCI address but not the space and not the xtalk - * address (from which we could get it). - * - * XXX- this interface has no way to return the function - * number on a multifunction card, even though that data - * is available. - */ - -pciio_slot_t -pcibr_error_extract(devfs_handle_t pcibr_vhdl, - pciio_space_t *spacep, - iopaddr_t *offsetp) -{ - pcibr_soft_t pcibr_soft = 0; - iopaddr_t bserr_addr; - bridge_t *bridge; - pciio_slot_t slot = PCIIO_SLOT_NONE; - arbitrary_info_t rev; - - /* Do a sanity check as to whether we really got a - * bridge vertex handle. - */ - if (hwgraph_info_get_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, &rev) != - GRAPH_SUCCESS) - return(slot); - - pcibr_soft = pcibr_soft_get(pcibr_vhdl); - if (pcibr_soft) { - bridge = pcibr_soft->bs_base; - bserr_addr = - bridge->b_pci_err_lower | - ((uint64_t) (bridge->b_pci_err_upper & - BRIDGE_ERRUPPR_ADDRMASK) << 32); - - slot = pcibr_addr_toslot(pcibr_soft, bserr_addr, - spacep, offsetp, NULL); - } - return slot; -} - -/*ARGSUSED */ -void -pcibr_device_disable(pcibr_soft_t pcibr_soft, int devnum) -{ - /* - * XXX - * Device failed to handle error. Take steps to - * disable this device ? HOW TO DO IT ? - * - * If there are any Read response buffers associated - * with this device, it's time to get them back!! - * - * We can disassociate any interrupt level associated - * with this device, and disable that interrupt level - * - * For now it's just a place holder - */ -} - -/* - * pcibr_pioerror - * Handle PIO error that happened at the bridge pointed by pcibr_soft. - * - * Queries the Bus interface attached to see if the device driver - * mapping the device-number that caused error can handle the - * situation. If so, it will clean up any error, and return - * indicating the error was handled. If the device driver is unable - * to handle the error, it expects the bus-interface to disable that - * device, and takes any steps needed here to take away any resources - * associated with this device. - */ - -#define BEM_ADD_STR(s) printk("%s", (s)) -#ifdef SUPPORT_SGI_CMN_ERR_STUFF -#define BEM_ADD_VAR(v) printk("\t%20s: 0x%x\n", #v, (v)) -#define BEM_ADD_REG(r) printk("\t%20s: %R\n", #r, (r), r ## _desc) - -#define BEM_ADD_NSPC(n,s) printk("\t%20s: %R\n", n, s, space_desc) -#else -#define BEM_ADD_VAR(v) -#define BEM_ADD_REG(r) -#define BEM_ADD_NSPC(n,s) -#endif -#define BEM_ADD_SPC(s) BEM_ADD_NSPC(#s, s) - -/* BEM_ADD_IOE doesn't dump the whole ioerror, it just - * decodes the PCI specific portions -- we count on our - * callers to dump the raw IOE data. 
- */ -#ifdef LATER -#define BEM_ADD_IOE(ioe) \ - do { \ - if (IOERROR_FIELDVALID(ioe, busspace)) { \ - unsigned spc; \ - unsigned win; \ - \ - spc = IOERROR_GETVALUE(ioe, busspace); \ - win = spc - PCIIO_SPACE_WIN(0); \ - \ - switch (spc) { \ - case PCIIO_SPACE_CFG: \ - printk("\tPCI Slot %d Func %d CFG space Offset 0x%x\n", \ - pciio_widgetdev_slot_get(IOERROR_GETVALUE(ioe, widgetdev)), \ - pciio_widgetdev_func_get(IOERROR_GETVALUE(ioe, widgetdev)), \ - IOERROR_GETVALUE(ioe, busaddr)); \ - break; \ - case PCIIO_SPACE_IO: \ - printk("\tPCI I/O space Offset 0x%x\n", \ - IOERROR_GETVALUE(ioe, busaddr)); \ - break; \ - case PCIIO_SPACE_MEM: \ - case PCIIO_SPACE_MEM32: \ - case PCIIO_SPACE_MEM64: \ - printk("\tPCI MEM space Offset 0x%x\n", \ - IOERROR_GETVALUE(ioe, busaddr)); \ - break; \ - default: \ - if (win < 6) { \ - printk("\tPCI Slot %d Func %d Window %d Offset 0x%x\n",\ - pciio_widgetdev_slot_get(IOERROR_GETVALUE(ioe, widgetdev)), \ - pciio_widgetdev_func_get(IOERROR_GETVALUE(ioe, widgetdev)), \ - win, \ - IOERROR_GETVALUE(ioe, busaddr)); \ - } \ - break; \ - } \ - } \ - } while (0) -#else -#define BEM_ADD_IOE(ioe) -#endif - -/*ARGSUSED */ -LOCAL int -pcibr_pioerror( - pcibr_soft_t pcibr_soft, - int error_code, - ioerror_mode_t mode, - ioerror_t *ioe) -{ - int retval = IOERROR_HANDLED; - - devfs_handle_t pcibr_vhdl = pcibr_soft->bs_vhdl; - bridge_t *bridge = pcibr_soft->bs_base; - - bridgereg_t bridge_int_status; - bridgereg_t bridge_pci_err_lower; - bridgereg_t bridge_pci_err_upper; - bridgereg_t bridge_pci_err_addr; - - iopaddr_t bad_xaddr; - - pciio_space_t raw_space; /* raw PCI space */ - iopaddr_t raw_paddr; /* raw PCI address */ - - pciio_space_t space; /* final PCI space */ - pciio_slot_t slot; /* final PCI slot, if appropriate */ - pciio_function_t func; /* final PCI func, if appropriate */ - iopaddr_t offset; /* final PCI offset */ - - int cs, cw, cf; - pciio_space_t wx; - iopaddr_t wb; - size_t ws; - iopaddr_t wl; - - - /* - * We expect to have an "xtalkaddr" coming in, - * and need to construct the slot/space/offset. - */ - -#ifdef LATER - bad_xaddr = IOERROR_GETVALUE(ioe, xtalkaddr); -#else - bad_xaddr = -1; -#endif - - slot = PCIIO_SLOT_NONE; - func = PCIIO_FUNC_NONE; - raw_space = PCIIO_SPACE_NONE; - raw_paddr = 0; - - if ((bad_xaddr >= BRIDGE_TYPE0_CFG_DEV0) && - (bad_xaddr < BRIDGE_TYPE1_CFG)) { - raw_paddr = bad_xaddr - BRIDGE_TYPE0_CFG_DEV0; - slot = raw_paddr / BRIDGE_TYPE0_CFG_SLOT_OFF; - raw_paddr = raw_paddr % BRIDGE_TYPE0_CFG_SLOT_OFF; - raw_space = PCIIO_SPACE_CFG; - } - if ((bad_xaddr >= BRIDGE_TYPE1_CFG) && - (bad_xaddr < (BRIDGE_TYPE1_CFG + 0x1000))) { - /* Type 1 config space: - * slot and function numbers not known. - * Perhaps we can read them back? - */ - raw_paddr = bad_xaddr - BRIDGE_TYPE1_CFG; - raw_space = PCIIO_SPACE_CFG; - } - if ((bad_xaddr >= BRIDGE_DEVIO0) && - (bad_xaddr < BRIDGE_DEVIO(BRIDGE_DEV_CNT))) { - int x; - - raw_paddr = bad_xaddr - BRIDGE_DEVIO0; - x = raw_paddr / BRIDGE_DEVIO_OFF; - raw_paddr %= BRIDGE_DEVIO_OFF; - /* first two devio windows are double-sized */ - if ((x == 1) || (x == 3)) - raw_paddr += BRIDGE_DEVIO_OFF; - if (x > 0) - x--; - if (x > 1) - x--; - /* x is which devio reg; no guarantee - * PCI slot x will be responding. - * still need to figure out who decodes - * space/offset on the bus. 
- */ - raw_space = pcibr_soft->bs_slot[x].bss_devio.bssd_space; - if (raw_space == PCIIO_SPACE_NONE) { - /* Someone got an error because they - * accessed the PCI bus via a DevIO(x) - * window that pcibr has not yet assigned - * to any specific PCI address. It is - * quite possible that the Device(x) - * register has been changed since they - * made their access, but we will give it - * our best decode shot. - */ - raw_space = pcibr_soft->bs_slot[x].bss_device - & BRIDGE_DEV_DEV_IO_MEM - ? PCIIO_SPACE_MEM - : PCIIO_SPACE_IO; - raw_paddr += - (pcibr_soft->bs_slot[x].bss_device & - BRIDGE_DEV_OFF_MASK) << - BRIDGE_DEV_OFF_ADDR_SHFT; - } else - raw_paddr += pcibr_soft->bs_slot[x].bss_devio.bssd_base; - } - if ((bad_xaddr >= BRIDGE_PCI_MEM32_BASE) && - (bad_xaddr <= BRIDGE_PCI_MEM32_LIMIT)) { - raw_space = PCIIO_SPACE_MEM32; - raw_paddr = bad_xaddr - BRIDGE_PCI_MEM32_BASE; - } - if ((bad_xaddr >= BRIDGE_PCI_MEM64_BASE) && - (bad_xaddr <= BRIDGE_PCI_MEM64_LIMIT)) { - raw_space = PCIIO_SPACE_MEM64; - raw_paddr = bad_xaddr - BRIDGE_PCI_MEM64_BASE; - } - if ((bad_xaddr >= BRIDGE_PCI_IO_BASE) && - (bad_xaddr <= BRIDGE_PCI_IO_LIMIT)) { - raw_space = PCIIO_SPACE_IO; - raw_paddr = bad_xaddr - BRIDGE_PCI_IO_BASE; - } - space = raw_space; - offset = raw_paddr; - - if ((slot == PCIIO_SLOT_NONE) && (space != PCIIO_SPACE_NONE)) { - /* we've got a space/offset but not which - * PCI slot decodes it. Check through our - * notions of which devices decode where. - * - * Yes, this "duplicates" some logic in - * pcibr_addr_toslot; the difference is, - * this code knows which space we are in, - * and can really really tell what is - * going on (no guessing). - */ - - for (cs = 0; (cs < 8) && (slot == PCIIO_SLOT_NONE); cs++) { - int nf = pcibr_soft->bs_slot[cs].bss_ninfo; - pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[cs].bss_infos; - - for (cf = 0; (cf < nf) && (slot == PCIIO_SLOT_NONE); cf++) { - pcibr_info_t pcibr_info = pcibr_infoh[cf]; - - if (!pcibr_info) - continue; - for (cw = 0; (cw < 6) && (slot == PCIIO_SLOT_NONE); ++cw) { - if (((wx = pcibr_info->f_window[cw].w_space) != PCIIO_SPACE_NONE) && - ((wb = pcibr_info->f_window[cw].w_base) != 0) && - ((ws = pcibr_info->f_window[cw].w_size) != 0) && - ((wl = wb + ws) > wb) && - ((wb <= offset) && (wl > offset))) { - /* MEM, MEM32 and MEM64 need to - * compare as equal ... - */ - if ((wx == space) || - (((wx == PCIIO_SPACE_MEM) || - (wx == PCIIO_SPACE_MEM32) || - (wx == PCIIO_SPACE_MEM64)) && - ((space == PCIIO_SPACE_MEM) || - (space == PCIIO_SPACE_MEM32) || - (space == PCIIO_SPACE_MEM64)))) { - slot = cs; - func = cf; - space = PCIIO_SPACE_WIN(cw); - offset -= wb; - } /* endif window space match */ - } /* endif window valid and addr match */ - } /* next window unless slot set */ - } /* next func unless slot set */ - } /* next slot unless slot set */ - /* XXX- if slot is still -1, no PCI devices are - * decoding here using their standard PCI BASE - * registers. This would be a really good place - * to cross-coordinate with the pciio PCI - * address space allocation routines, to find - * out if this address is "allocated" by any of - * our subsidiary devices. - */ - } - /* Scan all piomap records on this PCI bus to update - * the TimeOut Counters on all matching maps. If we - * don't already know the slot number, take it from - * the first matching piomap. Note that we have to - * compare maps against raw_space and raw_paddr - * since space and offset could already be - * window-relative. 
- * - * There is a chance that one CPU could update - * through this path, and another CPU could also - * update due to an interrupt. Closing this hole - * would only result in the possibility of some - * errors never getting logged at all, and since the - * use for bp_toc is as a logical test rather than a - * strict count, the excess counts are not a - * problem. - */ - for (cs = 0; cs < 8; ++cs) { - int nf = pcibr_soft->bs_slot[cs].bss_ninfo; - pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[cs].bss_infos; - - for (cf = 0; cf < nf; cf++) { - pcibr_info_t pcibr_info = pcibr_infoh[cf]; - pcibr_piomap_t map; - - if (!pcibr_info) - continue; - - for (map = pcibr_info->f_piomap; - map != NULL; map = map->bp_next) { - wx = map->bp_space; - wb = map->bp_pciaddr; - ws = map->bp_mapsz; - cw = wx - PCIIO_SPACE_WIN(0); - if (cw < 6) { - wb += pcibr_soft->bs_slot[cs].bss_window[cw].bssw_base; - wx = pcibr_soft->bs_slot[cs].bss_window[cw].bssw_space; - } - if (wx == PCIIO_SPACE_ROM) { - wb += pcibr_info->f_rbase; - wx = PCIIO_SPACE_MEM; - } - if ((wx == PCIIO_SPACE_MEM32) || - (wx == PCIIO_SPACE_MEM64)) - wx = PCIIO_SPACE_MEM; - wl = wb + ws; - if ((wx == raw_space) && (raw_paddr >= wb) && (raw_paddr < wl)) { - atomic_inc(map->bp_toc); - if (slot == PCIIO_SLOT_NONE) { - slot = cs; - space = map->bp_space; - if (cw < 6) - offset -= pcibr_soft->bs_slot[cs].bss_window[cw].bssw_base; - } - } - } - } - } - - if (space != PCIIO_SPACE_NONE) { - if (slot != PCIIO_SLOT_NONE) { -#ifdef LATER - if (func != PCIIO_FUNC_NONE) - IOERROR_SETVALUE(ioe, widgetdev, - pciio_widgetdev_create(slot,func)); - else - IOERROR_SETVALUE(ioe, widgetdev, - pciio_widgetdev_create(slot,0)); -#else - if (func != PCIIO_FUNC_NONE) { - IOERROR_SETVALUE(ioe, widgetdev, - pciio_widgetdev_create(slot,func)); - } else { - IOERROR_SETVALUE(ioe, widgetdev, - pciio_widgetdev_create(slot,0)); - } -#endif - } - - IOERROR_SETVALUE(ioe, busspace, space); - IOERROR_SETVALUE(ioe, busaddr, offset); - } - if (mode == MODE_DEVPROBE) { - /* - * During probing, we don't really care what the - * error is. Clean up the error in Bridge, notify - * subsidiary devices, and return success. - */ - pcibr_error_cleanup(pcibr_soft, error_code); - - /* if appropriate, give the error handler for this slot - * a shot at this probe access as well. - */ - return (slot == PCIIO_SLOT_NONE) ? IOERROR_HANDLED : - pciio_error_handler(pcibr_vhdl, error_code, mode, ioe); - } - /* - * If we don't know what "PCI SPACE" the access - * was targeting, we may have problems at the - * Bridge itself. Don't touch any bridge registers, - * and do complain loudly. - */ - - if (space == PCIIO_SPACE_NONE) { - printk("XIO Bus Error at %s\n" - "\taccess to XIO bus offset 0x%lx\n" - "\tdoes not correspond to any PCI address\n", - pcibr_soft->bs_name, bad_xaddr); - - /* caller will dump contents of ioe struct */ - return IOERROR_XTALKLEVEL; - } - /* - * Read the PCI Bridge error log registers. - */ - bridge_int_status = bridge->b_int_status; - bridge_pci_err_upper = bridge->b_pci_err_upper; - bridge_pci_err_lower = bridge->b_pci_err_lower; - - bridge_pci_err_addr = - bridge_pci_err_lower - | (((iopaddr_t) bridge_pci_err_upper - & BRIDGE_ERRUPPR_ADDRMASK) << 32); - - /* - * Actual PCI Error handling situation. - * Typically happens when a user level process accesses - * PCI space, and it causes some error. - * - * Due to PCI Bridge implementation, we get two indication - * for a read error: an interrupt and a Bus error. - * We like to handle read error in the bus error context. 
- * But the interrupt comes and goes before bus error - * could make much progress. (NOTE: interrupd does - * come in _after_ bus error processing starts. But it's - * completed by the time bus error code reaches PCI PIO - * error handling. - * Similarly write error results in just an interrupt, - * and error handling has to be done at interrupt level. - * There is no way to distinguish at interrupt time, if an - * error interrupt is due to read/write error.. - */ - - /* We know the xtalk addr, the raw PCI bus space, - * the raw PCI bus address, the decoded PCI bus - * space, the offset within that space, and the - * decoded PCI slot (which may be "PCIIO_SLOT_NONE" if no slot - * is known to be involved). - */ - - /* - * Hand the error off to the handler registered - * for the slot that should have decoded the error, - * or to generic PCI handling (if pciio decides that - * such is appropriate). - */ - retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe); - - if (retval != IOERROR_HANDLED) { - - /* Generate a generic message for IOERROR_UNHANDLED - * since the subsidiary handlers were silent, and - * did no recovery. - */ - if (retval == IOERROR_UNHANDLED) { - retval = IOERROR_PANIC; - - /* we may or may not want to print some of this, - * depending on debug level and which error code. - */ - - PRINT_ALERT( - "PIO Error on PCI Bus %s", - pcibr_soft->bs_name); - /* this decodes part of the ioe; our caller - * will dump the raw details in DEBUG and - * kdebug kernels. - */ - BEM_ADD_IOE(ioe); - } -#if defined(FORCE_ERRORS) - if (0) { -#elif !DEBUG - if (kdebug) { -#endif - /* - * dump raw data from bridge - */ - - BEM_ADD_STR("DEBUG DATA -- raw info from Bridge ASIC:\n"); - BEM_ADD_REG(bridge_int_status); - BEM_ADD_VAR(bridge_pci_err_upper); - BEM_ADD_VAR(bridge_pci_err_lower); - BEM_ADD_VAR(bridge_pci_err_addr); - BEM_ADD_SPC(raw_space); - BEM_ADD_VAR(raw_paddr); - if (IOERROR_FIELDVALID(ioe, widgetdev)) { - -#ifdef LATER - slot = pciio_widgetdev_slot_get(IOERROR_GETVALUE(ioe, - widgetdev)); - func = pciio_widgetdev_func_get(IOERROR_GETVALUE(ioe, - widgetdev)); -#else - slot = -1; - func = -1; -#endif - if (slot < 8) { -#ifdef SUPPORT_SGI_CMN_ERR_STUFF - bridgereg_t device = bridge->b_device[slot].reg; -#endif - - BEM_ADD_VAR(slot); - BEM_ADD_VAR(func); - BEM_ADD_REG(device); - } - } -#if !DEBUG || defined(FORCE_ERRORS) - } -#endif - - /* - * Since error could not be handled at lower level, - * error data logged has not been cleared. - * Clean up errors, and - * re-enable bridge to interrupt on error conditions. - * NOTE: Wheather we get the interrupt on PCI_ABORT or not is - * dependent on INT_ENABLE register. This write just makes sure - * that if the interrupt was enabled, we do get the interrupt. - * - * CAUTION: Resetting bit BRIDGE_IRR_PCI_GRP_CLR, acknowledges - * a group of interrupts. If while handling this error, - * some other error has occurred, that would be - * implicitly cleared by this write. - * Need a way to ensure we don't inadvertently clear some - * other errors. - */ -#ifdef LATER - if (IOERROR_FIELDVALID(ioe, widgetdev)) - pcibr_device_disable(pcibr_soft, - pciio_widgetdev_slot_get( - IOERROR_GETVALUE(ioe, widgetdev))); -#endif - - if (mode == MODE_DEVUSERERROR) - pcibr_error_cleanup(pcibr_soft, error_code); - } - return retval; -} - -/* - * bridge_dmaerror - * Some error was identified in a DMA transaction. - * This routine will identify the that caused the error, - * and try to invoke the appropriate bus service to handle this. 
- */ - -#define BRIDGE_DMA_READ_ERROR (BRIDGE_ISR_RESP_XTLK_ERR|BRIDGE_ISR_XREAD_REQ_TIMEOUT) - -int -pcibr_dmard_error( - pcibr_soft_t pcibr_soft, - int error_code, - ioerror_mode_t mode, - ioerror_t *ioe) -{ - devfs_handle_t pcibr_vhdl = pcibr_soft->bs_vhdl; - bridge_t *bridge = pcibr_soft->bs_base; - bridgereg_t bus_lowaddr, bus_uppraddr; - int retval = 0; - int bufnum; - - /* - * In case of DMA errors, bridge should have logged the - * address that caused the error. - * Look up the address, in the bridge error registers, and - * take appropriate action - */ -#ifdef LATER - ASSERT(IOERROR_GETVALUE(ioe, widgetnum) == pcibr_soft->bs_xid); - ASSERT(bridge); -#endif - - /* - * read error log registers - */ - bus_lowaddr = bridge->b_wid_resp_lower; - bus_uppraddr = bridge->b_wid_resp_upper; - - bufnum = BRIDGE_RESP_ERRUPPR_BUFNUM(bus_uppraddr); - IOERROR_SETVALUE(ioe, widgetdev, - pciio_widgetdev_create( - BRIDGE_RESP_ERRUPPR_DEVICE(bus_uppraddr), - 0)); - IOERROR_SETVALUE(ioe, busaddr, - (bus_lowaddr | - ((iopaddr_t) - (bus_uppraddr & - BRIDGE_ERRUPPR_ADDRMASK) << 32))); - - /* - * need to ensure that the xtalk address in ioe - * maps to PCI error address read from bridge. - * How to convert PCI address back to Xtalk address ? - * (better idea: convert XTalk address to PCI address - * and then do the compare!) - */ - - retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe); - if (retval != IOERROR_HANDLED) -#ifdef LATER - pcibr_device_disable(pcibr_soft, - pciio_widgetdev_slot_get( - IOERROR_GETVALUE(ioe,widgetdev))); -#else - pcibr_device_disable(pcibr_soft, - pciio_widgetdev_slot_get(-1)); -#endif - - /* - * Re-enable bridge to interrupt on BRIDGE_IRR_RESP_BUF_GRP_CLR - * NOTE: Wheather we get the interrupt on BRIDGE_IRR_RESP_BUF_GRP_CLR or - * not is dependent on INT_ENABLE register. This write just makes sure - * that if the interrupt was enabled, we do get the interrupt. - */ - bridge->b_int_rst_stat = BRIDGE_IRR_RESP_BUF_GRP_CLR; - - /* - * Also, release the "bufnum" back to buffer pool that could be re-used. - * This is done by "disabling" the buffer for a moment, then restoring - * the original assignment. - */ - - { - reg_p regp; - bridgereg_t regv; - bridgereg_t mask; - - regp = (bufnum & 1) - ? &bridge->b_odd_resp - : &bridge->b_even_resp; - - mask = 0xF << ((bufnum >> 1) * 4); - - regv = *regp; - *regp = regv & ~mask; - *regp = regv; - } - - return retval; -} - -/* - * pcibr_dmawr_error: - * Handle a dma write error caused by a device attached to this bridge. - * - * ioe has the widgetnum, widgetdev, and memaddr fields updated - * But we don't know the PCI address that corresponds to "memaddr" - * nor do we know which device driver is generating this address. - * - * There is no easy way to find out the PCI address(es) that map - * to a specific system memory address. Bus handling code is also - * of not much help, since they don't keep track of the DMA mapping - * that have been handed out. - * So it's a dead-end at this time. - * - * If translation is available, we could invoke the error handling - * interface of the device driver. 
- */ -/*ARGSUSED */ -int -pcibr_dmawr_error( - pcibr_soft_t pcibr_soft, - int error_code, - ioerror_mode_t mode, - ioerror_t *ioe) -{ - devfs_handle_t pcibr_vhdl = pcibr_soft->bs_vhdl; - int retval; - - retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe); - -#ifdef LATER - if (retval != IOERROR_HANDLED) { - pcibr_device_disable(pcibr_soft, - pciio_widgetdev_slot_get( - IOERROR_GETVALUE(ioe, widgetdev))); - - } -#endif - return retval; -} - -/* - * Bridge error handler. - * Interface to handle all errors that involve bridge in some way. - * - * This normally gets called from xtalk error handler. - * ioe has different set of fields set depending on the error that - * was encountered. So, we have a bit field indicating which of the - * fields are valid. - * - * NOTE: This routine could be operating in interrupt context. So, - * don't try to sleep here (till interrupt threads work!!) - */ -LOCAL int -pcibr_error_handler( - error_handler_arg_t einfo, - int error_code, - ioerror_mode_t mode, - ioerror_t *ioe) -{ - pcibr_soft_t pcibr_soft; - int retval = IOERROR_BADERRORCODE; - - pcibr_soft = (pcibr_soft_t) einfo; - - /* If we are in the action handling phase clean out the error state - * on the xswitch. - */ -#if defined(CONFIG_SGI_IO_ERROR_HANDLING) - if (e_state == ERROR_STATE_ACTION) - (void)error_state_set(xconn_vhdl, ERROR_STATE_NONE); -#endif - -#if DEBUG && ERROR_DEBUG - printk("%s: pcibr_error_handler\n", pcibr_soft->bs_name); -#endif - - ASSERT(pcibr_soft != NULL); - - if (error_code & IOECODE_PIO) - retval = pcibr_pioerror(pcibr_soft, error_code, mode, ioe); - - if (error_code & IOECODE_DMA) { - if (error_code & IOECODE_READ) { - /* - * DMA read error occurs when a device attached to the bridge - * tries to read some data from system memory, and this - * either results in a timeout or access error. - * First case is indicated by the bit "XREAD_REQ_TOUT" - * and second case by "RESP_XTALK_ERROR" bit in bridge error - * interrupt status register. - * - * pcibr_error_intr_handler would get invoked first, and it has - * the responsibility of calling pcibr_error_handler with - * suitable parameters. - */ - - retval = pcibr_dmard_error(pcibr_soft, error_code, MODE_DEVERROR, ioe); - } - if (error_code & IOECODE_WRITE) { - /* - * A device attached to this bridge has been generating - * bad DMA writes. Find out the device attached, and - * slap on it's wrist. - */ - - retval = pcibr_dmawr_error(pcibr_soft, error_code, MODE_DEVERROR, ioe); - } - } - return retval; - -} - -/* - * Reenable a device after handling the error. - * This is called by the lower layers when they wish to be reenabled - * after an error. - * Note that each layer would be calling the previous layer to reenable - * first, before going ahead with their own re-enabling. - */ - -int -pcibr_error_devenable(devfs_handle_t pconn_vhdl, int error_code) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - - ASSERT(error_code & IOECODE_PIO); - - /* If the error is not known to be a write, - * we have to call devenable. - * write errors are isolated to the bridge. 
- */ - if (!(error_code & IOECODE_WRITE)) { - devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; - int rc; - - rc = xtalk_error_devenable(xconn_vhdl, pciio_slot, error_code); - if (rc != IOERROR_HANDLED) - return rc; - } - pcibr_error_cleanup(pcibr_soft, error_code); - return IOERROR_HANDLED; -} - -/* ===================================================================== - * CONFIGURATION MANAGEMENT - */ -/*ARGSUSED */ -void -pcibr_provider_startup(devfs_handle_t pcibr) -{ -} - -/*ARGSUSED */ -void -pcibr_provider_shutdown(devfs_handle_t pcibr) -{ -} - -int -pcibr_reset(devfs_handle_t conn) -{ - pciio_info_t pciio_info = pciio_info_get(conn); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - bridge_t *bridge = pcibr_soft->bs_base; - bridgereg_t ctlreg; - unsigned cfgctl[8]; - unsigned long s; - int f, nf; - pcibr_info_h pcibr_infoh; - pcibr_info_t pcibr_info; - int win; - - if (pcibr_soft->bs_slot[pciio_slot].has_host) { - pciio_slot = pcibr_soft->bs_slot[pciio_slot].host_slot; - pcibr_info = pcibr_soft->bs_slot[pciio_slot].bss_infos[0]; - } - if (pciio_slot < 4) { - s = pcibr_lock(pcibr_soft); - nf = pcibr_soft->bs_slot[pciio_slot].bss_ninfo; - pcibr_infoh = pcibr_soft->bs_slot[pciio_slot].bss_infos; - for (f = 0; f < nf; ++f) - if (pcibr_infoh[f]) - cfgctl[f] = bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_COMMAND / 4]; - - ctlreg = bridge->b_wid_control; - bridge->b_wid_control = ctlreg | BRIDGE_CTRL_RST(pciio_slot); - /* XXX delay? */ - bridge->b_wid_control = ctlreg; - /* XXX delay? */ - - for (f = 0; f < nf; ++f) - if ((pcibr_info = pcibr_infoh[f])) - for (win = 0; win < 6; ++win) - if (pcibr_info->f_window[win].w_base != 0) - bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_BASE_ADDR(win) / 4] = - pcibr_info->f_window[win].w_base; - for (f = 0; f < nf; ++f) - if (pcibr_infoh[f]) - bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_COMMAND / 4] = cfgctl[f]; - pcibr_unlock(pcibr_soft, s); - - return 0; - } -#ifdef SUPPORT_PRINTING_V_FORMAT - PRINT_WARNING( "%v: pcibr_reset unimplemented for slot %d\n", - conn, pciio_slot); -#endif - return -1; -} - -pciio_endian_t -pcibr_endian_set(devfs_handle_t pconn_vhdl, - pciio_endian_t device_end, - pciio_endian_t desired_end) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - bridgereg_t devreg; - unsigned long s; - - /* - * Bridge supports hardware swapping; so we can always - * arrange for the caller's desired endianness. - */ - - s = pcibr_lock(pcibr_soft); - devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; - if (device_end != desired_end) - devreg |= BRIDGE_DEV_SWAP_BITS; - else - devreg &= ~BRIDGE_DEV_SWAP_BITS; - - /* NOTE- if we ever put SWAP bits - * onto the disabled list, we will - * have to change the logic here. - */ - if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { - bridge_t *bridge = pcibr_soft->bs_base; - - bridge->b_device[pciio_slot].reg = devreg; - pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - } - pcibr_unlock(pcibr_soft, s); - -#if DEBUG && PCIBR_DEV_DEBUG - printk("pcibr Device(%d): 0x%p\n", pciio_slot, bridge->b_device[pciio_slot].reg); -#endif - - return desired_end; -} - -/* This (re)sets the GBR and REALTIME bits and also keeps track of how - * many sets are outstanding. 
Reset succeeds only if the number of outstanding - * sets == 1. - */ -int -pcibr_priority_bits_set(pcibr_soft_t pcibr_soft, - pciio_slot_t pciio_slot, - pciio_priority_t device_prio) -{ - unsigned long s; - int *counter; - bridgereg_t rtbits = 0; - bridgereg_t devreg; - int rc = PRIO_SUCCESS; - - /* in dual-slot configurations, the host and the - * guest have separate DMA resources, so they - * have separate requirements for priority bits. - */ - - counter = &(pcibr_soft->bs_slot[pciio_slot].bss_pri_uctr); - - /* - * Bridge supports PCI notions of LOW and HIGH priority - * arbitration rings via a "REAL_TIME" bit in the per-device - * Bridge register. The "GBR" bit controls access to the GBR - * ring on the xbow. These two bits are (re)set together. - * - * XXX- Bug in Rev B Bridge Si: - * Symptom: Prefetcher starts operating incorrectly. This happens - * due to corruption of the address storage ram in the prefetcher - * when a non-real time PCI request is pulled and a real-time one is - * put in it's place. Workaround: Use only a single arbitration ring - * on PCI bus. GBR and RR can still be uniquely used per - * device. NETLIST MERGE DONE, WILL BE FIXED IN REV C. - */ - - if (pcibr_soft->bs_rev_num != BRIDGE_PART_REV_B) - rtbits |= BRIDGE_DEV_RT; - - /* NOTE- if we ever put DEV_RT or DEV_GBR on - * the disabled list, we will have to take - * it into account here. - */ - - s = pcibr_lock(pcibr_soft); - devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; - if (device_prio == PCI_PRIO_HIGH) { - if ((++*counter == 1)) { - if (rtbits) - devreg |= rtbits; - else - rc = PRIO_FAIL; - } - } else if (device_prio == PCI_PRIO_LOW) { - if (*counter <= 0) - rc = PRIO_FAIL; - else if (--*counter == 0) - if (rtbits) - devreg &= ~rtbits; - } - if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { - bridge_t *bridge = pcibr_soft->bs_base; - - bridge->b_device[pciio_slot].reg = devreg; - pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - } - pcibr_unlock(pcibr_soft, s); - - return rc; -} - -pciio_priority_t -pcibr_priority_set(devfs_handle_t pconn_vhdl, - pciio_priority_t device_prio) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - - (void) pcibr_priority_bits_set(pcibr_soft, pciio_slot, device_prio); - - return device_prio; -} - -/* - * Interfaces to allow special (e.g. SGI) drivers to set/clear - * Bridge-specific device flags. Many flags are modified through - * PCI-generic interfaces; we don't allow them to be directly - * manipulated here. Only flags that at this point seem pretty - * Bridge-specific can be set through these special interfaces. - * We may add more flags as the need arises, or remove flags and - * create PCI-generic interfaces as the need arises. 
- * - * Returns 0 on failure, 1 on success - */ -int -pcibr_device_flags_set(devfs_handle_t pconn_vhdl, - pcibr_device_flags_t flags) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - bridgereg_t set = 0; - bridgereg_t clr = 0; - - ASSERT((flags & PCIBR_DEVICE_FLAGS) == flags); - - if (flags & PCIBR_WRITE_GATHER) - set |= BRIDGE_DEV_PMU_WRGA_EN; - if (flags & PCIBR_NOWRITE_GATHER) - clr |= BRIDGE_DEV_PMU_WRGA_EN; - - if (flags & PCIBR_WRITE_GATHER) - set |= BRIDGE_DEV_DIR_WRGA_EN; - if (flags & PCIBR_NOWRITE_GATHER) - clr |= BRIDGE_DEV_DIR_WRGA_EN; - - if (flags & PCIBR_PREFETCH) - set |= BRIDGE_DEV_PREF; - if (flags & PCIBR_NOPREFETCH) - clr |= BRIDGE_DEV_PREF; - - if (flags & PCIBR_PRECISE) - set |= BRIDGE_DEV_PRECISE; - if (flags & PCIBR_NOPRECISE) - clr |= BRIDGE_DEV_PRECISE; - - if (flags & PCIBR_BARRIER) - set |= BRIDGE_DEV_BARRIER; - if (flags & PCIBR_NOBARRIER) - clr |= BRIDGE_DEV_BARRIER; - - if (flags & PCIBR_64BIT) - set |= BRIDGE_DEV_DEV_SIZE; - if (flags & PCIBR_NO64BIT) - clr |= BRIDGE_DEV_DEV_SIZE; - - if (set || clr) { - bridgereg_t devreg; - unsigned long s; - - s = pcibr_lock(pcibr_soft); - devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; - devreg = (devreg & ~clr) | set; - if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { - bridge_t *bridge = pcibr_soft->bs_base; - - bridge->b_device[pciio_slot].reg = devreg; - pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; - bridge->b_wid_tflush; /* wait until Bridge PIO complete */ - } - pcibr_unlock(pcibr_soft, s); -#if DEBUG && PCIBR_DEV_DEBUG - printk("pcibr Device(%d): %R\n", pciio_slot, bridge->b_device[pciio_slot].regbridge->b_device[pciio_slot].reg, device_bits); -#endif - } - return (1); -} - -#ifdef LITTLE_ENDIAN -/* - * on sn-ia we need to twiddle the addresses going out - * the pci bus because we use the unswizzled synergy space - * (the alternative is to use the swizzled synergy space - * and byte swap the data) - */ -#define CB(b,r) (((volatile uint8_t *) b)[((r)^4)]) -#define CS(b,r) (((volatile uint16_t *) b)[((r^4)/2)]) -#define CW(b,r) (((volatile uint32_t *) b)[((r^4)/4)]) -#else -#define CB(b,r) (((volatile uint8_t *) cfgbase)[(r)^3]) -#define CS(b,r) (((volatile uint16_t *) cfgbase)[((r)/2)^1]) -#define CW(b,r) (((volatile uint32_t *) cfgbase)[(r)/4]) -#endif /* LITTLE_ENDIAN */ - - -LOCAL cfg_p -pcibr_config_addr(devfs_handle_t conn, - unsigned reg) -{ - pcibr_info_t pcibr_info; - pciio_slot_t pciio_slot; - pciio_function_t pciio_func; - pcibr_soft_t pcibr_soft; - bridge_t *bridge; - cfg_p cfgbase = (cfg_p)0; - - pcibr_info = pcibr_info_get(conn); - - pciio_slot = pcibr_info->f_slot; - if (pciio_slot == PCIIO_SLOT_NONE) - pciio_slot = PCI_TYPE1_SLOT(reg); - - pciio_func = pcibr_info->f_func; - if (pciio_func == PCIIO_FUNC_NONE) - pciio_func = PCI_TYPE1_FUNC(reg); - - pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; - - bridge = pcibr_soft->bs_base; - - cfgbase = bridge->b_type0_cfg_dev[pciio_slot].f[pciio_func].l; - - return cfgbase; -} - -uint64_t -pcibr_config_get(devfs_handle_t conn, - unsigned reg, - unsigned size) -{ - return do_pcibr_config_get(pcibr_config_addr(conn, reg), - PCI_TYPE1_REG(reg), size); -} - -LOCAL uint64_t -do_pcibr_config_get( - cfg_p cfgbase, - unsigned reg, - unsigned size) -{ - unsigned value; - - - value = CW(cfgbase, reg); - - if (reg & 3) - value >>= 8 * (reg & 3); - if (size < 4) - value &= (1 << (8 * size)) - 1; - - 
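	/*
	 * Editorial note, not part of the original source: a worked example of
	 * the extraction above.  For a 2-byte read at reg == 0x2E, CW() returns
	 * the 32-bit configuration word containing that offset (the CB/CS/CW
	 * macros fold in any address swizzle), the shift by 8 * (reg & 3) == 16
	 * moves the addressed bytes down into bits 15..0, and the mask
	 * (1 << (8 * 2)) - 1 == 0xffff discards the rest.
	 */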
return value; -} - -void -pcibr_config_set(devfs_handle_t conn, - unsigned reg, - unsigned size, - uint64_t value) -{ - do_pcibr_config_set(pcibr_config_addr(conn, reg), - PCI_TYPE1_REG(reg), size, value); -} - -LOCAL void -do_pcibr_config_set(cfg_p cfgbase, - unsigned reg, - unsigned size, - uint64_t value) -{ - switch (size) { - case 1: - CB(cfgbase, reg) = value; - break; - case 2: - if (reg & 1) { - CB(cfgbase, reg) = value; - CB(cfgbase, reg + 1) = value >> 8; - } else - CS(cfgbase, reg) = value; - break; - case 3: - if (reg & 1) { - CB(cfgbase, reg) = value; - CS(cfgbase, (reg + 1)) = value >> 8; - } else { - CS(cfgbase, reg) = value; - CB(cfgbase, reg + 2) = value >> 16; - } - break; - - case 4: - CW(cfgbase, reg) = value; - break; - } -} - -pciio_provider_t pcibr_provider = -{ - (pciio_piomap_alloc_f *) pcibr_piomap_alloc, - (pciio_piomap_free_f *) pcibr_piomap_free, - (pciio_piomap_addr_f *) pcibr_piomap_addr, - (pciio_piomap_done_f *) pcibr_piomap_done, - (pciio_piotrans_addr_f *) pcibr_piotrans_addr, - (pciio_piospace_alloc_f *) pcibr_piospace_alloc, - (pciio_piospace_free_f *) pcibr_piospace_free, - - (pciio_dmamap_alloc_f *) pcibr_dmamap_alloc, - (pciio_dmamap_free_f *) pcibr_dmamap_free, - (pciio_dmamap_addr_f *) pcibr_dmamap_addr, - (pciio_dmamap_list_f *) pcibr_dmamap_list, - (pciio_dmamap_done_f *) pcibr_dmamap_done, - (pciio_dmatrans_addr_f *) pcibr_dmatrans_addr, - (pciio_dmatrans_list_f *) pcibr_dmatrans_list, - (pciio_dmamap_drain_f *) pcibr_dmamap_drain, - (pciio_dmaaddr_drain_f *) pcibr_dmaaddr_drain, - (pciio_dmalist_drain_f *) pcibr_dmalist_drain, - - (pciio_intr_alloc_f *) pcibr_intr_alloc, - (pciio_intr_free_f *) pcibr_intr_free, - (pciio_intr_connect_f *) pcibr_intr_connect, - (pciio_intr_disconnect_f *) pcibr_intr_disconnect, - (pciio_intr_cpu_get_f *) pcibr_intr_cpu_get, - - (pciio_provider_startup_f *) pcibr_provider_startup, - (pciio_provider_shutdown_f *) pcibr_provider_shutdown, - (pciio_reset_f *) pcibr_reset, - (pciio_write_gather_flush_f *) pcibr_write_gather_flush, - (pciio_endian_set_f *) pcibr_endian_set, - (pciio_priority_set_f *) pcibr_priority_set, - (pciio_config_get_f *) pcibr_config_get, - (pciio_config_set_f *) pcibr_config_set, - - (pciio_error_devenable_f *) pcibr_error_devenable, - (pciio_error_extract_f *) pcibr_error_extract, - -#ifdef LATER - (pciio_driver_reg_callback_f *) pcibr_driver_reg_callback, - (pciio_driver_unreg_callback_f *) pcibr_driver_unreg_callback, -#else - (pciio_driver_reg_callback_f *) 0, - (pciio_driver_unreg_callback_f *) 0, -#endif - (pciio_device_unregister_f *) pcibr_device_unregister, - (pciio_dma_enabled_f *) pcibr_dma_enabled, -}; - -LOCAL pcibr_hints_t -pcibr_hints_get(devfs_handle_t xconn_vhdl, int alloc) -{ - arbitrary_info_t ainfo = 0; - graph_error_t rv; - pcibr_hints_t hint; - - rv = hwgraph_info_get_LBL(xconn_vhdl, INFO_LBL_PCIBR_HINTS, &ainfo); - - if (alloc && (rv != GRAPH_SUCCESS)) { - - NEW(hint); - hint->rrb_alloc_funct = NULL; - hint->ph_intr_bits = NULL; - rv = hwgraph_info_add_LBL(xconn_vhdl, - INFO_LBL_PCIBR_HINTS, - (arbitrary_info_t) hint); - if (rv != GRAPH_SUCCESS) - goto abnormal_exit; - - rv = hwgraph_info_get_LBL(xconn_vhdl, INFO_LBL_PCIBR_HINTS, &ainfo); - - if (rv != GRAPH_SUCCESS) - goto abnormal_exit; - - if (ainfo != (arbitrary_info_t) hint) - goto abnormal_exit; - } - return (pcibr_hints_t) ainfo; - -abnormal_exit: -#ifdef LATER - printf("SHOULD NOT BE HERE\n"); -#endif - DEL(hint); - return(NULL); - -} - -void -pcibr_hints_fix_some_rrbs(devfs_handle_t xconn_vhdl, unsigned mask) -{ 
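	/*
	 * Editorial note, not part of the original source: this routine looks
	 * up (allocating on demand) the pcibr hints attached to the crosstalk
	 * connect point and records `mask' in ph_rrb_fixed; pcibr_hints_fix_rrbs()
	 * below simply passes 0xFF to mark every read response buffer as fixed.
	 */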
- pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); - - if (hint) - hint->ph_rrb_fixed = mask; -#if DEBUG - else - printk("pcibr_hints_fix_rrbs: pcibr_hints_get failed at\n" - "\t%p\n", xconn_vhdl); -#endif -} - -void -pcibr_hints_fix_rrbs(devfs_handle_t xconn_vhdl) -{ - pcibr_hints_fix_some_rrbs(xconn_vhdl, 0xFF); -} - -void -pcibr_hints_dualslot(devfs_handle_t xconn_vhdl, - pciio_slot_t host, - pciio_slot_t guest) -{ - pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); - - if (hint) - hint->ph_host_slot[guest] = host + 1; -#if DEBUG - else - printk("pcibr_hints_dualslot: pcibr_hints_get failed at\n" - "\t%p\n", xconn_vhdl); -#endif -} - -void -pcibr_hints_intr_bits(devfs_handle_t xconn_vhdl, - pcibr_intr_bits_f *xxx_intr_bits) -{ - pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); - - if (hint) - hint->ph_intr_bits = xxx_intr_bits; -#if DEBUG - else - printk("pcibr_hints_intr_bits: pcibr_hints_get failed at\n" - "\t%p\n", xconn_vhdl); -#endif -} - -void -pcibr_set_rrb_callback(devfs_handle_t xconn_vhdl, rrb_alloc_funct_t rrb_alloc_funct) -{ - pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); - - if (hint) - hint->rrb_alloc_funct = rrb_alloc_funct; -} - -void -pcibr_hints_handsoff(devfs_handle_t xconn_vhdl) -{ - pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); - - if (hint) - hint->ph_hands_off = 1; -#if DEBUG - else - printk("pcibr_hints_handsoff: pcibr_hints_get failed at\n" - "\t%p\n", xconn_vhdl); -#endif -} - -void -pcibr_hints_subdevs(devfs_handle_t xconn_vhdl, - pciio_slot_t slot, - uint64_t subdevs) -{ - arbitrary_info_t ainfo = 0; - char sdname[16]; - devfs_handle_t pconn_vhdl = GRAPH_VERTEX_NONE; - - sprintf(sdname, "pci/%d", slot); - (void) hwgraph_path_add(xconn_vhdl, sdname, &pconn_vhdl); - if (pconn_vhdl == GRAPH_VERTEX_NONE) { -#if DEBUG - printk("pcibr_hints_subdevs: hwgraph_path_create failed at\n" - "\t%p (seeking %s)\n", xconn_vhdl, sdname); -#endif - return; - } - hwgraph_info_get_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, &ainfo); - if (ainfo == 0) { - uint64_t *subdevp; - - NEW(subdevp); - if (!subdevp) { -#if DEBUG - printk("pcibr_hints_subdevs: subdev ptr alloc failed at\n" - "\t%p\n", pconn_vhdl); -#endif - return; - } - *subdevp = subdevs; - hwgraph_info_add_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, (arbitrary_info_t) subdevp); - hwgraph_info_get_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, &ainfo); - if (ainfo == (arbitrary_info_t) subdevp) - return; - DEL(subdevp); - if (ainfo == (arbitrary_info_t) NULL) { -#if DEBUG - printk("pcibr_hints_subdevs: null subdevs ptr at\n" - "\t%p\n", pconn_vhdl); -#endif - return; - } -#if DEBUG - printk("pcibr_subdevs_get: dup subdev add_LBL at\n" - "\t%p\n", pconn_vhdl); -#endif - } - *(uint64_t *) ainfo = subdevs; -} - - -#ifdef LATER - -#include -#include - -char *pci_space[] = {"NONE", - "ROM", - "IO", - "", - "MEM", - "MEM32", - "MEM64", - "CFG", - "WIN0", - "WIN1", - "WIN2", - "WIN3", - "WIN4", - "WIN5", - "", - "BAD"}; - -void -idbg_pss_func(pcibr_info_h pcibr_infoh, int func) -{ - pcibr_info_t pcibr_info = pcibr_infoh[func]; - char name[MAXDEVNAME]; - int win; - - if (!pcibr_info) - return; - qprintf("Per-slot Function Info\n"); -#ifdef SUPPORT_PRINTING_V_FORMAT - sprintf(name, "%v", pcibr_info->f_vertex); -#endif - qprintf("\tSlot Name : %s\n",name); - qprintf("\tPCI Bus : %d ",pcibr_info->f_bus); - qprintf("Slot : %d ", pcibr_info->f_slot); - qprintf("Function : %d ", pcibr_info->f_func); - qprintf("VendorId : 0x%x " , pcibr_info->f_vendor); - qprintf("DeviceId : 0x%x\n", pcibr_info->f_device); -#ifdef SUPPORT_PRINTING_V_FORMAT - 
sprintf(name, "%v", pcibr_info->f_master); -#endif - qprintf("\tBus provider : %s\n",name); - qprintf("\tProvider Fns : 0x%x ", pcibr_info->f_pops); - qprintf("Error Handler : 0x%x Arg 0x%x\n", - pcibr_info->f_efunc,pcibr_info->f_einfo); - for(win = 0 ; win < 6 ; win++) - qprintf("\tBase Reg #%d space %s base 0x%x size 0x%x\n", - win,pci_space[pcibr_info->f_window[win].w_space], - pcibr_info->f_window[win].w_base, - pcibr_info->f_window[win].w_size); - - qprintf("\tRom base 0x%x size 0x%x\n", - pcibr_info->f_rbase,pcibr_info->f_rsize); - - qprintf("\tInterrupt Bit Map\n"); - qprintf("\t\tPCI Int#\tBridge Pin#\n"); - for (win = 0 ; win < 4; win++) - qprintf("\t\tINT%c\t\t%d\n",win+'A',pcibr_info->f_ibit[win]); - qprintf("\n"); -} - - -void -idbg_pss_info(pcibr_soft_t pcibr_soft, pciio_slot_t slot) -{ - pcibr_soft_slot_t pss; - char slot_conn_name[MAXDEVNAME]; - int func; - - pss = &pcibr_soft->bs_slot[slot]; - qprintf("PCI INFRASTRUCTURAL INFO FOR SLOT %d\n", slot); - qprintf("\tHost Present ? %s ", pss->has_host ? "yes" : "no"); - qprintf("\tHost Slot : %d\n",pss->host_slot); - sprintf(slot_conn_name, "%v", pss->slot_conn); - qprintf("\tSlot Conn : %s\n",slot_conn_name); - qprintf("\t#Functions : %d\n",pss->bss_ninfo); - for (func = 0; func < pss->bss_ninfo; func++) - idbg_pss_func(pss->bss_infos,func); - qprintf("\tSpace : %s ",pci_space[pss->bss_devio.bssd_space]); - qprintf("\tBase : 0x%x ", pss->bss_devio.bssd_base); - qprintf("\tShadow Devreg : 0x%x\n", pss->bss_device); - qprintf("\tUsage counts : pmu %d d32 %d d64 %d\n", - pss->bss_pmu_uctr,pss->bss_d32_uctr,pss->bss_d64_uctr); - - qprintf("\tDirect Trans Info : d64_base 0x%x d64_flags 0x%x" - "d32_base 0x%x d32_flags 0x%x\n", - pss->bss_d64_base, pss->bss_d64_flags, - pss->bss_d32_base, pss->bss_d32_flags); - - qprintf("\tExt ATEs active ? %s", - atomic_read(&pss->bss_ext_ates_active) ? "yes" : "no"); - qprintf(" Command register : 0x%x ", pss->bss_cmd_pointer); - qprintf(" Shadow command val : 0x%x\n", pss->bss_cmd_shadow); - - qprintf("\tRRB Info : Valid %d+%d Reserved %d\n", - pcibr_soft->bs_rrb_valid[slot], - pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], - pcibr_soft->bs_rrb_res[slot]); - -} - -int ips = 0; - -void -idbg_pss(pcibr_soft_t pcibr_soft) -{ - pciio_slot_t slot; - - - if (ips >= 0 && ips < 8) - idbg_pss_info(pcibr_soft,ips); - else if (ips < 0) - for (slot = 0; slot < 8; slot++) - idbg_pss_info(pcibr_soft,slot); - else - qprintf("Invalid ips %d\n",ips); -} - -#endif /* LATER */ - -int -pcibr_dma_enabled(devfs_handle_t pconn_vhdl) -{ - pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); - pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); - - - return xtalk_dma_enabled(pcibr_soft->bs_conn); -} diff -Nru a/arch/ia64/sn/io/pciio.c b/arch/ia64/sn/io/pciio.c --- a/arch/ia64/sn/io/pciio.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/pciio.c Tue Mar 12 13:58:15 2002 @@ -4,14 +4,18 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. 
*/ #define USRPCI 0 +#include #include #include +#include +#include +#include +#include #include #include #include /* Must be before iograph.h to get MAX_PORT_NUM */ @@ -25,13 +29,16 @@ #include #include #include +#include +#include +#include #define DEBUG_PCIIO #undef DEBUG_PCIIO /* turn this on for yet more console output */ -#define NEW(ptr) (ptr = kmalloc(sizeof (*(ptr)), GFP_KERNEL)) -#define DEL(ptr) (kfree(ptr)) +#define GET_NEW(ptr) (ptr = kmalloc(sizeof (*(ptr)), GFP_KERNEL)) +#define DO_DEL(ptr) (kfree(ptr)) char pciio_info_fingerprint[] = "pciio_info"; @@ -63,6 +70,14 @@ get_console_nasid(void) { extern nasid_t console_nasid; + if (console_nasid < 0) { + console_nasid = ia64_sn_get_console_nasid(); + if (console_nasid < 0) { +// ZZZ What do we do if we don't get a console nasid on the hardware???? + if (IS_RUNNING_ON_SIMULATOR() ) + console_nasid = master_nasid; + } + } return console_nasid; } @@ -105,7 +120,7 @@ * completely disappear. */ -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC +#if defined(CONFIG_IA64_SGI_SN1) /* * For the moment, we will assume that IP27 * only use Bridge ASICs to provide PCI support. @@ -115,7 +130,7 @@ #define CAST_PIOMAP(x) ((pcibr_piomap_t)(x)) #define CAST_DMAMAP(x) ((pcibr_dmamap_t)(x)) #define CAST_INTR(x) ((pcibr_intr_t)(x)) -#endif /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ +#endif /* CONFIG_IA64_SGI_SN1 */ /* ===================================================================== * Function Table of Contents @@ -150,14 +165,11 @@ pciio_intr_t pciio_intr_alloc(devfs_handle_t, device_desc_t, pciio_intr_line_t, devfs_handle_t); void pciio_intr_free(pciio_intr_t); -int pciio_intr_connect(pciio_intr_t, intr_func_t, intr_arg_t, void *thread); +int pciio_intr_connect(pciio_intr_t); void pciio_intr_disconnect(pciio_intr_t); devfs_handle_t pciio_intr_cpu_get(pciio_intr_t); void pciio_slot_func_to_name(char *, pciio_slot_t, pciio_function_t); -static pciio_info_t pciio_cardinfo_get(devfs_handle_t, pciio_slot_t); -int pciio_error_handler(devfs_handle_t, int, ioerror_mode_t, ioerror_t *); -int pciio_error_devenable(devfs_handle_t, int); void pciio_provider_startup(devfs_handle_t); void pciio_provider_shutdown(devfs_handle_t); @@ -257,7 +269,7 @@ #if defined(SUPPORT_PRINTING_V_FORMAT) PRINT_PANIC("%v: provider_fns == NULL", dev); #else - PRINT_PANIC("0x%x: provider_fns == NULL", dev); + PRINT_PANIC("0x%p: provider_fns == NULL", (void *)dev); #endif return provider_fns; @@ -575,13 +587,10 @@ * Returns 0 on success, returns <0 on failure. */ int -pciio_intr_connect(pciio_intr_t intr_hdl, /* pciio intr resource handle */ - intr_func_t intr_func, /* pciio intr handler */ - intr_arg_t intr_arg, /* arg to intr handler */ - void *thread) -{ /* intr thread to use */ +pciio_intr_connect(pciio_intr_t intr_hdl) /* pciio intr resource handle */ +{ return INTR_FUNC(intr_hdl, intr_connect) - (CAST_INTR(intr_hdl), intr_func, intr_arg, thread); + (CAST_INTR(intr_hdl)); } /* @@ -605,10 +614,6 @@ (CAST_INTR(intr_hdl)); } -/* ===================================================================== - * ERROR MANAGEMENT - */ - void pciio_slot_func_to_name(char *name, pciio_slot_t slot, @@ -630,193 +635,6 @@ sprintf(name, "%d%c", slot, 'a'+func); } -/* - * pciio_cardinfo_get - * - * Get the pciio info structure corresponding to the - * specified PCI "slot" (we like it when the same index - * number is used for the PCI IDSEL, the REQ/GNT pair, - * and the interrupt line being used for INTA. We like - * it so much we call it the slot number). 
- */ -static pciio_info_t -pciio_cardinfo_get( - devfs_handle_t pciio_vhdl, - pciio_slot_t pci_slot) -{ - char namebuf[16]; - pciio_info_t info = 0; - devfs_handle_t conn; - - pciio_slot_func_to_name(namebuf, pci_slot, PCIIO_FUNC_NONE); - if (GRAPH_SUCCESS == - hwgraph_traverse(pciio_vhdl, namebuf, &conn)) { - info = pciio_info_chk(conn); - hwgraph_vertex_unref(conn); - } - - return info; -} - -/* - * pciio_error_handler: - * dispatch an error to the appropriate - * pciio connection point, or process - * it as a generic pci error. - * Yes, the first parameter is the - * provider vertex at the middle of - * the bus; we get to the pciio connect - * point using the ioerror widgetdev field. - * - * This function is called by the - * specific PCI provider, after it has figured - * out where on the PCI bus (including which slot, - * if it can tell) the error came from. - */ -/*ARGSUSED */ -int -pciio_error_handler( - devfs_handle_t pciio_vhdl, - int error_code, - ioerror_mode_t mode, - ioerror_t *ioerror) -{ - pciio_info_t pciio_info; - devfs_handle_t pconn_vhdl; -#if USRPCI - devfs_handle_t usrpci_v; -#endif - pciio_slot_t slot; - - int retval; -#if defined(CONFIG_SGI_IO_ERROR_HANDLING) - error_state_t e_state; -#endif - -#if DEBUG && ERROR_DEBUG -#if defined(SUPPORT_PRINTING_V_FORMAT) - printk("%v: pciio_error_handler\n", pciio_vhdl); -#else - printk("0x%x: pciio_error_handler\n", pciio_vhdl); -#endif -#endif - -#if defined(SUPPORT_PRINTING_V_FORMAT) - IOERR_PRINTF(printk("%v: PCI Bus Error: Error code: %d Error mode: %d\n", - pciio_vhdl, error_code, mode)); -#else - IOERR_PRINTF(printk("0x%x: PCI Bus Error: Error code: %d Error mode: %d\n", - pciio_vhdl, error_code, mode)); -#endif - - /* If there is an error handler sitting on - * the "no-slot" connection point, give it - * first crack at the error. NOTE: it is - * quite possible that this function may - * do further refining of the ioerror. - */ - pciio_info = pciio_cardinfo_get(pciio_vhdl, PCIIO_SLOT_NONE); - if (pciio_info && pciio_info->c_efunc) { - pconn_vhdl = pciio_info_dev_get(pciio_info); -#if defined(CONFIG_SGI_IO_ERROR_HANDLING) - e_state = error_state_get(pciio_vhdl); - - if (e_state == ERROR_STATE_ACTION) - (void)error_state_set(pciio_vhdl, ERROR_STATE_NONE); - - if (error_state_set(pconn_vhdl,e_state) == - ERROR_RETURN_CODE_CANNOT_SET_STATE) - return(IOERROR_UNHANDLED); -#endif - retval = pciio_info->c_efunc - (pciio_info->c_einfo, error_code, mode, ioerror); - if (retval != IOERROR_UNHANDLED) - return retval; - } - - /* Is the error associated with a particular slot? - */ - if (IOERROR_FIELDVALID(ioerror, widgetdev)) { - /* - * NOTE : - * widgetdev is a 4byte value encoded as slot in the higher order - * 2 bytes and function in the lower order 2 bytes. - */ -#ifdef LATER - slot = pciio_widgetdev_slot_get(IOERROR_GETVALUE(ioerror, widgetdev)); -#else - slot = 0; -#endif - - /* If this slot has an error handler, - * deliver the error to it. 
- */ - pciio_info = pciio_cardinfo_get(pciio_vhdl, slot); - if (pciio_info != NULL) { - if (pciio_info->c_efunc != NULL) { - - pconn_vhdl = pciio_info_dev_get(pciio_info); -#if defined(CONFIG_SGI_IO_ERROR_HANDLING) - e_state = error_state_get(pciio_vhdl); - - - if (e_state == ERROR_STATE_ACTION) - (void)error_state_set(pciio_vhdl, ERROR_STATE_NONE); - - - - if (error_state_set(pconn_vhdl,e_state) == - ERROR_RETURN_CODE_CANNOT_SET_STATE) - return(IOERROR_UNHANDLED); -#endif - retval = pciio_info->c_efunc - (pciio_info->c_einfo, error_code, mode, ioerror); - if (retval != IOERROR_UNHANDLED) - return retval; - } - -#if USRPCI - /* If the USRPCI driver is available and - * knows about this connection point, - * deliver the error to it. - * - * OK to use pconn_vhdl here, even though we - * have already UNREF'd it, since we know that - * it is not going away. - */ - pconn_vhdl = pciio_info_dev_get(pciio_info); - if (GRAPH_SUCCESS == - hwgraph_traverse(pconn_vhdl, EDGE_LBL_USRPCI, &usrpci_v)) { - retval = usrpci_error_handler - (usrpci_v, error_code, IOERROR_GETVALUE(ioerror, busaddr)); - hwgraph_vertex_unref(usrpci_v); - if (retval != IOERROR_UNHANDLED) { - /* - * This unref is not needed. If this code is called often enough, - * the system will crash, due to vertex reference count reaching 0, - * causing vertex to be unallocated. -jeremy - * hwgraph_vertex_unref(pconn_vhdl); - */ - return retval; - } - } -#endif - } - } - - return (mode == MODE_DEVPROBE) - ? IOERROR_HANDLED /* probes are OK */ - : IOERROR_UNHANDLED; /* otherwise, foo! */ -} - -int -pciio_error_devenable(devfs_handle_t pconn_vhdl, int error_code) -{ - return DEV_FUNC(pconn_vhdl, error_devenable) - (pconn_vhdl, error_code); - /* no cleanup specific to this layer. */ -} - /* ===================================================================== * CONFIGURATION MANAGEMENT */ @@ -856,12 +674,12 @@ #if DEBUG #if defined(SUPPORT_PRINTING_V_FORMAT) - PRINT_ALERT("%v: pciio_endian_set is going away.\n" + printk(KERN_ALERT "%v: pciio_endian_set is going away.\n" "\tplease use PCIIO_BYTE_STREAM or PCIIO_WORD_VALUES in your\n" "\tpciio_dmamap_alloc and pciio_dmatrans calls instead.\n", dev); #else - PRINT_ALERT("0x%x: pciio_endian_set is going away.\n" + printk(KERN_ALERT "0x%x: pciio_endian_set is going away.\n" "\tplease use PCIIO_BYTE_STREAM or PCIIO_WORD_VALUES in your\n" "\tpciio_dmamap_alloc and pciio_dmatrans calls instead.\n", dev); @@ -944,14 +762,6 @@ /* ===================================================================== * GENERIC PCI SUPPORT FUNCTIONS */ -pciio_slot_t -pciio_error_extract(devfs_handle_t dev, - pciio_space_t *space, - iopaddr_t *offset) -{ - ASSERT(dev != NODEV); - return DEV_FUNC(dev,error_extract)(dev,space,offset); -} /* * Issue a hardware reset to a card. @@ -1054,14 +864,9 @@ } #endif /* DEBUG_PCIIO */ -#ifdef BRINGUP if ((pciio_info != NULL) && (pciio_info->c_fingerprint != pciio_info_fingerprint) && (pciio_info->c_fingerprint != NULL)) { -#else - if ((pciio_info != NULL) && - (pciio_info->c_fingerprint != pciio_info_fingerprint)) { -#endif /* BRINGUP */ return((pciio_info_t)-1); /* Should panic .. 
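	  (editorial note, not part of the original source: the test above
	   rejects a pciio_info whose c_fingerprint is non-NULL yet does not
	   match pciio_info_fingerprint, i.e. an info record that looks stale
	   or corrupted)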
*/ } @@ -1388,7 +1193,7 @@ pciio_device_id_t device_id) { if (!pciio_info) - NEW(pciio_info); + GET_NEW(pciio_info); ASSERT(pciio_info != NULL); pciio_info->c_slot = slot; @@ -1420,6 +1225,7 @@ { char name[32]; devfs_handle_t pconn; + int device_master_set(devfs_handle_t, devfs_handle_t); pciio_slot_func_to_name(name, pciio_info->c_slot, @@ -1431,16 +1237,14 @@ pciio_info->c_vertex = pconn; pciio_info_set(pconn, pciio_info); -#ifdef BRINGUP +#ifdef DEBUG_PCIIO { int pos; char dname[256]; pos = devfs_generate_path(pconn, dname, 256); -#ifdef DEBUG_PCIIO printk("%s : pconn path= %s \n", __FUNCTION__, &dname[pos]); -#endif } -#endif /* BRINGUP */ +#endif /* DEBUG_PCIIO */ /* * create link to our pci provider @@ -1520,7 +1324,6 @@ pciio_info_t pciio_info; pciio_vendor_id_t vendor_id; pciio_device_id_t device_id; - int pciba_attach(devfs_handle_t); pciio_device_inventory_add(pconn); @@ -1536,11 +1339,6 @@ */ ASSERT(pciio_registry != NULL); - /* - * Since pciba is not called from cdl routines .. call it here. - */ - pciba_attach(pconn); - return(cdl_add_connpt(pciio_registry, vendor_id, device_id, pconn, drv_flags)); } @@ -1625,3 +1423,85 @@ { return DEV_FUNC(pconn_vhdl, dma_enabled)(pconn_vhdl); } + +/* + * These are complementary Linux interfaces that takes in a pci_dev * as the + * first arguement instead of devfs_handle_t. + */ +iopaddr_t snia_pciio_dmatrans_addr(struct pci_dev *, device_desc_t, paddr_t, size_t, unsigned); +pciio_dmamap_t snia_pciio_dmamap_alloc(struct pci_dev *, device_desc_t, size_t, unsigned); +void snia_pciio_dmamap_free(pciio_dmamap_t); +iopaddr_t snia_pciio_dmamap_addr(pciio_dmamap_t, paddr_t, size_t); +void snia_pciio_dmamap_done(pciio_dmamap_t); +pciio_endian_t snia_pciio_endian_set(struct pci_dev *pci_dev, pciio_endian_t device_end, + pciio_endian_t desired_end); + +#include +EXPORT_SYMBOL(snia_pciio_dmatrans_addr); +EXPORT_SYMBOL(snia_pciio_dmamap_alloc); +EXPORT_SYMBOL(snia_pciio_dmamap_free); +EXPORT_SYMBOL(snia_pciio_dmamap_addr); +EXPORT_SYMBOL(snia_pciio_dmamap_done); +EXPORT_SYMBOL(snia_pciio_endian_set); + +pciio_endian_t +snia_pciio_endian_set(struct pci_dev *pci_dev, + pciio_endian_t device_end, + pciio_endian_t desired_end) +{ + devfs_handle_t dev = PCIDEV_VERTEX(pci_dev); + + return DEV_FUNC(dev, endian_set) + (dev, device_end, desired_end); +} + +iopaddr_t +snia_pciio_dmatrans_addr(struct pci_dev *pci_dev, /* translate for this device */ + device_desc_t dev_desc, /* device descriptor */ + paddr_t paddr, /* system physical address */ + size_t byte_count, /* length */ + unsigned flags) +{ /* defined in dma.h */ + + devfs_handle_t dev = PCIDEV_VERTEX(pci_dev); + + return DEV_FUNC(dev, dmatrans_addr) + (dev, dev_desc, paddr, byte_count, flags); +} + +pciio_dmamap_t +snia_pciio_dmamap_alloc(struct pci_dev *pci_dev, /* set up mappings for this device */ + device_desc_t dev_desc, /* device descriptor */ + size_t byte_count_max, /* max size of a mapping */ + unsigned flags) +{ /* defined in dma.h */ + + devfs_handle_t dev = PCIDEV_VERTEX(pci_dev); + + return (pciio_dmamap_t) DEV_FUNC(dev, dmamap_alloc) + (dev, dev_desc, byte_count_max, flags); +} + +void +snia_pciio_dmamap_free(pciio_dmamap_t pciio_dmamap) +{ + DMAMAP_FUNC(pciio_dmamap, dmamap_free) + (CAST_DMAMAP(pciio_dmamap)); +} + +iopaddr_t +snia_pciio_dmamap_addr(pciio_dmamap_t pciio_dmamap, /* use these mapping resources */ + paddr_t paddr, /* map for this address */ + size_t byte_count) +{ /* map this many bytes */ + return DMAMAP_FUNC(pciio_dmamap, dmamap_addr) + (CAST_DMAMAP(pciio_dmamap), paddr, 
byte_count); +} + +void +snia_pciio_dmamap_done(pciio_dmamap_t pciio_dmamap) +{ + DMAMAP_FUNC(pciio_dmamap, dmamap_done) + (CAST_DMAMAP(pciio_dmamap)); +} + diff -Nru a/arch/ia64/sn/io/sgi_if.c b/arch/ia64/sn/io/sgi_if.c --- a/arch/ia64/sn/io/sgi_if.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/sgi_if.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #include @@ -22,7 +21,7 @@ #include void * -kmem_zalloc(size_t size, int flag) +snia_kmem_zalloc(size_t size, int flag) { void *ptr = kmalloc(size, GFP_KERNEL); BZERO(ptr, size); diff -Nru a/arch/ia64/sn/io/sgi_io_init.c b/arch/ia64/sn/io/sgi_io_init.c --- a/arch/ia64/sn/io/sgi_io_init.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/io/sgi_io_init.c Tue Mar 12 13:58:14 2002 @@ -4,26 +4,24 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #include #include #include #include -#include +#include +#include #include #include -#include +#include #include extern void mlreset(int ); extern int init_hcl(void); extern void klgraph_hack_init(void); -extern void per_hub_init(cnodeid_t); extern void hubspc_init(void); -extern void pciba_init(void); extern void pciio_init(void); extern void pcibr_init(void); extern void xtalk_init(void); @@ -33,23 +31,21 @@ extern void usrpci_init(void); extern void ioc3_init(void); extern void initialize_io(void); -extern void init_platform_nodepda(nodepda_t *, cnodeid_t ); +#if defined(CONFIG_IA64_SGI_SN1) extern void intr_clear_all(nasid_t); +#endif extern void klhwg_add_all_modules(devfs_handle_t); extern void klhwg_add_all_nodes(devfs_handle_t); void sn_mp_setup(void); extern devfs_handle_t hwgraph_root; extern void io_module_init(void); -extern cnodeid_t nasid_to_compact_node[]; extern void pci_bus_cvlink_init(void); extern void temp_hack(void); -extern void init_platform_pda(cpuid_t cpu); extern int pci_bus_to_hcl_cvlink(void); -extern synergy_da_t *Synergy_da_indr[]; -#define DEBUG_IO_INIT +/* #define DEBUG_IO_INIT */ #ifdef DEBUG_IO_INIT #define DBG(x...) printk(x) #else @@ -57,20 +53,73 @@ #endif /* DEBUG_IO_INIT */ /* - * kern/ml/csu.s calls mlsetup - * mlsetup calls mlreset(master) - kern/os/startup.c - * j main - * - - * SN/slave.s start_slave_loop calls slave_entry - * SN/slave.s slave_entry calls slave_loop - * SN/slave.s slave_loop calls bootstrap - * bootstrap in SN1/SN1asm.s calls cboot - * cboot calls mlreset(slave) - ml/SN/mp.c + * per_hub_init * - * sgi_io_infrastructure_init() gets called right before pci_init() - * in Linux mainline. This routine actually mirrors the IO Infrastructure - * call sequence in IRIX, ofcourse, nicely modified for Linux. + * This code is executed once for each Hub chip. 
+ */ +static void +per_hub_init(cnodeid_t cnode) +{ + nasid_t nasid; + nodepda_t *npdap; + ii_icmr_u_t ii_icmr; + ii_ibcr_u_t ii_ibcr; + + nasid = COMPACT_TO_NASID_NODEID(cnode); + + ASSERT(nasid != INVALID_NASID); + ASSERT(NASID_TO_COMPACT_NODEID(nasid) == cnode); + + npdap = NODEPDA(cnode); + +#if defined(CONFIG_IA64_SGI_SN1) + /* initialize per-node synergy perf instrumentation */ + npdap->synergy_perf_enabled = 0; /* off by default */ + npdap->synergy_perf_lock = SPIN_LOCK_UNLOCKED; + npdap->synergy_perf_freq = SYNERGY_PERF_FREQ_DEFAULT; + npdap->synergy_inactive_intervals = 0; + npdap->synergy_active_intervals = 0; + npdap->synergy_perf_data = NULL; + npdap->synergy_perf_first = NULL; +#endif /* CONFIG_IA64_SGI_SN1 */ + + + /* + * Set the total number of CRBs that can be used. + */ + ii_icmr.ii_icmr_regval= 0x0; + ii_icmr.ii_icmr_fld_s.i_c_cnt = 0xF; + REMOTE_HUB_S(nasid, IIO_ICMR, ii_icmr.ii_icmr_regval); + + /* + * Set the number of CRBs that both of the BTEs combined + * can use minus 1. + */ + ii_ibcr.ii_ibcr_regval= 0x0; + ii_ibcr.ii_ibcr_fld_s.i_count = 0x8; + REMOTE_HUB_S(nasid, IIO_IBCR, ii_ibcr.ii_ibcr_regval); + + /* + * Set CRB timeout to be 10ms. + */ + REMOTE_HUB_S(nasid, IIO_ICTP, 0x1000 ); + REMOTE_HUB_S(nasid, IIO_ICTO, 0xff); + + +#if defined(CONFIG_IA64_SGI_SN1) + /* Reserve all of the hardwired interrupt levels. */ + intr_reserve_hardwired(cnode); +#endif + + /* Initialize error interrupts for this hub. */ + hub_error_init(cnode); +} + +/* + * This routine is responsible for the setup of all the IRIX hwgraph style + * stuff that's been pulled into linux. It's called by sn1_pci_find_bios which + * is called just before the generic Linux PCI layer does its probing (by + * platform_pci_fixup aka sn1_pci_fixup). * * It is very IMPORTANT that this call is only made by the Master CPU! * @@ -80,7 +129,6 @@ sgi_master_io_infr_init(void) { int cnode; - extern int maxnodes; /* * Do any early init stuff .. einit_tbl[] etc. @@ -94,11 +142,15 @@ DBG("--> sgi_master_io_infr_init: calling pci_bus_cvlink_init().\n"); pci_bus_cvlink_init(); +#ifdef BRINGUP +#ifdef CONFIG_IA64_SGI_SN1 /* * Hack to provide statically initialzed klgraph entries. */ DBG("--> sgi_master_io_infr_init: calling klgraph_hack_init()\n"); klgraph_hack_init(); +#endif /* CONFIG_IA64_SGI_SN1 */ +#endif /* BRINGUP */ /* * This is the Master CPU. Emulate mlsetup and main.c in Irix. @@ -117,7 +169,7 @@ sn_mp_setup(); DBG("--> sgi_master_io_infr_init: calling per_hub_init(0).\n"); - for (cnode = 0; cnode < maxnodes; cnode++) { + for (cnode = 0; cnode < numnodes; cnode++) { per_hub_init(cnode); } @@ -133,9 +185,6 @@ DBG("--> sgi_master_io_infr_init: calling hubspc_init()\n"); hubspc_init(); - DBG("--> sgi_master_io_infr_init: calling pciba_init()\n"); - pciba_init(); - DBG("--> sgi_master_io_infr_init: calling pciio_init()\n"); pciio_init(); @@ -172,6 +221,11 @@ DBG("--> sgi_master_io_infr_init: Setting up SGI IO Links for Linux PCI\n"); pci_bus_to_hcl_cvlink(); +#ifdef CONFIG_PCIBA + DBG("--> sgi_master_io_infr_init: calling pciba_init()\n"); + pciba_init(); +#endif + DBG("--> Leave sgi_master_io_infr_init: DONE setting up SGI Links for PCI\n"); } @@ -199,76 +253,15 @@ sn_mp_setup(void) { cnodeid_t cnode; - extern int maxnodes; cpuid_t cpu; - DBG("sn_mp_setup: Entered.\n"); - /* - * NODEPDA(x) Macro depends on nodepda - * subnodepda is also statically set to calias space which we - * do not currently support yet .. just a hack for now. 
- */ -#ifdef NUMA_BASE - maxnodes = numnodes; - DBG("sn_mp_setup(): maxnodes= %d numnodes= %d\n", maxnodes,numnodes); - printk("sn_mp_setup(): Allocating backing store for *Nodepdaindr[%2d] \n", - maxnodes); - - /* - * Initialize Nodpdaindr and per-node nodepdaindr array - */ - *Nodepdaindr = (nodepda_t *) kmalloc(sizeof(nodepda_t *)*numnodes, GFP_KERNEL); - for (cnode=0; cnodepernode_pdaindr = Nodepdaindr; - subnodepda = &Nodepdaindr[cnode]->snpda[cnode]; - } - nodepda = Nodepdaindr[0]; -#else - Nodepdaindr = (nodepda_t *) kmalloc(sizeof(struct nodepda_s), GFP_KERNEL); - nodepda = Nodepdaindr[0]; - subnodepda = &Nodepdaindr[0]->snpda[0]; - -#endif /* NUMA_BASE */ - - /* - * Before we let the other processors run, set up the platform specific - * stuff in the nodepda. - * - * ???? maxnodes set in mlreset .. who sets it now ???? - * ???? cpu_node_probe() called in mlreset to set up the following: - * compact_to_nasid_node[] - cnode id gives nasid - * nasid_to_compact_node[] - nasid gives cnode id - * - * do_cpumask() sets the following: - * cpuid_to_compact_node[] - cpuid gives cnode id - * - * nasid comes from gdap->g_nasidtable[] - * ml/SN/promif.c - */ - -#ifdef CONFIG_IA64_SGI_SN1 for (cpu = 0; cpu < smp_num_cpus; cpu++) { /* Skip holes in CPU space */ if (cpu_enabled(cpu)) { init_platform_pda(cpu); } } -#endif - for (cnode = 0; cnode < maxnodes; cnode++) { - /* - * Set up platform-dependent nodepda fields. - * The following routine actually sets up the hubinfo struct - * in nodepda. - */ - DBG("sn_mp_io_setup: calling init_platform_nodepda(%2d)\n",cnode); - init_platform_nodepda(Nodepdaindr[cnode], cnode); - } + /* * Initialize platform-dependent vertices in the hwgraph: * module @@ -290,24 +283,26 @@ klhwg_add_all_nodes(hwgraph_root); - for (cnode = 0; cnode < maxnodes; cnode++) { + for (cnode = 0; cnode < numnodes; cnode++) { /* * This routine clears the Hub's Interrupt registers. */ -#ifdef CONFIG_IA64_SGI_SN1 /* * We need to move this intr_clear_all() routine * from SN/intr.c to a more appropriate file. * Talk to Al Mayer. */ +#if defined(CONFIG_IA64_SGI_SN1) intr_clear_all(COMPACT_TO_NASID_NODEID(cnode)); +#endif /* now init the hub */ // per_hub_init(cnode); -#endif + } -#if defined(CONFIG_IA64_SGI_SYNERGY_PERF) +#if defined(CONFIG_IA64_SGI_SN1) synergy_perf_init(); -#endif /* CONFIG_IA64_SGI_SYNERGY_PERF */ +#endif + } diff -Nru a/arch/ia64/sn/io/sgi_io_sim.c b/arch/ia64/sn/io/sgi_io_sim.c --- a/arch/ia64/sn/io/sgi_io_sim.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/sgi_io_sim.c Tue Mar 12 13:58:15 2002 @@ -4,31 +4,28 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ +#include #include #include -#include +#include +#include +#include #include #include #include #include -#include cpuid_t master_procid = 0; -int maxnodes; char arg_maxnodes[4]; -nodepda_t *Nodepdaindr[MAX_COMPACT_NODES]; -nodepda_t *nodepda; -subnode_pda_t *subnodepda; - -synergy_da_t *Synergy_da_indr[MAX_COMPACT_NODES * 2]; - extern void init_all_devices(void); +#if defined(CONFIG_IA64_SGI_SN1) +synergy_da_t *Synergy_da_indr[MAX_COMPACT_NODES * 2]; +#endif /* * Return non-zero if the given variable was specified @@ -73,27 +70,23 @@ * Routines provided by ml/SN/promif.c. 
*/ static __psunsigned_t master_bridge_base = (__psunsigned_t)NULL; -nasid_t console_nasid; +nasid_t console_nasid = (nasid_t)-1; static char console_wid; static char console_pcislot; void set_master_bridge_base(void) { - - console_nasid = KL_CONFIG_CH_CONS_INFO(master_nasid)->nasid; console_wid = WIDGETID_GET(KL_CONFIG_CH_CONS_INFO(master_nasid)->memory_base); console_pcislot = KL_CONFIG_CH_CONS_INFO(master_nasid)->npci; - master_bridge_base = (__psunsigned_t)NODE_SWIN_BASE(console_nasid, - console_wid); - FIXME("WARNING: set_master_bridge_base: NON NASID 0 DOES NOT WORK\n"); + master_bridge_base = (__psunsigned_t)NODE_SWIN_BASE(console_nasid, console_wid); + // FIXME("WARNING: set_master_bridge_base: NON NASID 0 DOES NOT WORK\n"); } int check_nasid_equiv(nasid_t nasida, nasid_t nasidb) { - if ((nasida == nasidb) || - (nasida == NODEPDA(NASID_TO_COMPACT_NODEID(nasidb))->xbow_peer)) + if ((nasida == nasidb) || (nasida == NODEPDA(NASID_TO_COMPACT_NODEID(nasidb))->xbow_peer)) return 1; else return 0; diff -Nru a/arch/ia64/sn/io/sn1/hub_intr.c b/arch/ia64/sn/io/sn1/hub_intr.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn1/hub_intr.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,307 @@ +/* $Id: io.c,v 1.2 2001/06/26 14:02:43 pfg Exp $ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All Rights Reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern xtalk_provider_t hub_provider; + +/* ARGSUSED */ +void +hub_intr_init(devfs_handle_t hubv) +{ +} + +/* + * hub_device_desc_update + * Update the passed in device descriptor with the actual the + * target cpu number and interrupt priority level. + * NOTE : These might be the same as the ones passed in thru + * the descriptor. + */ +static void +hub_device_desc_update(device_desc_t dev_desc, + ilvl_t intr_swlevel, + cpuid_t cpu) +{ +} + +int allocate_my_bit = INTRCONNECT_ANYBIT; + +/* + * Allocate resources required for an interrupt as specified in dev_desc. + * Returns a hub interrupt handle on success, or 0 on failure. + */ +static hub_intr_t +do_hub_intr_alloc(devfs_handle_t dev, /* which crosstalk device */ + device_desc_t dev_desc, /* device descriptor */ + devfs_handle_t owner_dev, /* owner of this interrupt, if known */ + int uncond_nothread) /* unconditionally non-threaded */ +{ + cpuid_t cpu = (cpuid_t)0; /* cpu to receive interrupt */ + int cpupicked = 0; + int bit; /* interrupt vector */ + /*REFERENCED*/ + int intr_resflags = 0; + hub_intr_t intr_hdl; + cnodeid_t nodeid; /* node to receive interrupt */ + /*REFERENCED*/ + nasid_t nasid; /* nasid to receive interrupt */ + struct xtalk_intr_s *xtalk_info; + iopaddr_t xtalk_addr; /* xtalk addr on hub to set intr */ + xwidget_info_t xwidget_info; /* standard crosstalk widget info handle */ + char *intr_name = NULL; + ilvl_t intr_swlevel = (ilvl_t)0; + extern int default_intr_pri; + extern void synergy_intr_alloc(int, int); + + + if (dev_desc) { + if (dev_desc->flags & D_INTR_ISERR) { + intr_resflags = II_ERRORINT; + } else if (!uncond_nothread && !(dev_desc->flags & D_INTR_NOTHREAD)) { + intr_resflags = II_THREADED; + } else { + /* Neither an error nor a thread. 
*/ + intr_resflags = 0; + } + } else { + intr_swlevel = default_intr_pri; + if (!uncond_nothread) + intr_resflags = II_THREADED; + } + + /* XXX - Need to determine if the interrupt should be threaded. */ + + /* If the cpu has not been picked already then choose a candidate + * interrupt target and reserve the interrupt bit + */ + if (!cpupicked) { + cpu = intr_heuristic(dev,dev_desc,allocate_my_bit, + intr_resflags,owner_dev, + intr_name,&bit); + } + + /* At this point we SHOULD have a valid cpu */ + if (cpu == CPU_NONE) { +#if defined(SUPPORT_PRINTING_V_FORMAT) + printk(KERN_WARNING "%v hub_intr_alloc could not allocate interrupt\n", + owner_dev); +#else + printk(KERN_WARNING "%p hub_intr_alloc could not allocate interrupt\n", + (void *)owner_dev); +#endif + return(0); + + } + + /* If the cpu has been picked already (due to the bridge data + * corruption bug) then try to reserve an interrupt bit . + */ + if (cpupicked) { + bit = intr_reserve_level(cpu, allocate_my_bit, + intr_resflags, + owner_dev, intr_name); + if (bit < 0) { +#if defined(SUPPORT_PRINTING_V_FORMAT) + printk(KERN_WARNING "Could not reserve an interrupt bit for cpu " + " %d and dev %v\n", + cpu,owner_dev); +#else + printk(KERN_WARNING "Could not reserve an interrupt bit for cpu " + " %d and dev %p\n", + (int)cpu, (void *)owner_dev); +#endif + + return(0); + } + } + + nodeid = cpuid_to_cnodeid(cpu); + nasid = cpuid_to_nasid(cpu); + xtalk_addr = HUBREG_AS_XTALKADDR(nasid, PIREG(PI_INT_PEND_MOD, cpuid_to_subnode(cpu))); + + /* + * Allocate an interrupt handle, and fill it in. There are two + * pieces to an interrupt handle: the piece needed by generic + * xtalk code which is used by crosstalk device drivers, and + * the piece needed by low-level IP27 hardware code. + */ + intr_hdl = snia_kmem_alloc_node(sizeof(struct hub_intr_s), KM_NOSLEEP, nodeid); + ASSERT_ALWAYS(intr_hdl); + + /* + * Fill in xtalk information for generic xtalk interfaces that + * operate on xtalk_intr_hdl's. + */ + xtalk_info = &intr_hdl->i_xtalk_info; + xtalk_info->xi_dev = dev; + xtalk_info->xi_vector = bit; + xtalk_info->xi_addr = xtalk_addr; + + /* + * Regardless of which CPU we ultimately interrupt, a given crosstalk + * widget always handles interrupts (and PIO and DMA) through its + * designated "master" crosstalk provider. + */ + xwidget_info = xwidget_info_get(dev); + if (xwidget_info) + xtalk_info->xi_target = xwidget_info_masterid_get(xwidget_info); + + /* Fill in low level hub information for hub_* interrupt interface */ + intr_hdl->i_swlevel = intr_swlevel; + intr_hdl->i_cpuid = cpu; + intr_hdl->i_bit = bit; + intr_hdl->i_flags = HUB_INTR_IS_ALLOCED; + + /* Store the actual interrupt priority level & interrupt target + * cpu back in the device descriptor. + */ + hub_device_desc_update(dev_desc, intr_swlevel, cpu); + synergy_intr_alloc((int)bit, (int)cpu); + return(intr_hdl); +} + +/* + * Allocate resources required for an interrupt as specified in dev_desc. + * Returns a hub interrupt handle on success, or 0 on failure. + */ +hub_intr_t +hub_intr_alloc( devfs_handle_t dev, /* which crosstalk device */ + device_desc_t dev_desc, /* device descriptor */ + devfs_handle_t owner_dev) /* owner of this interrupt, if known */ +{ + return(do_hub_intr_alloc(dev, dev_desc, owner_dev, 0)); +} + +/* + * Allocate resources required for an interrupt as specified in dev_desc. + * Uncondtionally request non-threaded, regardless of what the device + * descriptor might say. + * Returns a hub interrupt handle on success, or 0 on failure. 
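+ *
+ * Caller-side sketch of the alloc/connect/free life cycle defined in this
+ * file, for illustration only: the handles are placeholders, the descriptor
+ * and setfunc are simply left NULL (both may be NULL above), and treating a
+ * nonzero hub_intr_connect() return as failure is an assumption.
+ */
+#if 0	/* illustrative sketch only, never compiled */
+static int
+example_hub_intr_setup(devfs_handle_t dev, devfs_handle_t owner_dev)
+{
+	hub_intr_t h;
+
+	h = hub_intr_alloc_nothd(dev, (device_desc_t)0, owner_dev);
+	if (h == 0)
+		return -1;		/* no free level or target cpu */
+
+	if (hub_intr_connect(h, (xtalk_intr_setfunc_t)0, (void *)0)) {
+		hub_intr_free(h);	/* give the reserved level back */
+		return -1;
+	}
+	return 0;
+}
+#endif
+/*
+ * hub_intr_alloc_nothd itself: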
+ */ +hub_intr_t +hub_intr_alloc_nothd(devfs_handle_t dev, /* which crosstalk device */ + device_desc_t dev_desc, /* device descriptor */ + devfs_handle_t owner_dev) /* owner of this interrupt, if known */ +{ + return(do_hub_intr_alloc(dev, dev_desc, owner_dev, 1)); +} + +/* + * Free resources consumed by intr_alloc. + */ +void +hub_intr_free(hub_intr_t intr_hdl) +{ + cpuid_t cpu = intr_hdl->i_cpuid; + int bit = intr_hdl->i_bit; + xtalk_intr_t xtalk_info; + + if (intr_hdl->i_flags & HUB_INTR_IS_CONNECTED) { + /* Setting the following fields in the xtalk interrupt info + * clears the interrupt target register in the xtalk user + */ + xtalk_info = &intr_hdl->i_xtalk_info; + xtalk_info->xi_dev = NODEV; + xtalk_info->xi_vector = 0; + xtalk_info->xi_addr = 0; + hub_intr_disconnect(intr_hdl); + } + + if (intr_hdl->i_flags & HUB_INTR_IS_ALLOCED) + kfree(intr_hdl); + + intr_unreserve_level(cpu, bit); +} + + +/* + * Associate resources allocated with a previous hub_intr_alloc call with the + * described handler, arg, name, etc. + */ +/*ARGSUSED*/ +int +hub_intr_connect( hub_intr_t intr_hdl, /* xtalk intr resource handle */ + xtalk_intr_setfunc_t setfunc, /* func to set intr hw */ + void *setfunc_arg) /* arg to setfunc */ +{ + int rv; + cpuid_t cpu = intr_hdl->i_cpuid; + int bit = intr_hdl->i_bit; + extern int synergy_intr_connect(int, int); + + ASSERT(intr_hdl->i_flags & HUB_INTR_IS_ALLOCED); + + rv = intr_connect_level(cpu, bit, intr_hdl->i_swlevel, NULL); + if (rv < 0) + return(rv); + + intr_hdl->i_xtalk_info.xi_setfunc = setfunc; + intr_hdl->i_xtalk_info.xi_sfarg = setfunc_arg; + + if (setfunc) (*setfunc)((xtalk_intr_t)intr_hdl); + + intr_hdl->i_flags |= HUB_INTR_IS_CONNECTED; + return(synergy_intr_connect((int)bit, (int)cpu)); +} + + +/* + * Disassociate handler with the specified interrupt. + */ +void +hub_intr_disconnect(hub_intr_t intr_hdl) +{ + /*REFERENCED*/ + int rv; + cpuid_t cpu = intr_hdl->i_cpuid; + int bit = intr_hdl->i_bit; + xtalk_intr_setfunc_t setfunc; + + setfunc = intr_hdl->i_xtalk_info.xi_setfunc; + + /* TBD: send disconnected interrupts somewhere harmless */ + if (setfunc) (*setfunc)((xtalk_intr_t)intr_hdl); + + rv = intr_disconnect_level(cpu, bit); + ASSERT(rv == 0); + intr_hdl->i_flags &= ~HUB_INTR_IS_CONNECTED; +} + + +/* + * Return a hwgraph vertex that represents the CPU currently + * targeted by an interrupt. + */ +devfs_handle_t +hub_intr_cpu_get(hub_intr_t intr_hdl) +{ + cpuid_t cpuid = intr_hdl->i_cpuid; + ASSERT(cpuid != CPU_NONE); + + return(cpuid_to_vertex(cpuid)); +} diff -Nru a/arch/ia64/sn/io/sn1/hubcounters.c b/arch/ia64/sn/io/sn1/hubcounters.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn1/hubcounters.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,283 @@ +/* $Id:$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000 - 2001 Silicon Graphics, Inc. + * All rights reserved. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern void hubni_error_handler(char *, int); /* huberror.c */ + +static int hubstats_ioctl(struct inode *, struct file *, unsigned int, unsigned long); +struct file_operations hub_mon_fops = { + ioctl: hubstats_ioctl, +}; + +#define HUB_CAPTURE_TICKS (2 * HZ) + +#define HUB_ERR_THRESH 500 +#define USEC_PER_SEC 1000000 +#define NSEC_PER_SEC USEC_PER_SEC*1000 + +volatile int hub_print_usecs = 600 * USEC_PER_SEC; + +/* Return success if the hub's crosstalk link is working */ +int +hub_xtalk_link_up(nasid_t nasid) +{ + hubreg_t llp_csr_reg; + + /* Read the IO LLP control status register */ + llp_csr_reg = REMOTE_HUB_L(nasid, IIO_LLP_CSR); + + /* Check if the xtalk link is working */ + if (llp_csr_reg & IIO_LLP_CSR_IS_UP) + return(1); + + return(0); + + +} + +static char *error_flag_to_type(unsigned char error_flag) +{ + switch(error_flag) { + case 0x1: return ("NI retries"); + case 0x2: return ("NI SN errors"); + case 0x4: return ("NI CB errors"); + case 0x8: return ("II CB errors"); + case 0x10: return ("II SN errors"); + default: return ("Errors"); + } +} + +int +print_hub_error(hubstat_t *hsp, hubreg_t reg, + int64_t delta, unsigned char error_flag) +{ + int64_t rate; + + reg *= hsp->hs_per_minute; /* Convert to minutes */ + rate = reg / delta; + + if (rate > HUB_ERR_THRESH) { + + if(hsp->hs_maint & error_flag) + { + printk( "Excessive %s (%ld/min) on %s", + error_flag_to_type(error_flag), rate, hsp->hs_name); + } + else + { + hsp->hs_maint |= error_flag; + printk( "Excessive %s (%ld/min) on %s", + error_flag_to_type(error_flag), rate, hsp->hs_name); + } + return 1; + } else { + return 0; + } +} + + +int +check_hub_error_rates(hubstat_t *hsp) +{ + int64_t delta = hsp->hs_timestamp - hsp->hs_timebase; + int printed = 0; + + printed += print_hub_error(hsp, hsp->hs_ni_retry_errors, + delta, 0x1); + +#if 0 + printed += print_hub_error(hsp, hsp->hs_ni_sn_errors, + delta, 0x2); +#endif + + printed += print_hub_error(hsp, hsp->hs_ni_cb_errors, + delta, 0x4); + + + /* If the hub's xtalk link is not working there is + * no need to print the "Excessive..." warning + * messages + */ + if (!hub_xtalk_link_up(hsp->hs_nasid)) + return(printed); + + + printed += print_hub_error(hsp, hsp->hs_ii_cb_errors, + delta, 0x8); + + printed += print_hub_error(hsp, hsp->hs_ii_sn_errors, + delta, 0x10); + + return printed; +} + + +void +capture_hub_stats(cnodeid_t cnodeid, struct nodepda_s *npda) +{ + nasid_t nasid; + hubstat_t *hsp = &(npda->hubstats); + hubreg_t port_error; + ii_illr_u_t illr; + int count; + int overflow = 0; + + /* + * If our link wasn't up at boot time, don't worry about error rates. 
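+ *
+ * (The counters accumulated below feed check_hub_error_rates(); as a worked
+ * example with made-up numbers, 25 NI retries accumulated over three
+ * seconds' worth of RTC cycles scale to 25 * hs_per_minute / delta = 500
+ * retries per minute, which is right at, and therefore not above,
+ * HUB_ERR_THRESH.)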
+ */ + if (!(hsp->hs_ni_port_status & NPS_LINKUP_MASK)) { + printk("capture_hub_stats: cnode=%d hs_ni_port_status=0x%016lx : link is not up\n", + cnodeid, hsp->hs_ni_port_status); + return; + } + + nasid = COMPACT_TO_NASID_NODEID(cnodeid); + + hsp->hs_timestamp = GET_RTC_COUNTER(); + + port_error = REMOTE_HUB_L(nasid, NI_PORT_ERROR_CLEAR); + count = ((port_error & NPE_RETRYCOUNT_MASK) >> NPE_RETRYCOUNT_SHFT); + hsp->hs_ni_retry_errors += count; + if (count == NPE_COUNT_MAX) + overflow = 1; + count = ((port_error & NPE_SNERRCOUNT_MASK) >> NPE_SNERRCOUNT_SHFT); + hsp->hs_ni_sn_errors += count; + if (count == NPE_COUNT_MAX) + overflow = 1; + count = ((port_error & NPE_CBERRCOUNT_MASK) >> NPE_CBERRCOUNT_SHFT); + hsp->hs_ni_cb_errors += count; + if (overflow || count == NPE_COUNT_MAX) + hsp->hs_ni_overflows++; + + if (port_error & NPE_FATAL_ERRORS) { +#ifdef ajm + hubni_error_handler("capture_hub_stats", 1); +#else + printk("Error: hubni_error_handler in capture_hub_stats"); +#endif + } + + illr.ii_illr_regval = REMOTE_HUB_L(nasid, IIO_LLP_LOG); + REMOTE_HUB_S(nasid, IIO_LLP_LOG, 0); + + hsp->hs_ii_sn_errors += illr.ii_illr_fld_s.i_sn_cnt; + hsp->hs_ii_cb_errors += illr.ii_illr_fld_s.i_cb_cnt; + if ((illr.ii_illr_fld_s.i_sn_cnt == IIO_LLP_SN_MAX) || + (illr.ii_illr_fld_s.i_cb_cnt == IIO_LLP_CB_MAX)) + hsp->hs_ii_overflows++; + + if (hsp->hs_print) { + if (check_hub_error_rates(hsp)) { + hsp->hs_last_print = GET_RTC_COUNTER(); + hsp->hs_print = 0; + } + } else { + if ((GET_RTC_COUNTER() - + hsp->hs_last_print) > hub_print_usecs) + hsp->hs_print = 1; + } + + npda->hubticks = HUB_CAPTURE_TICKS; +} + + +void +init_hub_stats(cnodeid_t cnodeid, struct nodepda_s *npda) +{ + hubstat_t *hsp = &(npda->hubstats); + nasid_t nasid = cnodeid_to_nasid(cnodeid); + bzero(&(npda->hubstats), sizeof(hubstat_t)); + + hsp->hs_version = HUBSTAT_VERSION; + hsp->hs_cnode = cnodeid; + hsp->hs_nasid = nasid; + hsp->hs_timebase = GET_RTC_COUNTER(); + hsp->hs_ni_port_status = REMOTE_HUB_L(nasid, NI_PORT_STATUS); + + /* Clear the II error counts. */ + REMOTE_HUB_S(nasid, IIO_LLP_LOG, 0); + + /* Clear the NI counts. 
*/ + REMOTE_HUB_L(nasid, NI_PORT_ERROR_CLEAR); + + hsp->hs_per_minute = (long long)RTC_CYCLES_PER_SEC * 60LL; + + npda->hubticks = HUB_CAPTURE_TICKS; + + /* XX should use kmem_alloc_node */ + hsp->hs_name = (char *)kmalloc(MAX_HUB_PATH, GFP_KERNEL); + ASSERT_ALWAYS(hsp->hs_name); + + sprintf(hsp->hs_name, "/dev/hw/" EDGE_LBL_MODULE "/%03d/" + EDGE_LBL_NODE "/" EDGE_LBL_HUB, + npda->module_id); + + hsp->hs_last_print = 0; + hsp->hs_print = 1; + + hub_print_usecs = hub_print_usecs; + +#if 0 + printk("init_hub_stats: cnode=%d nasid=%d hs_version=%d hs_ni_port_status=0x%016lx\n", + cnodeid, nasid, hsp->hs_version, hsp->hs_ni_port_status); +#endif +} + +static int +hubstats_ioctl(struct inode *inode, struct file *file, + unsigned int cmd, unsigned long arg) +{ + cnodeid_t cnode; + nodepda_t *npdap; + uint64_t longarg; + devfs_handle_t d; + + if ((d = devfs_get_handle_from_inode(inode)) == NULL) + return -ENODEV; + cnode = (cnodeid_t)hwgraph_fastinfo_get(d); + npdap = NODEPDA(cnode); + + if (npdap->hubstats.hs_version != HUBSTAT_VERSION) { + init_hub_stats(cnode, npdap); + } + + switch (cmd) { + case SNDRV_GET_INFOSIZE: + longarg = sizeof(hubstat_t); + if (copy_to_user((void *)arg, &longarg, sizeof(longarg))) { + return -EFAULT; + } + break; + + case SNDRV_GET_HUBINFO: + /* refresh npda->hubstats */ + capture_hub_stats(cnode, npdap); + if (copy_to_user((void *)arg, &npdap->hubstats, sizeof(hubstat_t))) { + return -EFAULT; + } + break; + + default: + return -EINVAL; + } + + return 0; +} diff -Nru a/arch/ia64/sn/io/sn1/huberror.c b/arch/ia64/sn/io/sn1/huberror.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn1/huberror.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,228 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern void hubni_eint_init(cnodeid_t cnode); +extern void hubii_eint_init(cnodeid_t cnode); +extern void hubii_eint_handler (int irq, void *arg, struct pt_regs *ep); +extern void snia_error_intr_handler(int irq, void *devid, struct pt_regs *pt_regs); + +extern int maxcpus; + +#define HUB_ERROR_PERIOD (120 * HZ) /* 2 minutes */ + + +void +hub_error_clear(nasid_t nasid) +{ + int i; + hubreg_t idsr; + int sn; + + for(sn=0; snh_cnodeid == cnode); + + ilcsr.ii_ilcsr_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_ILCSR); + + if ((ilcsr.ii_ilcsr_fld_s.i_llp_stat & 0x2) == 0) { + /* + * HUB II link is not up. + * Just disable LLP, and don't connect any interrupts. 
+ */ + ilcsr.ii_ilcsr_fld_s.i_llp_en = 0; + REMOTE_HUB_S(hinfo->h_nasid, IIO_ILCSR, ilcsr.ii_ilcsr_regval); + return; + } + /* Select a possible interrupt target where there is a free interrupt + * bit and also reserve the interrupt bit for this IO error interrupt + */ + intr_cpu = intr_heuristic(hub_v,0,INTRCONNECT_ANYBIT,II_ERRORINT,hub_v, + "HUB IO error interrupt",&bit); + if (intr_cpu == CPU_NONE) { + printk("hubii_eint_init: intr_reserve_level failed, cnode %d", cnode); + return; + } + + rv = intr_connect_level(intr_cpu, bit, 0, NULL); + synergy_intr_connect(bit, intr_cpu); + request_irq(bit_pos_to_irq(bit) + (intr_cpu << 8), hubii_eint_handler, 0, "SN hub error", (void *)hub_v); + ASSERT_ALWAYS(rv >= 0); + hubio_eint.ii_iidsr_regval = 0; + hubio_eint.ii_iidsr_fld_s.i_enable = 1; + hubio_eint.ii_iidsr_fld_s.i_level = bit;/* Take the least significant bits*/ + hubio_eint.ii_iidsr_fld_s.i_node = COMPACT_TO_NASID_NODEID(cnode); + hubio_eint.ii_iidsr_fld_s.i_pi_id = cpuid_to_subnode(intr_cpu); + REMOTE_HUB_S(hinfo->h_nasid, IIO_IIDSR, hubio_eint.ii_iidsr_regval); + +} + +void +hubni_eint_init(cnodeid_t cnode) +{ + int intr_bit; + cpuid_t targ; + + + if ((targ = cnodeid_to_cpuid(cnode)) == CPU_NONE) + return; + + /* The prom chooses which cpu gets these interrupts, but we + * don't know which one it chose. We will register all of the + * cpus to be sure. This only costs us an irqaction per cpu. + */ + for (; targ < CPUS_PER_NODE; targ++) { + if (!cpu_enabled(targ) ) continue; + /* connect the INTEND1 bits. */ + for (intr_bit = XB_ERROR; intr_bit <= MSC_PANIC_INTR; intr_bit++) { + intr_connect_level(targ, intr_bit, II_ERRORINT, NULL); + } + request_irq(SGI_HUB_ERROR_IRQ + (targ << 8), snia_error_intr_handler, 0, "SN hub error", NULL); + /* synergy masks are initialized in the prom to enable all interrupts. */ + /* We'll just leave them that way, here, for these interrupts. */ + } +} + + +/*ARGSUSED*/ +void +hubii_eint_handler (int irq, void *arg, struct pt_regs *ep) +{ + + panic("Hubii interrupt\n"); +} diff -Nru a/arch/ia64/sn/io/sn1/ip37.c b/arch/ia64/sn/io/sn1/ip37.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn1/ip37.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,47 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +/* + * ip37.c + * Support for IP35/IP37 machines + */ + +#include + +#include +#include +#include +#include /* for bridge_t */ + + +xwidgetnum_t +hub_widget_id(nasid_t nasid) +{ + hubii_wcr_t ii_wcr; /* the control status register */ + + ii_wcr.wcr_reg_value = REMOTE_HUB_L(nasid,IIO_WCR); + + return ii_wcr.wcr_fields_s.wcr_widget_id; +} + +int +is_fine_dirmode(void) +{ + return (((LOCAL_HUB_L(LB_REV_ID) & LRI_SYSTEM_SIZE_MASK) + >> LRI_SYSTEM_SIZE_SHFT) == SYSTEM_SIZE_SMALL); + +} + + +void +ni_reset_port(void) +{ + LOCAL_HUB_S(NI_RESET_ENABLE, NRE_RESETOK); + LOCAL_HUB_S(NI_PORT_RESET, NPR_PORTRESET | NPR_LOCALRESET); +} diff -Nru a/arch/ia64/sn/io/sn1/mem_refcnt.c b/arch/ia64/sn/io/sn1/mem_refcnt.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn1/mem_refcnt.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,221 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. 
+ * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +// From numa_hw.h + +#define MIGR_COUNTER_MAX_GET(nodeid) \ + (NODEPDA_MCD((nodeid))->migr_system_kparms.migr_threshold_reference) +/* + * Get the Absolute Theshold + */ +#define MIGR_THRESHOLD_ABS_GET(nodeid) ( \ + MD_MIG_VALUE_THRESH_GET(COMPACT_TO_NASID_NODEID(nodeid))) +/* + * Get the current Differential Threshold + */ +#define MIGR_THRESHOLD_DIFF_GET(nodeid) \ + (NODEPDA_MCD(nodeid)->migr_as_kparms.migr_base_threshold) + +#define NUM_OF_HW_PAGES_PER_SW_PAGE() (NBPP / MD_PAGE_SIZE) + +// #include "migr_control.h" + +int +mem_refcnt_attach(devfs_handle_t hub) +{ +#ifndef CONFIG_IA64_SGI_SN + devfs_handle_t refcnt_dev; + + hwgraph_char_device_add(hub, + "refcnt", + "hubspc_", + &refcnt_dev); + device_info_set(refcnt_dev, (void*)(ulong)HUBSPC_REFCOUNTERS); +#endif + + return (0); +} + + +/*ARGSUSED*/ +int +mem_refcnt_open(devfs_handle_t *devp, mode_t oflag, int otyp, cred_t *crp) +{ + cnodeid_t node; + + node = master_node_get(*devp); + + ASSERT( (node >= 0) && (node < numnodes) ); + + if (NODEPDA(node)->migr_refcnt_counterbuffer == NULL) { + return (ENODEV); + } + + ASSERT( NODEPDA(node)->migr_refcnt_counterbase != NULL ); + ASSERT( NODEPDA(node)->migr_refcnt_cbsize != (size_t)0 ); + + return (0); +} + +/*ARGSUSED*/ +int +mem_refcnt_close(devfs_handle_t dev, int oflag, int otyp, cred_t *crp) +{ + return 0; +} + +/*ARGSUSED*/ +int +mem_refcnt_mmap(devfs_handle_t dev, vhandl_t *vt, off_t off, size_t len, uint prot) +{ + cnodeid_t node; + int errcode; + char* buffer; + size_t blen; + + node = master_node_get(dev); + + ASSERT( (node >= 0) && (node < numnodes) ); + + ASSERT( NODEPDA(node)->migr_refcnt_counterbuffer != NULL); + ASSERT( NODEPDA(node)->migr_refcnt_counterbase != NULL ); + ASSERT( NODEPDA(node)->migr_refcnt_cbsize != 0 ); + + /* + * XXXX deal with prot's somewhere around here.... + */ + + buffer = NODEPDA(node)->migr_refcnt_counterbuffer; + blen = NODEPDA(node)->migr_refcnt_cbsize; + + /* + * Force offset to be a multiple of sizeof(refcnt_t) + * We round up. 
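+ *
+ * Worked example, assuming sizeof(refcnt_t) == 8 purely for illustration:
+ * off = 10 becomes ((10 - 1) / 8 + 1) * 8 = 16, while an already aligned
+ * off = 16 stays ((16 - 1) / 8 + 1) * 8 = 16.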
+ */ + + off = (((off - 1)/sizeof(refcnt_t)) + 1) * sizeof(refcnt_t); + + if ( ((buffer + blen) - (buffer + off + len)) < 0 ) { + return (EPERM); + } + + errcode = v_mapphys(vt, + buffer + off, + len); + + return errcode; +} + +/*ARGSUSED*/ +int +mem_refcnt_unmap(devfs_handle_t dev, vhandl_t *vt) +{ + return 0; +} + +/* ARGSUSED */ +int +mem_refcnt_ioctl(devfs_handle_t dev, + int cmd, + void *arg, + int mode, + cred_t *cred_p, + int *rvalp) +{ + cnodeid_t node; + int errcode; + extern int numnodes; + + node = master_node_get(dev); + + ASSERT( (node >= 0) && (node < numnodes) ); + + ASSERT( NODEPDA(node)->migr_refcnt_counterbuffer != NULL); + ASSERT( NODEPDA(node)->migr_refcnt_counterbase != NULL ); + ASSERT( NODEPDA(node)->migr_refcnt_cbsize != 0 ); + + errcode = 0; + + switch (cmd) { + case RCB_INFO_GET: + { + rcb_info_t rcb; + + rcb.rcb_len = NODEPDA(node)->migr_refcnt_cbsize; + + rcb.rcb_sw_sets = NODEPDA(node)->migr_refcnt_numsets; + rcb.rcb_sw_counters_per_set = numnodes; + rcb.rcb_sw_counter_size = sizeof(refcnt_t); + + rcb.rcb_base_pages = NODEPDA(node)->migr_refcnt_numsets / + NUM_OF_HW_PAGES_PER_SW_PAGE(); + rcb.rcb_base_page_size = NBPP; + rcb.rcb_base_paddr = ctob(slot_getbasepfn(node, 0)); + + rcb.rcb_cnodeid = node; + rcb.rcb_granularity = MD_PAGE_SIZE; +#ifdef LATER + rcb.rcb_hw_counter_max = MIGR_COUNTER_MAX_GET(node); + rcb.rcb_diff_threshold = MIGR_THRESHOLD_DIFF_GET(node); +#endif + rcb.rcb_abs_threshold = MIGR_THRESHOLD_ABS_GET(node); + rcb.rcb_num_slots = MAX_MEM_SLOTS; + + if (COPYOUT(&rcb, arg, sizeof(rcb_info_t))) { + errcode = EFAULT; + } + + break; + } + case RCB_SLOT_GET: + { + rcb_slot_t slot[MAX_MEM_SLOTS]; + int s; + int nslots; + + nslots = MAX_MEM_SLOTS; + ASSERT(nslots <= MAX_MEM_SLOTS); + for (s = 0; s < nslots; s++) { + slot[s].base = (uint64_t)ctob(slot_getbasepfn(node, s)); +#ifdef LATER + slot[s].size = (uint64_t)ctob(slot_getsize(node, s)); +#else + slot[s].size = (uint64_t)1; +#endif + } + if (COPYOUT(&slot[0], arg, nslots * sizeof(rcb_slot_t))) { + errcode = EFAULT; + } + + *rvalp = nslots; + break; + } + + default: + errcode = EINVAL; + break; + + } + + return errcode; +} diff -Nru a/arch/ia64/sn/io/sn1/ml_SN_intr.c b/arch/ia64/sn/io/sn1/ml_SN_intr.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn1/ml_SN_intr.c Tue Mar 12 13:58:14 2002 @@ -0,0 +1,1154 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +/* + * intr.c- + * This file contains all of the routines necessary to set up and + * handle interrupts on an IP27 board. + */ + +#ident "$Revision: 1.167 $" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +#if DEBUG_INTR_TSTAMP_DEBUG +#include +#include +#include +void do_splx_log(int, int); +void spldebug_log_event(int); +#endif + +#ifdef CONFIG_SMP +extern unsigned long cpu_online_map; +#endif +#define cpu_allows_intr(cpu) (1) +// If I understand what's going on with this, 32 should work. +// physmem_maxradius seems to be the maximum number of router +// hops to get from one end of the system to the other. With +// a maximally configured machine, with the dumbest possible +// topology, we would make 32 router hops. 
For what we're using +// it for, the dumbest possible should suffice. +#define physmem_maxradius() 32 + +#define SUBNODE_ANY (-1) + +extern int nmied; +extern int hub_intr_wakeup_cnt; +extern synergy_da_t *Synergy_da_indr[]; +extern cpuid_t master_procid; + +extern cnodeid_t master_node_get(devfs_handle_t vhdl); + +extern void snia_error_intr_handler(int irq, void *devid, struct pt_regs *pt_regs); + + +#define INTR_LOCK(vecblk) \ + (s = mutex_spinlock(&(vecblk)->vector_lock)) +#define INTR_UNLOCK(vecblk) \ + mutex_spinunlock(&(vecblk)->vector_lock, s) + +/* + * REACT/Pro + */ + + + +/* + * Find first bit set + * Used outside this file also + */ +int ms1bit(unsigned long x) +{ + int b; + + if (x >> 32) b = 32, x >>= 32; + else b = 0; + if (x >> 16) b += 16, x >>= 16; + if (x >> 8) b += 8, x >>= 8; + if (x >> 4) b += 4, x >>= 4; + if (x >> 2) b += 2, x >>= 2; + + return b + (int) (x >> 1); +} + +/* ARGSUSED */ +void +intr_stray(void *lvl) +{ + printk(KERN_WARNING "Stray Interrupt - level %ld to cpu %d", (long)lvl, smp_processor_id()); +} + +#if defined(DEBUG) + +/* Infrastructure to gather the device - target cpu mapping info */ +#define MAX_DEVICES 1000 /* Reasonable large number . Need not be + * the exact maximum # devices possible. + */ +#define MAX_NAME 100 +typedef struct { + dev_t dev; /* device */ + cpuid_t cpuid; /* target cpu */ + cnodeid_t cnodeid;/* node on which the target cpu is present */ + int bit; /* intr bit reserved */ + char intr_name[MAX_NAME]; /* name of the interrupt */ +} intr_dev_targ_map_t; + +intr_dev_targ_map_t intr_dev_targ_map[MAX_DEVICES]; +uint64_t intr_dev_targ_map_size; +spinlock_t intr_dev_targ_map_lock; + +/* Print out the device - target cpu mapping. + * This routine is used only in the idbg command + * "intrmap" + */ +void +intr_dev_targ_map_print(cnodeid_t cnodeid) +{ + int i,j,size = 0; + int print_flag = 0,verbose = 0; + char node_name[10]; + + if (cnodeid != CNODEID_NONE) { + nodepda_t *npda; + + npda = NODEPDA(cnodeid); + for (j=0; jintr_dispatch0.info[i].ii_flags); + qprintf("\n INT_PEND1: "); + for(i = 0 ; i < N_INTPEND_BITS ; i++) + qprintf("%d",SNPDA(npda,j)->intr_dispatch1.info[i].ii_flags); + } + verbose = 1; + } + qprintf("\n Device - Target Map [Interrupts: %s Node%s]\n\n", + (verbose ? "All" : "Non-hardwired"), + (cnodeid == CNODEID_NONE) ? "s: All" : node_name); + + qprintf("Device\tCpu\tCnode\tIntr_bit\tIntr_name\n"); + for (i = 0 ; i < intr_dev_targ_map_size ; i++) { + + print_flag = 0; + if (verbose) { + if (cnodeid != CNODEID_NONE) { + if (cnodeid == intr_dev_targ_map[i].cnodeid) + print_flag = 1; + } else { + print_flag = 1; + } + } else { + if (intr_dev_targ_map[i].dev != 0) { + if (cnodeid != CNODEID_NONE) { + if (cnodeid == + intr_dev_targ_map[i].cnodeid) + print_flag = 1; + } else { + print_flag = 1; + } + } + } + if (print_flag) { + size++; + qprintf("%d\t%d\t%d\t%d\t%s\n", + intr_dev_targ_map[i].dev, + intr_dev_targ_map[i].cpuid, + intr_dev_targ_map[i].cnodeid, + intr_dev_targ_map[i].bit, + intr_dev_targ_map[i].intr_name); + } + + } + qprintf("\nTotal : %d\n",size); +} +#endif /* DEBUG */ + +/* + * The spinlocks have already been initialized. Now initialize the interrupt + * vectors. One processor on each hub does the work. + */ +void +intr_init_vecblk(nodepda_t *npda, cnodeid_t node, int sn) +{ + int i, ip=0; + intr_vecblk_t *vecblk; + subnode_pda_t *snpda; + + + snpda = SNPDA(npda,sn); + do { + if (ip == 0) { + vecblk = &snpda->intr_dispatch0; + } else { + vecblk = &snpda->intr_dispatch1; + } + + /* Initialize this vector. 
*/ + for (i = 0; i < N_INTPEND_BITS; i++) { + vecblk->vectors[i].iv_func = intr_stray; + vecblk->vectors[i].iv_prefunc = NULL; + vecblk->vectors[i].iv_arg = (void *)(__psint_t)(ip * N_INTPEND_BITS + i); + + vecblk->info[i].ii_owner_dev = 0; + strcpy(vecblk->info[i].ii_name, "Unused"); + vecblk->info[i].ii_flags = 0; /* No flags */ + vecblk->vectors[i].iv_mustruncpu = -1; /* No CPU yet. */ + + } + + mutex_spinlock_init(&vecblk->vector_lock); + + vecblk->vector_count = 0; + for (i = 0; i < CPUS_PER_SUBNODE; i++) + vecblk->cpu_count[i] = 0; + + vecblk->vector_state = VECTOR_UNINITED; + + } while (++ip < 2); + +} + + +/* + * do_intr_reserve_level(cpuid_t cpu, int bit, int resflags, int reserve, + * devfs_handle_t owner_dev, char *name) + * Internal work routine to reserve or unreserve an interrupt level. + * cpu is the CPU to which the interrupt will be sent. + * bit is the level bit to reserve. -1 means any level + * resflags should include II_ERRORINT if this is an + * error interrupt, II_THREADED if the interrupt handler + * will be threaded, or 0 otherwise. + * reserve should be set to II_RESERVE or II_UNRESERVE + * to get or clear a reservation. + * owner_dev is the device that "owns" this interrupt, if supplied + * name is a human-readable name for this interrupt, if supplied + * intr_reserve_level returns the bit reserved or -1 to indicate an error + */ +static int +do_intr_reserve_level(cpuid_t cpu, int bit, int resflags, int reserve, + devfs_handle_t owner_dev, char *name) +{ + intr_vecblk_t *vecblk; + hub_intmasks_t *hub_intmasks; + unsigned long s; + int rv = 0; + int ip; + synergy_da_t *sda; + int which_synergy; + cnodeid_t cnode; + + ASSERT(bit < N_INTPEND_BITS * 2); + + cnode = cpuid_to_cnodeid(cpu); + which_synergy = cpuid_to_synergy(cpu); + sda = Synergy_da_indr[(cnode * 2) + which_synergy]; + hub_intmasks = &sda->s_intmasks; + // hub_intmasks = &pdaindr[cpu].pda->p_intmasks; + + // if (pdaindr[cpu].pda == NULL) return -1; + if ((bit < N_INTPEND_BITS) && !(resflags & II_ERRORINT)) { + vecblk = hub_intmasks->dispatch0; + ip = 0; + } else { + ASSERT((bit >= N_INTPEND_BITS) || (bit == -1)); + bit -= N_INTPEND_BITS; /* Get position relative to INT_PEND1 reg. */ + vecblk = hub_intmasks->dispatch1; + ip = 1; + } + + INTR_LOCK(vecblk); + + if (bit <= -1) { + bit = 0; + ASSERT(reserve == II_RESERVE); + /* Choose any available level */ + for (; bit < N_INTPEND_BITS; bit++) { + if (!(vecblk->info[bit].ii_flags & II_RESERVE)) { + rv = bit; + break; + } + } + + /* Return -1 if all interrupt levels int this register are taken. */ + if (bit == N_INTPEND_BITS) + rv = -1; + + } else { + /* Reserve a particular level if it's available. */ + if ((vecblk->info[bit].ii_flags & II_RESERVE) == reserve) { + /* Can't (un)reserve a level that's already (un)reserved. */ + rv = -1; + } else { + rv = bit; + } + } + + /* Reserve the level and bump the count. */ + if (rv != -1) { + if (reserve) { + int maxlen = sizeof(vecblk->info[bit].ii_name) - 1; + int namelen; + vecblk->info[bit].ii_flags |= (II_RESERVE | resflags); + vecblk->info[bit].ii_owner_dev = owner_dev; + /* Copy in the name. */ + namelen = name ? strlen(name) : 0; + strncpy(vecblk->info[bit].ii_name, name, min(namelen, maxlen)); + vecblk->info[bit].ii_name[maxlen] = '\0'; + vecblk->vector_count++; + } else { + vecblk->info[bit].ii_flags = 0; /* Clear all the flags */ + vecblk->info[bit].ii_owner_dev = 0; + /* Clear the name. 
*/ + vecblk->info[bit].ii_name[0] = '\0'; + vecblk->vector_count--; + } + } + + INTR_UNLOCK(vecblk); + +#if defined(DEBUG) + if (rv >= 0) { + int namelen = name ? strlen(name) : 0; + /* Gather this device - target cpu mapping information + * in a table which can be used later by the idbg "intrmap" + * command + */ + s = mutex_spinlock(&intr_dev_targ_map_lock); + if (intr_dev_targ_map_size < MAX_DEVICES) { + intr_dev_targ_map_t *p; + + p = &intr_dev_targ_map[intr_dev_targ_map_size]; + p->dev = owner_dev; + p->cpuid = cpu; + p->cnodeid = cpuid_to_cnodeid(cpu); + p->bit = ip * N_INTPEND_BITS + rv; + strncpy(p->intr_name, + name, + min(MAX_NAME,namelen)); + intr_dev_targ_map_size++; + } + mutex_spinunlock(&intr_dev_targ_map_lock,s); + } +#endif /* DEBUG */ + + return (((rv == -1) ? rv : (ip * N_INTPEND_BITS) + rv)) ; +} + + +/* + * WARNING: This routine should only be called from within ml/SN. + * Reserve an interrupt level. + */ +int +intr_reserve_level(cpuid_t cpu, int bit, int resflags, devfs_handle_t owner_dev, char *name) +{ + return(do_intr_reserve_level(cpu, bit, resflags, II_RESERVE, owner_dev, name)); +} + + +/* + * WARNING: This routine should only be called from within ml/SN. + * Unreserve an interrupt level. + */ +void +intr_unreserve_level(cpuid_t cpu, int bit) +{ + (void)do_intr_reserve_level(cpu, bit, 0, II_UNRESERVE, 0, NULL); +} + +/* + * Get values that vary depending on which CPU and bit we're operating on + */ +static hub_intmasks_t * +intr_get_ptrs(cpuid_t cpu, int bit, + int *new_bit, /* Bit relative to the register */ + hubreg_t **intpend_masks, /* Masks for this register */ + intr_vecblk_t **vecblk, /* Vecblock for this interrupt */ + int *ip) /* Which intpend register */ +{ + hub_intmasks_t *hub_intmasks; + synergy_da_t *sda; + int which_synergy; + cnodeid_t cnode; + + ASSERT(bit < N_INTPEND_BITS * 2); + + cnode = cpuid_to_cnodeid(cpu); + which_synergy = cpuid_to_synergy(cpu); + sda = Synergy_da_indr[(cnode * 2) + which_synergy]; + hub_intmasks = &sda->s_intmasks; + + // hub_intmasks = &pdaindr[cpu].pda->p_intmasks; + + if (bit < N_INTPEND_BITS) { + *intpend_masks = hub_intmasks->intpend0_masks; + *vecblk = hub_intmasks->dispatch0; + *ip = 0; + *new_bit = bit; + } else { + *intpend_masks = hub_intmasks->intpend1_masks; + *vecblk = hub_intmasks->dispatch1; + *ip = 1; + *new_bit = bit - N_INTPEND_BITS; + } + + return hub_intmasks; +} + + +/* + * intr_connect_level(cpuid_t cpu, int bit, ilvl_t intr_swlevel, + * intr_func_t intr_func, void *intr_arg); + * This is the lowest-level interface to the interrupt code. It shouldn't + * be called from outside the ml/SN directory. + * intr_connect_level hooks up an interrupt to a particular bit in + * the INT_PEND0/1 masks. Returns 0 on success. + * cpu is the CPU to which the interrupt will be sent. + * bit is the level bit to connect to + * intr_swlevel tells which software level to use + * intr_func is the interrupt handler + * intr_arg is an arbitrary argument interpreted by the handler + * intr_prefunc is a prologue function, to be called + * with interrupts disabled, to disable + * the interrupt at source. It is called + * with the same argument. Should be NULL for + * typical interrupts, which can be masked + * by the infrastructure at the level bit. 
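+ *
+ * A typical call, matching the definition below and the hubii_eint_init()
+ * caller in huberror.c, is simply (illustrative only):
+ *
+ *	rv = intr_connect_level(cpu, bit, 0, (intr_func_t)NULL);
+ *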
+ * intr_connect_level returns 0 on success or nonzero on an error + */ +/* ARGSUSED */ +int +intr_connect_level(cpuid_t cpu, int bit, ilvl_t intr_swlevel, intr_func_t intr_prefunc) +{ + intr_vecblk_t *vecblk; + hubreg_t *intpend_masks; + int rv = 0; + int ip; + unsigned long s; + + ASSERT(bit < N_INTPEND_BITS * 2); + + (void)intr_get_ptrs(cpu, bit, &bit, &intpend_masks, + &vecblk, &ip); + + INTR_LOCK(vecblk); + + if ((vecblk->info[bit].ii_flags & II_INUSE) || + (!(vecblk->info[bit].ii_flags & II_RESERVE))) { + /* Can't assign to a level that's in use or isn't reserved. */ + rv = -1; + } else { + /* Stuff parameters into vector and info */ + vecblk->vectors[bit].iv_prefunc = intr_prefunc; + vecblk->info[bit].ii_flags |= II_INUSE; + } + + /* Now stuff the masks if everything's okay. */ + if (!rv) { + int lslice; + volatile hubreg_t *mask_reg; + // nasid_t nasid = COMPACT_TO_NASID_NODEID(cpuid_to_cnodeid(cpu)); + nasid_t nasid = cpuid_to_nasid(cpu); + int subnode = cpuid_to_subnode(cpu); + + /* Make sure it's not already pending when we connect it. */ + REMOTE_HUB_PI_CLR_INTR(nasid, subnode, bit + ip * N_INTPEND_BITS); + + if (bit >= GFX_INTR_A && bit <= CC_PEND_B) { + intpend_masks[0] |= (1ULL << (uint64_t)bit); + } + + lslice = cpuid_to_localslice(cpu); + vecblk->cpu_count[lslice]++; +#if SN1 + /* + * On SN1, there are 8 interrupt mask registers per node: + * PI_0 MASK_0 A + * PI_0 MASK_1 A + * PI_0 MASK_0 B + * PI_0 MASK_1 B + * PI_1 MASK_0 A + * PI_1 MASK_1 A + * PI_1 MASK_0 B + * PI_1 MASK_1 B + */ +#endif + if (ip == 0) { + mask_reg = REMOTE_HUB_PI_ADDR(nasid, subnode, + PI_INT_MASK0_A + PI_INT_MASK_OFFSET * lslice); + } else { + mask_reg = REMOTE_HUB_PI_ADDR(nasid, subnode, + PI_INT_MASK1_A + PI_INT_MASK_OFFSET * lslice); + } + + HUB_S(mask_reg, intpend_masks[0]); + } + + INTR_UNLOCK(vecblk); + + return rv; +} + + +/* + * intr_disconnect_level(cpuid_t cpu, int bit) + * + * This is the lowest-level interface to the interrupt code. It should + * not be called from outside the ml/SN directory. + * intr_disconnect_level removes a particular bit from an interrupt in + * the INT_PEND0/1 masks. Returns 0 on success or nonzero on failure. + */ +int +intr_disconnect_level(cpuid_t cpu, int bit) +{ + intr_vecblk_t *vecblk; + hubreg_t *intpend_masks; + unsigned long s; + int rv = 0; + int ip; + + (void)intr_get_ptrs(cpu, bit, &bit, &intpend_masks, + &vecblk, &ip); + + INTR_LOCK(vecblk); + + if ((vecblk->info[bit].ii_flags & (II_RESERVE | II_INUSE)) != + ((II_RESERVE | II_INUSE))) { + /* Can't remove a level that's not in use or isn't reserved. */ + rv = -1; + } else { + /* Stuff parameters into vector and info */ + vecblk->vectors[bit].iv_func = (intr_func_t)NULL; + vecblk->vectors[bit].iv_prefunc = (intr_func_t)NULL; + vecblk->vectors[bit].iv_arg = 0; + vecblk->info[bit].ii_flags &= ~II_INUSE; +#ifdef BASE_ITHRTEAD + vecblk->vectors[bit].iv_mustruncpu = -1; /* No mustrun CPU any more. */ +#endif + } + + /* Now clear the masks if everything's okay. */ + if (!rv) { + int lslice; + volatile hubreg_t *mask_reg; + + intpend_masks[0] &= ~(1ULL << (uint64_t)bit); + lslice = cpuid_to_localslice(cpu); + vecblk->cpu_count[lslice]--; + mask_reg = REMOTE_HUB_PI_ADDR(COMPACT_TO_NASID_NODEID(cpuid_to_cnodeid(cpu)), + cpuid_to_subnode(cpu), + ip == 0 ? 
PI_INT_MASK0_A : PI_INT_MASK1_A); + mask_reg = (volatile hubreg_t *)((__psunsigned_t)mask_reg + + (PI_INT_MASK_OFFSET * lslice)); + *mask_reg = intpend_masks[0]; + } + + INTR_UNLOCK(vecblk); + + return rv; +} + +/* + * Actually block or unblock an interrupt + */ +void +do_intr_block_bit(cpuid_t cpu, int bit, int block) +{ + intr_vecblk_t *vecblk; + int ip; + unsigned long s; + hubreg_t *intpend_masks; + volatile hubreg_t mask_value; + volatile hubreg_t *mask_reg; + + intr_get_ptrs(cpu, bit, &bit, &intpend_masks, &vecblk, &ip); + + INTR_LOCK(vecblk); + + if (block) + /* Block */ + intpend_masks[0] &= ~(1ULL << (uint64_t)bit); + else + /* Unblock */ + intpend_masks[0] |= (1ULL << (uint64_t)bit); + + if (ip == 0) { + mask_reg = REMOTE_HUB_PI_ADDR(COMPACT_TO_NASID_NODEID(cpuid_to_cnodeid(cpu)), + cpuid_to_subnode(cpu), PI_INT_MASK0_A); + } else { + mask_reg = REMOTE_HUB_PI_ADDR(COMPACT_TO_NASID_NODEID(cpuid_to_cnodeid(cpu)), + cpuid_to_subnode(cpu), PI_INT_MASK1_A); + } + + HUB_S(mask_reg, intpend_masks[0]); + + /* + * Wait for it to take effect. (One read should suffice.) + * This is only necessary when blocking an interrupt + */ + if (block) + while ((mask_value = HUB_L(mask_reg)) != intpend_masks[0]) + ; + + INTR_UNLOCK(vecblk); +} + + +/* + * Block a particular interrupt (cpu/bit pair). + */ +/* ARGSUSED */ +void +intr_block_bit(cpuid_t cpu, int bit) +{ + do_intr_block_bit(cpu, bit, 1); +} + + +/* + * Unblock a particular interrupt (cpu/bit pair). + */ +/* ARGSUSED */ +void +intr_unblock_bit(cpuid_t cpu, int bit) +{ + do_intr_block_bit(cpu, bit, 0); +} + + +/* verifies that the specified CPUID is on the specified SUBNODE (if any) */ +#define cpu_on_subnode(cpuid, which_subnode) \ + (((which_subnode) == SUBNODE_ANY) || (cpuid_to_subnode(cpuid) == (which_subnode))) + + +/* + * Choose one of the CPUs on a specified node or subnode to receive + * interrupts. Don't pick a cpu which has been specified as a NOINTR cpu. + * + * Among all acceptable CPUs, the CPU that has the fewest total number + * of interrupts targetted towards it is chosen. Note that we never + * consider how frequent each of these interrupts might occur, so a rare + * hardware error interrupt is weighted equally with a disk interrupt. + */ +static cpuid_t +do_intr_cpu_choose(cnodeid_t cnode, int which_subnode) +{ + cpuid_t cpu, best_cpu = CPU_NONE; + int slice, min_count=1000; + + min_count = 1000; + for (slice=0; slice < CPUS_PER_NODE; slice++) { + intr_vecblk_t *vecblk0, *vecblk1; + int total_intrs_to_slice; + subnode_pda_t *snpda; + int local_cpu_num; + + cpu = cnode_slice_to_cpuid(cnode, slice); + if (cpu == CPU_NONE) + continue; + + /* If this cpu isn't enabled for interrupts, skip it */ + if (!cpu_enabled(cpu) || !cpu_allows_intr(cpu)) + continue; + + /* If this isn't the right subnode, skip it */ + if (!cpu_on_subnode(cpu, which_subnode)) + continue; + + /* OK, this one's a potential CPU for interrupts */ + snpda = SUBNODEPDA(cnode,SUBNODE(slice)); + vecblk0 = &snpda->intr_dispatch0; + vecblk1 = &snpda->intr_dispatch1; + local_cpu_num = LOCALCPU(slice); + total_intrs_to_slice = vecblk0->cpu_count[local_cpu_num] + + vecblk1->cpu_count[local_cpu_num]; + + if (min_count > total_intrs_to_slice) { + min_count = total_intrs_to_slice; + best_cpu = cpu; + } + } + return best_cpu; +} + +/* + * Choose an appropriate interrupt target CPU on a specified node. + * If which_subnode is SUBNODE_ANY, then subnode is not considered. + * Otherwise, the chosen CPU must be on the specified subnode. 
+ */ +static cpuid_t +intr_cpu_choose_from_node(cnodeid_t cnode, int which_subnode) +{ + return(do_intr_cpu_choose(cnode, which_subnode)); +} + + +/* Make it easy to identify subnode vertices in the hwgraph */ +void +mark_subnodevertex_as_subnode(devfs_handle_t vhdl, int which_subnode) +{ + graph_error_t rv; + + ASSERT(0 <= which_subnode); + ASSERT(which_subnode < NUM_SUBNODES); + + rv = hwgraph_info_add_LBL(vhdl, INFO_LBL_CPUBUS, (arbitrary_info_t)which_subnode); + ASSERT_ALWAYS(rv == GRAPH_SUCCESS); + + rv = hwgraph_info_export_LBL(vhdl, INFO_LBL_CPUBUS, sizeof(arbitrary_info_t)); + ASSERT_ALWAYS(rv == GRAPH_SUCCESS); +} + + +/* + * Given a device descriptor, extract interrupt target information and + * choose an appropriate CPU. Return CPU_NONE if we can't make sense + * out of the target information. + * TBD: Should this be considered platform-independent code? + */ + + +/* + * intr_bit_reserve_test(cpuid,which_subnode,cnode,req_bit,intr_resflags, + * owner_dev,intr_name,*resp_bit) + * Either cpuid is not CPU_NONE or cnodeid not CNODE_NONE but + * not both. + * 1. If cpuid is specified, this routine tests if this cpu can be a valid + * interrupt target candidate. + * 2. If cnodeid is specified, this routine tests if there is a cpu on + * this node which can be a valid interrupt target candidate. + * 3. If a valid interrupt target cpu candidate is found then an attempt at + * reserving an interrupt bit on the corresponding cnode is made. + * + * If steps 1 & 2 both fail or step 3 fails then we are not able to get a valid + * interrupt target cpu then routine returns CPU_NONE (failure) + * Otherwise routine returns cpuid of interrupt target (success) + */ +static cpuid_t +intr_bit_reserve_test(cpuid_t cpuid, + int favor_subnode, + cnodeid_t cnodeid, + int req_bit, + int intr_resflags, + devfs_handle_t owner_dev, + char *intr_name, + int *resp_bit) +{ + + ASSERT((cpuid==CPU_NONE) || (cnodeid==CNODEID_NONE)); + + if (cnodeid != CNODEID_NONE) { + /* Try to choose a interrupt cpu candidate */ + cpuid = intr_cpu_choose_from_node(cnodeid, favor_subnode); + } + + if (cpuid != CPU_NONE) { + /* Try to reserve an interrupt bit on the hub + * corresponding to the canidate cnode. If we + * are successful then we got a cpu which can + * act as an interrupt target for the io device. + * Otherwise we need to continue the search + * further. + */ + *resp_bit = do_intr_reserve_level(cpuid, + req_bit, + intr_resflags, + II_RESERVE, + owner_dev, + intr_name); + + if (*resp_bit >= 0) + /* The interrupt target specified was fine */ + return(cpuid); + } + return(CPU_NONE); +} +/* + * intr_heuristic(dev_t dev,device_desc_t dev_desc, + * int req_bit,int intr_resflags,dev_t owner_dev, + * char *intr_name,int *resp_bit) + * + * Choose an interrupt destination for an interrupt. 
+ * dev is the device for which the interrupt is being set up + * dev_desc is a description of hardware and policy that could + * help determine where this interrupt should go + * req_bit is the interrupt bit requested + * (can be INTRCONNECT_ANY_BIT in which the first available + * interrupt bit is used) + * intr_resflags indicates whether we want to (un)reserve bit + * owner_dev is the owner device + * intr_name is the readable interrupt name + * resp_bit indicates whether we succeeded in getting the required + * action { (un)reservation} done + * negative value indicates failure + * + */ +/* ARGSUSED */ +cpuid_t +intr_heuristic(devfs_handle_t dev, + device_desc_t dev_desc, + int req_bit, + int intr_resflags, + devfs_handle_t owner_dev, + char *intr_name, + int *resp_bit) +{ + cpuid_t cpuid; /* possible intr targ*/ + cnodeid_t candidate; /* possible canidate */ + int which_subnode = SUBNODE_ANY; + +/* SN1 + pcibr Addressing Limitation */ + { + devfs_handle_t pconn_vhdl; + pcibr_soft_t pcibr_soft; + + /* + * This combination of SN1 and Bridge hardware has an odd "limitation". + * Due to the choice of addresses for PI0 and PI1 registers on SN1 + * and historical limitations in Bridge, Bridge is unable to + * send interrupts to both PI0 CPUs and PI1 CPUs -- we have + * to choose one set or the other. That choice is implicitly + * made when Bridge first attaches its error interrupt. After + * that point, all subsequent interrupts are restricted to the + * same PI number (though it's possible to send interrupts to + * the same PI number on a different node). + * + * Since neither SN1 nor Bridge designers are willing to admit a + * bug, we can't really call this a "workaround". It's a permanent + * solution for an SN1-specific and Bridge-specific hardware + * limitation that won't ever be lifted. + */ + if ((hwgraph_edge_get(dev, EDGE_LBL_PCI, &pconn_vhdl) == GRAPH_SUCCESS) && + ((pcibr_soft = pcibr_soft_get(pconn_vhdl)) != NULL)) { + /* + * We "know" that the error interrupt is the first + * interrupt set up by pcibr_attach. Send all interrupts + * on this bridge to the same subnode number. + */ + if (pcibr_soft->bsi_err_intr) { + which_subnode = cpuid_to_subnode(((hub_intr_t) pcibr_soft->bsi_err_intr)->i_cpuid); + } + } + } + + /* Check if we can find a valid interrupt target candidate on + * the master node for the device. + */ + cpuid = intr_bit_reserve_test(CPU_NONE, + which_subnode, + master_node_get(dev), + req_bit, + intr_resflags, + owner_dev, + intr_name, + resp_bit); + + if (cpuid != CPU_NONE) { + if (cpu_on_subnode(cpuid, which_subnode)) + return(cpuid); /* got a valid interrupt target */ + else + intr_unreserve_level(cpuid, *resp_bit); + } + + printk(KERN_WARNING "Cannot target interrupts to closest node(%d): (0x%lx)\n", + master_node_get(dev),(unsigned long)owner_dev); + + /* Fall through into the default algorithm + * (exhaustive-search-for-the-nearest-possible-interrupt-target) + * for finding the interrupt target + */ + + { + /* + * Do a stupid round-robin assignment of the node. + * (Should do a "nearest neighbor" but not for SN1. 
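+ *
+ * Illustrative walk order: with numnodes == 4 and last_node == 2, the loop
+ * below tries candidate nodes 3, 0 and 1, and stops before coming back
+ * around to node 2.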
+ */ + static cnodeid_t last_node = -1; + + if (last_node >= numnodes) last_node = 0; + for (candidate = last_node + 1; candidate != last_node; candidate++) { + if (candidate == numnodes) candidate = 0; + cpuid = intr_bit_reserve_test(CPU_NONE, + which_subnode, + candidate, + req_bit, + intr_resflags, + owner_dev, + intr_name, + resp_bit); + + if (cpuid != CPU_NONE) { + if (cpu_on_subnode(cpuid, which_subnode)) { + last_node = candidate; + return(cpuid); /* got a valid interrupt target */ + } + else + intr_unreserve_level(cpuid, *resp_bit); + } + } + last_node = candidate; + } + + printk(KERN_WARNING "Cannot target interrupts to any close node: %ld (0x%lx)\n", + (long)owner_dev, (unsigned long)owner_dev); + + /* In the worst case try to allocate interrupt bits on the + * master processor's node. We may get here during error interrupt + * allocation phase when the topology matrix is not yet setup + * and hence cannot do an exhaustive search. + */ + ASSERT(cpu_allows_intr(master_procid)); + cpuid = intr_bit_reserve_test(master_procid, + which_subnode, + CNODEID_NONE, + req_bit, + intr_resflags, + owner_dev, + intr_name, + resp_bit); + + if (cpuid != CPU_NONE) { + if (cpu_on_subnode(cpuid, which_subnode)) + return(cpuid); + else + intr_unreserve_level(cpuid, *resp_bit); + } + + printk(KERN_WARNING "Cannot target interrupts: (0x%lx)\n", + (unsigned long)owner_dev); + + return(CPU_NONE); /* Should never get here */ +} + +struct hardwired_intr_s { + signed char level; + int flags; + char *name; +} const hardwired_intr[] = { + { INT_PEND0_BASELVL + RESERVED_INTR, 0, "Reserved" }, + { INT_PEND0_BASELVL + GFX_INTR_A, 0, "Gfx A" }, + { INT_PEND0_BASELVL + GFX_INTR_B, 0, "Gfx B" }, + { INT_PEND0_BASELVL + PG_MIG_INTR, II_THREADED, "Migration" }, + { INT_PEND0_BASELVL + UART_INTR, II_THREADED, "Bedrock/L1" }, + { INT_PEND0_BASELVL + CC_PEND_A, 0, "Crosscall A" }, + { INT_PEND0_BASELVL + CC_PEND_B, 0, "Crosscall B" }, + { INT_PEND1_BASELVL + CLK_ERR_INTR, II_ERRORINT, "Clock Error" }, + { INT_PEND1_BASELVL + COR_ERR_INTR_A, II_ERRORINT, "Correctable Error A" }, + { INT_PEND1_BASELVL + COR_ERR_INTR_B, II_ERRORINT, "Correctable Error B" }, + { INT_PEND1_BASELVL + MD_COR_ERR_INTR, II_ERRORINT, "MD Correct. Error" }, + { INT_PEND1_BASELVL + NI_ERROR_INTR, II_ERRORINT, "NI Error" }, + { INT_PEND1_BASELVL + NI_BRDCAST_ERR_A, II_ERRORINT, "Remote NI Error"}, + { INT_PEND1_BASELVL + NI_BRDCAST_ERR_B, II_ERRORINT, "Remote NI Error"}, + { INT_PEND1_BASELVL + MSC_PANIC_INTR, II_ERRORINT, "MSC Panic" }, + { INT_PEND1_BASELVL + LLP_PFAIL_INTR_A, II_ERRORINT, "LLP Pfail WAR" }, + { INT_PEND1_BASELVL + LLP_PFAIL_INTR_B, II_ERRORINT, "LLP Pfail WAR" }, + { INT_PEND1_BASELVL + NACK_INT_A, 0, "CPU A Nack count == NACK_CMP" }, + { INT_PEND1_BASELVL + NACK_INT_B, 0, "CPU B Nack count == NACK_CMP" }, + { INT_PEND1_BASELVL + LB_ERROR, 0, "Local Block Error" }, + { INT_PEND1_BASELVL + XB_ERROR, 0, "Local XBar Error" }, + { -1, 0, (char *)NULL}, +}; + +/* + * Reserve all of the hardwired interrupt levels so they're not used as + * general purpose bits later. 
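+ *
+ * Each {level, flags, name} entry in hardwired_intr[] above describes one
+ * such level; the table is terminated by the entry whose level is -1.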
+ */ +void +intr_reserve_hardwired(cnodeid_t cnode) +{ + cpuid_t cpu; + int level; + int i; + char subnode_done[NUM_SUBNODES]; + + // cpu = cnodetocpu(cnode); + for (cpu = 0; cpu < smp_num_cpus; cpu++) { + if (cpuid_to_cnodeid(cpu) == cnode) { + break; + } + } + if (cpu == smp_num_cpus) cpu = CPU_NONE; + if (cpu == CPU_NONE) { + printk("Node %d has no CPUs", cnode); + return; + } + + for (i=0; iii_name, + vector->iv_func, vector->iv_arg, vector->iv_prefunc); + pf(" vertex 0x%x %s%s", + info->ii_owner_dev, + ((info->ii_flags) & II_RESERVE) ? "R" : "U", + ((info->ii_flags) & II_INUSE) ? "C" : "-"); + pf("%s%s%s%s", + ip & value ? "P" : "-", + ima & value ? "A" : "-", + imb & value ? "B" : "-", + ((info->ii_flags) & II_ERRORINT) ? "E" : "-"); + pf("\n"); +} + + +/* + * Dump information about interrupt vector assignment. + */ +void +intr_dumpvec(cnodeid_t cnode, void (*pf)(char *, ...)) +{ + nodepda_t *npda; + int ip, sn, bit; + intr_vecblk_t *dispatch; + hubreg_t ipr, ima, imb; + nasid_t nasid; + + if ((cnode < 0) || (cnode >= numnodes)) { + pf("intr_dumpvec: cnodeid out of range: %d\n", cnode); + return ; + } + + nasid = COMPACT_TO_NASID_NODEID(cnode); + + if (nasid == INVALID_NASID) { + pf("intr_dumpvec: Bad cnodeid: %d\n", cnode); + return ; + } + + + npda = NODEPDA(cnode); + + for (sn = 0; sn < NUM_SUBNODES; sn++) { + for (ip = 0; ip < 2; ip++) { + dispatch = ip ? &(SNPDA(npda,sn)->intr_dispatch1) : &(SNPDA(npda,sn)->intr_dispatch0); + ipr = REMOTE_HUB_PI_L(nasid, sn, ip ? PI_INT_PEND1 : PI_INT_PEND0); + ima = REMOTE_HUB_PI_L(nasid, sn, ip ? PI_INT_MASK1_A : PI_INT_MASK0_A); + imb = REMOTE_HUB_PI_L(nasid, sn, ip ? PI_INT_MASK1_B : PI_INT_MASK0_B); + + pf("Node %d INT_PEND%d:\n", cnode, ip); + + if (dispatch->ithreads_enabled) + pf(" Ithreads enabled\n"); + else + pf(" Ithreads disabled\n"); + pf(" vector_count = %d, vector_state = %d\n", + dispatch->vector_count, + dispatch->vector_state); + pf(" CPU A count %d, CPU B count %d\n", + dispatch->cpu_count[0], + dispatch->cpu_count[1]); + pf(" &vector_lock = 0x%x\n", + &(dispatch->vector_lock)); + for (bit = 0; bit < N_INTPEND_BITS; bit++) { + if ((dispatch->info[bit].ii_flags & II_RESERVE) || + (ipr & (1L << bit))) { + dump_vector(&(dispatch->info[bit]), + &(dispatch->vectors[bit]), + bit, ipr, ima, imb, pf); + } + } + pf("\n"); + } + } +} + diff -Nru a/arch/ia64/sn/io/sn1/pcibr.c b/arch/ia64/sn/io/sn1/pcibr.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn1/pcibr.c Tue Mar 12 13:58:14 2002 @@ -0,0 +1,7950 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. 
+ */ + +int NeedXbridgeSwap = 0; + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef __ia64 +#define rmallocmap atemapalloc +#define rmfreemap atemapfree +#define rmfree atefree +#define rmalloc atealloc +#endif + +extern boolean_t is_sys_critical_vertex(devfs_handle_t); + +#undef PCIBR_ATE_DEBUG + +#if 0 +#define DEBUG 1 /* To avoid lots of bad printk() formats leave off */ +#endif +#define PCI_DEBUG 1 +#define ATTACH_DEBUG 1 +#define PCIBR_SOFT_LIST 1 + +#ifndef LOCAL +#define LOCAL static +#endif + +/* + * Macros related to the Lucent USS 302/312 usb timeout workaround. It + * appears that if the lucent part can get into a retry loop if it sees a + * DAC on the bus during a pio read retry. The loop is broken after about + * 1ms, so we need to set up bridges holding this part to allow at least + * 1ms for pio. + */ + +#define USS302_TIMEOUT_WAR + +#ifdef USS302_TIMEOUT_WAR +#define LUCENT_USBHC_VENDOR_ID_NUM 0x11c1 +#define LUCENT_USBHC302_DEVICE_ID_NUM 0x5801 +#define LUCENT_USBHC312_DEVICE_ID_NUM 0x5802 +#define USS302_BRIDGE_TIMEOUT_HLD 4 +#endif + +#define PCIBR_LLP_CONTROL_WAR +#if defined (PCIBR_LLP_CONTROL_WAR) +int pcibr_llp_control_war_cnt; +#endif /* PCIBR_LLP_CONTROL_WAR */ + +int pcibr_devflag = D_MP; + +#ifdef LATER +#define F(s,n) { 1l<<(s),-(s), n } + +struct reg_desc bridge_int_status_desc[] = +{ + F(31, "MULTI_ERR"), + F(30, "PMU_ESIZE_EFAULT"), + F(29, "UNEXPECTED_RESP"), + F(28, "BAD_XRESP_PACKET"), + F(27, "BAD_XREQ_PACKET"), + F(26, "RESP_XTALK_ERROR"), + F(25, "REQ_XTALK_ERROR"), + F(24, "INVALID_ADDRESS"), + F(23, "UNSUPPORTED_XOP"), + F(22, "XREQ_FIFO_OFLOW"), + F(21, "LLP_REC_SNERROR"), + F(20, "LLP_REC_CBERROR"), + F(19, "LLP_RCTY"), + F(18, "LLP_TX_RETRY"), + F(17, "LLP_TCTY"), + F(16, "SSRAM_PERR"), + F(15, "PCI_ABORT"), + F(14, "PCI_PARITY"), + F(13, "PCI_SERR"), + F(12, "PCI_PERR"), + F(11, "PCI_MASTER_TOUT"), + F(10, "PCI_RETRY_CNT"), + F(9, "XREAD_REQ_TOUT"), + F(8, "GIO_BENABLE_ERR"), + F(7, "INT7"), + F(6, "INT6"), + F(5, "INT5"), + F(4, "INT4"), + F(3, "INT3"), + F(2, "INT2"), + F(1, "INT1"), + F(0, "INT0"), + {0} +}; + +struct reg_values space_v[] = +{ + {PCIIO_SPACE_NONE, "none"}, + {PCIIO_SPACE_ROM, "ROM"}, + {PCIIO_SPACE_IO, "I/O"}, + {PCIIO_SPACE_MEM, "MEM"}, + {PCIIO_SPACE_MEM32, "MEM(32)"}, + {PCIIO_SPACE_MEM64, "MEM(64)"}, + {PCIIO_SPACE_CFG, "CFG"}, + {PCIIO_SPACE_WIN(0), "WIN(0)"}, + {PCIIO_SPACE_WIN(1), "WIN(1)"}, + {PCIIO_SPACE_WIN(2), "WIN(2)"}, + {PCIIO_SPACE_WIN(3), "WIN(3)"}, + {PCIIO_SPACE_WIN(4), "WIN(4)"}, + {PCIIO_SPACE_WIN(5), "WIN(5)"}, + {PCIIO_SPACE_BAD, "BAD"}, + {0} +}; + +struct reg_desc space_desc[] = +{ + {0xFF, 0, "space", 0, space_v}, + {0} +}; + +#if DEBUG +#define device_desc device_bits +LOCAL struct reg_desc device_bits[] = +{ + {BRIDGE_DEV_ERR_LOCK_EN, 0, "ERR_LOCK_EN"}, + {BRIDGE_DEV_PAGE_CHK_DIS, 0, "PAGE_CHK_DIS"}, + {BRIDGE_DEV_FORCE_PCI_PAR, 0, "FORCE_PCI_PAR"}, + {BRIDGE_DEV_VIRTUAL_EN, 0, "VIRTUAL_EN"}, + {BRIDGE_DEV_PMU_WRGA_EN, 0, "PMU_WRGA_EN"}, + {BRIDGE_DEV_DIR_WRGA_EN, 0, "DIR_WRGA_EN"}, + {BRIDGE_DEV_DEV_SIZE, 0, "DEV_SIZE"}, + {BRIDGE_DEV_RT, 0, "RT"}, + {BRIDGE_DEV_SWAP_PMU, 0, "SWAP_PMU"}, + {BRIDGE_DEV_SWAP_DIR, 0, "SWAP_DIR"}, + {BRIDGE_DEV_PREF, 0, "PREF"}, + {BRIDGE_DEV_PRECISE, 0, "PRECISE"}, + {BRIDGE_DEV_COH, 0, "COH"}, + {BRIDGE_DEV_BARRIER, 0, "BARRIER"}, + {BRIDGE_DEV_GBR, 0, "GBR"}, + 
{BRIDGE_DEV_DEV_SWAP, 0, "DEV_SWAP"}, + {BRIDGE_DEV_DEV_IO_MEM, 0, "DEV_IO_MEM"}, + {BRIDGE_DEV_OFF_MASK, BRIDGE_DEV_OFF_ADDR_SHFT, "DEV_OFF", "%x"}, + {0} +}; +#endif /* DEBUG */ + +#ifdef SUPPORT_PRINTING_R_FORMAT +LOCAL struct reg_values xio_cmd_pactyp[] = +{ + {0x0, "RdReq"}, + {0x1, "RdResp"}, + {0x2, "WrReqWithResp"}, + {0x3, "WrResp"}, + {0x4, "WrReqNoResp"}, + {0x5, "Reserved(5)"}, + {0x6, "FetchAndOp"}, + {0x7, "Reserved(7)"}, + {0x8, "StoreAndOp"}, + {0x9, "Reserved(9)"}, + {0xa, "Reserved(a)"}, + {0xb, "Reserved(b)"}, + {0xc, "Reserved(c)"}, + {0xd, "Reserved(d)"}, + {0xe, "SpecialReq"}, + {0xf, "SpecialResp"}, + {0} +}; + +LOCAL struct reg_desc xio_cmd_bits[] = +{ + {WIDGET_DIDN, -28, "DIDN", "%x"}, + {WIDGET_SIDN, -24, "SIDN", "%x"}, + {WIDGET_PACTYP, -20, "PACTYP", 0, xio_cmd_pactyp}, + {WIDGET_TNUM, -15, "TNUM", "%x"}, + {WIDGET_COHERENT, 0, "COHERENT"}, + {WIDGET_DS, 0, "DS"}, + {WIDGET_GBR, 0, "GBR"}, + {WIDGET_VBPM, 0, "VBPM"}, + {WIDGET_ERROR, 0, "ERROR"}, + {WIDGET_BARRIER, 0, "BARRIER"}, + {0} +}; +#endif /* SUPPORT_PRINTING_R_FORMAT */ + +#if PCIBR_FREEZE_TIME || PCIBR_ATE_DEBUG +LOCAL struct reg_desc ate_bits[] = +{ + {0xFFFF000000000000ull, -48, "RMF", "%x"}, + {~(IOPGSIZE - 1) & /* may trim off some low bits */ + 0x0000FFFFFFFFF000ull, 0, "XIO", "%x"}, + {0x0000000000000F00ull, -8, "port", "%x"}, + {0x0000000000000010ull, 0, "Barrier"}, + {0x0000000000000008ull, 0, "Prefetch"}, + {0x0000000000000004ull, 0, "Precise"}, + {0x0000000000000002ull, 0, "Coherent"}, + {0x0000000000000001ull, 0, "Valid"}, + {0} +}; +#endif + +#if PCIBR_ATE_DEBUG +LOCAL struct reg_values ssram_sizes[] = +{ + {BRIDGE_CTRL_SSRAM_512K, "512k"}, + {BRIDGE_CTRL_SSRAM_128K, "128k"}, + {BRIDGE_CTRL_SSRAM_64K, "64k"}, + {BRIDGE_CTRL_SSRAM_1K, "1k"}, + {0} +}; + +LOCAL struct reg_desc control_bits[] = +{ + {BRIDGE_CTRL_FLASH_WR_EN, 0, "FLASH_WR_EN"}, + {BRIDGE_CTRL_EN_CLK50, 0, "EN_CLK50"}, + {BRIDGE_CTRL_EN_CLK40, 0, "EN_CLK40"}, + {BRIDGE_CTRL_EN_CLK33, 0, "EN_CLK33"}, + {BRIDGE_CTRL_RST_MASK, -24, "RST", "%x"}, + {BRIDGE_CTRL_IO_SWAP, 0, "IO_SWAP"}, + {BRIDGE_CTRL_MEM_SWAP, 0, "MEM_SWAP"}, + {BRIDGE_CTRL_PAGE_SIZE, 0, "PAGE_SIZE"}, + {BRIDGE_CTRL_SS_PAR_BAD, 0, "SS_PAR_BAD"}, + {BRIDGE_CTRL_SS_PAR_EN, 0, "SS_PAR_EN"}, + {BRIDGE_CTRL_SSRAM_SIZE_MASK, 0, "SSRAM_SIZE", 0, ssram_sizes}, + {BRIDGE_CTRL_F_BAD_PKT, 0, "F_BAD_PKT"}, + {BRIDGE_CTRL_LLP_XBAR_CRD_MASK, -12, "LLP_XBAR_CRD", "%d"}, + {BRIDGE_CTRL_CLR_RLLP_CNT, 0, "CLR_RLLP_CNT"}, + {BRIDGE_CTRL_CLR_TLLP_CNT, 0, "CLR_TLLP_CNT"}, + {BRIDGE_CTRL_SYS_END, 0, "SYS_END"}, + {BRIDGE_CTRL_MAX_TRANS_MASK, -4, "MAX_TRANS", "%d"}, + {BRIDGE_CTRL_WIDGET_ID_MASK, 0, "WIDGET_ID", "%x"}, + {0} +}; +#endif +#endif /* LATER */ + +/* kbrick widgetnum-to-bus layout */ +int p_busnum[MAX_PORT_NUM] = { /* widget# */ + 0, 0, 0, 0, 0, 0, 0, 0, /* 0x0 - 0x7 */ + 2, /* 0x8 */ + 1, /* 0x9 */ + 0, 0, /* 0xa - 0xb */ + 5, /* 0xc */ + 6, /* 0xd */ + 4, /* 0xe */ + 3, /* 0xf */ +}; + +/* + * Additional PIO spaces per slot are + * recorded in this structure. 
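As an illustration of how the bookkeeping fields of the structure declared just below are meant to be used (the real allocation path is pcibr_piospace_alloc(), declared later in this file), a hypothetical first-fit lookup over such a list could look like the sketch that follows; the helper name is invented, and it assumes pciio_piospace_t is the usual pointer typedef for struct pciio_piospace_s:

    /* Hypothetical helper, not part of the patch: first-fit search over a
     * device's list of pciio_piospace_s records for an unused chunk of the
     * requested space that is large enough.
     */
    static pciio_piospace_t
    piospace_find_free(pciio_piospace_t list, pciio_space_t space, size_t count)
    {
        pciio_piospace_t p;

        for (p = list; p != NULL; p = p->next)
            if (p->free && p->space == space && p->count >= count)
                return p;       /* the caller would then clear p->free */
        return NULL;
    }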
+ */ +struct pciio_piospace_s { + pciio_piospace_t next; /* another space for this device */ + char free; /* 1 if free, 0 if in use */ + pciio_space_t space; /* Which space is in use */ + iopaddr_t start; /* Starting address of the PIO space */ + size_t count; /* size of PIO space */ +}; + +#if PCIBR_SOFT_LIST +pcibr_list_p pcibr_list = 0; +#endif + +#define INFO_LBL_PCIBR_ASIC_REV "_pcibr_asic_rev" + +#define PCIBR_D64_BASE_UNSET (0xFFFFFFFFFFFFFFFF) +#define PCIBR_D32_BASE_UNSET (0xFFFFFFFF) + +#define PCIBR_VALID_SLOT(s) (s < 8) + +#ifdef SN_XXX +extern int hub_device_flags_set(devfs_handle_t widget_dev, + hub_widget_flags_t flags); +#endif +extern pciio_dmamap_t get_free_pciio_dmamap(devfs_handle_t); +extern void free_pciio_dmamap(pcibr_dmamap_t); + +/* + * This is the file operation table for the pcibr driver. + * As each of the functions are implemented, put the + * appropriate function name below. + */ +struct file_operations pcibr_fops = { + owner: THIS_MODULE, + llseek: NULL, + read: NULL, + write: NULL, + readdir: NULL, + poll: NULL, + ioctl: NULL, + mmap: NULL, + open: NULL, + flush: NULL, + release: NULL, + fsync: NULL, + fasync: NULL, + lock: NULL, + readv: NULL, + writev: NULL +}; + +extern devfs_handle_t hwgraph_root; +extern graph_error_t hwgraph_vertex_unref(devfs_handle_t vhdl); +extern int cap_able(uint64_t x); +extern uint64_t rmalloc(struct map *mp, size_t size); +extern void rmfree(struct map *mp, size_t size, uint64_t a); +extern int hwgraph_vertex_name_get(devfs_handle_t vhdl, char *buf, uint buflen); +extern long atoi(register char *p); +extern void *swap_ptr(void **loc, void *new); +extern char *dev_to_name(devfs_handle_t dev, char *buf, uint buflen); +extern cnodeid_t nodevertex_to_cnodeid(devfs_handle_t vhdl); +extern graph_error_t hwgraph_edge_remove(devfs_handle_t from, char *name, devfs_handle_t *toptr); +extern struct map *rmallocmap(uint64_t mapsiz); +extern void rmfreemap(struct map *mp); +extern int compare_and_swap_ptr(void **location, void *old_ptr, void *new_ptr); +extern int io_path_map_widget(devfs_handle_t vertex); + + + +/* ===================================================================== + * Function Table of Contents + * + * The order of functions in this file has stopped + * making much sense. We might want to take a look + * at it some time and bring back some sanity, or + * perhaps bust this file into smaller chunks. 
+ */ + +LOCAL void do_pcibr_rrb_clear(bridge_t *, int); +LOCAL void do_pcibr_rrb_flush(bridge_t *, int); +LOCAL int do_pcibr_rrb_count_valid(bridge_t *, pciio_slot_t); +LOCAL int do_pcibr_rrb_count_avail(bridge_t *, pciio_slot_t); +LOCAL int do_pcibr_rrb_alloc(bridge_t *, pciio_slot_t, int); +LOCAL int do_pcibr_rrb_free(bridge_t *, pciio_slot_t, int); + +LOCAL void do_pcibr_rrb_autoalloc(pcibr_soft_t, int, int); + +int pcibr_wrb_flush(devfs_handle_t); +int pcibr_rrb_alloc(devfs_handle_t, int *, int *); +int pcibr_rrb_check(devfs_handle_t, int *, int *, int *, int *); +int pcibr_alloc_all_rrbs(devfs_handle_t, int, int, int, int, int, int, int, int, int); +void pcibr_rrb_flush(devfs_handle_t); + +LOCAL int pcibr_try_set_device(pcibr_soft_t, pciio_slot_t, unsigned, bridgereg_t); +void pcibr_release_device(pcibr_soft_t, pciio_slot_t, bridgereg_t); + +LOCAL void pcibr_clearwidint(bridge_t *); +LOCAL void pcibr_setwidint(xtalk_intr_t); +LOCAL int pcibr_probe_slot(bridge_t *, cfg_p, unsigned *); + +void pcibr_init(void); +int pcibr_attach(devfs_handle_t); +int pcibr_detach(devfs_handle_t); +int pcibr_open(devfs_handle_t *, int, int, cred_t *); +int pcibr_close(devfs_handle_t, int, int, cred_t *); +int pcibr_map(devfs_handle_t, vhandl_t *, off_t, size_t, uint); +int pcibr_unmap(devfs_handle_t, vhandl_t *); +int pcibr_ioctl(devfs_handle_t, int, void *, int, struct cred *, int *); + +void pcibr_freeblock_sub(iopaddr_t *, iopaddr_t *, iopaddr_t, size_t); + +LOCAL int pcibr_init_ext_ate_ram(bridge_t *); +LOCAL int pcibr_ate_alloc(pcibr_soft_t, int); +LOCAL void pcibr_ate_free(pcibr_soft_t, int, int); + +LOCAL pcibr_info_t pcibr_info_get(devfs_handle_t); +LOCAL pcibr_info_t pcibr_device_info_new(pcibr_soft_t, pciio_slot_t, pciio_function_t, pciio_vendor_id_t, pciio_device_id_t); +LOCAL void pcibr_device_info_free(devfs_handle_t, pciio_slot_t); +LOCAL iopaddr_t pcibr_addr_pci_to_xio(devfs_handle_t, pciio_slot_t, pciio_space_t, iopaddr_t, size_t, unsigned); + +pcibr_piomap_t pcibr_piomap_alloc(devfs_handle_t, device_desc_t, pciio_space_t, iopaddr_t, size_t, size_t, unsigned); +void pcibr_piomap_free(pcibr_piomap_t); +caddr_t pcibr_piomap_addr(pcibr_piomap_t, iopaddr_t, size_t); +void pcibr_piomap_done(pcibr_piomap_t); +caddr_t pcibr_piotrans_addr(devfs_handle_t, device_desc_t, pciio_space_t, iopaddr_t, size_t, unsigned); +iopaddr_t pcibr_piospace_alloc(devfs_handle_t, device_desc_t, pciio_space_t, size_t, size_t); +void pcibr_piospace_free(devfs_handle_t, pciio_space_t, iopaddr_t, size_t); + +LOCAL iopaddr_t pcibr_flags_to_d64(unsigned, pcibr_soft_t); +LOCAL bridge_ate_t pcibr_flags_to_ate(unsigned); + +pcibr_dmamap_t pcibr_dmamap_alloc(devfs_handle_t, device_desc_t, size_t, unsigned); +void pcibr_dmamap_free(pcibr_dmamap_t); +LOCAL bridge_ate_p pcibr_ate_addr(pcibr_soft_t, int); +LOCAL iopaddr_t pcibr_addr_xio_to_pci(pcibr_soft_t, iopaddr_t, size_t); +iopaddr_t pcibr_dmamap_addr(pcibr_dmamap_t, paddr_t, size_t); +alenlist_t pcibr_dmamap_list(pcibr_dmamap_t, alenlist_t, unsigned); +void pcibr_dmamap_done(pcibr_dmamap_t); +cnodeid_t pcibr_get_dmatrans_node(devfs_handle_t); +iopaddr_t pcibr_dmatrans_addr(devfs_handle_t, device_desc_t, paddr_t, size_t, unsigned); +alenlist_t pcibr_dmatrans_list(devfs_handle_t, device_desc_t, alenlist_t, unsigned); +void pcibr_dmamap_drain(pcibr_dmamap_t); +void pcibr_dmaaddr_drain(devfs_handle_t, paddr_t, size_t); +void pcibr_dmalist_drain(devfs_handle_t, alenlist_t); +iopaddr_t pcibr_dmamap_pciaddr_get(pcibr_dmamap_t); + +static unsigned pcibr_intr_bits(pciio_info_t info, 
pciio_intr_line_t lines); +pcibr_intr_t pcibr_intr_alloc(devfs_handle_t, device_desc_t, pciio_intr_line_t, devfs_handle_t); +void pcibr_intr_free(pcibr_intr_t); +LOCAL void pcibr_setpciint(xtalk_intr_t); +int pcibr_intr_connect(pcibr_intr_t); +void pcibr_intr_disconnect(pcibr_intr_t); + +devfs_handle_t pcibr_intr_cpu_get(pcibr_intr_t); +void pcibr_xintr_preset(void *, int, xwidgetnum_t, iopaddr_t, xtalk_intr_vector_t); +void pcibr_intr_func(intr_arg_t); + +void pcibr_provider_startup(devfs_handle_t); +void pcibr_provider_shutdown(devfs_handle_t); + +int pcibr_reset(devfs_handle_t); +pciio_endian_t pcibr_endian_set(devfs_handle_t, pciio_endian_t, pciio_endian_t); +int pcibr_priority_bits_set(pcibr_soft_t, pciio_slot_t, pciio_priority_t); +pciio_priority_t pcibr_priority_set(devfs_handle_t, pciio_priority_t); +int pcibr_device_flags_set(devfs_handle_t, pcibr_device_flags_t); + +LOCAL cfg_p pcibr_config_addr(devfs_handle_t, unsigned); +uint64_t pcibr_config_get(devfs_handle_t, unsigned, unsigned); +LOCAL uint64_t do_pcibr_config_get(cfg_p, unsigned, unsigned); +void pcibr_config_set(devfs_handle_t, unsigned, unsigned, uint64_t); +LOCAL void do_pcibr_config_set(cfg_p, unsigned, unsigned, uint64_t); + +LOCAL pcibr_hints_t pcibr_hints_get(devfs_handle_t, int); +void pcibr_hints_fix_rrbs(devfs_handle_t); +void pcibr_hints_dualslot(devfs_handle_t, pciio_slot_t, pciio_slot_t); +void pcibr_hints_intr_bits(devfs_handle_t, pcibr_intr_bits_f *); +void pcibr_set_rrb_callback(devfs_handle_t, rrb_alloc_funct_t); +void pcibr_hints_handsoff(devfs_handle_t); +void pcibr_hints_subdevs(devfs_handle_t, pciio_slot_t, ulong); + +LOCAL int pcibr_slot_info_init(devfs_handle_t,pciio_slot_t); +LOCAL int pcibr_slot_info_free(devfs_handle_t,pciio_slot_t); + +#ifdef LATER +LOCAL int pcibr_slot_info_return(pcibr_soft_t, pciio_slot_t, + pcibr_slot_info_resp_t); +LOCAL void pcibr_slot_func_info_return(pcibr_info_h, int, + pcibr_slot_func_info_resp_t); +#endif /* LATER */ + +LOCAL int pcibr_slot_addr_space_init(devfs_handle_t,pciio_slot_t); +LOCAL int pcibr_slot_device_init(devfs_handle_t, pciio_slot_t); +LOCAL int pcibr_slot_guest_info_init(devfs_handle_t,pciio_slot_t); +LOCAL int pcibr_slot_initial_rrb_alloc(devfs_handle_t,pciio_slot_t); +LOCAL int pcibr_slot_call_device_attach(devfs_handle_t, + pciio_slot_t, int); +LOCAL int pcibr_slot_call_device_detach(devfs_handle_t, + pciio_slot_t, int); + +LOCAL int pcibr_slot_detach(devfs_handle_t, pciio_slot_t, int); +LOCAL int pcibr_is_slot_sys_critical(devfs_handle_t, pciio_slot_t); +#ifdef LATER +LOCAL int pcibr_slot_query(devfs_handle_t, pcibr_slot_info_req_t); +#endif + +/* ===================================================================== + * RRB management + */ + +#define LSBIT(word) ((word) &~ ((word)-1)) + +#define PCIBR_RRB_SLOT_VIRTUAL 8 + +LOCAL void +do_pcibr_rrb_clear(bridge_t *bridge, int rrb) +{ + bridgereg_t status; + + /* bridge_lock must be held; + * this RRB must be disabled. + */ + + /* wait until RRB has no outstanduing XIO packets. */ + while ((status = bridge->b_resp_status) & BRIDGE_RRB_INUSE(rrb)) { + ; /* XXX- beats on bridge. bad idea? */ + } + + /* if the RRB has data, drain it. */ + if (status & BRIDGE_RRB_VALID(rrb)) { + bridge->b_resp_clear = BRIDGE_RRB_CLEAR(rrb); + + /* wait until RRB is no longer valid. */ + while ((status = bridge->b_resp_status) & BRIDGE_RRB_VALID(rrb)) { + ; /* XXX- beats on bridge. bad idea? 
*/ + } + } +} + +LOCAL void +do_pcibr_rrb_flush(bridge_t *bridge, int rrbn) +{ + reg_p rrbp = &bridge->b_rrb_map[rrbn & 1].reg; + bridgereg_t rrbv; + int shft = 4 * (rrbn >> 1); + unsigned ebit = BRIDGE_RRB_EN << shft; + + rrbv = *rrbp; + if (rrbv & ebit) + *rrbp = rrbv & ~ebit; + + do_pcibr_rrb_clear(bridge, rrbn); + + if (rrbv & ebit) + *rrbp = rrbv; +} + +/* + * pcibr_rrb_count_valid: count how many RRBs are + * marked valid for the specified PCI slot on this + * bridge. + * + * NOTE: The "slot" parameter for all pcibr_rrb + * management routines must include the "virtual" + * bit; when manageing both the normal and the + * virtual channel, separate calls to these + * routines must be made. To denote the virtual + * channel, add PCIBR_RRB_SLOT_VIRTUAL to the slot + * number. + * + * IMPL NOTE: The obvious algorithm is to iterate + * through the RRB fields, incrementing a count if + * the RRB is valid and matches the slot. However, + * it is much simpler to use an algorithm derived + * from the "partitioned add" idea. First, XOR in a + * pattern such that the fields that match this + * slot come up "all ones" and all other fields + * have zeros in the mismatching bits. Then AND + * together the bits in the field, so we end up + * with one bit turned on for each field that + * matched. Now we need to count these bits. This + * can be done either with a series of shift/add + * instructions or by using "tmp % 15"; I expect + * that the cascaded shift/add will be faster. + */ + +LOCAL int +do_pcibr_rrb_count_valid(bridge_t *bridge, + pciio_slot_t slot) +{ + bridgereg_t tmp; + + tmp = bridge->b_rrb_map[slot & 1].reg; + tmp ^= 0x11111111 * (7 - slot / 2); + tmp &= (0xCCCCCCCC & tmp) >> 2; + tmp &= (0x22222222 & tmp) >> 1; + tmp += tmp >> 4; + tmp += tmp >> 8; + tmp += tmp >> 16; + return tmp & 15; +} + +/* + * do_pcibr_rrb_count_avail: count how many RRBs are + * available to be allocated for the specified slot. + * + * IMPL NOTE: similar to the above, except we are + * just counting how many fields have the valid bit + * turned off. + */ +LOCAL int +do_pcibr_rrb_count_avail(bridge_t *bridge, + pciio_slot_t slot) +{ + bridgereg_t tmp; + + tmp = bridge->b_rrb_map[slot & 1].reg; + tmp = (0x88888888 & ~tmp) >> 3; + tmp += tmp >> 4; + tmp += tmp >> 8; + tmp += tmp >> 16; + return tmp & 15; +} + +/* + * do_pcibr_rrb_alloc: allocate some additional RRBs + * for the specified slot. Returns -1 if there were + * insufficient free RRBs to satisfy the request, + * or 0 if the request was fulfilled. + * + * Note that if a request can be partially filled, + * it will be, even if we return failure. + * + * IMPL NOTE: again we avoid iterating across all + * the RRBs; instead, we form up a word containing + * one bit for each free RRB, then peel the bits + * off from the low end. + */ +LOCAL int +do_pcibr_rrb_alloc(bridge_t *bridge, + pciio_slot_t slot, + int more) +{ + int rv = 0; + bridgereg_t reg, tmp, bit; + + reg = bridge->b_rrb_map[slot & 1].reg; + tmp = (0x88888888 & ~reg) >> 3; + while (more-- > 0) { + bit = LSBIT(tmp); + if (!bit) { + rv = -1; + break; + } + tmp &= ~bit; + reg = ((reg & ~(bit * 15)) | (bit * (8 + slot / 2))); + } + bridge->b_rrb_map[slot & 1].reg = reg; + return rv; +} + +/* + * do_pcibr_rrb_free: release some of the RRBs that + * have been allocated for the specified + * slot. Returns zero for success, or negative if + * it was unable to free that many RRBs. 
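The partitioned-add counting described for do_pcibr_rrb_count_valid()/do_pcibr_rrb_count_avail() above is easier to follow in isolation. The stand-alone program below is illustrative only and is not part of this patch; dev plays the role of slot / 2 in the kernel routine, and the register image is an invented value:

    #include <stdio.h>

    /*
     * Count the 4-bit RRB fields of "reg" that are exactly (0x8 | dev), i.e.
     * enabled and bound to device "dev" (0..3) on the normal channel, using
     * the same XOR / AND-fold / shift-add steps as do_pcibr_rrb_count_valid().
     */
    static int count_valid_fields(unsigned int reg, unsigned int dev)
    {
        unsigned int tmp = reg;

        tmp ^= 0x11111111 * (7 - dev);      /* matching fields become 0xF   */
        tmp &= (0xCCCCCCCC & tmp) >> 2;     /* fold the two high bits down  */
        tmp &= (0x22222222 & tmp) >> 1;     /* one bit left per match       */
        tmp += tmp >> 4;                    /* cascaded adds sum those bits */
        tmp += tmp >> 8;
        tmp += tmp >> 16;
        return tmp & 15;
    }

    int main(void)
    {
        /* two fields programmed as "enabled, device 3" (0x8 | 3 == 0xB) */
        printf("%d\n", count_valid_fields(0x00B0B000, 3));     /* prints 2 */
        return 0;
    }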
+ * + * IMPL NOTE: We form up a bit for each RRB + * allocated to the slot, aligned with the VALID + * bitfield this time; then we peel bits off one at + * a time, releasing the corresponding RRB. + */ +LOCAL int +do_pcibr_rrb_free(bridge_t *bridge, + pciio_slot_t slot, + int less) +{ + int rv = 0; + bridgereg_t reg, tmp, clr, bit; + int i; + + clr = 0; + reg = bridge->b_rrb_map[slot & 1].reg; + + /* This needs to be done otherwise the rrb's on the virtual channel + * for this slot won't be freed !! + */ + tmp = reg & 0xbbbbbbbb; + + tmp ^= (0x11111111 * (7 - slot / 2)); + tmp &= (0x33333333 & tmp) << 2; + tmp &= (0x44444444 & tmp) << 1; + while (less-- > 0) { + bit = LSBIT(tmp); + if (!bit) { + rv = -1; + break; + } + tmp &= ~bit; + reg &= ~bit; + clr |= bit; + } + bridge->b_rrb_map[slot & 1].reg = reg; + + for (i = 0; i < 8; i++) + if (clr & (8 << (4 * i))) + do_pcibr_rrb_clear(bridge, (2 * i) + (slot & 1)); + + return rv; +} + +LOCAL void +do_pcibr_rrb_autoalloc(pcibr_soft_t pcibr_soft, + int slot, + int more_rrbs) +{ + bridge_t *bridge = pcibr_soft->bs_base; + int got; + + for (got = 0; got < more_rrbs; ++got) { + if (pcibr_soft->bs_rrb_res[slot & 7] > 0) + pcibr_soft->bs_rrb_res[slot & 7]--; + else if (pcibr_soft->bs_rrb_avail[slot & 1] > 0) + pcibr_soft->bs_rrb_avail[slot & 1]--; + else + break; + if (do_pcibr_rrb_alloc(bridge, slot, 1) < 0) + break; +#if PCIBR_RRB_DEBUG + printk( "do_pcibr_rrb_autoalloc: add one to slot %d%s\n", + slot & 7, slot & 8 ? "v" : ""); +#endif + pcibr_soft->bs_rrb_valid[slot]++; + } +#if PCIBR_RRB_DEBUG + printk("%s: %d+%d free RRBs. Allocation list:\n", pcibr_soft->bs_name, + pcibr_soft->bs_rrb_avail[0], + pcibr_soft->bs_rrb_avail[1]); + for (slot = 0; slot < 8; ++slot) + printk("\t%d+%d+%d", + 0xFFF & pcibr_soft->bs_rrb_valid[slot], + 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + printk("\n"); +#endif +} + +/* + * Device driver interface to flush the write buffers for a specified + * device hanging off the bridge. + */ +int +pcibr_wrb_flush(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + volatile bridgereg_t *wrb_flush; + + wrb_flush = &(bridge->b_wr_req_buf[pciio_slot].reg); + while (*wrb_flush); + + return(0); +} +/* + * Device driver interface to request RRBs for a specified device + * hanging off a Bridge. The driver requests the total number of + * RRBs it would like for the normal channel (vchan0) and for the + * "virtual channel" (vchan1). The actual number allocated to each + * channel is returned. + * + * If we cannot allocate at least one RRB to a channel that needs + * at least one, return -1 (failure). Otherwise, satisfy the request + * as best we can and return 0. 
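do_pcibr_rrb_alloc() and do_pcibr_rrb_free() above peel candidate fields off a scratch word one at a time with LSBIT(). A minimal stand-alone illustration of that isolate-lowest-set-bit idiom (the bit positions below are invented for the example):

    #include <stdio.h>

    #define LSBIT(word)     ((word) & ~((word) - 1))    /* lowest set bit */

    int main(void)
    {
        unsigned int free_map = 0x00090010;     /* pretend bits 4, 16, 19 mark free fields */
        unsigned int bit;

        /* Peel free fields off from the low end, as do_pcibr_rrb_alloc() does. */
        while ((bit = LSBIT(free_map)) != 0) {
            printf("claiming bit 0x%08x\n", bit);
            free_map &= ~bit;
        }
        return 0;
    }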
+ */ +int +pcibr_rrb_alloc(devfs_handle_t pconn_vhdl, + int *count_vchan0, + int *count_vchan1) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + int desired_vchan0; + int desired_vchan1; + int orig_vchan0; + int orig_vchan1; + int delta_vchan0; + int delta_vchan1; + int final_vchan0; + int final_vchan1; + int avail_rrbs; + unsigned long s; + int error; + + /* + * TBD: temper request with admin info about RRB allocation, + * and according to demand from other devices on this Bridge. + * + * One way of doing this would be to allocate two RRBs + * for each device on the bus, before any drivers start + * asking for extras. This has the weakness that one + * driver might not give back an "extra" RRB until after + * another driver has already failed to get one that + * it wanted. + */ + + s = pcibr_lock(pcibr_soft); + + /* How many RRBs do we own? */ + orig_vchan0 = pcibr_soft->bs_rrb_valid[pciio_slot]; + orig_vchan1 = pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL]; + + /* How many RRBs do we want? */ + desired_vchan0 = count_vchan0 ? *count_vchan0 : orig_vchan0; + desired_vchan1 = count_vchan1 ? *count_vchan1 : orig_vchan1; + + /* How many RRBs are free? */ + avail_rrbs = pcibr_soft->bs_rrb_avail[pciio_slot & 1] + + pcibr_soft->bs_rrb_res[pciio_slot]; + + /* Figure desired deltas */ + delta_vchan0 = desired_vchan0 - orig_vchan0; + delta_vchan1 = desired_vchan1 - orig_vchan1; + + /* Trim back deltas to something + * that we can actually meet, by + * decreasing the ending allocation + * for whichever channel wants + * more RRBs. If both want the same + * number, cut the second channel. + * NOTE: do not change the allocation for + * a channel that was passed as NULL. + */ + while ((delta_vchan0 + delta_vchan1) > avail_rrbs) { + if (count_vchan0 && + (!count_vchan1 || + ((orig_vchan0 + delta_vchan0) > + (orig_vchan1 + delta_vchan1)))) + delta_vchan0--; + else + delta_vchan1--; + } + + /* Figure final RRB allocations + */ + final_vchan0 = orig_vchan0 + delta_vchan0; + final_vchan1 = orig_vchan1 + delta_vchan1; + + /* If either channel wants RRBs but our actions + * would leave it with none, declare an error, + * but DO NOT change any RRB allocations. + */ + if ((desired_vchan0 && !final_vchan0) || + (desired_vchan1 && !final_vchan1)) { + + error = -1; + + } else { + + /* Commit the allocations: free, then alloc. + */ + if (delta_vchan0 < 0) + (void) do_pcibr_rrb_free(bridge, pciio_slot, -delta_vchan0); + if (delta_vchan1 < 0) + (void) do_pcibr_rrb_free(bridge, PCIBR_RRB_SLOT_VIRTUAL + pciio_slot, -delta_vchan1); + + if (delta_vchan0 > 0) + (void) do_pcibr_rrb_alloc(bridge, pciio_slot, delta_vchan0); + if (delta_vchan1 > 0) + (void) do_pcibr_rrb_alloc(bridge, PCIBR_RRB_SLOT_VIRTUAL + pciio_slot, delta_vchan1); + + /* Return final values to caller. + */ + if (count_vchan0) + *count_vchan0 = final_vchan0; + if (count_vchan1) + *count_vchan1 = final_vchan1; + + /* prevent automatic changes to this slot's RRBs + */ + pcibr_soft->bs_rrb_fixed |= 1 << pciio_slot; + + /* Track the actual allocations, release + * any further reservations, and update the + * number of available RRBs. 
+ */ + + pcibr_soft->bs_rrb_valid[pciio_slot] = final_vchan0; + pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL] = final_vchan1; + pcibr_soft->bs_rrb_avail[pciio_slot & 1] = + pcibr_soft->bs_rrb_avail[pciio_slot & 1] + + pcibr_soft->bs_rrb_res[pciio_slot] + - delta_vchan0 + - delta_vchan1; + pcibr_soft->bs_rrb_res[pciio_slot] = 0; + +#if PCIBR_RRB_DEBUG + printk("pcibr_rrb_alloc: slot %d set to %d+%d; %d+%d free\n", + pciio_slot, final_vchan0, final_vchan1, + pcibr_soft->bs_rrb_avail[0], + pcibr_soft->bs_rrb_avail[1]); + for (pciio_slot = 0; pciio_slot < 8; ++pciio_slot) + printk("\t%d+%d+%d", + 0xFFF & pcibr_soft->bs_rrb_valid[pciio_slot], + 0xFFF & pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[pciio_slot]); + printk("\n"); +#endif + + error = 0; + } + + pcibr_unlock(pcibr_soft, s); + return error; +} + +/* + * Device driver interface to check the current state + * of the RRB allocations. + * + * pconn_vhdl is your PCI connection point (specifies which + * PCI bus and which slot). + * + * count_vchan0 points to where to return the number of RRBs + * assigned to the primary DMA channel, used by all DMA + * that does not explicitly ask for the alternate virtual + * channel. + * + * count_vchan1 points to where to return the number of RRBs + * assigned to the secondary DMA channel, used when + * PCIBR_VCHAN1 and PCIIO_DMA_A64 are specified. + * + * count_reserved points to where to return the number of RRBs + * that have been automatically reserved for your device at + * startup, but which have not been assigned to a + * channel. RRBs must be assigned to a channel to be used; + * this can be done either with an explicit pcibr_rrb_alloc + * call, or automatically by the infrastructure when a DMA + * translation is constructed. Any call to pcibr_rrb_alloc + * will release any unassigned reserved RRBs back to the + * free pool. + * + * count_pool points to where to return the number of RRBs + * that are currently unassigned and unreserved. This + * number can (and will) change as other drivers make calls + * to pcibr_rrb_alloc, or automatically allocate RRBs for + * DMA beyond their initial reservation. + * + * NULL may be passed for any of the return value pointers + * the caller is not interested in. + * + * The return value is "0" if all went well, or "-1" if + * there is a problem. Additionally, if the wrong vertex + * is passed in, one of the subsidiary support functions + * could panic with a "bad pciio fingerprint." + */ + +int +pcibr_rrb_check(devfs_handle_t pconn_vhdl, + int *count_vchan0, + int *count_vchan1, + int *count_reserved, + int *count_pool) +{ + pciio_info_t pciio_info; + pciio_slot_t pciio_slot; + pcibr_soft_t pcibr_soft; + unsigned long s; + int error = -1; + + if ((pciio_info = pciio_info_get(pconn_vhdl)) && + (pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info)) && + ((pciio_slot = pciio_info_slot_get(pciio_info)) < 8)) { + + s = pcibr_lock(pcibr_soft); + + if (count_vchan0) + *count_vchan0 = + pcibr_soft->bs_rrb_valid[pciio_slot]; + + if (count_vchan1) + *count_vchan1 = + pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL]; + + if (count_reserved) + *count_reserved = + pcibr_soft->bs_rrb_res[pciio_slot]; + + if (count_pool) + *count_pool = + pcibr_soft->bs_rrb_avail[pciio_slot & 1]; + + error = 0; + + pcibr_unlock(pcibr_soft, s); + } + return error; +} + +/* pcibr_alloc_all_rrbs allocates all the rrbs available in the quantities + * requested for each of the devies. 
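For a concrete picture of how the two driver-visible calls documented above are meant to be used together, a hypothetical caller might look like the fragment below; everything in it except pcibr_rrb_alloc(), pcibr_rrb_check() and printk() is invented for the illustration, and it is a sketch rather than code from this patch:

    /* Hypothetical driver fragment: ask for 2 RRBs on the normal channel and
     * 1 on the virtual channel, then read back what was actually granted.
     */
    static int example_reserve_rrbs(devfs_handle_t pconn_vhdl)
    {
        int vchan0 = 2, vchan1 = 1;
        int reserved, pool;

        if (pcibr_rrb_alloc(pconn_vhdl, &vchan0, &vchan1) < 0)
            return -1;          /* a channel that needed RRBs would have had none */

        /* vchan0/vchan1 now hold the final allocations */
        if (pcibr_rrb_check(pconn_vhdl, &vchan0, &vchan1, &reserved, &pool) == 0)
            printk("got %d+%d RRBs, %d still reserved, %d in the free pool\n",
                   vchan0, vchan1, reserved, pool);
        return 0;
    }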
The even_odd argument indicates whether
+ * allocation for the odd or even rrbs is requested and the next group of four pairs
+ * are the amount to assign to each device (they should sum to <= 8) and
+ * whether to set the virtual bit for that device (1 indicates yes, 0 indicates no)
+ * the devices in order are either 0, 2, 4, 6 or 1, 3, 5, 7
+ * if even_odd is even we alloc even rrbs else we allocate odd rrbs
+ * returns 0 if no errors else returns -1
+ */
+
+int
+pcibr_alloc_all_rrbs(devfs_handle_t vhdl, int even_odd,
+                     int dev_1_rrbs, int virt1, int dev_2_rrbs, int virt2,
+                     int dev_3_rrbs, int virt3, int dev_4_rrbs, int virt4)
+{
+    devfs_handle_t pcibr_vhdl;
+    pcibr_soft_t pcibr_soft = NULL;
+    bridge_t *bridge = NULL;
+
+    uint32_t rrb_setting = 0;
+    int rrb_shift = 7;
+    uint32_t cur_rrb;
+    int dev_rrbs[4];
+    int virt[4];
+    int i, j;
+    unsigned long s;
+
+    if (GRAPH_SUCCESS ==
+        hwgraph_traverse(vhdl, EDGE_LBL_PCI, &pcibr_vhdl)) {
+        pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+        if (pcibr_soft)
+            bridge = pcibr_soft->bs_base;
+        hwgraph_vertex_unref(pcibr_vhdl);
+    }
+    if (bridge == NULL)
+        bridge = (bridge_t *) xtalk_piotrans_addr
+            (vhdl, NULL, 0, sizeof(bridge_t), 0);
+
+    even_odd &= 1;
+
+    dev_rrbs[0] = dev_1_rrbs;
+    dev_rrbs[1] = dev_2_rrbs;
+    dev_rrbs[2] = dev_3_rrbs;
+    dev_rrbs[3] = dev_4_rrbs;
+
+    virt[0] = virt1;
+    virt[1] = virt2;
+    virt[2] = virt3;
+    virt[3] = virt4;
+
+    if ((dev_1_rrbs + dev_2_rrbs + dev_3_rrbs + dev_4_rrbs) > 8) {
+        return -1;
+    }
+    if ((dev_1_rrbs < 0) || (dev_2_rrbs < 0) || (dev_3_rrbs < 0) || (dev_4_rrbs < 0)) {
+        return -1;
+    }
+    /* walk through rrbs */
+    for (i = 0; i < 4; i++) {
+        if (virt[i]) {
+            cur_rrb = i | 0xc;
+            cur_rrb = cur_rrb << (rrb_shift * 4);
+            rrb_shift--;
+            rrb_setting = rrb_setting | cur_rrb;
+            dev_rrbs[i] = dev_rrbs[i] - 1;
+        }
+        for (j = 0; j < dev_rrbs[i]; j++) {
+            cur_rrb = i | 0x8;
+            cur_rrb = cur_rrb << (rrb_shift * 4);
+            rrb_shift--;
+            rrb_setting = rrb_setting | cur_rrb;
+        }
+    }
+
+    if (pcibr_soft)
+        s = pcibr_lock(pcibr_soft);
+
+    bridge->b_rrb_map[even_odd].reg = rrb_setting;
+
+    if (pcibr_soft) {
+
+        pcibr_soft->bs_rrb_fixed |= 0x55 << even_odd;
+
+        /* since we've "FIXED" the allocations
+         * for these slots, we probably can dispense
+         * with tracking avail/res/valid data, but
+         * keeping it up to date helps debugging.
+         */
+
+        pcibr_soft->bs_rrb_avail[even_odd] =
+            8 - (dev_1_rrbs + dev_2_rrbs + dev_3_rrbs + dev_4_rrbs);
+
+        pcibr_soft->bs_rrb_res[even_odd + 0] = 0;
+        pcibr_soft->bs_rrb_res[even_odd + 2] = 0;
+        pcibr_soft->bs_rrb_res[even_odd + 4] = 0;
+        pcibr_soft->bs_rrb_res[even_odd + 6] = 0;
+
+        pcibr_soft->bs_rrb_valid[even_odd + 0] = dev_1_rrbs - virt1;
+        pcibr_soft->bs_rrb_valid[even_odd + 2] = dev_2_rrbs - virt2;
+        pcibr_soft->bs_rrb_valid[even_odd + 4] = dev_3_rrbs - virt3;
+        pcibr_soft->bs_rrb_valid[even_odd + 6] = dev_4_rrbs - virt4;
+
+        pcibr_soft->bs_rrb_valid[even_odd + 0 + PCIBR_RRB_SLOT_VIRTUAL] = virt1;
+        pcibr_soft->bs_rrb_valid[even_odd + 2 + PCIBR_RRB_SLOT_VIRTUAL] = virt2;
+        pcibr_soft->bs_rrb_valid[even_odd + 4 + PCIBR_RRB_SLOT_VIRTUAL] = virt3;
+        pcibr_soft->bs_rrb_valid[even_odd + 6 + PCIBR_RRB_SLOT_VIRTUAL] = virt4;
+
+        pcibr_unlock(pcibr_soft, s);
+    }
+    return 0;
+}
+
+/*
+ * pcibr_rrb_flush: chase down all the RRBs assigned
+ * to the specified connection point, and flush
+ * them.
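To make the rrb_map encoding built by pcibr_alloc_all_rrbs() above concrete, here is a small user-space rendition of the nibble packing (illustrative only; the kernel routine, which also validates that the counts sum to at most 8, is the authoritative version):

    #include <stdio.h>

    /*
     * Pack per-device RRB counts into the 8-nibble register image the way
     * pcibr_alloc_all_rrbs() does: each used nibble is 0x8 (enable) | device,
     * plus 0x4 when that RRB serves the device's virtual channel.
     * dev_rrbs[i] / virt[i] describe device i of the even (or odd) group,
     * and the counts are assumed to sum to 8 or less.
     */
    static unsigned int pack_rrb_map(int dev_rrbs[4], int virt[4])
    {
        unsigned int setting = 0;
        int shift = 7, i, j;

        for (i = 0; i < 4; i++) {
            if (virt[i]) {
                setting |= (unsigned int)(i | 0xc) << (shift-- * 4);
                dev_rrbs[i]--;
            }
            for (j = 0; j < dev_rrbs[i]; j++)
                setting |= (unsigned int)(i | 0x8) << (shift-- * 4);
        }
        return setting;
    }

    int main(void)
    {
        int rrbs[4] = { 2, 1, 0, 1 };   /* devices 0, 2, 4, 6 get 2, 1, 0, 1 RRBs */
        int virt[4] = { 1, 0, 0, 0 };   /* one of device 0's RRBs is virtual-channel */

        printf("rrb_map = 0x%08x\n", pack_rrb_map(rrbs, virt));
        return 0;
    }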
+ */ +void +pcibr_rrb_flush(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + unsigned long s; + reg_p rrbp; + unsigned rrbm; + int i; + int rrbn; + unsigned sval; + unsigned mask; + + sval = BRIDGE_RRB_EN | (pciio_slot >> 1); + mask = BRIDGE_RRB_EN | BRIDGE_RRB_PDEV; + rrbn = pciio_slot & 1; + rrbp = &bridge->b_rrb_map[rrbn].reg; + + s = pcibr_lock(pcibr_soft); + rrbm = *rrbp; + for (i = 0; i < 8; ++i) { + if ((rrbm & mask) == sval) + do_pcibr_rrb_flush(bridge, rrbn); + rrbm >>= 4; + rrbn += 2; + } + pcibr_unlock(pcibr_soft, s); +} + +/* ===================================================================== + * Device(x) register management + */ + +/* pcibr_try_set_device: attempt to modify Device(x) + * for the specified slot on the specified bridge + * as requested in flags, limited to the specified + * bits. Returns which BRIDGE bits were in conflict, + * or ZERO if everything went OK. + * + * Caller MUST hold pcibr_lock when calling this function. + */ +LOCAL int +pcibr_try_set_device(pcibr_soft_t pcibr_soft, + pciio_slot_t slot, + unsigned flags, + bridgereg_t mask) +{ + bridge_t *bridge; + pcibr_soft_slot_t slotp; + bridgereg_t old; + bridgereg_t new; + bridgereg_t chg; + bridgereg_t bad; + bridgereg_t badpmu; + bridgereg_t badd32; + bridgereg_t badd64; + bridgereg_t fix; + unsigned long s; + bridgereg_t xmask; + + xmask = mask; + if (pcibr_soft->bs_xbridge) { + if (mask == BRIDGE_DEV_PMU_BITS) + xmask = XBRIDGE_DEV_PMU_BITS; + if (mask == BRIDGE_DEV_D64_BITS) + xmask = XBRIDGE_DEV_D64_BITS; + } + + slotp = &pcibr_soft->bs_slot[slot]; + + s = pcibr_lock(pcibr_soft); + + bridge = pcibr_soft->bs_base; + + old = slotp->bss_device; + + /* figure out what the desired + * Device(x) bits are based on + * the flags specified. + */ + + new = old; + + /* Currently, we inherit anything that + * the new caller has not specified in + * one way or another, unless we take + * action here to not inherit. + * + * This is needed for the "swap" stuff, + * since it could have been set via + * pcibr_endian_set -- altho note that + * any explicit PCIBR_BYTE_STREAM or + * PCIBR_WORD_VALUES will freely override + * the effect of that call (and vice + * versa, no protection either way). + * + * I want to get rid of pcibr_endian_set + * in favor of tracking DMA endianness + * using the flags specified when DMA + * channels are created. + */ + +#define BRIDGE_DEV_WRGA_BITS (BRIDGE_DEV_PMU_WRGA_EN | BRIDGE_DEV_DIR_WRGA_EN) +#define BRIDGE_DEV_SWAP_BITS (BRIDGE_DEV_SWAP_PMU | BRIDGE_DEV_SWAP_DIR) + + /* Do not use Barrier, Write Gather, + * or Prefetch unless asked. + * Leave everything else as it + * was from the last time. 
+ */ + new = new + & ~BRIDGE_DEV_BARRIER + & ~BRIDGE_DEV_WRGA_BITS + & ~BRIDGE_DEV_PREF + ; + + /* Generic macro flags + */ + if (flags & PCIIO_DMA_DATA) { + new = (new + & ~BRIDGE_DEV_BARRIER) /* barrier off */ + | BRIDGE_DEV_PREF; /* prefetch on */ + + } + if (flags & PCIIO_DMA_CMD) { + new = ((new + & ~BRIDGE_DEV_PREF) /* prefetch off */ + & ~BRIDGE_DEV_WRGA_BITS) /* write gather off */ + | BRIDGE_DEV_BARRIER; /* barrier on */ + } + /* Generic detail flags + */ + if (flags & PCIIO_WRITE_GATHER) + new |= BRIDGE_DEV_WRGA_BITS; + if (flags & PCIIO_NOWRITE_GATHER) + new &= ~BRIDGE_DEV_WRGA_BITS; + + if (flags & PCIIO_PREFETCH) + new |= BRIDGE_DEV_PREF; + if (flags & PCIIO_NOPREFETCH) + new &= ~BRIDGE_DEV_PREF; + + if (flags & PCIBR_WRITE_GATHER) + new |= BRIDGE_DEV_WRGA_BITS; + if (flags & PCIBR_NOWRITE_GATHER) + new &= ~BRIDGE_DEV_WRGA_BITS; + + if (flags & PCIIO_BYTE_STREAM) + new |= (pcibr_soft->bs_xbridge) ? + BRIDGE_DEV_SWAP_DIR : BRIDGE_DEV_SWAP_BITS; + if (flags & PCIIO_WORD_VALUES) + new &= (pcibr_soft->bs_xbridge) ? + ~BRIDGE_DEV_SWAP_DIR : ~BRIDGE_DEV_SWAP_BITS; + + /* Provider-specific flags + */ + if (flags & PCIBR_PREFETCH) + new |= BRIDGE_DEV_PREF; + if (flags & PCIBR_NOPREFETCH) + new &= ~BRIDGE_DEV_PREF; + + if (flags & PCIBR_PRECISE) + new |= BRIDGE_DEV_PRECISE; + if (flags & PCIBR_NOPRECISE) + new &= ~BRIDGE_DEV_PRECISE; + + if (flags & PCIBR_BARRIER) + new |= BRIDGE_DEV_BARRIER; + if (flags & PCIBR_NOBARRIER) + new &= ~BRIDGE_DEV_BARRIER; + + if (flags & PCIBR_64BIT) + new |= BRIDGE_DEV_DEV_SIZE; + if (flags & PCIBR_NO64BIT) + new &= ~BRIDGE_DEV_DEV_SIZE; + + chg = old ^ new; /* what are we changing, */ + chg &= xmask; /* of the interesting bits */ + + if (chg) { + + badd32 = slotp->bss_d32_uctr ? (BRIDGE_DEV_D32_BITS & chg) : 0; + if (pcibr_soft->bs_xbridge) { + badpmu = slotp->bss_pmu_uctr ? (XBRIDGE_DEV_PMU_BITS & chg) : 0; + badd64 = slotp->bss_d64_uctr ? (XBRIDGE_DEV_D64_BITS & chg) : 0; + } else { + badpmu = slotp->bss_pmu_uctr ? (BRIDGE_DEV_PMU_BITS & chg) : 0; + badd64 = slotp->bss_d64_uctr ? (BRIDGE_DEV_D64_BITS & chg) : 0; + } + bad = badpmu | badd32 | badd64; + + if (bad) { + + /* some conflicts can be resolved by + * forcing the bit on. this may cause + * some performance degredation in + * the stream(s) that want the bit off, + * but the alternative is not allowing + * the new stream at all. + */ + if ( (fix = bad & (BRIDGE_DEV_PRECISE | + BRIDGE_DEV_BARRIER)) ){ + bad &= ~fix; + /* don't change these bits if + * they are already set in "old" + */ + chg &= ~(fix & old); + } + /* some conflicts can be resolved by + * forcing the bit off. this may cause + * some performance degredation in + * the stream(s) that want the bit on, + * but the alternative is not allowing + * the new stream at all. + */ + if ( (fix = bad & (BRIDGE_DEV_WRGA_BITS | + BRIDGE_DEV_PREF)) ) { + bad &= ~fix; + /* don't change these bits if + * we wanted to turn them on. + */ + chg &= ~(fix & new); + } + /* conflicts in other bits mean + * we can not establish this DMA + * channel while the other(s) are + * still present. 
+ */ + if (bad) { + pcibr_unlock(pcibr_soft, s); +#if (DEBUG && PCIBR_DEV_DEBUG) + printk("pcibr_try_set_device: mod blocked by %R\n", bad, device_bits); +#endif + return bad; + } + } + } + if (mask == BRIDGE_DEV_PMU_BITS) + slotp->bss_pmu_uctr++; + if (mask == BRIDGE_DEV_D32_BITS) + slotp->bss_d32_uctr++; + if (mask == BRIDGE_DEV_D64_BITS) + slotp->bss_d64_uctr++; + + /* the value we want to write is the + * original value, with the bits for + * our selected changes flipped, and + * with any disabled features turned off. + */ + new = old ^ chg; /* only change what we want to change */ + + if (slotp->bss_device == new) { + pcibr_unlock(pcibr_soft, s); + return 0; + } + bridge->b_device[slot].reg = new; + slotp->bss_device = new; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + pcibr_unlock(pcibr_soft, s); +#if DEBUG && PCIBR_DEV_DEBUG + printk("pcibr Device(%d): 0x%p\n", slot, bridge->b_device[slot].reg); +#endif + + return 0; +} + +void +pcibr_release_device(pcibr_soft_t pcibr_soft, + pciio_slot_t slot, + bridgereg_t mask) +{ + pcibr_soft_slot_t slotp; + unsigned long s; + + slotp = &pcibr_soft->bs_slot[slot]; + + s = pcibr_lock(pcibr_soft); + + if (mask == BRIDGE_DEV_PMU_BITS) + slotp->bss_pmu_uctr--; + if (mask == BRIDGE_DEV_D32_BITS) + slotp->bss_d32_uctr--; + if (mask == BRIDGE_DEV_D64_BITS) + slotp->bss_d64_uctr--; + + pcibr_unlock(pcibr_soft, s); +} + +/* + * flush write gather buffer for slot + */ +LOCAL void +pcibr_device_write_gather_flush(pcibr_soft_t pcibr_soft, + pciio_slot_t slot) +{ + bridge_t *bridge; + unsigned long s; + volatile uint32_t wrf; + s = pcibr_lock(pcibr_soft); + bridge = pcibr_soft->bs_base; + wrf = bridge->b_wr_req_buf[slot].reg; + pcibr_unlock(pcibr_soft, s); +} + +/* ===================================================================== + * Bridge (pcibr) "Device Driver" entry points + */ + +/* + * pcibr_probe_slot: read a config space word + * while trapping any errors; reutrn zero if + * all went OK, or nonzero if there was an error. + * The value read, if any, is passed back + * through the valp parameter. + */ +LOCAL int +pcibr_probe_slot(bridge_t *bridge, + cfg_p cfg, + unsigned *valp) +{ + int rv; + bridgereg_t old_enable, new_enable; + int badaddr_val(volatile void *, int, volatile void *); + + + old_enable = bridge->b_int_enable; + new_enable = old_enable & ~BRIDGE_IMR_PCI_MST_TIMEOUT; + + bridge->b_int_enable = new_enable; + + /* + * The xbridge doesn't clear b_err_int_view unless + * multi-err is cleared... + */ + if (is_xbridge(bridge)) + if (bridge->b_err_int_view & BRIDGE_ISR_PCI_MST_TIMEOUT) { + bridge->b_int_rst_stat = BRIDGE_IRR_MULTI_CLR; + } + + if (bridge->b_int_status & BRIDGE_IRR_PCI_GRP) { + bridge->b_int_rst_stat = BRIDGE_IRR_PCI_GRP_CLR; + (void) bridge->b_wid_tflush; /* flushbus */ + } + rv = badaddr_val((void *) cfg, 4, valp); + + /* + * The xbridge doesn't set master timeout in b_int_status + * here. Fortunately it's in error_interrupt_view. + */ + if (is_xbridge(bridge)) + if (bridge->b_err_int_view & BRIDGE_ISR_PCI_MST_TIMEOUT) { + bridge->b_int_rst_stat = BRIDGE_IRR_MULTI_CLR; + rv = 1; /* unoccupied slot */ + } + + bridge->b_int_enable = old_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + + return rv; +} + +/* + * pcibr_init: called once during system startup or + * when a loadable driver is loaded. + * + * The driver_register function should normally + * be in _reg, not _init. But the pcibr driver is + * required by devinit before the _reg routines + * are called, so this is an exception. 
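Stepping back from the details, the Device(x) handling in pcibr_try_set_device() above amounts to an XOR-based read-modify-write guarded by per-class usage counters; the fragment below is a compressed sketch of that pattern (names and types are invented, and the real routine additionally tries to resolve some conflicts by forcing particular bits on or off):

    /*
     * Stand-alone sketch (invented names, not the driver's code) of the
     * conflict check: compute which masked bits of the shared register
     * would change, and refuse the change if any active consumer class
     * still depends on one of those bits.
     */
    struct devreg_state {
        unsigned int value;                     /* shadow of the Device(x) register */
        int pmu_users, d32_users, d64_users;    /* like the bss_*_uctr counters */
    };

    static int try_update(struct devreg_state *st, unsigned int wanted,
                          unsigned int mask, unsigned int pmu_bits,
                          unsigned int d32_bits, unsigned int d64_bits)
    {
        unsigned int chg = (st->value ^ wanted) & mask;     /* bits we would flip */
        unsigned int bad = 0;

        if (st->pmu_users) bad |= chg & pmu_bits;
        if (st->d32_users) bad |= chg & d32_bits;
        if (st->d64_users) bad |= chg & d64_bits;

        if (bad)
            return -1;          /* an established stream relies on those bits */

        st->value ^= chg;       /* commit only the requested changes */
        return 0;
    }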
+ */ +void +pcibr_init(void) +{ +#if DEBUG && ATTACH_DEBUG + printk("pcibr_init\n"); +#endif + + xwidget_driver_register(XBRIDGE_WIDGET_PART_NUM, + XBRIDGE_WIDGET_MFGR_NUM, + "pcibr_", + 0); + xwidget_driver_register(BRIDGE_WIDGET_PART_NUM, + BRIDGE_WIDGET_MFGR_NUM, + "pcibr_", + 0); +} + +/* + * open/close mmap/munmap interface would be used by processes + * that plan to map the PCI bridge, and muck around with the + * registers. This is dangerous to do, and will be allowed + * to a select brand of programs. Typically these are + * diagnostics programs, or some user level commands we may + * write to do some weird things. + * To start with expect them to have root priveleges. + * We will ask for more later. + */ +/* ARGSUSED */ +int +pcibr_open(devfs_handle_t *devp, int oflag, int otyp, cred_t *credp) +{ + return 0; +} + +/*ARGSUSED */ +int +pcibr_close(devfs_handle_t dev, int oflag, int otyp, cred_t *crp) +{ + return 0; +} + +/*ARGSUSED */ +int +pcibr_map(devfs_handle_t dev, vhandl_t *vt, off_t off, size_t len, uint prot) +{ + int error; + devfs_handle_t vhdl = dev_to_vhdl(dev); + devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get(vhdl); + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge_t *bridge = pcibr_soft->bs_base; + + hwgraph_vertex_unref(pcibr_vhdl); + + ASSERT(pcibr_soft); + len = ctob(btoc(len)); /* Make len page aligned */ + error = v_mapphys(vt, (void *) ((__psunsigned_t) bridge + off), len); + + /* + * If the offset being mapped corresponds to the flash prom + * base, and if the mapping succeeds, and if the user + * has requested the protections to be WRITE, enable the + * flash prom to be written. + * + * XXX- deprecate this in favor of using the + * real flash driver ... + */ + if (!error && + ((off == BRIDGE_EXTERNAL_FLASH) || + (len > BRIDGE_EXTERNAL_FLASH))) { + int s; + + /* + * ensure that we write and read without any interruption. + * The read following the write is required for the Bridge war + */ + s = splhi(); + bridge->b_wid_control |= BRIDGE_CTRL_FLASH_WR_EN; + bridge->b_wid_control; /* inval addr bug war */ + splx(s); + } + return error; +} + +/*ARGSUSED */ +int +pcibr_unmap(devfs_handle_t dev, vhandl_t *vt) +{ + devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get((devfs_handle_t) dev); + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge_t *bridge = pcibr_soft->bs_base; + + hwgraph_vertex_unref(pcibr_vhdl); + + /* + * If flashprom write was enabled, disable it, as + * this is the last unmap. + */ + if (bridge->b_wid_control & BRIDGE_CTRL_FLASH_WR_EN) { + int s; + + /* + * ensure that we write and read without any interruption. + * The read following the write is required for the Bridge war + */ + s = splhi(); + bridge->b_wid_control &= ~BRIDGE_CTRL_FLASH_WR_EN; + bridge->b_wid_control; /* inval addr bug war */ + splx(s); + } + return 0; +} + +/* This is special case code used by grio. There are plans to make + * this a bit more general in the future, but till then this should + * be sufficient. + */ +pciio_slot_t +pcibr_device_slot_get(devfs_handle_t dev_vhdl) +{ + char devname[MAXDEVNAME]; + devfs_handle_t tdev; + pciio_info_t pciio_info; + pciio_slot_t slot = PCIIO_SLOT_NONE; + + vertex_to_name(dev_vhdl, devname, MAXDEVNAME); + + /* run back along the canonical path + * until we find a PCI connection point. 
+ */ + tdev = hwgraph_connectpt_get(dev_vhdl); + while (tdev != GRAPH_VERTEX_NONE) { + pciio_info = pciio_info_chk(tdev); + if (pciio_info) { + slot = pciio_info_slot_get(pciio_info); + break; + } + hwgraph_vertex_unref(tdev); + tdev = hwgraph_connectpt_get(tdev); + } + hwgraph_vertex_unref(tdev); + + return slot; +} + +/*========================================================================== + * BRIDGE PCI SLOT RELATED IOCTLs + */ +char *pci_space_name[] = {"NONE", + "ROM", + "IO", + "", + "MEM", + "MEM32", + "MEM64", + "CFG", + "WIN0", + "WIN1", + "WIN2", + "WIN3", + "WIN4", + "WIN5", + "", + "BAD"}; + + +#ifdef LATER + +void +pcibr_slot_func_info_return(pcibr_info_h pcibr_infoh, + int func, + pcibr_slot_func_info_resp_t funcp) +{ + pcibr_info_t pcibr_info = pcibr_infoh[func]; + int win; + + funcp->resp_f_status = 0; + + if (!pcibr_info) { + return; + } + + funcp->resp_f_status |= FUNC_IS_VALID; +#ifdef SUPPORT_PRINTING_V_FORMAT + sprintf(funcp->resp_f_slot_name, "%v", pcibr_info->f_vertex); +#else + sprintf(funcp->resp_f_slot_name, "%x", pcibr_info->f_vertex); +#endif + + if(is_sys_critical_vertex(pcibr_info->f_vertex)) { + funcp->resp_f_status |= FUNC_IS_SYS_CRITICAL; + } + + funcp->resp_f_bus = pcibr_info->f_bus; + funcp->resp_f_slot = pcibr_info->f_slot; + funcp->resp_f_func = pcibr_info->f_func; +#ifdef SUPPORT_PRINTING_V_FORMAT + sprintf(funcp->resp_f_master_name, "%v", pcibr_info->f_master); +#else + sprintf(funcp->resp_f_master_name, "%x", pcibr_info->f_master); +#endif + funcp->resp_f_pops = pcibr_info->f_pops; + funcp->resp_f_efunc = pcibr_info->f_efunc; + funcp->resp_f_einfo = pcibr_info->f_einfo; + + funcp->resp_f_vendor = pcibr_info->f_vendor; + funcp->resp_f_device = pcibr_info->f_device; + + for(win = 0 ; win < 6 ; win++) { + funcp->resp_f_window[win].resp_w_base = + pcibr_info->f_window[win].w_base; + funcp->resp_f_window[win].resp_w_size = + pcibr_info->f_window[win].w_size; + sprintf(funcp->resp_f_window[win].resp_w_space, + "%s", + pci_space_name[pcibr_info->f_window[win].w_space]); + } + + funcp->resp_f_rbase = pcibr_info->f_rbase; + funcp->resp_f_rsize = pcibr_info->f_rsize; + + for (win = 0 ; win < 4; win++) { + funcp->resp_f_ibit[win] = pcibr_info->f_ibit[win]; + } + + funcp->resp_f_att_det_error = pcibr_info->f_att_det_error; + +} + +int +pcibr_slot_info_return(pcibr_soft_t pcibr_soft, + pciio_slot_t slot, + pcibr_slot_info_resp_t respp) +{ + pcibr_soft_slot_t pss; + int func; + bridge_t *bridge = pcibr_soft->bs_base; + reg_p b_respp; + pcibr_slot_info_resp_t slotp; + pcibr_slot_func_info_resp_t funcp; + + slotp = snia_kmem_zalloc(sizeof(*slotp), KM_SLEEP); + if (slotp == NULL) { + return(ENOMEM); + } + + pss = &pcibr_soft->bs_slot[slot]; + + printk("\nPCI INFRASTRUCTURAL INFO FOR SLOT %d\n\n", slot); + + slotp->resp_has_host = pss->has_host; + slotp->resp_host_slot = pss->host_slot; +#ifdef SUPPORT_PRINTING_V_FORMAT + sprintf(slotp->resp_slot_conn_name, "%v", pss->slot_conn); +#else + sprintf(slotp->resp_slot_conn_name, "%x", pss->slot_conn); +#endif + slotp->resp_slot_status = pss->slot_status; + slotp->resp_l1_bus_num = io_path_map_widget(pcibr_soft->bs_vhdl); + + if (is_sys_critical_vertex(pss->slot_conn)) { + slotp->resp_slot_status |= SLOT_IS_SYS_CRITICAL; + } + + slotp->resp_bss_ninfo = pss->bss_ninfo; + + for (func = 0; func < pss->bss_ninfo; func++) { + funcp = &(slotp->resp_func[func]); + pcibr_slot_func_info_return(pss->bss_infos, func, funcp); + } + + sprintf(slotp->resp_bss_devio_bssd_space, "%s", + pci_space_name[pss->bss_devio.bssd_space]); + 
slotp->resp_bss_devio_bssd_base = pss->bss_devio.bssd_base; + slotp->resp_bss_device = pss->bss_device; + + slotp->resp_bss_pmu_uctr = pss->bss_pmu_uctr; + slotp->resp_bss_d32_uctr = pss->bss_d32_uctr; + slotp->resp_bss_d64_uctr = pss->bss_d64_uctr; + + slotp->resp_bss_d64_base = pss->bss_d64_base; + slotp->resp_bss_d64_flags = pss->bss_d64_flags; + slotp->resp_bss_d32_base = pss->bss_d32_base; + slotp->resp_bss_d32_flags = pss->bss_d32_flags; + + slotp->resp_bss_ext_ates_active = atomic_read(&pss->bss_ext_ates_active); + + slotp->resp_bss_cmd_pointer = pss->bss_cmd_pointer; + slotp->resp_bss_cmd_shadow = pss->bss_cmd_shadow; + + slotp->resp_bs_rrb_valid = pcibr_soft->bs_rrb_valid[slot]; + slotp->resp_bs_rrb_valid_v = pcibr_soft->bs_rrb_valid[slot + + PCIBR_RRB_SLOT_VIRTUAL]; + slotp->resp_bs_rrb_res = pcibr_soft->bs_rrb_res[slot]; + + if (slot & 1) { + b_respp = &bridge->b_odd_resp; + } else { + b_respp = &bridge->b_even_resp; + } + + slotp->resp_b_resp = *b_respp; + + slotp->resp_b_int_device = bridge->b_int_device; + slotp->resp_b_int_enable = bridge->b_int_enable; + slotp->resp_b_int_host = bridge->b_int_addr[slot].addr; + + if (COPYOUT(slotp, respp, sizeof(*respp))) { + return(EFAULT); + } + + snia_kmem_free(slotp, sizeof(*slotp)); + + return(0); +} + +/* + * pcibr_slot_query + * Return information about the PCI slot maintained by the infrastructure. + * Information is requested in the request structure. + * + * Information returned in the response structure: + * Slot hwgraph name + * Vendor/Device info + * Base register info + * Interrupt mapping from device pins to the bridge pins + * Devio register + * Software RRB info + * RRB register info + * Host/Gues info + * PCI Bus #,slot #, function # + * Slot provider hwgraph name + * Provider Functions + * Error handler + * DMA mapping usage counters + * DMA direct translation info + * External SSRAM workaround info + */ +int +pcibr_slot_query(devfs_handle_t pcibr_vhdl, pcibr_slot_info_req_t reqp) +{ + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + pciio_slot_t slot = reqp->req_slot; + pciio_slot_t tmp_slot; + pcibr_slot_info_resp_t respp = (pcibr_slot_info_resp_t) reqp->req_respp; + int size = reqp->req_size; + int error; + + /* Make sure that we are dealing with a bridge device vertex */ + if (!pcibr_soft) { + return(EINVAL); + } + + /* Make sure that we have a valid PCI slot number or PCIIO_SLOT_NONE */ + if ((!PCIBR_VALID_SLOT(slot)) && (slot != PCIIO_SLOT_NONE)) { + return(EINVAL); + } + + /* Return information for the requested PCI slot */ + if (slot != PCIIO_SLOT_NONE) { + if (size < sizeof(*respp)) { + return(EINVAL); + } + + /* Acquire read access to the slot */ + mrlock(pcibr_soft->bs_slot[slot].slot_lock, MR_ACCESS, PZERO); + + error = pcibr_slot_info_return(pcibr_soft, slot, respp); + + /* Release the slot lock */ + mrunlock(pcibr_soft->bs_slot[slot].slot_lock); + + return(error); + } + + /* Return information for all the slots */ + for (tmp_slot = 0; tmp_slot < 8; tmp_slot++) { + + if (size < sizeof(*respp)) { + return(EINVAL); + } + + /* Acquire read access to the slot */ + mrlock(pcibr_soft->bs_slot[tmp_slot].slot_lock, MR_ACCESS, PZERO); + + error = pcibr_slot_info_return(pcibr_soft, tmp_slot, respp); + + /* Release the slot lock */ + mrunlock(pcibr_soft->bs_slot[tmp_slot].slot_lock); + + if (error) { + return(error); + } + + ++respp; + size -= sizeof(*respp); + } + + return(error); +} +#endif /* LATER */ + + +/*ARGSUSED */ +int +pcibr_ioctl(devfs_handle_t dev, + int cmd, + void *arg, + int flag, + struct cred 
*cr, + int *rvalp) +{ + devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get((devfs_handle_t)dev); +#ifdef LATER + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); +#endif + int error = 0; + + hwgraph_vertex_unref(pcibr_vhdl); + + switch (cmd) { +#ifdef LATER + case GIOCSETBW: + { + grio_ioctl_info_t info; + pciio_slot_t slot = 0; + + if (!cap_able((uint64_t)CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + if (COPYIN(arg, &info, sizeof(grio_ioctl_info_t))) { + error = EFAULT; + break; + } +#ifdef GRIO_DEBUG + printk("pcibr:: prev_vhdl: %d reqbw: %lld\n", + info.prev_vhdl, info.reqbw); +#endif /* GRIO_DEBUG */ + + if ((slot = pcibr_device_slot_get(info.prev_vhdl)) == + PCIIO_SLOT_NONE) { + error = EIO; + break; + } + if (info.reqbw) + pcibr_priority_bits_set(pcibr_soft, slot, PCI_PRIO_HIGH); + break; + } + + case GIOCRELEASEBW: + { + grio_ioctl_info_t info; + pciio_slot_t slot = 0; + + if (!cap_able(CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + if (COPYIN(arg, &info, sizeof(grio_ioctl_info_t))) { + error = EFAULT; + break; + } +#ifdef GRIO_DEBUG + printk("pcibr:: prev_vhdl: %d reqbw: %lld\n", + info.prev_vhdl, info.reqbw); +#endif /* GRIO_DEBUG */ + + if ((slot = pcibr_device_slot_get(info.prev_vhdl)) == + PCIIO_SLOT_NONE) { + error = EIO; + break; + } + if (info.reqbw) + pcibr_priority_bits_set(pcibr_soft, slot, PCI_PRIO_LOW); + break; + } + + case PCIBR_SLOT_POWERUP: + { + pciio_slot_t slot; + + if (!cap_able(CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + + slot = (pciio_slot_t)(uint64_t)arg; + error = pcibr_slot_powerup(pcibr_vhdl,slot); + break; + } + case PCIBR_SLOT_SHUTDOWN: + if (!cap_able(CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + + slot = (pciio_slot_t)(uint64_t)arg; + error = pcibr_slot_powerup(pcibr_vhdl,slot); + break; + } + case PCIBR_SLOT_QUERY: + { + struct pcibr_slot_info_req_s req; + + if (!cap_able(CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + + if (COPYIN(arg, &req, sizeof(req))) { + error = EFAULT; + break; + } + + error = pcibr_slot_query(pcibr_vhdl, &req); + break; + } +#endif /* LATER */ + default: + break; + + } + + return error; +} + +void +pcibr_freeblock_sub(iopaddr_t *free_basep, + iopaddr_t *free_lastp, + iopaddr_t base, + size_t size) +{ + iopaddr_t free_base = *free_basep; + iopaddr_t free_last = *free_lastp; + iopaddr_t last = base + size - 1; + + if ((last < free_base) || (base > free_last)); /* free block outside arena */ + + else if ((base <= free_base) && (last >= free_last)) + /* free block contains entire arena */ + *free_basep = *free_lastp = 0; + + else if (base <= free_base) + /* free block is head of arena */ + *free_basep = last + 1; + + else if (last >= free_last) + /* free block is tail of arena */ + *free_lastp = base - 1; + + /* + * We are left with two regions: the free area + * in the arena "below" the block, and the free + * area in the arena "above" the block. Keep + * the one that is bigger. 
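+ *
+ * For example, with a free arena of 0x1000..0x1fff and a block at
+ * 0x1200 of size 0x100 (so last == 0x12ff), the area below the block
+ * is 0x200 bytes and the area above it is 0xd00 bytes, so the code
+ * keeps the upper chunk and the arena becomes 0x1300..0x1fff.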
+ */
+
+    else if ((base - free_base) > (free_last - last))
+        *free_lastp = base - 1;         /* keep lower chunk */
+    else
+        *free_basep = last + 1;         /* keep upper chunk */
+}
+
+/* Convert from ssram_bits in control register to number of SSRAM entries */
+#define ATE_NUM_ENTRIES(n) _ate_info[n]
+
+/* Possible choices for number of ATE entries in Bridge's SSRAM */
+LOCAL int _ate_info[] =
+{
+    0,                  /* 0 entries */
+    8 * 1024,           /* 8K entries */
+    16 * 1024,          /* 16K entries */
+    64 * 1024           /* 64K entries */
+};
+
+#define ATE_NUM_SIZES (sizeof(_ate_info) / sizeof(int))
+#define ATE_PROBE_VALUE 0x0123456789abcdefULL
+
+/*
+ * Determine the size of this bridge's external mapping SSRAM, and set
+ * the control register appropriately to reflect this size, and initialize
+ * the external SSRAM.
+ */
+LOCAL int
+pcibr_init_ext_ate_ram(bridge_t *bridge)
+{
+    int largest_working_size = 0;
+    int num_entries, entry;
+    int i, j;
+    bridgereg_t old_enable, new_enable;
+    int s;
+
+    /* Probe SSRAM to determine its size. */
+    old_enable = bridge->b_int_enable;
+    new_enable = old_enable & ~BRIDGE_IMR_PCI_MST_TIMEOUT;
+    bridge->b_int_enable = new_enable;
+
+    for (i = 1; i < ATE_NUM_SIZES; i++) {
+        /* Try writing a value */
+        bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(i) - 1] = ATE_PROBE_VALUE;
+
+        /* Guard against wrap */
+        for (j = 1; j < i; j++)
+            bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(j) - 1] = 0;
+
+        /* See if value was written */
+        if (bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(i) - 1] == ATE_PROBE_VALUE)
+            largest_working_size = i;
+    }
+    bridge->b_int_enable = old_enable;
+    bridge->b_wid_tflush;               /* wait until Bridge PIO complete */
+
+    /*
+     * ensure that we write and read without any interruption.
+     * The read following the write is required for the Bridge war
+     */
+
+    s = splhi();
+    bridge->b_wid_control = (bridge->b_wid_control
+                             & ~BRIDGE_CTRL_SSRAM_SIZE_MASK)
+        | BRIDGE_CTRL_SSRAM_SIZE(largest_working_size);
+    bridge->b_wid_control;              /* inval addr bug war */
+    splx(s);
+
+    num_entries = ATE_NUM_ENTRIES(largest_working_size);
+
+#if PCIBR_ATE_DEBUG
+    if (num_entries)
+        printk("bridge at 0x%x: clearing %d external ATEs\n", bridge, num_entries);
+    else
+        printk("bridge at 0x%x: no external ATE RAM found\n", bridge);
+#endif
+
+    /* Initialize external mapping entries */
+    for (entry = 0; entry < num_entries; entry++)
+        bridge->b_ext_ate_ram[entry] = 0;
+
+    return (num_entries);
+}
+
+/*
+ * Allocate "count" contiguous Bridge Address Translation Entries
+ * on the specified bridge to be used for PCI to XTALK mappings.
+ * Indices in rm map range from 1..num_entries. Indices returned
+ * to caller range from 0..num_entries-1.
+ *
+ * Return the start index on success, -1 on failure.
+ */
+LOCAL int
+pcibr_ate_alloc(pcibr_soft_t pcibr_soft, int count)
+{
+    int index = 0;
+
+    index = (int) rmalloc(pcibr_soft->bs_int_ate_map, (size_t) count);
+/* printk("Colin: pcibr_ate_alloc - index %d count %d \n", index, count); */
+
+    if (!index && pcibr_soft->bs_ext_ate_map)
+        index = (int) rmalloc(pcibr_soft->bs_ext_ate_map, (size_t) count);
+
+    /* rmalloc manages resources in the 1..n
+     * range, with 0 being failure.
+     * pcibr_ate_alloc manages resources
+     * in the 0..n-1 range, with -1 being failure.
+     */
+    return index - 1;
+}
+
+LOCAL void
+pcibr_ate_free(pcibr_soft_t pcibr_soft, int index, int count)
+/* Who says there's no such thing as a free meal? :-) */
+{
+    /* note the "+1" since rmalloc handles 1..n but
+     * we start counting ATEs at zero.
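+     * (So an rmalloc() return of 1 from pcibr_ate_alloc() above is handed
+     * to callers as ATE index 0, and freeing ATE index "index" must give
+     * entry index + 1 back to the map, which is what the rmfree() call
+     * below does.)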
+ */ +/* printk("Colin: pcibr_ate_free - index %d count %d\n", index, count); */ + + rmfree((index < pcibr_soft->bs_int_ate_size) + ? pcibr_soft->bs_int_ate_map + : pcibr_soft->bs_ext_ate_map, + count, index + 1); +} + +LOCAL pcibr_info_t +pcibr_info_get(devfs_handle_t vhdl) +{ + return (pcibr_info_t) pciio_info_get(vhdl); +} + +pcibr_info_t +pcibr_device_info_new( + pcibr_soft_t pcibr_soft, + pciio_slot_t slot, + pciio_function_t rfunc, + pciio_vendor_id_t vendor, + pciio_device_id_t device) +{ + pcibr_info_t pcibr_info; + pciio_function_t func; + int ibit; + + func = (rfunc == PCIIO_FUNC_NONE) ? 0 : rfunc; + + NEW(pcibr_info); + pciio_device_info_new(&pcibr_info->f_c, + pcibr_soft->bs_vhdl, + slot, rfunc, + vendor, device); + + if (slot != PCIIO_SLOT_NONE) { + + /* + * Currently favored mapping from PCI + * slot number and INTA/B/C/D to Bridge + * PCI Interrupt Bit Number: + * + * SLOT A B C D + * 0 0 4 0 4 + * 1 1 5 1 5 + * 2 2 6 2 6 + * 3 3 7 3 7 + * 4 4 0 4 0 + * 5 5 1 5 1 + * 6 6 2 6 2 + * 7 7 3 7 3 + * + * XXX- allow pcibr_hints to override default + * XXX- allow ADMIN to override pcibr_hints + */ + for (ibit = 0; ibit < 4; ++ibit) + pcibr_info->f_ibit[ibit] = + (slot + 4 * ibit) & 7; + + /* + * Record the info in the sparse func info space. + */ + if (func < pcibr_soft->bs_slot[slot].bss_ninfo) + pcibr_soft->bs_slot[slot].bss_infos[func] = pcibr_info; + } + return pcibr_info; +} + +void +pcibr_device_info_free(devfs_handle_t pcibr_vhdl, pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + pcibr_info_t pcibr_info; + pciio_function_t func; + pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[slot]; + int nfunc = slotp->bss_ninfo; + + + for (func = 0; func < nfunc; func++) { + pcibr_info = slotp->bss_infos[func]; + + if (!pcibr_info) + continue; + + slotp->bss_infos[func] = 0; + pciio_device_info_unregister(pcibr_vhdl, &pcibr_info->f_c); + pciio_device_info_free(&pcibr_info->f_c); + DEL(pcibr_info); + } + + /* Clear the DEVIO(x) for this slot */ + slotp->bss_devio.bssd_space = PCIIO_SPACE_NONE; + slotp->bss_devio.bssd_base = PCIBR_D32_BASE_UNSET; + slotp->bss_device = 0; + + + /* Reset the mapping usage counters */ + slotp->bss_pmu_uctr = 0; + slotp->bss_d32_uctr = 0; + slotp->bss_d64_uctr = 0; + + /* Clear the Direct translation info */ + slotp->bss_d64_base = PCIBR_D64_BASE_UNSET; + slotp->bss_d64_flags = 0; + slotp->bss_d32_base = PCIBR_D32_BASE_UNSET; + slotp->bss_d32_flags = 0; + + /* Clear out shadow info necessary for the external SSRAM workaround */ + slotp->bss_ext_ates_active = ATOMIC_INIT(0); + slotp->bss_cmd_pointer = 0; + slotp->bss_cmd_shadow = 0; + +} + +/* + * PCI_ADDR_SPACE_LIMITS_LOAD + * Gets the current values of + * pci io base, + * pci io last, + * pci low memory base, + * pci low memory last, + * pci high memory base, + * pci high memory last + */ +#define PCI_ADDR_SPACE_LIMITS_LOAD() \ + pci_io_fb = pcibr_soft->bs_spinfo.pci_io_base; \ + pci_io_fl = pcibr_soft->bs_spinfo.pci_io_last; \ + pci_lo_fb = pcibr_soft->bs_spinfo.pci_swin_base; \ + pci_lo_fl = pcibr_soft->bs_spinfo.pci_swin_last; \ + pci_hi_fb = pcibr_soft->bs_spinfo.pci_mem_base; \ + pci_hi_fl = pcibr_soft->bs_spinfo.pci_mem_last; +/* + * PCI_ADDR_SPACE_LIMITS_STORE + * Sets the current values of + * pci io base, + * pci io last, + * pci low memory base, + * pci low memory last, + * pci high memory base, + * pci high memory last + */ +#define PCI_ADDR_SPACE_LIMITS_STORE() \ + pcibr_soft->bs_spinfo.pci_io_base = pci_io_fb; \ + pcibr_soft->bs_spinfo.pci_io_last = pci_io_fl; \ + 
pcibr_soft->bs_spinfo.pci_swin_base = pci_lo_fb; \ + pcibr_soft->bs_spinfo.pci_swin_last = pci_lo_fl; \ + pcibr_soft->bs_spinfo.pci_mem_base = pci_hi_fb; \ + pcibr_soft->bs_spinfo.pci_mem_last = pci_hi_fl; + +#define PCI_ADDR_SPACE_LIMITS_PRINT() \ + printf("+++++++++++++++++++++++\n" \ + "IO base 0x%x last 0x%x\n" \ + "SWIN base 0x%x last 0x%x\n" \ + "MEM base 0x%x last 0x%x\n" \ + "+++++++++++++++++++++++\n", \ + pcibr_soft->bs_spinfo.pci_io_base, \ + pcibr_soft->bs_spinfo.pci_io_last, \ + pcibr_soft->bs_spinfo.pci_swin_base, \ + pcibr_soft->bs_spinfo.pci_swin_last, \ + pcibr_soft->bs_spinfo.pci_mem_base, \ + pcibr_soft->bs_spinfo.pci_mem_last); + +/* + * pcibr_slot_info_init + * Probe for this slot and see if it is populated. + * If it is populated initialize the generic PCI infrastructural + * information associated with this particular PCI device. + */ +int +pcibr_slot_info_init(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + bridge_t *bridge; + cfg_p cfgw; + unsigned idword; + unsigned pfail; + unsigned idwords[8]; + pciio_vendor_id_t vendor; + pciio_device_id_t device; + unsigned htype; + cfg_p wptr; + int win; + pciio_space_t space; + iopaddr_t pci_io_fb, pci_io_fl; + iopaddr_t pci_lo_fb, pci_lo_fl; + iopaddr_t pci_hi_fb, pci_hi_fl; + int nfunc; + pciio_function_t rfunc; + int func; + devfs_handle_t conn_vhdl; + pcibr_soft_slot_t slotp; + + /* Get the basic software information required to proceed */ + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + if (!pcibr_soft) + return(EINVAL); + + bridge = pcibr_soft->bs_base; + if (!PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + /* If we have a host slot (eg:- IOC3 has 2 PCI slots and the initialization + * is done by the host slot then we are done. + */ + if (pcibr_soft->bs_slot[slot].has_host) { + return(0); + } + + /* Check for a slot with any system critical functions */ + if (pcibr_is_slot_sys_critical(pcibr_vhdl, slot)) + return(EPERM); + + /* Load the current values of allocated PCI address spaces */ + PCI_ADDR_SPACE_LIMITS_LOAD(); + + /* Try to read the device-id/vendor-id from the config space */ + cfgw = bridge->b_type0_cfg_dev[slot].l; + + if (pcibr_probe_slot(bridge, cfgw, &idword)) + return(ENODEV); + + slotp = &pcibr_soft->bs_slot[slot]; + slotp->slot_status |= SLOT_POWER_UP; + + vendor = 0xFFFF & idword; + /* If the vendor id is not valid then the slot is not populated + * and we are done. + */ + if (vendor == 0xFFFF) + return(ENODEV); + + device = 0xFFFF & (idword >> 16); + htype = do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1); + + nfunc = 1; + rfunc = PCIIO_FUNC_NONE; + pfail = 0; + + /* NOTE: if a card claims to be multifunction + * but only responds to config space 0, treat + * it as a unifunction card. 
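+ * A condensed sketch of the probe loop that follows (function names
+ * hypothetical; the real loop reads each function's config space through
+ * the bridge):
+ *
+ *   //  bit 7 of the header type marks a multifunction device
+ *   static int count_functions(unsigned htype, int (*responds)(int fn))
+ *   {
+ *       int nfunc = 1, fn;
+ *       if (htype & 0x80)
+ *           for (fn = 1; fn < 8; ++fn)
+ *               if (responds(fn))        // valid vendor id read back
+ *                   nfunc = fn + 1;      // highest responding function + 1
+ *       return nfunc;                    // 1 when only function 0 answers
+ *   }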
+ */ + + if (htype & 0x80) { /* MULTIFUNCTION */ + for (func = 1; func < 8; ++func) { + cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; + if (pcibr_probe_slot(bridge, cfgw, &idwords[func])) { + pfail |= 1 << func; + continue; + } + vendor = 0xFFFF & idwords[func]; + if (vendor == 0xFFFF) { + pfail |= 1 << func; + continue; + } + nfunc = func + 1; + rfunc = 0; + } + cfgw = bridge->b_type0_cfg_dev[slot].l; + } + NEWA(pcibr_infoh, nfunc); + + pcibr_soft->bs_slot[slot].bss_ninfo = nfunc; + pcibr_soft->bs_slot[slot].bss_infos = pcibr_infoh; + + for (func = 0; func < nfunc; ++func) { + unsigned cmd_reg; + + if (func) { + if (pfail & (1 << func)) + continue; + + idword = idwords[func]; + cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; + + device = 0xFFFF & (idword >> 16); + htype = do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1); + rfunc = func; + } + htype &= 0x7f; + if (htype != 0x00) { + printk(KERN_WARNING "%s pcibr: pci slot %d func %d has strange header type 0x%x\n", + pcibr_soft->bs_name, slot, func, htype); + continue; + } +#if DEBUG && ATTACH_DEBUG + printk(KERN_NOTICE + "%s pcibr: pci slot %d func %d: vendor 0x%x device 0x%x", + pcibr_soft->bs_name, slot, func, vendor, device); +#endif + + pcibr_info = pcibr_device_info_new + (pcibr_soft, slot, rfunc, vendor, device); + conn_vhdl = pciio_device_info_register(pcibr_vhdl, &pcibr_info->f_c); + if (func == 0) + slotp->slot_conn = conn_vhdl; + +#ifdef LITTLE_ENDIAN + cmd_reg = cfgw[(PCI_CFG_COMMAND ^ 4) / 4]; +#else + cmd_reg = cfgw[PCI_CFG_COMMAND / 4]; +#endif + + wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4; + + for (win = 0; win < PCI_CFG_BASE_ADDRS; ++win) { + iopaddr_t base, mask, code; + size_t size; + + /* + * GET THE BASE & SIZE OF THIS WINDOW: + * + * The low two or four bits of the BASE register + * determines which address space we are in; the + * rest is a base address. BASE registers + * determine windows that are power-of-two sized + * and naturally aligned, so we can get the size + * of a window by writing all-ones to the + * register, reading it back, and seeing which + * bits are used for decode; the least + * significant nonzero bit is also the size of + * the window. + * + * WARNING: someone may already have allocated + * some PCI space to this window, and in fact + * PIO may be in process at this very moment + * from another processor (or even from this + * one, if we get interrupted)! So, if the BASE + * already has a nonzero address, be generous + * and use the LSBit of that address as the + * size; this could overstate the window size. + * Usually, when one card is set up, all are set + * up; so, since we don't bitch about + * overlapping windows, we are ok. + * + * UNFORTUNATELY, some cards do not clear their + * BASE registers on reset. I have two heuristics + * that can detect such cards: first, if the + * decode enable is turned off for the space + * that the window uses, we can disregard the + * initial value. second, if the address is + * outside the range that we use, we can disregard + * it as well. + * + * This is looking very PCI generic. Except for + * knowing how many slots and where their config + * spaces are, this window loop and the next one + * could probably be shared with other PCI host + * adapters. It would be interesting to see if + * this could be pushed up into pciio, when we + * start supporting more PCI providers. + */ +#ifdef LITTLE_ENDIAN + base = wptr[((win*4)^4)/4]; +#else + base = wptr[win]; +#endif + + if (base & PCI_BA_IO_SPACE) { + /* BASE is in I/O space. 
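+ * The low bits of a BAR are type bits rather than address bits, which is
+ * why the masks differ below (-4 for I/O, -16 for memory). A small sketch
+ * of the size probe described above, assuming the all-ones value has
+ * already been written and read back (helper name hypothetical):
+ *
+ *   static unsigned long bar_probe_size(unsigned long readback,
+ *                                       unsigned long addr_mask)
+ *   {
+ *       unsigned long bits = readback & addr_mask;  // drop the type bits
+ *       return bits & -bits;       // least significant address bit = size
+ *   }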
*/ + space = PCIIO_SPACE_IO; + mask = -4; + code = base & 3; + base = base & mask; + if (base == 0) { + ; /* not assigned */ + } else if (!(cmd_reg & PCI_CMD_IO_SPACE)) { + base = 0; /* decode not enabled */ + } + } else { + /* BASE is in MEM space. */ + space = PCIIO_SPACE_MEM; + mask = -16; + code = base & PCI_BA_MEM_LOCATION; /* extract BAR type */ + base = base & mask; + if (base == 0) { + ; /* not assigned */ + } else if (!(cmd_reg & PCI_CMD_MEM_SPACE)) { + base = 0; /* decode not enabled */ + } else if (base & 0xC0000000) { + base = 0; /* outside permissable range */ + } else if ((code == PCI_BA_MEM_64BIT) && +#ifdef LITTLE_ENDIAN + (wptr[(((win + 1)*4)^4)/4] != 0)) { +#else + (wptr[win + 1] != 0)) { +#endif /* LITTLE_ENDIAN */ + base = 0; /* outside permissable range */ + } + } + + if (base != 0) { /* estimate size */ + size = base & -base; + } else { /* calculate size */ +#ifdef LITTLE_ENDIAN + wptr[((win*4)^4)/4] = ~0; /* turn on all bits */ + size = wptr[((win*4)^4)/4]; /* get stored bits */ +#else + wptr[win] = ~0; /* turn on all bits */ + size = wptr[win]; /* get stored bits */ +#endif /* LITTLE_ENDIAN */ + size &= mask; /* keep addr */ + size &= -size; /* keep lsbit */ + if (size == 0) + continue; + } + + pcibr_info->f_window[win].w_space = space; + pcibr_info->f_window[win].w_base = base; + pcibr_info->f_window[win].w_size = size; + + /* + * If this window already has PCI space + * allocated for it, "subtract" that space from + * our running freeblocks. Don't worry about + * overlaps in existing allocated windows; we + * may be overstating their sizes anyway. + */ + + if (base && size) { + if (space == PCIIO_SPACE_IO) { + pcibr_freeblock_sub(&pci_io_fb, + &pci_io_fl, + base, size); + } else { + pcibr_freeblock_sub(&pci_lo_fb, + &pci_lo_fl, + base, size); + pcibr_freeblock_sub(&pci_hi_fb, + &pci_hi_fl, + base, size); + } + } +#if defined(IOC3_VENDOR_ID_NUM) && defined(IOC3_DEVICE_ID_NUM) + /* + * IOC3 BASE_ADDR* BUG WORKAROUND + * + + * If we write to BASE1 on the IOC3, the + * data in BASE0 is replaced. The + * original workaround was to remember + * the value of BASE0 and restore it + * when we ran off the end of the BASE + * registers; however, a later + * workaround was added (I think it was + * rev 1.44) to avoid setting up + * anything but BASE0, with the comment + * that writing all ones to BASE1 set + * the enable-parity-error test feature + * in IOC3's SCR bit 14. + * + * So, unless we defer doing any PCI + * space allocation until drivers + * attach, and set up a way for drivers + * (the IOC3 in paricular) to tell us + * generically to keep our hands off + * BASE registers, we gotta "know" about + * the IOC3 here. + * + * Too bad the PCI folks didn't reserve the + * all-zero value for 'no BASE here' (it is a + * valid code for an uninitialized BASE in + * 32-bit PCI memory space). + */ + + if ((vendor == IOC3_VENDOR_ID_NUM) && + (device == IOC3_DEVICE_ID_NUM)) + break; +#endif + if (code == PCI_BA_MEM_64BIT) { + win++; /* skip upper half */ +#ifdef LITTLE_ENDIAN + wptr[((win*4)^4)/4] = 0; /* which must be zero */ +#else + wptr[win] = 0; /* which must be zero */ +#endif /* LITTLE_ENDIAN */ + } + } /* next win */ + } /* next func */ + + /* Store back the values for allocated PCI address spaces */ + PCI_ADDR_SPACE_LIMITS_STORE(); + return(0); +} + +/* + * pcibr_slot_info_free + * Remove all the PCI infrastructural information associated + * with a particular PCI device. 
+ */ +int +pcibr_slot_info_free(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + int nfunc; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + + pcibr_device_info_free(pcibr_vhdl, slot); + + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + DELA(pcibr_infoh,nfunc); + pcibr_soft->bs_slot[slot].bss_ninfo = 0; + + return(0); +} + +int as_debug = 0; +/* + * pcibr_slot_addr_space_init + * Reserve chunks of PCI address space as required by + * the base registers in the card. + */ +int +pcibr_slot_addr_space_init(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + bridge_t *bridge; + iopaddr_t pci_io_fb, pci_io_fl; + iopaddr_t pci_lo_fb, pci_lo_fl; + iopaddr_t pci_hi_fb, pci_hi_fl; + size_t align; + iopaddr_t mask; + int nbars; + int nfunc; + int func; + int win; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + bridge = pcibr_soft->bs_base; + + /* Get the current values for the allocated PCI address spaces */ + PCI_ADDR_SPACE_LIMITS_LOAD(); + + if (as_debug) +#ifdef LATER + PCI_ADDR_SPACE_LIMITS_PRINT(); +#endif + /* allocate address space, + * for windows that have not been + * previously assigned. + */ + if (pcibr_soft->bs_slot[slot].has_host) { + return(0); + } + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + if (nfunc < 1) + return(EINVAL); + + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + if (!pcibr_infoh) + return(EINVAL); + + /* + * Try to make the DevIO windows not + * overlap by pushing the "io" and "hi" + * allocation areas up to the next one + * or two megabyte bound. This also + * keeps them from being zero. + * + * DO NOT do this with "pci_lo" since + * the entire "lo" area is only a + * megabyte, total ... + */ + align = (slot < 2) ? 0x200000 : 0x100000; + mask = -align; + pci_io_fb = (pci_io_fb + align - 1) & mask; + pci_hi_fb = (pci_hi_fb + align - 1) & mask; + + for (func = 0; func < nfunc; ++func) { + cfg_p cfgw; + cfg_p wptr; + pciio_space_t space; + iopaddr_t base; + size_t size; + cfg_p pci_cfg_cmd_reg_p; + unsigned pci_cfg_cmd_reg; + unsigned pci_cfg_cmd_reg_add = 0; + + pcibr_info = pcibr_infoh[func]; + + if (!pcibr_info) + continue; + + if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) + continue; + + cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; + wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4; + + nbars = PCI_CFG_BASE_ADDRS; + + for (win = 0; win < nbars; ++win) { + + space = pcibr_info->f_window[win].w_space; + base = pcibr_info->f_window[win].w_base; + size = pcibr_info->f_window[win].w_size; + + if (size < 1) + continue; + + if (base >= size) { +#if DEBUG && PCI_DEBUG + printk("pcibr: slot %d func %d window %d is in %d[0x%x..0x%x], alloc by prom\n", + slot, func, win, space, base, base + size - 1); +#endif + continue; /* already allocated */ + } + align = size; /* ie. 0x00001000 */ + if (align < _PAGESZ) + align = _PAGESZ; /* ie. 0x00004000 */ + mask = -align; /* ie. 
0xFFFFC000 */ + + switch (space) { + case PCIIO_SPACE_IO: + base = (pci_io_fb + align - 1) & mask; + if ((base + size) > pci_io_fl) { + base = 0; + break; + } + pci_io_fb = base + size; + break; + + case PCIIO_SPACE_MEM: +#ifdef LITTLE_ENDIAN + if ((wptr[((win*4)^4)/4] & PCI_BA_MEM_LOCATION) == +#else + if ((wptr[win] & PCI_BA_MEM_LOCATION) == +#endif /* LITTLE_ENDIAN */ + PCI_BA_MEM_1MEG) { + /* allocate from 20-bit PCI space */ + base = (pci_lo_fb + align - 1) & mask; + if ((base + size) > pci_lo_fl) { + base = 0; + break; + } + pci_lo_fb = base + size; + } else { + /* allocate from 32-bit or 64-bit PCI space */ + base = (pci_hi_fb + align - 1) & mask; + if ((base + size) > pci_hi_fl) { + base = 0; + break; + } + pci_hi_fb = base + size; + } + break; + + default: + base = 0; +#if DEBUG && PCI_DEBUG + printk("pcibr: slot %d window %d had bad space code %d\n", + slot, win, space); +#endif + } + pcibr_info->f_window[win].w_base = base; +#ifdef LITTLE_ENDIAN + wptr[((win*4)^4)/4] = base; +#if DEBUG && PCI_DEBUG + printk("Setting base address 0x%p base 0x%x\n", &(wptr[((win*4)^4)/4]), base); +#endif +#else + wptr[win] = base; +#endif /* LITTLE_ENDIAN */ + +#if DEBUG && PCI_DEBUG + if (base >= size) + printk("pcibr: slot %d func %d window %d is in %d [0x%x..0x%x], alloc by pcibr\n", + slot, func, win, space, base, base + size - 1); + else + printk("pcibr: slot %d func %d window %d, unable to alloc 0x%x in 0x%p\n", + slot, func, win, size, space); +#endif + } /* next base */ + + /* + * Allocate space for the EXPANSION ROM + * NOTE: DO NOT DO THIS ON AN IOC3, + * as it blows the system away. + */ + base = size = 0; + if ((pcibr_soft->bs_slot[slot].bss_vendor_id != IOC3_VENDOR_ID_NUM) || + (pcibr_soft->bs_slot[slot].bss_device_id != IOC3_DEVICE_ID_NUM)) { + + wptr = cfgw + PCI_EXPANSION_ROM / 4; +#ifdef LITTLE_ENDIAN + wptr[1] = 0xFFFFF000; + mask = wptr[1]; +#else + *wptr = 0xFFFFF000; + mask = *wptr; +#endif /* LITTLE_ENDIAN */ + if (mask & 0xFFFFF000) { + size = mask & -mask; + align = size; + if (align < _PAGESZ) + align = _PAGESZ; + mask = -align; + base = (pci_hi_fb + align - 1) & mask; + if ((base + size) > pci_hi_fl) + base = size = 0; + else { + pci_hi_fb = base + size; +#ifdef LITTLE_ENDIAN + wptr[1] = base; +#else + *wptr = base; +#endif /* LITTLE_ENDIAN */ +#if DEBUG && PCI_DEBUG + printk("%s/%d ROM in 0x%lx..0x%lx (alloc by pcibr)\n", + pcibr_soft->bs_name, slot, + base, base + size - 1); +#endif + } + } + } + pcibr_info->f_rbase = base; + pcibr_info->f_rsize = size; + + /* + * if necessary, update the board's + * command register to enable decoding + * in the windows we added. + * + * There are some bits we always want to + * be sure are set. + */ + pci_cfg_cmd_reg_add |= PCI_CMD_IO_SPACE; + + /* + * The Adaptec 1160 FC Controller WAR #767995: + * The part incorrectly ignores the upper 32 bits of a 64 bit + * address when decoding references to it's registers so to + * keep it from responding to a bus cycle that it shouldn't + * we only use I/O space to get at it's registers. Don't + * enable memory space accesses on that PCI device. 
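+ * A condensed restatement of the command register update this feeds into
+ * (config space access elided, names shortened, purely illustrative):
+ *
+ *   // Only add decode/master enable bits, never clear ones already set,
+ *   // and skip the config write entirely when nothing new is needed.
+ *   static unsigned new_cmd_value(unsigned cmd, unsigned wanted)
+ *   {
+ *       cmd &= 0xFFFF;                  // status lives in the upper half
+ *       if (wanted & ~cmd)
+ *           return cmd | wanted;        // value to write back
+ *       return cmd;                     // already enabled, leave it alone
+ *   }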
+ */ + #define FCADP_VENDID 0x9004 /* Adaptec Vendor ID from fcadp.h */ + #define FCADP_DEVID 0x1160 /* Adaptec 1160 Device ID from fcadp.h */ + + if ((pcibr_info->f_vendor != FCADP_VENDID) || + (pcibr_info->f_device != FCADP_DEVID)) + pci_cfg_cmd_reg_add |= PCI_CMD_MEM_SPACE; + + pci_cfg_cmd_reg_add |= PCI_CMD_BUS_MASTER; + + pci_cfg_cmd_reg_p = cfgw + PCI_CFG_COMMAND / 4; + pci_cfg_cmd_reg = *pci_cfg_cmd_reg_p; +#if PCI_FBBE /* XXX- check here to see if dev can do fast-back-to-back */ + if (!((pci_cfg_cmd_reg >> 16) & PCI_STAT_F_BK_BK_CAP)) + fast_back_to_back_enable = 0; +#endif + pci_cfg_cmd_reg &= 0xFFFF; + if (pci_cfg_cmd_reg_add & ~pci_cfg_cmd_reg) + *pci_cfg_cmd_reg_p = pci_cfg_cmd_reg | pci_cfg_cmd_reg_add; + + } /* next func */ + + /* Now that we have allocated new chunks of PCI address spaces to this + * card we need to update the bookkeeping values which indicate + * the current PCI address space allocations. + */ + PCI_ADDR_SPACE_LIMITS_STORE(); + return(0); +} + +/* + * pcibr_slot_device_init + * Setup the device register in the bridge for this PCI slot. + */ +int +pcibr_slot_device_init(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + bridgereg_t devreg; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + bridge = pcibr_soft->bs_base; + + /* + * Adjustments to Device(x) + * and init of bss_device shadow + */ + devreg = bridge->b_device[slot].reg; + devreg &= ~BRIDGE_DEV_PAGE_CHK_DIS; + devreg |= BRIDGE_DEV_COH | BRIDGE_DEV_VIRTUAL_EN; +#ifdef LITTLE_ENDIAN + devreg |= BRIDGE_DEV_DEV_SWAP; +#endif + pcibr_soft->bs_slot[slot].bss_device = devreg; + bridge->b_device[slot].reg = devreg; + +#if DEBUG && PCI_DEBUG + printk("pcibr Device(%d): 0x%lx\n", slot, bridge->b_device[slot].reg); +#endif + +#if DEBUG && PCI_DEBUG + printk("pcibr: PCI space allocation done.\n"); +#endif + + return(0); +} + +/* + * pcibr_slot_guest_info_init + * Setup the host/guest relations for a PCI slot. + */ +int +pcibr_slot_guest_info_init(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + pcibr_soft_slot_t slotp; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + slotp = &pcibr_soft->bs_slot[slot]; + + /* create info and verticies for guest slots; + * for compatibilitiy macros, create info + * for even unpopulated slots (but do not + * build verticies for them). + */ + if (pcibr_soft->bs_slot[slot].bss_ninfo < 1) { + NEWA(pcibr_infoh, 1); + pcibr_soft->bs_slot[slot].bss_ninfo = 1; + pcibr_soft->bs_slot[slot].bss_infos = pcibr_infoh; + + pcibr_info = pcibr_device_info_new + (pcibr_soft, slot, PCIIO_FUNC_NONE, + PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE); + + if (pcibr_soft->bs_slot[slot].has_host) { + slotp->slot_conn = pciio_device_info_register + (pcibr_vhdl, &pcibr_info->f_c); + } + } + + /* generate host/guest relations + */ + if (pcibr_soft->bs_slot[slot].has_host) { + int host = pcibr_soft->bs_slot[slot].host_slot; + pcibr_soft_slot_t host_slotp = &pcibr_soft->bs_slot[host]; + + hwgraph_edge_add(slotp->slot_conn, + host_slotp->slot_conn, + EDGE_LBL_HOST); + + /* XXX- only gives us one guest edge per + * host. If/when we have a host with more than + * one guest, we will need to figure out how + * the host finds all its guests, and sorts + * out which one is which. 
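+ * One purely illustrative way to do that, not implemented here, would be
+ * to number the return edges instead of reusing EDGE_LBL_GUEST:
+ *
+ *   char label[16];
+ *   sprintf(label, "guest%d", guest_index);          // hypothetical index
+ *   hwgraph_edge_add(host_slotp->slot_conn, slotp->slot_conn, label);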
+ */ + hwgraph_edge_add(host_slotp->slot_conn, + slotp->slot_conn, + EDGE_LBL_GUEST); + } + + return(0); +} + +/* + * pcibr_slot_initial_rrb_alloc + * Allocate a default number of rrbs for this slot on + * the two channels. This is dictated by the rrb allocation + * strategy routine defined per platform. + */ + +int +pcibr_slot_initial_rrb_alloc(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + bridge_t *bridge; + int c0, c1; + int r; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + bridge = pcibr_soft->bs_base; + + /* How may RRBs are on this slot? + */ + c0 = do_pcibr_rrb_count_valid(bridge, slot); + c1 = do_pcibr_rrb_count_valid(bridge, slot + PCIBR_RRB_SLOT_VIRTUAL); + +#if PCIBR_RRB_DEBUG + printk("pcibr_attach: slot %d started with %d+%d\n", slot, c0, c1); +#endif + + /* Do we really need any? + */ + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + pcibr_info = pcibr_infoh[0]; + if ((pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) && + !pcibr_soft->bs_slot[slot].has_host) { + if (c0 > 0) + do_pcibr_rrb_free(bridge, slot, c0); + if (c1 > 0) + do_pcibr_rrb_free(bridge, slot + PCIBR_RRB_SLOT_VIRTUAL, c1); + pcibr_soft->bs_rrb_valid[slot] = 0x1000; + pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL] = 0x1000; + return(ENODEV); + } + + pcibr_soft->bs_rrb_avail[slot & 1] -= c0 + c1; + pcibr_soft->bs_rrb_valid[slot] = c0; + pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL] = c1; + + pcibr_soft->bs_rrb_avail[0] = do_pcibr_rrb_count_avail(bridge, 0); + pcibr_soft->bs_rrb_avail[1] = do_pcibr_rrb_count_avail(bridge, 1); + + r = 3 - (c0 + c1); + + if (r > 0) { + pcibr_soft->bs_rrb_res[slot] = r; + pcibr_soft->bs_rrb_avail[slot & 1] -= r; + } + +#if PCIBR_RRB_DEBUG + printk("\t%d+%d+%d", + 0xFFF & pcibr_soft->bs_rrb_valid[slot], + 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + printk("\n"); +#endif + + return(0); +} + +/* + * pcibr_slot_call_device_attach + * This calls the associated driver attach routine for the PCI + * card in this slot. + */ +int +pcibr_slot_call_device_attach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, + int drv_flags) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + async_attach_t aa = NULL; + int func; + devfs_handle_t xconn_vhdl,conn_vhdl; + int nfunc; + int error_func; + int error_slot = 0; + int error = ENODEV; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + + if (pcibr_soft->bs_slot[slot].has_host) { + return(EPERM); + } + + xconn_vhdl = pcibr_soft->bs_conn; + aa = async_attach_get_info(xconn_vhdl); + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + + for (func = 0; func < nfunc; ++func) { + + pcibr_info = pcibr_infoh[func]; + + if (!pcibr_info) + continue; + + if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) + continue; + + conn_vhdl = pcibr_info->f_vertex; + +#ifdef LATER + /* + * Activate if and when we support cdl. 
+ */ + if (aa) + async_attach_add_info(conn_vhdl, aa); +#endif /* LATER */ + + error_func = pciio_device_attach(conn_vhdl, drv_flags); + + pcibr_info->f_att_det_error = error_func; + + if (error_func) + error_slot = error_func; + + error = error_slot; + + } /* next func */ + + if (error) { + if ((error != ENODEV) && (error != EUNATCH)) + pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_INCMPLT; + } else { + pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_CMPLT; + } + + return(error); +} + +/* + * pcibr_slot_call_device_detach + * This calls the associated driver detach routine for the PCI + * card in this slot. + */ +int +pcibr_slot_call_device_detach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, + int drv_flags) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + int func; + devfs_handle_t conn_vhdl = GRAPH_VERTEX_NONE; + int nfunc; + int error_func; + int error_slot = 0; + int error = ENODEV; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + if (pcibr_soft->bs_slot[slot].has_host) + return(EPERM); + + /* Make sure that we do not detach a system critical function vertex */ + if(pcibr_is_slot_sys_critical(pcibr_vhdl, slot)) + return(EPERM); + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + + for (func = 0; func < nfunc; ++func) { + + pcibr_info = pcibr_infoh[func]; + + if (!pcibr_info) + continue; + + if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) + continue; + + conn_vhdl = pcibr_info->f_vertex; + + error_func = pciio_device_detach(conn_vhdl, drv_flags); + + pcibr_info->f_att_det_error = error_func; + + if (error_func) + error_slot = error_func; + + error = error_slot; + + } /* next func */ + + pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK; + + if (error) { + if ((error != ENODEV) && (error != EUNATCH)) + pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_INCMPLT; + } else { + if (conn_vhdl != GRAPH_VERTEX_NONE) + pcibr_device_unregister(conn_vhdl); + pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_CMPLT; + } + + return(error); +} + +/* + * pcibr_slot_detach + * This is a place holder routine to keep track of all the + * slot-specific freeing that needs to be done. + */ +int +pcibr_slot_detach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, + int drv_flags) +{ + int error; + + /* Call the device detach function */ + error = (pcibr_slot_call_device_detach(pcibr_vhdl, slot, drv_flags)); + return (error); + +} + +/* + * pcibr_is_slot_sys_critical + * Check slot for any functions that are system critical. + * Return 1 if any are system critical or 0 otherwise. + * + * This function will always return 0 when called by + * pcibr_attach() because the system critical vertices + * have not yet been set in the hwgraph. 
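+ * Callers above use the nonzero return strictly as a veto: for example,
+ * pcibr_slot_info_init() refuses to reprobe such a slot and
+ * pcibr_slot_call_device_detach() refuses to detach it, both returning
+ * EPERM to their callers.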
+ */ +int +pcibr_is_slot_sys_critical(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + devfs_handle_t conn_vhdl = GRAPH_VERTEX_NONE; + int nfunc; + int func; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(0); + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + + for (func = 0; func < nfunc; ++func) { + + pcibr_info = pcibr_infoh[func]; + if (!pcibr_info) + continue; + + if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) + continue; + + conn_vhdl = pcibr_info->f_vertex; + if (is_sys_critical_vertex(conn_vhdl)) { +#if defined(SUPPORT_PRINTING_V_FORMAT) + printk(KERN_WARNING "%v is a system critical device vertex\n", conn_vhdl); +#else + printk(KERN_WARNING "%p is a system critical device vertex\n", (void *)conn_vhdl); +#endif + return(1); + } + + } + + return(0); +} + +/* + * pcibr_device_unregister + * This frees up any hardware resources reserved for this PCI device + * and removes any PCI infrastructural information setup for it. + * This is usually used at the time of shutting down of the PCI card. + */ +int +pcibr_device_unregister(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info; + devfs_handle_t pcibr_vhdl; + pciio_slot_t slot; + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + int error_call; + int error = 0; + + pciio_info = pciio_info_get(pconn_vhdl); + + pcibr_vhdl = pciio_info_master_get(pciio_info); + slot = pciio_info_slot_get(pciio_info); + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge = pcibr_soft->bs_base; + + /* Clear all the hardware xtalk resources for this device */ + xtalk_widgetdev_shutdown(pcibr_soft->bs_conn, slot); + + /* Flush all the rrbs */ + pcibr_rrb_flush(pconn_vhdl); + + /* Free the rrbs allocated to this slot */ + error_call = do_pcibr_rrb_free(bridge, slot, + pcibr_soft->bs_rrb_valid[slot] + + pcibr_soft->bs_rrb_valid[slot + + PCIBR_RRB_SLOT_VIRTUAL]); + + if (error_call) + error = ERANGE; + + pcibr_soft->bs_rrb_valid[slot] = 0; + pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL] = 0; + pcibr_soft->bs_rrb_res[slot] = 0; + + /* Flush the write buffers !! 
*/ + error_call = pcibr_wrb_flush(pconn_vhdl); + + if (error_call) + error = error_call; + + /* Clear the information specific to the slot */ + error_call = pcibr_slot_info_free(pcibr_vhdl, slot); + + if (error_call) + error = error_call; + + return(error); + +} + +/* + * build a convenience link path in the + * form of "...//bus/" + * + * returns 1 on success, 0 otherwise + * + * depends on hwgraph separator == '/' + */ +int +pcibr_bus_cnvlink(devfs_handle_t f_c, int slot) +{ + char dst[MAXDEVNAME]; + char *dp = dst; + char *cp, *xp; + int widgetnum; + char pcibus[8]; + devfs_handle_t nvtx, svtx; + int rv; + +#if DEBUG + printk("pcibr_bus_cnvlink: slot= %d f_c= %p\n", + slot, f_c); + { + int pos; + char dname[256]; + pos = devfs_generate_path(f_c, dname, 256); + printk("%s : path= %s\n", __FUNCTION__, &dname[pos]); + } +#endif + + if (GRAPH_SUCCESS != hwgraph_vertex_name_get(f_c, dst, MAXDEVNAME)) + return 0; + + /* dst example == /hw/module/001c02/Pbrick/xtalk/8/pci/direct */ + + /* find the widget number */ + xp = strstr(dst, "/"EDGE_LBL_XTALK"/"); + if (xp == NULL) + return 0; + widgetnum = atoi(xp+7); + if (widgetnum < XBOW_PORT_8 || widgetnum > XBOW_PORT_F) + return 0; + + /* remove "/pci/direct" from path */ + cp = strstr(dst, "/" EDGE_LBL_PCI "/" "direct"); + if (cp == NULL) + return 0; + *cp = (char)NULL; + + /* get the vertex for the widget */ + if (GRAPH_SUCCESS != hwgraph_traverse(NULL, dp, &svtx)) + return 0; + + *xp = (char)NULL; /* remove "/xtalk/..." from path */ + + /* dst example now == /hw/module/001c02/Pbrick */ + + /* get the bus number */ + strcat(dst, "/bus"); + sprintf(pcibus, "%d", p_busnum[widgetnum]); + + /* link to bus to widget */ + rv = hwgraph_path_add(NULL, dp, &nvtx); + if (GRAPH_SUCCESS == rv) + rv = hwgraph_edge_add(nvtx, svtx, pcibus); + + return (rv == GRAPH_SUCCESS); +} + + +/* + * pcibr_attach: called every time the crosstalk + * infrastructure is asked to initialize a widget + * that matches the part number we handed to the + * registration routine above. + */ +/*ARGSUSED */ +int +pcibr_attach(devfs_handle_t xconn_vhdl) +{ + /* REFERENCED */ + graph_error_t rc; + devfs_handle_t pcibr_vhdl; + devfs_handle_t ctlr_vhdl; + bridge_t *bridge = NULL; + bridgereg_t id; + int rev; + pcibr_soft_t pcibr_soft; + pcibr_info_t pcibr_info; + xwidget_info_t info; + xtalk_intr_t xtalk_intr; + device_desc_t dev_desc = (device_desc_t)0; + int slot; + int ibit; + devfs_handle_t noslot_conn; + char devnm[MAXDEVNAME], *s; + pcibr_hints_t pcibr_hints; + bridgereg_t b_int_enable; + unsigned rrb_fixed = 0; + + iopaddr_t pci_io_fb, pci_io_fl; + iopaddr_t pci_lo_fb, pci_lo_fl; + iopaddr_t pci_hi_fb, pci_hi_fl; + + int spl_level; +#ifdef LATER + char *nicinfo = (char *)0; +#endif + +#if PCI_FBBE + int fast_back_to_back_enable; +#endif + l1sc_t *scp; + nasid_t nasid; + + async_attach_t aa = NULL; + + aa = async_attach_get_info(xconn_vhdl); + +#if DEBUG && ATTACH_DEBUG + printk("pcibr_attach: xconn_vhdl= %p\n", xconn_vhdl); + { + int pos; + char dname[256]; + pos = devfs_generate_path(xconn_vhdl, dname, 256); + printk("%s : path= %s \n", __FUNCTION__, &dname[pos]); + } +#endif + + /* Setup the PRB for the bridge in CONVEYOR BELT + * mode. PRBs are setup in default FIRE-AND-FORGET + * mode during the initialization. 
+ */ + hub_device_flags_set(xconn_vhdl, HUB_PIO_CONVEYOR); + + bridge = (bridge_t *) + xtalk_piotrans_addr(xconn_vhdl, NULL, + 0, sizeof(bridge_t), 0); + +#ifndef MEDUSA_HACK + if ((bridge->b_wid_stat & BRIDGE_STAT_PCI_GIO_N) == 0) + return -1; /* someone else handles GIO bridges. */ +#endif + + if (XWIDGET_PART_REV_NUM(bridge->b_wid_id) == XBRIDGE_PART_REV_A) + NeedXbridgeSwap = 1; + + /* + * Create the vertex for the PCI bus, which we + * will also use to hold the pcibr_soft and + * which will be the "master" vertex for all the + * pciio connection points we will hang off it. + * This needs to happen before we call nic_bridge_vertex_info + * as we are some of the *_vmc functions need access to the edges. + * + * Opening this vertex will provide access to + * the Bridge registers themselves. + */ + rc = hwgraph_path_add(xconn_vhdl, EDGE_LBL_PCI, &pcibr_vhdl); + ASSERT(rc == GRAPH_SUCCESS); + + ctlr_vhdl = NULL; + ctlr_vhdl = hwgraph_register(pcibr_vhdl, EDGE_LBL_CONTROLLER, + 0, DEVFS_FL_AUTO_DEVNUM, + 0, 0, + S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0, + &pcibr_fops, NULL); + + ASSERT(ctlr_vhdl != NULL); + + /* + * decode the nic, and hang its stuff off our + * connection point where other drivers can get + * at it. + */ +#ifdef LATER + nicinfo = BRIDGE_VERTEX_MFG_INFO(xconn_vhdl, (nic_data_t) & bridge->b_nic); +#endif + + /* + * Get the hint structure; if some NIC callback + * marked this vertex as "hands-off" then we + * just return here, before doing anything else. + */ + pcibr_hints = pcibr_hints_get(xconn_vhdl, 0); + + if (pcibr_hints && pcibr_hints->ph_hands_off) + return -1; /* generic operations disabled */ + + id = bridge->b_wid_id; + rev = XWIDGET_PART_REV_NUM(id); + + hwgraph_info_add_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, (arbitrary_info_t) rev); + + /* + * allocate soft state structure, fill in some + * fields, and hook it up to our vertex. + */ + NEW(pcibr_soft); + BZERO(pcibr_soft, sizeof *pcibr_soft); + pcibr_soft_set(pcibr_vhdl, pcibr_soft); + + pcibr_soft->bs_conn = xconn_vhdl; + pcibr_soft->bs_vhdl = pcibr_vhdl; + pcibr_soft->bs_base = bridge; + pcibr_soft->bs_rev_num = rev; + pcibr_soft->bs_intr_bits = pcibr_intr_bits; + if (is_xbridge(bridge)) { + pcibr_soft->bs_int_ate_size = XBRIDGE_INTERNAL_ATES; + pcibr_soft->bs_xbridge = 1; + } else { + pcibr_soft->bs_int_ate_size = BRIDGE_INTERNAL_ATES; + pcibr_soft->bs_xbridge = 0; + } + + nasid = NASID_GET(bridge); + scp = &NODEPDA( NASID_TO_COMPACT_NODEID(nasid) )->module->elsc; + pcibr_soft->bs_l1sc = scp; + pcibr_soft->bs_moduleid = iobrick_module_get(scp); + pcibr_soft->bsi_err_intr = 0; + + /* Bridges up through REV C + * are unable to set the direct + * byteswappers to BYTE_STREAM. + */ + if (pcibr_soft->bs_rev_num <= BRIDGE_PART_REV_C) { + pcibr_soft->bs_pio_end_io = PCIIO_WORD_VALUES; + pcibr_soft->bs_pio_end_mem = PCIIO_WORD_VALUES; + } +#if PCIBR_SOFT_LIST + { + pcibr_list_p self; + + NEW(self); + self->bl_soft = pcibr_soft; + self->bl_vhdl = pcibr_vhdl; + self->bl_next = pcibr_list; + self->bl_next = swap_ptr((void **) &pcibr_list, (void *)self); + } +#endif + + /* + * get the name of this bridge vertex and keep the info. Use this + * only where it is really needed now: like error interrupts. + */ + s = dev_to_name(pcibr_vhdl, devnm, MAXDEVNAME); + pcibr_soft->bs_name = kmalloc(strlen(s) + 1, GFP_KERNEL); + strcpy(pcibr_soft->bs_name, s); + +#if SHOW_REVS || DEBUG +#if !DEBUG + if (kdebug) +#endif + printk("%sBridge ASIC: rev %s (code=0x%x) at %s\n", + is_xbridge(bridge) ? 
"X" : "", + (rev == BRIDGE_PART_REV_A) ? "A" : + (rev == BRIDGE_PART_REV_B) ? "B" : + (rev == BRIDGE_PART_REV_C) ? "C" : + (rev == BRIDGE_PART_REV_D) ? "D" : + (rev == XBRIDGE_PART_REV_A) ? "A" : + (rev == XBRIDGE_PART_REV_B) ? "B" : + "unknown", + rev, pcibr_soft->bs_name); +#endif + + info = xwidget_info_get(xconn_vhdl); + pcibr_soft->bs_xid = xwidget_info_id_get(info); + pcibr_soft->bs_master = xwidget_info_master_get(info); + pcibr_soft->bs_mxid = xwidget_info_masterid_get(info); + + /* + * Init bridge lock. + */ + spin_lock_init(&pcibr_soft->bs_lock); + + /* + * If we have one, process the hints structure. + */ + if (pcibr_hints) { + rrb_fixed = pcibr_hints->ph_rrb_fixed; + + pcibr_soft->bs_rrb_fixed = rrb_fixed; + + if (pcibr_hints->ph_intr_bits) + pcibr_soft->bs_intr_bits = pcibr_hints->ph_intr_bits; + + for (slot = 0; slot < 8; ++slot) { + int hslot = pcibr_hints->ph_host_slot[slot] - 1; + + if (hslot < 0) { + pcibr_soft->bs_slot[slot].host_slot = slot; + } else { + pcibr_soft->bs_slot[slot].has_host = 1; + pcibr_soft->bs_slot[slot].host_slot = hslot; + } + } + } + /* + * set up initial values for state fields + */ + for (slot = 0; slot < 8; ++slot) { + pcibr_soft->bs_slot[slot].bss_devio.bssd_space = PCIIO_SPACE_NONE; + pcibr_soft->bs_slot[slot].bss_d64_base = PCIBR_D64_BASE_UNSET; + pcibr_soft->bs_slot[slot].bss_d32_base = PCIBR_D32_BASE_UNSET; + pcibr_soft->bs_slot[slot].bss_ext_ates_active = ATOMIC_INIT(0); + } + + for (ibit = 0; ibit < 8; ++ibit) { + pcibr_soft->bs_intr[ibit].bsi_xtalk_intr = 0; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_soft = pcibr_soft; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_list = NULL; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_stat = + &(bridge->b_int_status); + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_hdlrcnt = 0; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_shared = 0; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_connected = 0; + } + + /* + * Initialize various Bridge registers. + */ + + /* + * On pre-Rev.D bridges, set the PCI_RETRY_CNT + * to zero to avoid dropping stores. (#475347) + */ + if (rev < BRIDGE_PART_REV_D) + bridge->b_bus_timeout &= ~BRIDGE_BUS_PCI_RETRY_MASK; + + /* + * Clear all pending interrupts. + */ + bridge->b_int_rst_stat = (BRIDGE_IRR_ALL_CLR); + + /* + * Until otherwise set up, + * assume all interrupts are + * from slot 7. + */ + bridge->b_int_device = (uint32_t) 0xffffffff; + + { + bridgereg_t dirmap; + paddr_t paddr; + iopaddr_t xbase; + xwidgetnum_t xport; + iopaddr_t offset; + int num_entries = 0; + int entry; + cnodeid_t cnodeid; + nasid_t nasid; + + /* Set the Bridge's 32-bit PCI to XTalk + * Direct Map register to the most useful + * value we can determine. Note that we + * must use a single xid for all of: + * direct-mapped 32-bit DMA accesses + * direct-mapped 64-bit DMA accesses + * DMA accesses through the PMU + * interrupts + * This is the only way to guarantee that + * completion interrupts will reach a CPU + * after all DMA data has reached memory. + * (Of course, there may be a few special + * drivers/controlers that explicitly manage + * this ordering problem.) + */ + + cnodeid = 0; /* default node id */ + /* + * Determine the base address node id to be used for all 32-bit + * Direct Mapping I/O. The default is node 0, but this can be changed + * via a DEVICE_ADMIN directive and the PCIBUS_DMATRANS_NODE + * attribute in the irix.sm config file. A device driver can obtain + * this node value via a call to pcibr_get_dmatrans_node(). 
+ */ + nasid = COMPACT_TO_NASID_NODEID(cnodeid); + paddr = NODE_OFFSET(nasid) + 0; + + /* currently, we just assume that if we ask + * for a DMA mapping to "zero" the XIO + * host will transmute this into a request + * for the lowest hunk of memory. + */ + xbase = xtalk_dmatrans_addr(xconn_vhdl, 0, + paddr, _PAGESZ, 0); + + if (xbase != XIO_NOWHERE) { + if (XIO_PACKED(xbase)) { + xport = XIO_PORT(xbase); + xbase = XIO_ADDR(xbase); + } else + xport = pcibr_soft->bs_mxid; + + offset = xbase & ((1ull << BRIDGE_DIRMAP_OFF_ADDRSHFT) - 1ull); + xbase >>= BRIDGE_DIRMAP_OFF_ADDRSHFT; + + dirmap = xport << BRIDGE_DIRMAP_W_ID_SHFT; + + if (xbase) + dirmap |= BRIDGE_DIRMAP_OFF & xbase; + else if (offset >= (512 << 20)) + dirmap |= BRIDGE_DIRMAP_ADD512; + + bridge->b_dir_map = dirmap; + } + /* + * Set bridge's idea of page size according to the system's + * idea of "IO page size". TBD: The idea of IO page size + * should really go away. + */ + /* + * ensure that we write and read without any interruption. + * The read following the write is required for the Bridge war + */ + spl_level = splhi(); +#if IOPGSIZE == 4096 + bridge->b_wid_control &= ~BRIDGE_CTRL_PAGE_SIZE; +#elif IOPGSIZE == 16384 + bridge->b_wid_control |= BRIDGE_CTRL_PAGE_SIZE; +#else + <<>>; +#endif + bridge->b_wid_control; /* inval addr bug war */ + splx(spl_level); + + /* Initialize internal mapping entries */ + for (entry = 0; entry < pcibr_soft->bs_int_ate_size; entry++) + bridge->b_int_ate_ram[entry].wr = 0; + + /* + * Determine if there's external mapping SSRAM on this + * bridge. Set up Bridge control register appropriately, + * inititlize SSRAM, and set software up to manage RAM + * entries as an allocatable resource. + * + * Currently, we just use the rm* routines to manage ATE + * allocation. We should probably replace this with a + * Best Fit allocator. + * + * For now, if we have external SSRAM, avoid using + * the internal ssram: we can't turn PREFETCH on + * when we use the internal SSRAM; and besides, + * this also guarantees that no allocation will + * straddle the internal/external line, so we + * can increment ATE write addresses rather than + * recomparing against BRIDGE_INTERNAL_ATES every + * time. + */ + if (is_xbridge(bridge)) + num_entries = 0; + else + num_entries = pcibr_init_ext_ate_ram(bridge); + + /* we always have 128 ATEs (512 for Xbridge) inside the chip + * even if disabled for debugging. + */ + pcibr_soft->bs_int_ate_map = rmallocmap(pcibr_soft->bs_int_ate_size); + pcibr_ate_free(pcibr_soft, 0, pcibr_soft->bs_int_ate_size); +#if PCIBR_ATE_DEBUG + printk("pcibr_attach: %d INTERNAL ATEs\n", pcibr_soft->bs_int_ate_size); +#endif + + if (num_entries > pcibr_soft->bs_int_ate_size) { +#if PCIBR_ATE_NOTBOTH /* for debug -- forces us to use external ates */ + printk("pcibr_attach: disabling internal ATEs.\n"); + pcibr_ate_alloc(pcibr_soft, pcibr_soft->bs_int_ate_size); +#endif + pcibr_soft->bs_ext_ate_map = rmallocmap(num_entries); + pcibr_ate_free(pcibr_soft, pcibr_soft->bs_int_ate_size, + num_entries - pcibr_soft->bs_int_ate_size); +#if PCIBR_ATE_DEBUG + printk("pcibr_attach: %d EXTERNAL ATEs\n", + num_entries - pcibr_soft->bs_int_ate_size); +#endif + } + } + + { + bridgereg_t dirmap; + iopaddr_t xbase; + + /* + * now figure the *real* xtalk base address + * that dirmap sends us to. 
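+ * The read-back below is just the inverse of the encode done above when
+ * b_dir_map was written. A sketch of the decode with the field masks
+ * passed in, so it assumes nothing beyond the expressions used in this
+ * file (helper name hypothetical):
+ *
+ *   static unsigned long dirmap_to_xbase(unsigned long dirmap,
+ *                                        unsigned long off_mask,
+ *                                        unsigned long add512_bit,
+ *                                        unsigned addr_shift)
+ *   {
+ *       if (dirmap & off_mask)
+ *           return (dirmap & off_mask) << addr_shift;
+ *       if (dirmap & add512_bit)
+ *           return 512UL << 20;         // fixed 512MB offset case
+ *       return 0;                       // dirmap points at xtalk offset 0
+ *   }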
+ */ + dirmap = bridge->b_dir_map; + if (dirmap & BRIDGE_DIRMAP_OFF) + xbase = (iopaddr_t)(dirmap & BRIDGE_DIRMAP_OFF) + << BRIDGE_DIRMAP_OFF_ADDRSHFT; + else if (dirmap & BRIDGE_DIRMAP_ADD512) + xbase = 512 << 20; + else + xbase = 0; + + pcibr_soft->bs_dir_xbase = xbase; + + /* it is entirely possible that we may, at this + * point, have our dirmap pointing somewhere + * other than our "master" port. + */ + pcibr_soft->bs_dir_xport = + (dirmap & BRIDGE_DIRMAP_W_ID) >> BRIDGE_DIRMAP_W_ID_SHFT; + } + + /* pcibr sources an error interrupt; + * figure out where to send it. + * + * If any interrupts are enabled in bridge, + * then the prom set us up and our interrupt + * has already been reconnected in mlreset + * above. + * + * Need to set the D_INTR_ISERR flag + * in the dev_desc used for allocating the + * error interrupt, so our interrupt will + * be properly routed and prioritized. + * + * If our crosstalk provider wants to + * fix widget error interrupts to specific + * destinations, D_INTR_ISERR is how it + * knows to do this. + */ + + xtalk_intr = xtalk_intr_alloc(xconn_vhdl, dev_desc, pcibr_vhdl); + ASSERT(xtalk_intr != NULL); + + pcibr_soft->bsi_err_intr = xtalk_intr; + + /* + * On IP35 with XBridge, we do some extra checks in pcibr_setwidint + * in order to work around some addressing limitations. In order + * for that fire wall to work properly, we need to make sure we + * start from a known clean state. + */ + pcibr_clearwidint(bridge); + + xtalk_intr_connect(xtalk_intr, (xtalk_intr_setfunc_t)pcibr_setwidint, (void *)bridge); + + /* + * now we can start handling error interrupts; + * enable all of them. + * NOTE: some PCI ints may already be enabled. + */ + b_int_enable = bridge->b_int_enable | BRIDGE_ISR_ERRORS; + + + bridge->b_int_enable = b_int_enable; + bridge->b_int_mode = 0; /* do not send "clear interrupt" packets */ + + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + + /* + * Depending on the rev of bridge, disable certain features. + * Easiest way seems to be to force the PCIBR_NOwhatever + * flag to be on for all DMA calls, which overrides any + * PCIBR_whatever flag or even the setting of whatever + * from the PCIIO_DMA_class flags (or even from the other + * PCIBR flags, since NO overrides YES). + */ + pcibr_soft->bs_dma_flags = 0; + + /* PREFETCH: + * Always completely disabled for REV.A; + * at "pcibr_prefetch_enable_rev", anyone + * asking for PCIIO_PREFETCH gets it. + * Between these two points, you have to ask + * for PCIBR_PREFETCH, which promises that + * your driver knows about known Bridge WARs. + */ + if (pcibr_soft->bs_rev_num < BRIDGE_PART_REV_B) + pcibr_soft->bs_dma_flags |= PCIBR_NOPREFETCH; + else if (pcibr_soft->bs_rev_num < + (BRIDGE_WIDGET_PART_NUM << 4 | pcibr_prefetch_enable_rev)) + pcibr_soft->bs_dma_flags |= PCIIO_NOPREFETCH; + + /* WRITE_GATHER: + * Disabled up to but not including the + * rev number in pcibr_wg_enable_rev. There + * is no "WAR range" as with prefetch. 
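+ * Both this gate and the prefetch gate above compare against a value that
+ * packs the part number above the low four revision bits, as a sketch
+ * (names hypothetical, encoding as the expressions here imply):
+ *
+ *   static int rev_at_least(unsigned packed_rev, unsigned part_num,
+ *                           unsigned enable_rev)
+ *   {
+ *       return packed_rev >= ((part_num << 4) | enable_rev);
+ *   }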
+ */ + if (pcibr_soft->bs_rev_num < + (BRIDGE_WIDGET_PART_NUM << 4 | pcibr_wg_enable_rev)) + pcibr_soft->bs_dma_flags |= PCIBR_NOWRITE_GATHER; + + pciio_provider_register(pcibr_vhdl, &pcibr_provider); + pciio_provider_startup(pcibr_vhdl); + + pci_io_fb = 0x00000004; /* I/O FreeBlock Base */ + pci_io_fl = 0xFFFFFFFF; /* I/O FreeBlock Last */ + + pci_lo_fb = 0x00000010; /* Low Memory FreeBlock Base */ + pci_lo_fl = 0x001FFFFF; /* Low Memory FreeBlock Last */ + + pci_hi_fb = 0x00200000; /* High Memory FreeBlock Base */ + pci_hi_fl = 0x3FFFFFFF; /* High Memory FreeBlock Last */ + + + PCI_ADDR_SPACE_LIMITS_STORE(); + + /* build "no-slot" connection point + */ + pcibr_info = pcibr_device_info_new + (pcibr_soft, PCIIO_SLOT_NONE, PCIIO_FUNC_NONE, + PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE); + noslot_conn = pciio_device_info_register + (pcibr_vhdl, &pcibr_info->f_c); + + /* Remember the no slot connection point info for tearing it + * down during detach. + */ + pcibr_soft->bs_noslot_conn = noslot_conn; + pcibr_soft->bs_noslot_info = pcibr_info; +#if PCI_FBBE + fast_back_to_back_enable = 1; +#endif + +#if PCI_FBBE + if (fast_back_to_back_enable) { + /* + * All devices on the bus are capable of fast back to back, so + * we need to set the fast back to back bit in all devices on + * the bus that are capable of doing such accesses. + */ + } +#endif + +#ifdef LATER + /* If the bridge has been reset then there is no need to reset + * the individual PCI slots. + */ + for (slot = 0; slot < 8; ++slot) + /* Reset all the slots */ + (void)pcibr_slot_reset(pcibr_vhdl, slot); +#endif + + for (slot = 0; slot < 8; ++slot) + /* Find out what is out there */ + (void)pcibr_slot_info_init(pcibr_vhdl,slot); + + for (slot = 0; slot < 8; ++slot) + /* Set up the address space for this slot in the pci land */ + (void)pcibr_slot_addr_space_init(pcibr_vhdl,slot); + + for (slot = 0; slot < 8; ++slot) + /* Setup the device register */ + (void)pcibr_slot_device_init(pcibr_vhdl, slot); + +#ifndef __ia64 + for (slot = 0; slot < 8; ++slot) + /* Set up convenience links */ + if (is_xbridge(bridge)) + if (pcibr_soft->bs_slot[slot].bss_ninfo > 0) /* if occupied */ + pcibr_bus_cnvlink(pcibr_info->f_vertex, slot); +#endif + + for (slot = 0; slot < 8; ++slot) + /* Setup host/guest relations */ + (void)pcibr_slot_guest_info_init(pcibr_vhdl,slot); + + for (slot = 0; slot < 8; ++slot) + /* Initial RRB management */ + (void)pcibr_slot_initial_rrb_alloc(pcibr_vhdl,slot); + + /* driver attach routines should be called out from generic linux code */ + for (slot = 0; slot < 8; ++slot) + /* Call the device attach */ + (void)pcibr_slot_call_device_attach(pcibr_vhdl, slot, 0); + + /* + * Each Pbrick PCI bus only has slots 1 and 2. Similarly for + * widget 0xe on Ibricks. Allocate RRB's accordingly. 
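+ * The brick type character tested below comes from the module id fetched
+ * at attach time via iobrick_module_get(), which is also why this block
+ * is skipped entirely when bs_moduleid is not positive.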
+ */ + if (pcibr_soft->bs_moduleid > 0) { + switch (MODULE_GET_BTCHAR(pcibr_soft->bs_moduleid)) { + case 'p': /* Pbrick */ + do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); + do_pcibr_rrb_autoalloc(pcibr_soft, 2, 8); + break; + case 'i': /* Ibrick */ + /* port 0xe on the Ibrick only has slots 1 and 2 */ + if (pcibr_soft->bs_xid == 0xe) { + do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); + do_pcibr_rrb_autoalloc(pcibr_soft, 2, 8); + } + else { + /* allocate one RRB for the serial port */ + do_pcibr_rrb_autoalloc(pcibr_soft, 0, 1); + } + break; + } /* switch */ + } + +#ifdef LATER + if (strstr(nicinfo, XTALK_PCI_PART_NUM)) { + do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); +#if PCIBR_RRB_DEBUG + printf("\n\nFound XTALK_PCI (030-1275) at %v\n", xconn_vhdl); + + printf("pcibr_attach: %v Shoebox RRB MANAGEMENT: %d+%d free\n", + pcibr_vhdl, + pcibr_soft->bs_rrb_avail[0], + pcibr_soft->bs_rrb_avail[1]); + + for (slot = 0; slot < 8; ++slot) + printf("\t%d+%d+%d", + 0xFFF & pcibr_soft->bs_rrb_valid[slot], + 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + + printf("\n"); +#endif + } +#else + FIXME("pcibr_attach: Call do_pcibr_rrb_autoalloc nicinfo\n"); +#endif + + if (aa) + async_attach_add_info(noslot_conn, aa); + + pciio_device_attach(noslot_conn, 0); + + + /* + * Tear down pointer to async attach info -- async threads for + * bridge's descendants may be running but the bridge's work is done. + */ + if (aa) + async_attach_del_info(xconn_vhdl); + + return 0; +} +/* + * pcibr_detach: + * Detach the bridge device from the hwgraph after cleaning out all the + * underlying vertices. + */ +int +pcibr_detach(devfs_handle_t xconn) +{ + pciio_slot_t slot; + devfs_handle_t pcibr_vhdl; + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + + /* Get the bridge vertex from its xtalk connection point */ + if (hwgraph_traverse(xconn, EDGE_LBL_PCI, &pcibr_vhdl) != GRAPH_SUCCESS) + return(1); + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge = pcibr_soft->bs_base; + + /* Disable the interrupts from the bridge */ + bridge->b_int_enable = 0; + + /* Detach all the PCI devices talking to this bridge */ + for(slot = 0; slot < 8; slot++) { +#ifdef DEBUG + printk("pcibr_device_detach called for %p/%d\n", + pcibr_vhdl,slot); +#endif + pcibr_slot_detach(pcibr_vhdl, slot, 0); + } + + /* Unregister the no-slot connection point */ + pciio_device_info_unregister(pcibr_vhdl, + &(pcibr_soft->bs_noslot_info->f_c)); + + spin_lock_destroy(&pcibr_soft->bs_lock); + kfree(pcibr_soft->bs_name); + + /* Error handler gets unregistered when the widget info is + * cleaned + */ + /* Free the soft ATE maps */ + if (pcibr_soft->bs_int_ate_map) + rmfreemap(pcibr_soft->bs_int_ate_map); + if (pcibr_soft->bs_ext_ate_map) + rmfreemap(pcibr_soft->bs_ext_ate_map); + + /* Disconnect the error interrupt and free the xtalk resources + * associated with it. + */ + xtalk_intr_disconnect(pcibr_soft->bsi_err_intr); + xtalk_intr_free(pcibr_soft->bsi_err_intr); + + /* Clear the software state maintained by the bridge driver for this + * bridge. 
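+ * By this point the slots have been detached, the no-slot connection
+ * point unregistered, the bridge name string and ATE maps released, and
+ * the error interrupt disconnected and freed; what remains is dropping
+ * the soft state itself and the hwgraph edges added by pcibr_attach().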
+ */ + DEL(pcibr_soft); + /* Remove the Bridge revision labelled info */ + (void)hwgraph_info_remove_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, NULL); + /* Remove the character device associated with this bridge */ + (void)hwgraph_edge_remove(pcibr_vhdl, EDGE_LBL_CONTROLLER, NULL); + /* Remove the PCI bridge vertex */ + (void)hwgraph_edge_remove(xconn, EDGE_LBL_PCI, NULL); + + return(0); +} + +int +pcibr_asic_rev(devfs_handle_t pconn_vhdl) +{ + devfs_handle_t pcibr_vhdl; + arbitrary_info_t ainfo; + + if (GRAPH_SUCCESS != + hwgraph_traverse(pconn_vhdl, EDGE_LBL_MASTER, &pcibr_vhdl)) + return -1; + + if (GRAPH_SUCCESS != + hwgraph_info_get_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, &ainfo)) + return -1; + + return (int) ainfo; +} + +int +pcibr_write_gather_flush(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + pciio_slot_t slot; + slot = pciio_info_slot_get(pciio_info); + pcibr_device_write_gather_flush(pcibr_soft, slot); + return 0; +} + +/* ===================================================================== + * PIO MANAGEMENT + */ + +LOCAL iopaddr_t +pcibr_addr_pci_to_xio(devfs_handle_t pconn_vhdl, + pciio_slot_t slot, + pciio_space_t space, + iopaddr_t pci_addr, + size_t req_size, + unsigned flags) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pciio_info_t pciio_info = &pcibr_info->f_c; + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + + unsigned bar; /* which BASE reg on device is decoding */ + iopaddr_t xio_addr = XIO_NOWHERE; + + pciio_space_t wspace; /* which space device is decoding */ + iopaddr_t wbase; /* base of device decode on PCI */ + size_t wsize; /* size of device decode on PCI */ + + int try; /* DevIO(x) window scanning order control */ + int win; /* which DevIO(x) window is being used */ + pciio_space_t mspace; /* target space for devio(x) register */ + iopaddr_t mbase; /* base of devio(x) mapped area on PCI */ + size_t msize; /* size of devio(x) mapped area on PCI */ + size_t mmask; /* addr bits stored in Device(x) */ + + unsigned long s; + + s = pcibr_lock(pcibr_soft); + + if (pcibr_soft->bs_slot[slot].has_host) { + slot = pcibr_soft->bs_slot[slot].host_slot; + pcibr_info = pcibr_soft->bs_slot[slot].bss_infos[0]; + } + if (space == PCIIO_SPACE_NONE) + goto done; + + if (space == PCIIO_SPACE_CFG) { + /* + * Usually, the first mapping + * established to a PCI device + * is to its config space. + * + * In any case, we definitely + * do NOT need to worry about + * PCI BASE registers, and + * MUST NOT attempt to point + * the DevIO(x) window at + * this access ... + */ + if (((flags & PCIIO_BYTE_STREAM) == 0) && + ((pci_addr + req_size) <= BRIDGE_TYPE0_CFG_FUNC_OFF)) + xio_addr = pci_addr + BRIDGE_TYPE0_CFG_DEV(slot); + + goto done; + } + if (space == PCIIO_SPACE_ROM) { + /* PIO to the Expansion Rom. + * Driver is responsible for + * enabling and disabling + * decodes properly. + */ + wbase = pcibr_info->f_rbase; + wsize = pcibr_info->f_rsize; + + /* + * While the driver should know better + * than to attempt to map more space + * than the device is decoding, he might + * do it; better to bail out here. + */ + if ((pci_addr + req_size) > wsize) + goto done; + + pci_addr += wbase; + space = PCIIO_SPACE_MEM; + } + /* + * reduce window mappings to raw + * space mappings (maybe allocating + * windows), and try for DevIO(x) + * usage (setting it if it is available). 
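+ * A condensed sketch of the translation applied below for the
+ * PCIIO_SPACE_WINx cases (helper name hypothetical):
+ *
+ *   // Rebase a window-relative address into the space the BAR decodes;
+ *   // fail if the request runs past what the device decodes.
+ *   static int window_to_space(unsigned long *addr, unsigned long req_size,
+ *                              unsigned long w_base, unsigned long w_size)
+ *   {
+ *       if (*addr + req_size > w_size)
+ *           return -1;              // larger than the decoded window
+ *       *addr += w_base;            // now relative to the target space
+ *       return 0;
+ *   }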
+ */ + bar = space - PCIIO_SPACE_WIN0; + if (bar < 6) { + wspace = pcibr_info->f_window[bar].w_space; + if (wspace == PCIIO_SPACE_NONE) + goto done; + + /* get PCI base and size */ + wbase = pcibr_info->f_window[bar].w_base; + wsize = pcibr_info->f_window[bar].w_size; + + /* + * While the driver should know better + * than to attempt to map more space + * than the device is decoding, he might + * do it; better to bail out here. + */ + if ((pci_addr + req_size) > wsize) + goto done; + + /* shift from window relative to + * decoded space relative. + */ + pci_addr += wbase; + space = wspace; + } else + bar = -1; + + /* Scan all the DevIO(x) windows twice looking for one + * that can satisfy our request. The first time through, + * only look at assigned windows; the second time, also + * look at PCIIO_SPACE_NONE windows. Arrange the order + * so we always look at our own window first. + * + * We will not attempt to satisfy a single request + * by concatinating multiple windows. + */ + for (try = 0; try < 16; ++try) { + bridgereg_t devreg; + unsigned offset; + + win = (try + slot) % 8; + + /* If this DevIO(x) mapping area can provide + * a mapping to this address, use it. + */ + msize = (win < 2) ? 0x200000 : 0x100000; + mmask = -msize; + if (space != PCIIO_SPACE_IO) + mmask &= 0x3FFFFFFF; + + offset = pci_addr & (msize - 1); + + /* If this window can't possibly handle that request, + * go on to the next window. + */ + if (((pci_addr & (msize - 1)) + req_size) > msize) + continue; + + devreg = pcibr_soft->bs_slot[win].bss_device; + + /* Is this window "nailed down"? + * If not, maybe we can use it. + * (only check this the second time through) + */ + mspace = pcibr_soft->bs_slot[win].bss_devio.bssd_space; + if ((try > 7) && (mspace == PCIIO_SPACE_NONE)) { + + /* If this is the primary DevIO(x) window + * for some other device, skip it. + */ + if ((win != slot) && + (PCIIO_VENDOR_ID_NONE != + pcibr_soft->bs_slot[win].bss_vendor_id)) + continue; + + /* It's a free window, and we fit in it. + * Set up Device(win) to our taste. + */ + mbase = pci_addr & mmask; + + /* check that we would really get from + * here to there. + */ + if ((mbase | offset) != pci_addr) + continue; + + devreg &= ~BRIDGE_DEV_OFF_MASK; + if (space != PCIIO_SPACE_IO) + devreg |= BRIDGE_DEV_DEV_IO_MEM; + else + devreg &= ~BRIDGE_DEV_DEV_IO_MEM; + devreg |= (mbase >> 20) & BRIDGE_DEV_OFF_MASK; + + /* default is WORD_VALUES. + * if you specify both, + * operation is undefined. 
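+		 * ("both" means PCIIO_BYTE_STREAM together with
+		 * PCIIO_WORD_VALUES; only the BYTE_STREAM bit is tested
+		 * below, so in practice BYTE_STREAM wins.)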
+ */ + if (flags & PCIIO_BYTE_STREAM) + devreg |= BRIDGE_DEV_DEV_SWAP; + else + devreg &= ~BRIDGE_DEV_DEV_SWAP; + + if (pcibr_soft->bs_slot[win].bss_device != devreg) { + bridge->b_device[win].reg = devreg; + pcibr_soft->bs_slot[win].bss_device = devreg; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + +#if DEBUG && PCI_DEBUG + printk("pcibr Device(%d): 0x%lx\n", win, bridge->b_device[win].reg); +#endif + } + pcibr_soft->bs_slot[win].bss_devio.bssd_space = space; + pcibr_soft->bs_slot[win].bss_devio.bssd_base = mbase; + xio_addr = BRIDGE_DEVIO(win) + (pci_addr - mbase); + +#if DEBUG && PCI_DEBUG + printk("%s LINE %d map to space %d space desc 0x%x[%lx..%lx] for slot %d allocates DevIO(%d) devreg 0x%x\n", + __FUNCTION__, __LINE__, space, space_desc, + pci_addr, pci_addr + req_size - 1, + slot, win, devreg); +#endif + + goto done; + } /* endif DevIO(x) not pointed */ + mbase = pcibr_soft->bs_slot[win].bss_devio.bssd_base; + + /* Now check for request incompat with DevIO(x) + */ + if ((mspace != space) || + (pci_addr < mbase) || + ((pci_addr + req_size) > (mbase + msize)) || + ((flags & PCIIO_BYTE_STREAM) && !(devreg & BRIDGE_DEV_DEV_SWAP)) || + (!(flags & PCIIO_BYTE_STREAM) && (devreg & BRIDGE_DEV_DEV_SWAP))) + continue; + + /* DevIO(x) window is pointed at PCI space + * that includes our target. Calculate the + * final XIO address, release the lock and + * return. + */ + xio_addr = BRIDGE_DEVIO(win) + (pci_addr - mbase); + +#if DEBUG && PCI_DEBUG + printk("%s LINE %d map to space %d [0x%p..0x%p] for slot %d uses DevIO(%d)\n", + __FUNCTION__, __LINE__, space, pci_addr, pci_addr + req_size - 1, slot, win); +#endif + goto done; + } + + switch (space) { + /* + * Accesses to device decode + * areas that do a not fit + * within the DevIO(x) space are + * modified to be accesses via + * the direct mapping areas. + * + * If necessary, drivers can + * explicitly ask for mappings + * into these address spaces, + * but this should never be needed. + */ + case PCIIO_SPACE_MEM: /* "mem space" */ + case PCIIO_SPACE_MEM32: /* "mem, use 32-bit-wide bus" */ + if ((pci_addr + BRIDGE_PCI_MEM32_BASE + req_size - 1) <= + BRIDGE_PCI_MEM32_LIMIT) + xio_addr = pci_addr + BRIDGE_PCI_MEM32_BASE; + break; + + case PCIIO_SPACE_MEM64: /* "mem, use 64-bit-wide bus" */ + if ((pci_addr + BRIDGE_PCI_MEM64_BASE + req_size - 1) <= + BRIDGE_PCI_MEM64_LIMIT) + xio_addr = pci_addr + BRIDGE_PCI_MEM64_BASE; + break; + + case PCIIO_SPACE_IO: /* "i/o space" */ + /* Bridge Hardware Bug WAR #482741: + * The 4G area that maps directly from + * XIO space to PCI I/O space is busted + * until Bridge Rev D. + */ + if ((pcibr_soft->bs_rev_num > BRIDGE_PART_REV_C) && + ((pci_addr + BRIDGE_PCI_IO_BASE + req_size - 1) <= + BRIDGE_PCI_IO_LIMIT)) + xio_addr = pci_addr + BRIDGE_PCI_IO_BASE; + break; + } + + /* Check that "Direct PIO" byteswapping matches, + * try to change it if it does not. + */ + if (xio_addr != XIO_NOWHERE) { + unsigned bst; /* nonzero to set bytestream */ + unsigned *bfp; /* addr of record of how swapper is set */ + unsigned swb; /* which control bit to mung */ + unsigned bfo; /* current swapper setting */ + unsigned bfn; /* desired swapper setting */ + + bfp = ((space == PCIIO_SPACE_IO) + ? (&pcibr_soft->bs_pio_end_io) + : (&pcibr_soft->bs_pio_end_mem)); + + bfo = *bfp; + + bst = flags & PCIIO_BYTE_STREAM; + + bfn = bst ? PCIIO_BYTE_STREAM : PCIIO_WORD_VALUES; + + if (bfn == bfo) { /* we already match. */ + ; + } else if (bfo != 0) { /* we have a conflict. 
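+				 * The direct PIO byte-swap setting is kept per-Bridge,
+				 * per-space, and an earlier mapping already claimed the
+				 * opposite setting; fail (XIO_NOWHERE) rather than flip
+				 * it underneath that mapping.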
*/ +#if DEBUG && PCI_DEBUG + printk("pcibr_addr_pci_to_xio: swap conflict in space %d , was%s%s, want%s%s\n", + space, + bfo & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", + bfo & PCIIO_WORD_VALUES ? " WORD_VALUES" : "", + bfn & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", + bfn & PCIIO_WORD_VALUES ? " WORD_VALUES" : ""); +#endif + xio_addr = XIO_NOWHERE; + } else { /* OK to make the change. */ + bridgereg_t octl, nctl; + + swb = (space == PCIIO_SPACE_IO) ? BRIDGE_CTRL_IO_SWAP : BRIDGE_CTRL_MEM_SWAP; + octl = bridge->b_wid_control; + nctl = bst ? octl | swb : octl & ~swb; + + if (octl != nctl) /* make the change if any */ + bridge->b_wid_control = nctl; + + *bfp = bfn; /* record the assignment */ + +#if DEBUG && PCI_DEBUG + printk("pcibr_addr_pci_to_xio: swap for space %d set to%s%s\n", + space, + bfn & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", + bfn & PCIIO_WORD_VALUES ? " WORD_VALUES" : ""); +#endif + } + } + done: + pcibr_unlock(pcibr_soft, s); + return xio_addr; +} + +/*ARGSUSED6 */ +pcibr_piomap_t +pcibr_piomap_alloc(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + pciio_space_t space, + iopaddr_t pci_addr, + size_t req_size, + size_t req_size_max, + unsigned flags) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pciio_info_t pciio_info = &pcibr_info->f_c; + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + + pcibr_piomap_t *mapptr; + pcibr_piomap_t maplist; + pcibr_piomap_t pcibr_piomap; + iopaddr_t xio_addr; + xtalk_piomap_t xtalk_piomap; + unsigned long s; + + /* Make sure that the req sizes are non-zero */ + if ((req_size < 1) || (req_size_max < 1)) + return NULL; + + /* + * Code to translate slot/space/addr + * into xio_addr is common between + * this routine and pcibr_piotrans_addr. + */ + xio_addr = pcibr_addr_pci_to_xio(pconn_vhdl, pciio_slot, space, pci_addr, req_size, flags); + + if (xio_addr == XIO_NOWHERE) + return NULL; + + /* Check the piomap list to see if there is already an allocated + * piomap entry but not in use. If so use that one. 
Otherwise + * allocate a new piomap entry and add it to the piomap list + */ + mapptr = &(pcibr_info->f_piomap); + + s = pcibr_lock(pcibr_soft); + for (pcibr_piomap = *mapptr; + pcibr_piomap != NULL; + pcibr_piomap = pcibr_piomap->bp_next) { + if (pcibr_piomap->bp_mapsz == 0) + break; + } + + if (pcibr_piomap) + mapptr = NULL; + else { + pcibr_unlock(pcibr_soft, s); + NEW(pcibr_piomap); + } + + pcibr_piomap->bp_dev = pconn_vhdl; + pcibr_piomap->bp_slot = pciio_slot; + pcibr_piomap->bp_flags = flags; + pcibr_piomap->bp_space = space; + pcibr_piomap->bp_pciaddr = pci_addr; + pcibr_piomap->bp_mapsz = req_size; + pcibr_piomap->bp_soft = pcibr_soft; + pcibr_piomap->bp_toc[0] = ATOMIC_INIT(0); + + if (mapptr) { + s = pcibr_lock(pcibr_soft); + maplist = *mapptr; + pcibr_piomap->bp_next = maplist; + *mapptr = pcibr_piomap; + } + pcibr_unlock(pcibr_soft, s); + + + if (pcibr_piomap) { + xtalk_piomap = + xtalk_piomap_alloc(xconn_vhdl, 0, + xio_addr, + req_size, req_size_max, + flags & PIOMAP_FLAGS); + if (xtalk_piomap) { + pcibr_piomap->bp_xtalk_addr = xio_addr; + pcibr_piomap->bp_xtalk_pio = xtalk_piomap; + } else { + pcibr_piomap->bp_mapsz = 0; + pcibr_piomap = 0; + } + } + return pcibr_piomap; +} + +/*ARGSUSED */ +void +pcibr_piomap_free(pcibr_piomap_t pcibr_piomap) +{ + xtalk_piomap_free(pcibr_piomap->bp_xtalk_pio); + pcibr_piomap->bp_xtalk_pio = 0; + pcibr_piomap->bp_mapsz = 0; +} + +/*ARGSUSED */ +caddr_t +pcibr_piomap_addr(pcibr_piomap_t pcibr_piomap, + iopaddr_t pci_addr, + size_t req_size) +{ + return xtalk_piomap_addr(pcibr_piomap->bp_xtalk_pio, + pcibr_piomap->bp_xtalk_addr + + pci_addr - pcibr_piomap->bp_pciaddr, + req_size); +} + +/*ARGSUSED */ +void +pcibr_piomap_done(pcibr_piomap_t pcibr_piomap) +{ + xtalk_piomap_done(pcibr_piomap->bp_xtalk_pio); +} + +/*ARGSUSED */ +caddr_t +pcibr_piotrans_addr(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + pciio_space_t space, + iopaddr_t pci_addr, + size_t req_size, + unsigned flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + + iopaddr_t xio_addr; + + xio_addr = pcibr_addr_pci_to_xio(pconn_vhdl, pciio_slot, space, pci_addr, req_size, flags); + + if (xio_addr == XIO_NOWHERE) + return NULL; + + return xtalk_piotrans_addr(xconn_vhdl, 0, xio_addr, req_size, flags & PIOMAP_FLAGS); +} + +/* + * PIO Space allocation and management. + * Allocate and Manage the PCI PIO space (mem and io space) + * This routine is pretty simplistic at this time, and + * does pretty trivial management of allocation and freeing.. + * The current scheme is prone for fragmentation.. + * Change the scheme to use bitmaps. + */ + +/*ARGSUSED */ +iopaddr_t +pcibr_piospace_alloc(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + pciio_space_t space, + size_t req_size, + size_t alignment) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pciio_info_t pciio_info = &pcibr_info->f_c; + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + pciio_piospace_t piosp; + unsigned long s; + + iopaddr_t *pciaddr, *pcilast; + iopaddr_t start_addr; + size_t align_mask; + + /* + * Check for proper alignment + */ + ASSERT(alignment >= NBPP); + ASSERT((alignment & (alignment - 1)) == 0); + + align_mask = alignment - 1; + s = pcibr_lock(pcibr_soft); + + /* + * First look if a previously allocated chunk exists. 
+ */ + if ((piosp = pcibr_info->f_piospace)) { + /* + * Look through the list for a right sized free chunk. + */ + do { + if (piosp->free && + (piosp->space == space) && + (piosp->count >= req_size) && + !(piosp->start & align_mask)) { + piosp->free = 0; + pcibr_unlock(pcibr_soft, s); + return piosp->start; + } + piosp = piosp->next; + } while (piosp); + } + ASSERT(!piosp); + + switch (space) { + case PCIIO_SPACE_IO: + pciaddr = &pcibr_soft->bs_spinfo.pci_io_base; + pcilast = &pcibr_soft->bs_spinfo.pci_io_last; + break; + case PCIIO_SPACE_MEM: + case PCIIO_SPACE_MEM32: + pciaddr = &pcibr_soft->bs_spinfo.pci_mem_base; + pcilast = &pcibr_soft->bs_spinfo.pci_mem_last; + break; + default: + ASSERT(0); + pcibr_unlock(pcibr_soft, s); + return 0; + } + + start_addr = *pciaddr; + + /* + * Align start_addr. + */ + if (start_addr & align_mask) + start_addr = (start_addr + align_mask) & ~align_mask; + + if ((start_addr + req_size) > *pcilast) { + /* + * If too big a request, reject it. + */ + pcibr_unlock(pcibr_soft, s); + return 0; + } + *pciaddr = (start_addr + req_size); + + NEW(piosp); + piosp->free = 0; + piosp->space = space; + piosp->start = start_addr; + piosp->count = req_size; + piosp->next = pcibr_info->f_piospace; + pcibr_info->f_piospace = piosp; + + pcibr_unlock(pcibr_soft, s); + return start_addr; +} + +/*ARGSUSED */ +void +pcibr_piospace_free(devfs_handle_t pconn_vhdl, + pciio_space_t space, + iopaddr_t pciaddr, + size_t req_size) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; + + pciio_piospace_t piosp; + unsigned long s; + char name[1024]; + + /* + * Look through the bridge data structures for the pciio_piospace_t + * structure corresponding to 'pciaddr' + */ + s = pcibr_lock(pcibr_soft); + piosp = pcibr_info->f_piospace; + while (piosp) { + /* + * Piospace free can only be for the complete + * chunk and not parts of it.. + */ + if (piosp->start == pciaddr) { + if (piosp->count == req_size) + break; + /* + * Improper size passed for freeing.. + * Print a message and break; + */ + hwgraph_vertex_name_get(pconn_vhdl, name, 1024); + printk(KERN_WARNING "pcibr_piospace_free: error"); + printk(KERN_WARNING "Device %s freeing size (0x%lx) different than allocated (0x%lx)", + name, req_size, piosp->count); + printk(KERN_WARNING "Freeing 0x%lx instead", piosp->count); + break; + } + piosp = piosp->next; + } + + if (!piosp) { + printk(KERN_WARNING + "pcibr_piospace_free: Address 0x%lx size 0x%lx - No match\n", + pciaddr, req_size); + pcibr_unlock(pcibr_soft, s); + return; + } + piosp->free = 1; + pcibr_unlock(pcibr_soft, s); + return; +} + +/* ===================================================================== + * DMA MANAGEMENT + * + * The Bridge ASIC provides three methods of doing + * DMA: via a "direct map" register available in + * 32-bit PCI space (which selects a contiguous 2G + * address space on some other widget), via + * "direct" addressing via 64-bit PCI space (all + * destination information comes from the PCI + * address, including transfer attributes), and via + * a "mapped" region that allows a bunch of + * different small mappings to be established with + * the PMU. + * + * For efficiency, we most prefer to use the 32-bit + * direct mapping facility, since it requires no + * resource allocations. 
The advantage of using the + * PMU over the 64-bit direct is that single-cycle + * PCI addressing can be used; the advantage of + * using 64-bit direct over PMU addressing is that + * we do not have to allocate entries in the PMU. + */ + +/* + * Convert PCI-generic software flags and Bridge-specific software flags + * into Bridge-specific Direct Map attribute bits. + */ +LOCAL iopaddr_t +pcibr_flags_to_d64(unsigned flags, pcibr_soft_t pcibr_soft) +{ + iopaddr_t attributes = 0; + + /* Sanity check: Bridge only allows use of VCHAN1 via 64-bit addrs */ +#ifdef LATER + ASSERT_ALWAYS(!(flags & PCIBR_VCHAN1) || (flags & PCIIO_DMA_A64)); +#endif + + /* Generic macro flags + */ + if (flags & PCIIO_DMA_DATA) { /* standard data channel */ + attributes &= ~PCI64_ATTR_BAR; /* no barrier bit */ + attributes |= PCI64_ATTR_PREF; /* prefetch on */ + } + if (flags & PCIIO_DMA_CMD) { /* standard command channel */ + attributes |= PCI64_ATTR_BAR; /* barrier bit on */ + attributes &= ~PCI64_ATTR_PREF; /* disable prefetch */ + } + /* Generic detail flags + */ + if (flags & PCIIO_PREFETCH) + attributes |= PCI64_ATTR_PREF; + if (flags & PCIIO_NOPREFETCH) + attributes &= ~PCI64_ATTR_PREF; + + /* the swap bit is in the address attributes for xbridge */ + if (pcibr_soft->bs_xbridge) { + if (flags & PCIIO_BYTE_STREAM) + attributes |= PCI64_ATTR_SWAP; + if (flags & PCIIO_WORD_VALUES) + attributes &= ~PCI64_ATTR_SWAP; + } + + /* Provider-specific flags + */ + if (flags & PCIBR_BARRIER) + attributes |= PCI64_ATTR_BAR; + if (flags & PCIBR_NOBARRIER) + attributes &= ~PCI64_ATTR_BAR; + + if (flags & PCIBR_PREFETCH) + attributes |= PCI64_ATTR_PREF; + if (flags & PCIBR_NOPREFETCH) + attributes &= ~PCI64_ATTR_PREF; + + if (flags & PCIBR_PRECISE) + attributes |= PCI64_ATTR_PREC; + if (flags & PCIBR_NOPRECISE) + attributes &= ~PCI64_ATTR_PREC; + + if (flags & PCIBR_VCHAN1) + attributes |= PCI64_ATTR_VIRTUAL; + if (flags & PCIBR_VCHAN0) + attributes &= ~PCI64_ATTR_VIRTUAL; + + return (attributes); +} + +/* + * Convert PCI-generic software flags and Bridge-specific software flags + * into Bridge-specific Address Translation Entry attribute bits. 
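+ * This mirrors pcibr_flags_to_d64() above, but produces ATE_* bits for
+ * PMU-mapped DMA rather than PCI64_ATTR_* address attributes, starting
+ * from a coherent, valid, no-barrier, no-prefetch, no-precise default.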
+ */ +LOCAL bridge_ate_t +pcibr_flags_to_ate(unsigned flags) +{ + bridge_ate_t attributes; + + /* default if nothing specified: + * NOBARRIER + * NOPREFETCH + * NOPRECISE + * COHERENT + * Plus the valid bit + */ + attributes = ATE_CO | ATE_V; + + /* Generic macro flags + */ + if (flags & PCIIO_DMA_DATA) { /* standard data channel */ + attributes &= ~ATE_BAR; /* no barrier */ + attributes |= ATE_PREF; /* prefetch on */ + } + if (flags & PCIIO_DMA_CMD) { /* standard command channel */ + attributes |= ATE_BAR; /* barrier bit on */ + attributes &= ~ATE_PREF; /* disable prefetch */ + } + /* Generic detail flags + */ + if (flags & PCIIO_PREFETCH) + attributes |= ATE_PREF; + if (flags & PCIIO_NOPREFETCH) + attributes &= ~ATE_PREF; + + /* Provider-specific flags + */ + if (flags & PCIBR_BARRIER) + attributes |= ATE_BAR; + if (flags & PCIBR_NOBARRIER) + attributes &= ~ATE_BAR; + + if (flags & PCIBR_PREFETCH) + attributes |= ATE_PREF; + if (flags & PCIBR_NOPREFETCH) + attributes &= ~ATE_PREF; + + if (flags & PCIBR_PRECISE) + attributes |= ATE_PREC; + if (flags & PCIBR_NOPRECISE) + attributes &= ~ATE_PREC; + + return (attributes); +} + +/*ARGSUSED */ +pcibr_dmamap_t +pcibr_dmamap_alloc(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + size_t req_size_max, + unsigned flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + pciio_slot_t slot; + xwidgetnum_t xio_port; + + xtalk_dmamap_t xtalk_dmamap; + pcibr_dmamap_t pcibr_dmamap; + int ate_count; + int ate_index; + + /* merge in forced flags */ + flags |= pcibr_soft->bs_dma_flags; + +#ifdef IRIX + NEWf(pcibr_dmamap, flags); +#else + /* + * On SNIA64, these maps are pre-allocated because pcibr_dmamap_alloc() + * can be called within an interrupt thread. + */ + pcibr_dmamap = (pcibr_dmamap_t)get_free_pciio_dmamap(pcibr_soft->bs_vhdl); +#endif + + if (!pcibr_dmamap) + return 0; + + xtalk_dmamap = xtalk_dmamap_alloc(xconn_vhdl, dev_desc, req_size_max, + flags & DMAMAP_FLAGS); + if (!xtalk_dmamap) { +#if PCIBR_ATE_DEBUG + printk("pcibr_attach: xtalk_dmamap_alloc failed\n"); +#endif +#ifdef IRIX + DEL(pcibr_dmamap); +#else + free_pciio_dmamap(pcibr_dmamap); +#endif + return 0; + } + xio_port = pcibr_soft->bs_mxid; + slot = pciio_info_slot_get(pciio_info); + + pcibr_dmamap->bd_dev = pconn_vhdl; + pcibr_dmamap->bd_slot = slot; + pcibr_dmamap->bd_soft = pcibr_soft; + pcibr_dmamap->bd_xtalk = xtalk_dmamap; + pcibr_dmamap->bd_max_size = req_size_max; + pcibr_dmamap->bd_xio_port = xio_port; + + if (flags & PCIIO_DMA_A64) { + if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_D64_BITS)) { + iopaddr_t pci_addr; + int have_rrbs; + int min_rrbs; + + /* Device is capable of A64 operations, + * and the attributes of the DMA are + * consistant with any previous DMA + * mappings using shared resources. + */ + + pci_addr = pcibr_flags_to_d64(flags, pcibr_soft); + + pcibr_dmamap->bd_flags = flags; + pcibr_dmamap->bd_xio_addr = 0; + pcibr_dmamap->bd_pci_addr = pci_addr; + + /* Make sure we have an RRB (or two). 
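+	     * Bridge read response buffers (RRBs) hold inbound read data
+	     * for the slot: prefetching DMA wants at least two and
+	     * non-prefetching DMA at least one, and PCIBR_VCHAN1 traffic
+	     * is accounted against the slot's virtual-channel RRB pool.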
+ */ + if (!(pcibr_soft->bs_rrb_fixed & (1 << slot))) { + if (flags & PCIBR_VCHAN1) + slot += PCIBR_RRB_SLOT_VIRTUAL; + have_rrbs = pcibr_soft->bs_rrb_valid[slot]; + if (have_rrbs < 2) { + if (pci_addr & PCI64_ATTR_PREF) + min_rrbs = 2; + else + min_rrbs = 1; + if (have_rrbs < min_rrbs) + do_pcibr_rrb_autoalloc(pcibr_soft, slot, min_rrbs - have_rrbs); + } + } +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: using direct64\n"); +#endif + return pcibr_dmamap; + } +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: unable to use direct64\n"); +#endif + flags &= ~PCIIO_DMA_A64; + } + if (flags & PCIIO_FIXED) { + /* warning: mappings may fail later, + * if direct32 can't get to the address. + */ + if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_D32_BITS)) { + /* User desires DIRECT A32 operations, + * and the attributes of the DMA are + * consistant with any previous DMA + * mappings using shared resources. + * Mapping calls may fail if target + * is outside the direct32 range. + */ +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: using direct32\n"); +#endif + pcibr_dmamap->bd_flags = flags; + pcibr_dmamap->bd_xio_addr = pcibr_soft->bs_dir_xbase; + pcibr_dmamap->bd_pci_addr = PCI32_DIRECT_BASE; + return pcibr_dmamap; + } +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: unable to use direct32\n"); +#endif + /* If the user demands FIXED and we can't + * give it to him, fail. + */ + xtalk_dmamap_free(xtalk_dmamap); +#ifdef IRIX + DEL(pcibr_dmamap); +#else + free_pciio_dmamap(pcibr_dmamap); +#endif + return 0; + } + /* + * Allocate Address Translation Entries from the mapping RAM. + * Unless the PCIBR_NO_ATE_ROUNDUP flag is specified, + * the maximum number of ATEs is based on the worst-case + * scenario, where the requested target is in the + * last byte of an ATE; thus, mapping IOPGSIZE+2 + * does end up requiring three ATEs. + */ + if (!(flags & PCIBR_NO_ATE_ROUNDUP)) { + ate_count = IOPG((IOPGSIZE - 1) /* worst case start offset */ + +req_size_max /* max mapping bytes */ + - 1) + 1; /* round UP */ + } else { /* assume requested target is page aligned */ + ate_count = IOPG(req_size_max /* max mapping bytes */ + - 1) + 1; /* round UP */ + } + + ate_index = pcibr_ate_alloc(pcibr_soft, ate_count); + + if (ate_index != -1) { + if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_PMU_BITS)) { + bridge_ate_t ate_proto; + int have_rrbs; + int min_rrbs; + +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: using PMU\n"); +#endif + + ate_proto = pcibr_flags_to_ate(flags); + + pcibr_dmamap->bd_flags = flags; + pcibr_dmamap->bd_pci_addr = + PCI32_MAPPED_BASE + IOPGSIZE * ate_index; + /* + * for xbridge the byte-swap bit == bit 29 of PCI address + */ + if (pcibr_soft->bs_xbridge) { + if (flags & PCIIO_BYTE_STREAM) + ATE_SWAP_ON(pcibr_dmamap->bd_pci_addr); + /* + * If swap was set in bss_device in pcibr_endian_set() + * we need to change the address bit. + */ + if (pcibr_soft->bs_slot[slot].bss_device & + BRIDGE_DEV_SWAP_PMU) + ATE_SWAP_ON(pcibr_dmamap->bd_pci_addr); + if (flags & PCIIO_WORD_VALUES) + ATE_SWAP_OFF(pcibr_dmamap->bd_pci_addr); + } + pcibr_dmamap->bd_xio_addr = 0; + pcibr_dmamap->bd_ate_ptr = pcibr_ate_addr(pcibr_soft, ate_index); + pcibr_dmamap->bd_ate_index = ate_index; + pcibr_dmamap->bd_ate_count = ate_count; + pcibr_dmamap->bd_ate_proto = ate_proto; + + /* Make sure we have an RRB (or two). 
+ */ + if (!(pcibr_soft->bs_rrb_fixed & (1 << slot))) { + have_rrbs = pcibr_soft->bs_rrb_valid[slot]; + if (have_rrbs < 2) { + if (ate_proto & ATE_PREF) + min_rrbs = 2; + else + min_rrbs = 1; + if (have_rrbs < min_rrbs) + do_pcibr_rrb_autoalloc(pcibr_soft, slot, min_rrbs - have_rrbs); + } + } + if (ate_index >= pcibr_soft->bs_int_ate_size && + !pcibr_soft->bs_xbridge) { + bridge_t *bridge = pcibr_soft->bs_base; + volatile unsigned *cmd_regp; + unsigned cmd_reg; + unsigned long s; + + pcibr_dmamap->bd_flags |= PCIBR_DMAMAP_SSRAM; + + s = pcibr_lock(pcibr_soft); + cmd_regp = &(bridge-> + b_type0_cfg_dev[slot]. + l[PCI_CFG_COMMAND / 4]); + cmd_reg = *cmd_regp; + pcibr_soft->bs_slot[slot].bss_cmd_pointer = cmd_regp; + pcibr_soft->bs_slot[slot].bss_cmd_shadow = cmd_reg; + pcibr_unlock(pcibr_soft, s); + } + return pcibr_dmamap; + } +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: unable to use PMU\n"); +#endif + pcibr_ate_free(pcibr_soft, ate_index, ate_count); + } + /* total failure: sorry, you just can't + * get from here to there that way. + */ +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: complete failure.\n"); +#endif + xtalk_dmamap_free(xtalk_dmamap); +#ifdef IRIX + DEL(pcibr_dmamap); +#else + free_pciio_dmamap(pcibr_dmamap); +#endif + return 0; +} + +/*ARGSUSED */ +void +pcibr_dmamap_free(pcibr_dmamap_t pcibr_dmamap) +{ + pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; + pciio_slot_t slot = pcibr_dmamap->bd_slot; + + unsigned flags = pcibr_dmamap->bd_flags; + + /* Make sure that bss_ext_ates_active + * is properly kept up to date. + */ + + if (PCIBR_DMAMAP_BUSY & flags) + if (PCIBR_DMAMAP_SSRAM & flags) + atomic_dec(&(pcibr_soft->bs_slot[slot]. bss_ext_ates_active)); + + xtalk_dmamap_free(pcibr_dmamap->bd_xtalk); + + if (pcibr_dmamap->bd_flags & PCIIO_DMA_A64) { + pcibr_release_device(pcibr_soft, slot, BRIDGE_DEV_D64_BITS); + } + if (pcibr_dmamap->bd_ate_count) { + pcibr_ate_free(pcibr_dmamap->bd_soft, + pcibr_dmamap->bd_ate_index, + pcibr_dmamap->bd_ate_count); + pcibr_release_device(pcibr_soft, slot, BRIDGE_DEV_PMU_BITS); + } +#ifdef IRIX + DEL(pcibr_dmamap); +#else + free_pciio_dmamap(pcibr_dmamap); +#endif +} + +/* + * Setup an Address Translation Entry as specified. Use either the Bridge + * internal maps or the external map RAM, as appropriate. + */ +LOCAL bridge_ate_p +pcibr_ate_addr(pcibr_soft_t pcibr_soft, + int ate_index) +{ + bridge_t *bridge = pcibr_soft->bs_base; + + return (ate_index < pcibr_soft->bs_int_ate_size) + ? &(bridge->b_int_ate_ram[ate_index].wr) + : &(bridge->b_ext_ate_ram[ate_index]); +} + +/* + * pcibr_addr_xio_to_pci: given a PIO range, hand + * back the corresponding base PCI MEM address; + * this is used to short-circuit DMA requests that + * loop back onto this PCI bus. 
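+ * It handles the three inbound decodes: the direct MEM32 and MEM64
+ * windows and the per-slot DevIO(x) windows; a DevIO hit that is
+ * programmed for PCI I/O space (rather than MEM) yields PCI_NOWHERE,
+ * and anything else falls through to a return of 0.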
+ */ +LOCAL iopaddr_t +pcibr_addr_xio_to_pci(pcibr_soft_t soft, + iopaddr_t xio_addr, + size_t req_size) +{ + iopaddr_t xio_lim = xio_addr + req_size - 1; + iopaddr_t pci_addr; + pciio_slot_t slot; + + if ((xio_addr >= BRIDGE_PCI_MEM32_BASE) && + (xio_lim <= BRIDGE_PCI_MEM32_LIMIT)) { + pci_addr = xio_addr - BRIDGE_PCI_MEM32_BASE; + return pci_addr; + } + if ((xio_addr >= BRIDGE_PCI_MEM64_BASE) && + (xio_lim <= BRIDGE_PCI_MEM64_LIMIT)) { + pci_addr = xio_addr - BRIDGE_PCI_MEM64_BASE; + return pci_addr; + } + for (slot = 0; slot < 8; ++slot) + if ((xio_addr >= BRIDGE_DEVIO(slot)) && + (xio_lim < BRIDGE_DEVIO(slot + 1))) { + bridgereg_t dev; + + dev = soft->bs_slot[slot].bss_device; + pci_addr = dev & BRIDGE_DEV_OFF_MASK; + pci_addr <<= BRIDGE_DEV_OFF_ADDR_SHFT; + pci_addr += xio_addr - BRIDGE_DEVIO(slot); + return (dev & BRIDGE_DEV_DEV_IO_MEM) ? pci_addr : PCI_NOWHERE; + } + return 0; +} + +/* We are starting to get more complexity + * surrounding writing ATEs, so pull + * the writing code into this new function. + */ + +#if PCIBR_FREEZE_TIME +#define ATE_FREEZE() s = ate_freeze(pcibr_dmamap, &freeze_time, cmd_regs) +#else +#define ATE_FREEZE() s = ate_freeze(pcibr_dmamap, cmd_regs) +#endif + +LOCAL unsigned +ate_freeze(pcibr_dmamap_t pcibr_dmamap, +#if PCIBR_FREEZE_TIME + unsigned *freeze_time_ptr, +#endif + unsigned *cmd_regs) +{ + pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; +#ifdef LATER + int dma_slot = pcibr_dmamap->bd_slot; +#endif + int ext_ates = pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM; + int slot; + + unsigned long s; + unsigned cmd_reg; + volatile unsigned *cmd_lwa; + unsigned cmd_lwd; + + if (!ext_ates) + return 0; + + /* Bridge Hardware Bug WAR #484930: + * Bridge can't handle updating External ATEs + * while DMA is occuring that uses External ATEs, + * even if the particular ATEs involved are disjoint. + */ + + /* need to prevent anyone else from + * unfreezing the grant while we + * are working; also need to prevent + * this thread from being interrupted + * to keep PCI grant freeze time + * at an absolute minimum. + */ + s = pcibr_lock(pcibr_soft); + +#ifdef LATER + /* just in case pcibr_dmamap_done was not called */ + if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_BUSY) { + pcibr_dmamap->bd_flags &= ~PCIBR_DMAMAP_BUSY; + if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM) + atomic_dec(&(pcibr_soft->bs_slot[dma_slot]. bss_ext_ates_active)); + xtalk_dmamap_done(pcibr_dmamap->bd_xtalk); + } +#endif /* LATER */ +#if PCIBR_FREEZE_TIME + *freeze_time_ptr = get_timestamp(); +#endif + + cmd_lwa = 0; + for (slot = 0; slot < 8; ++slot) + if (atomic_read(&pcibr_soft->bs_slot[slot].bss_ext_ates_active)) { + cmd_reg = pcibr_soft-> + bs_slot[slot]. + bss_cmd_shadow; + if (cmd_reg & PCI_CMD_BUS_MASTER) { + cmd_lwa = pcibr_soft-> + bs_slot[slot]. + bss_cmd_pointer; + cmd_lwd = cmd_reg ^ PCI_CMD_BUS_MASTER; + cmd_lwa[0] = cmd_lwd; + } + cmd_regs[slot] = cmd_reg; + } else + cmd_regs[slot] = 0; + + if (cmd_lwa) { + bridge_t *bridge = pcibr_soft->bs_base; + + /* Read the last master bit that has been cleared. This PIO read + * on the PCI bus is to ensure the completion of any DMAs that + * are due to bus requests issued by PCI devices before the + * clearing of master bits. + */ + cmd_lwa[0]; + + /* Flush all the write buffers in the bridge */ + for (slot = 0; slot < 8; ++slot) + if (atomic_read(&pcibr_soft->bs_slot[slot].bss_ext_ates_active)) { + /* Flush the write buffer associated with this + * PCI device which might be using dma map RAM. 
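+		     * (The read itself is the flush; its value is discarded.
+		     * Only slots with external ATEs active are touched.)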
+ */ + bridge->b_wr_req_buf[slot].reg; + } + } + return s; +} + +#define ATE_WRITE() ate_write(ate_ptr, ate_count, ate) + +LOCAL void +ate_write(bridge_ate_p ate_ptr, + int ate_count, + bridge_ate_t ate) +{ + while (ate_count-- > 0) { + *ate_ptr++ = ate; + ate += IOPGSIZE; + } +} + + +#if PCIBR_FREEZE_TIME +#define ATE_THAW() ate_thaw(pcibr_dmamap, ate_index, ate, ate_total, freeze_time, cmd_regs, s) +#else +#define ATE_THAW() ate_thaw(pcibr_dmamap, ate_index, cmd_regs, s) +#endif + +LOCAL void +ate_thaw(pcibr_dmamap_t pcibr_dmamap, + int ate_index, +#if PCIBR_FREEZE_TIME + bridge_ate_t ate, + int ate_total, + unsigned freeze_time_start, +#endif + unsigned *cmd_regs, + unsigned s) +{ + pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; + int dma_slot = pcibr_dmamap->bd_slot; + int slot; + bridge_t *bridge = pcibr_soft->bs_base; + int ext_ates = pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM; + + unsigned cmd_reg; + +#if PCIBR_FREEZE_TIME + unsigned freeze_time; + static unsigned max_freeze_time = 0; + static unsigned max_ate_total; +#endif + + if (!ext_ates) + return; + + /* restore cmd regs */ + for (slot = 0; slot < 8; ++slot) + if ((cmd_reg = cmd_regs[slot]) & PCI_CMD_BUS_MASTER) + bridge->b_type0_cfg_dev[slot].l[PCI_CFG_COMMAND / 4] = cmd_reg; + + pcibr_dmamap->bd_flags |= PCIBR_DMAMAP_BUSY; + atomic_inc(&(pcibr_soft->bs_slot[dma_slot]. bss_ext_ates_active)); + +#if PCIBR_FREEZE_TIME + freeze_time = get_timestamp() - freeze_time_start; + + if ((max_freeze_time < freeze_time) || + (max_ate_total < ate_total)) { + if (max_freeze_time < freeze_time) + max_freeze_time = freeze_time; + if (max_ate_total < ate_total) + max_ate_total = ate_total; + pcibr_unlock(pcibr_soft, s); + printk("%s: pci freeze time %d usec for %d ATEs\n" + "\tfirst ate: %R\n", + pcibr_soft->bs_name, + freeze_time * 1000 / 1250, + ate_total, + ate, ate_bits); + } else +#endif + pcibr_unlock(pcibr_soft, s); +} + +/*ARGSUSED */ +iopaddr_t +pcibr_dmamap_addr(pcibr_dmamap_t pcibr_dmamap, + paddr_t paddr, + size_t req_size) +{ + pcibr_soft_t pcibr_soft; + iopaddr_t xio_addr; + xwidgetnum_t xio_port; + iopaddr_t pci_addr; + unsigned flags; + + ASSERT(pcibr_dmamap != NULL); + ASSERT(req_size > 0); + ASSERT(req_size <= pcibr_dmamap->bd_max_size); + + pcibr_soft = pcibr_dmamap->bd_soft; + + flags = pcibr_dmamap->bd_flags; + + xio_addr = xtalk_dmamap_addr(pcibr_dmamap->bd_xtalk, paddr, req_size); + if (XIO_PACKED(xio_addr)) { + xio_port = XIO_PORT(xio_addr); + xio_addr = XIO_ADDR(xio_addr); + } else + xio_port = pcibr_dmamap->bd_xio_port; + + /* If this DMA is to an address that + * refers back to this Bridge chip, + * reduce it back to the correct + * PCI MEM address. + */ + if (xio_port == pcibr_soft->bs_xid) { + pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, req_size); + } else if (flags & PCIIO_DMA_A64) { + /* A64 DMA: + * always use 64-bit direct mapping, + * which always works. + * Device(x) was set up during + * dmamap allocation. + */ + + /* attributes are already bundled up into bd_pci_addr. + */ + pci_addr = pcibr_dmamap->bd_pci_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT) + | xio_addr; + + /* Bridge Hardware WAR #482836: + * If the transfer is not cache aligned + * and the Bridge Rev is <= B, force + * prefetch to be off. 
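+	 * (The alignment/revision test itself is assumed to be done by the
+	 * caller, which passes PCIBR_NOPREFETCH when the WAR applies; here
+	 * we only strip the prefetch attribute.)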
+ */ + if (flags & PCIBR_NOPREFETCH) + pci_addr &= ~PCI64_ATTR_PREF; + +#if DEBUG && PCIBR_DMA_DEBUG + printk("pcibr_dmamap_addr (direct64):\n" + "\twanted paddr [0x%x..0x%x]\n" + "\tXIO port 0x%x offset 0x%x\n" + "\treturning PCI 0x%x\n", + paddr, paddr + req_size - 1, + xio_port, xio_addr, pci_addr); +#endif + } else if (flags & PCIIO_FIXED) { + /* A32 direct DMA: + * always use 32-bit direct mapping, + * which may fail. + * Device(x) was set up during + * dmamap allocation. + */ + + if (xio_port != pcibr_soft->bs_dir_xport) + pci_addr = 0; /* wrong DIDN */ + else if (xio_addr < pcibr_dmamap->bd_xio_addr) + pci_addr = 0; /* out of range */ + else if ((xio_addr + req_size) > + (pcibr_dmamap->bd_xio_addr + BRIDGE_DMA_DIRECT_SIZE)) + pci_addr = 0; /* out of range */ + else + pci_addr = pcibr_dmamap->bd_pci_addr + + xio_addr - pcibr_dmamap->bd_xio_addr; + +#if DEBUG && PCIBR_DMA_DEBUG + printk("pcibr_dmamap_addr (direct32):\n" + "\twanted paddr [0x%x..0x%x]\n" + "\tXIO port 0x%x offset 0x%x\n" + "\treturning PCI 0x%x\n", + paddr, paddr + req_size - 1, + xio_port, xio_addr, pci_addr); +#endif + } else { + bridge_t *bridge = pcibr_soft->bs_base; + iopaddr_t offset = IOPGOFF(xio_addr); + bridge_ate_t ate_proto = pcibr_dmamap->bd_ate_proto; + int ate_count = IOPG(offset + req_size - 1) + 1; + + int ate_index = pcibr_dmamap->bd_ate_index; + unsigned cmd_regs[8]; + unsigned s; + +#if PCIBR_FREEZE_TIME + int ate_total = ate_count; + unsigned freeze_time; +#endif + +#if PCIBR_ATE_DEBUG + bridge_ate_t ate_cmp; + bridge_ate_p ate_cptr; + unsigned ate_lo, ate_hi; + int ate_bad = 0; + int ate_rbc = 0; +#endif + bridge_ate_p ate_ptr = pcibr_dmamap->bd_ate_ptr; + bridge_ate_t ate; + + /* Bridge Hardware WAR #482836: + * If the transfer is not cache aligned + * and the Bridge Rev is <= B, force + * prefetch to be off. + */ + if (flags & PCIBR_NOPREFETCH) + ate_proto &= ~ATE_PREF; + + ate = ate_proto + | (xio_port << ATE_TIDSHIFT) + | (xio_addr - offset); + + pci_addr = pcibr_dmamap->bd_pci_addr + offset; + + /* Fill in our mapping registers + * with the appropriate xtalk data, + * and hand back the PCI address. + */ + + ASSERT(ate_count > 0); + if (ate_count <= pcibr_dmamap->bd_ate_count) { + ATE_FREEZE(); + ATE_WRITE(); + ATE_THAW(); + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } else { + /* The number of ATE's required is greater than the number + * allocated for this map. One way this can happen is if + * pcibr_dmamap_alloc() was called with the PCIBR_NO_ATE_ROUNDUP + * flag, and then when that map is used (right now), the + * target address tells us we really did need to roundup. + * The other possibility is that the map is just plain too + * small to handle the requested target area. + */ +#if PCIBR_ATE_DEBUG + printk(KERN_WARNING "pcibr_dmamap_addr :\n" + "\twanted paddr [0x%x..0x%x]\n" + "\tate_count 0x%x bd_ate_count 0x%x\n" + "\tATE's required > number allocated\n", + paddr, paddr + req_size - 1, + ate_count, pcibr_dmamap->bd_ate_count); +#endif + pci_addr = 0; + } + + } + return pci_addr; +} + +/*ARGSUSED */ +alenlist_t +pcibr_dmamap_list(pcibr_dmamap_t pcibr_dmamap, + alenlist_t palenlist, + unsigned flags) +{ + pcibr_soft_t pcibr_soft; + bridge_t *bridge=NULL; + + unsigned al_flags = (flags & PCIIO_NOSLEEP) ? 
AL_NOSLEEP : 0; + int inplace = flags & PCIIO_INPLACE; + + alenlist_t pciio_alenlist = 0; + alenlist_t xtalk_alenlist; + size_t length; + iopaddr_t offset; + unsigned direct64; + int ate_index = 0; + int ate_count = 0; + int ate_total = 0; + bridge_ate_p ate_ptr = (bridge_ate_p)0; + bridge_ate_t ate_proto = (bridge_ate_t)0; + bridge_ate_t ate_prev; + bridge_ate_t ate; + alenaddr_t xio_addr; + xwidgetnum_t xio_port; + iopaddr_t pci_addr; + alenaddr_t new_addr; + + unsigned cmd_regs[8]; + unsigned s = 0; + +#if PCIBR_FREEZE_TIME + unsigned freeze_time; +#endif + int ate_freeze_done = 0; /* To pair ATE_THAW + * with an ATE_FREEZE + */ + + pcibr_soft = pcibr_dmamap->bd_soft; + + xtalk_alenlist = xtalk_dmamap_list(pcibr_dmamap->bd_xtalk, palenlist, + flags & DMAMAP_FLAGS); + if (!xtalk_alenlist) + goto fail; + + alenlist_cursor_init(xtalk_alenlist, 0, NULL); + + if (inplace) { + pciio_alenlist = xtalk_alenlist; + } else { + pciio_alenlist = alenlist_create(al_flags); + if (!pciio_alenlist) + goto fail; + } + + direct64 = pcibr_dmamap->bd_flags & PCIIO_DMA_A64; + if (!direct64) { + bridge = pcibr_soft->bs_base; + ate_ptr = pcibr_dmamap->bd_ate_ptr; + ate_index = pcibr_dmamap->bd_ate_index; + ate_proto = pcibr_dmamap->bd_ate_proto; + ATE_FREEZE(); + ate_freeze_done = 1; /* Remember that we need to do an ATE_THAW */ + } + pci_addr = pcibr_dmamap->bd_pci_addr; + + ate_prev = 0; /* matches no valid ATEs */ + while (ALENLIST_SUCCESS == + alenlist_get(xtalk_alenlist, NULL, 0, + &xio_addr, &length, al_flags)) { + if (XIO_PACKED(xio_addr)) { + xio_port = XIO_PORT(xio_addr); + xio_addr = XIO_ADDR(xio_addr); + } else + xio_port = pcibr_dmamap->bd_xio_port; + + if (xio_port == pcibr_soft->bs_xid) { + new_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, length); + if (new_addr == PCI_NOWHERE) + goto fail; + } else if (direct64) { + new_addr = pci_addr | xio_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); + + /* Bridge Hardware WAR #482836: + * If the transfer is not cache aligned + * and the Bridge Rev is <= B, force + * prefetch to be off. + */ + if (flags & PCIBR_NOPREFETCH) + new_addr &= ~PCI64_ATTR_PREF; + + } else { + /* calculate the ate value for + * the first address. If it + * matches the previous + * ATE written (ie. we had + * multiple blocks in the + * same IOPG), then back up + * and reuse that ATE. + * + * We are NOT going to + * aggressively try to + * reuse any other ATEs. + */ + offset = IOPGOFF(xio_addr); + ate = ate_proto + | (xio_port << ATE_TIDSHIFT) + | (xio_addr - offset); + if (ate == ate_prev) { +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_list: ATE share\n"); +#endif + ate_ptr--; + ate_index--; + pci_addr -= IOPGSIZE; + } + new_addr = pci_addr + offset; + + /* Fill in the hardware ATEs + * that contain this block. + */ + ate_count = IOPG(offset + length - 1) + 1; + ate_total += ate_count; + + /* Ensure that this map contains enough ATE's */ + if (ate_total > pcibr_dmamap->bd_ate_count) { +#if PCIBR_ATE_DEBUG + printk(KERN_WARNING "pcibr_dmamap_list :\n" + "\twanted xio_addr [0x%x..0x%x]\n" + "\tate_total 0x%x bd_ate_count 0x%x\n" + "\tATE's required > number allocated\n", + xio_addr, xio_addr + length - 1, + ate_total, pcibr_dmamap->bd_ate_count); +#endif + goto fail; + } + + ATE_WRITE(); + + ate_index += ate_count; + ate_ptr += ate_count; + + ate_count <<= IOPFNSHIFT; + ate += ate_count; + pci_addr += ate_count; + } + + /* write the PCI DMA address + * out to the scatter-gather list. 
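+	 * With PCIIO_INPLACE the xtalk entry is overwritten where it sits;
+	 * otherwise the translated address/length pair is appended to the
+	 * separately created pciio_alenlist.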
+ */ + if (inplace) { + if (ALENLIST_SUCCESS != + alenlist_replace(pciio_alenlist, NULL, + &new_addr, &length, al_flags)) + goto fail; + } else { + if (ALENLIST_SUCCESS != + alenlist_append(pciio_alenlist, + new_addr, length, al_flags)) + goto fail; + } + } + if (!inplace) + alenlist_done(xtalk_alenlist); + + /* Reset the internal cursor of the alenlist to be returned back + * to the caller. + */ + alenlist_cursor_init(pciio_alenlist, 0, NULL); + + + /* In case an ATE_FREEZE was done do the ATE_THAW to unroll all the + * changes that ATE_FREEZE has done to implement the external SSRAM + * bug workaround. + */ + if (ate_freeze_done) { + ATE_THAW(); + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } + return pciio_alenlist; + + fail: + /* There are various points of failure after doing an ATE_FREEZE + * We need to do an ATE_THAW. Otherwise the ATEs are locked forever. + * The decision to do an ATE_THAW needs to be based on whether a + * an ATE_FREEZE was done before. + */ + if (ate_freeze_done) { + ATE_THAW(); + bridge->b_wid_tflush; + } + if (pciio_alenlist && !inplace) + alenlist_destroy(pciio_alenlist); + return 0; +} + +/*ARGSUSED */ +void +pcibr_dmamap_done(pcibr_dmamap_t pcibr_dmamap) +{ + /* + * We could go through and invalidate ATEs here; + * for performance reasons, we don't. + * We also don't enforce the strict alternation + * between _addr/_list and _done, but Hub does. + */ + + if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_BUSY) { + pcibr_dmamap->bd_flags &= ~PCIBR_DMAMAP_BUSY; + + if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM) + atomic_dec(&(pcibr_dmamap->bd_soft->bs_slot[pcibr_dmamap->bd_slot]. bss_ext_ates_active)); + } + + xtalk_dmamap_done(pcibr_dmamap->bd_xtalk); +} + + +/* + * For each bridge, the DIR_OFF value in the Direct Mapping Register + * determines the PCI to Crosstalk memory mapping to be used for all + * 32-bit Direct Mapping memory accesses. This mapping can be to any + * node in the system. This function will return that compact node id. + */ + +/*ARGSUSED */ +cnodeid_t +pcibr_get_dmatrans_node(devfs_handle_t pconn_vhdl) +{ + + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + return(NASID_TO_COMPACT_NODEID(NASID_GET(pcibr_soft->bs_dir_xbase))); +} + +/*ARGSUSED */ +iopaddr_t +pcibr_dmatrans_addr(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + paddr_t paddr, + size_t req_size, + unsigned flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[pciio_slot]; + + xwidgetnum_t xio_port; + iopaddr_t xio_addr; + iopaddr_t pci_addr; + + int have_rrbs; + int min_rrbs; + + /* merge in forced flags */ + flags |= pcibr_soft->bs_dma_flags; + + xio_addr = xtalk_dmatrans_addr(xconn_vhdl, 0, paddr, req_size, + flags & DMAMAP_FLAGS); + + if (!xio_addr) { +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + return 0; + } + /* + * find which XIO port this goes to. 
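+     * Packed XIO addresses encode the target widget number in the
+     * address itself; an unpacked address is taken to target the
+     * Bridge's master widget (bs_mxid).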
+ */ + if (XIO_PACKED(xio_addr)) { + if (xio_addr == XIO_NOWHERE) { +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + return 0; + } + xio_port = XIO_PORT(xio_addr); + xio_addr = XIO_ADDR(xio_addr); + + } else + xio_port = pcibr_soft->bs_mxid; + + /* + * If this DMA comes back to us, + * return the PCI MEM address on + * which it would land, or NULL + * if the target is something + * on bridge other than PCI MEM. + */ + if (xio_port == pcibr_soft->bs_xid) { + pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, req_size); + return pci_addr; + } + /* If the caller can use A64, try to + * satisfy the request with the 64-bit + * direct map. This can fail if the + * configuration bits in Device(x) + * conflict with our flags. + */ + + if (flags & PCIIO_DMA_A64) { + pci_addr = slotp->bss_d64_base; + if (!(flags & PCIBR_VCHAN1)) + flags |= PCIBR_VCHAN0; + if ((pci_addr != PCIBR_D64_BASE_UNSET) && + (flags == slotp->bss_d64_flags)) { + + pci_addr |= xio_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); + +#if DEBUG && PCIBR_DMA_DEBUG +#if HWG_PERF_CHECK + if (xio_addr != 0x20000000) +#endif + printk("pcibr_dmatrans_addr: [reuse]\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tdirect 64bit address is 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr, pci_addr); +#endif + return (pci_addr); + } + if (!pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D64_BITS)) { + pci_addr = pcibr_flags_to_d64(flags, pcibr_soft); + slotp->bss_d64_flags = flags; + slotp->bss_d64_base = pci_addr; + pci_addr |= xio_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); + + /* Make sure we have an RRB (or two). + */ + if (!(pcibr_soft->bs_rrb_fixed & (1 << pciio_slot))) { + if (flags & PCIBR_VCHAN1) + pciio_slot += PCIBR_RRB_SLOT_VIRTUAL; + have_rrbs = pcibr_soft->bs_rrb_valid[pciio_slot]; + if (have_rrbs < 2) { + if (pci_addr & PCI64_ATTR_PREF) + min_rrbs = 2; + else + min_rrbs = 1; + if (have_rrbs < min_rrbs) + do_pcibr_rrb_autoalloc(pcibr_soft, pciio_slot, min_rrbs - have_rrbs); + } + } +#if PCIBR_DMA_DEBUG +#if HWG_PERF_CHECK + if (xio_addr != 0x20000000) +#endif + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tdirect 64bit address is 0x%x\n" + "\tnew flags: 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr, pci_addr, (uint64_t) flags); +#endif + return (pci_addr); + } + /* our flags conflict with Device(x). + */ + flags = flags + & ~PCIIO_DMA_A64 + & ~PCIBR_VCHAN0 + ; + +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tUnable to set Device(x) bits for Direct-64\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + } + /* Try to satisfy the request with the 32-bit direct + * map. This can fail if the configuration bits in + * Device(x) conflict with our flags, or if the + * target address is outside where DIR_OFF points. 
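+     * The direct32 window is 2GB starting at bs_dir_xbase on the widget
+     * named by bs_dir_xport.  For example (hypothetical numbers), with
+     * xio_base 0x1000000000 a 0x2000-byte transfer at xio_addr
+     * 0x1000004000 gives offset 0x4000 and endoff 0x6000 < 2GB, and so
+     * maps to PCI32_DIRECT_BASE | 0x4000.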
+ */ + { + size_t map_size = 1ULL << 31; + iopaddr_t xio_base = pcibr_soft->bs_dir_xbase; + iopaddr_t offset = xio_addr - xio_base; + iopaddr_t endoff = req_size + offset; + + if ((req_size > map_size) || + (xio_addr < xio_base) || + (xio_port != pcibr_soft->bs_dir_xport) || + (endoff > map_size)) { +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\txio region outside direct32 target\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + } else { + pci_addr = slotp->bss_d32_base; + if ((pci_addr != PCIBR_D32_BASE_UNSET) && + (flags == slotp->bss_d32_flags)) { + + pci_addr |= offset; + +#if DEBUG && PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr: [reuse]\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tmapped via direct32 offset 0x%x\n" + "\twill DMA via pci addr 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr, offset, pci_addr); +#endif + return (pci_addr); + } + if (!pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D32_BITS)) { + + pci_addr = PCI32_DIRECT_BASE; + slotp->bss_d32_flags = flags; + slotp->bss_d32_base = pci_addr; + pci_addr |= offset; + + /* Make sure we have an RRB (or two). + */ + if (!(pcibr_soft->bs_rrb_fixed & (1 << pciio_slot))) { + have_rrbs = pcibr_soft->bs_rrb_valid[pciio_slot]; + if (have_rrbs < 2) { + if (slotp->bss_device & BRIDGE_DEV_PREF) + min_rrbs = 2; + else + min_rrbs = 1; + if (have_rrbs < min_rrbs) + do_pcibr_rrb_autoalloc(pcibr_soft, pciio_slot, min_rrbs - have_rrbs); + } + } +#if PCIBR_DMA_DEBUG +#if HWG_PERF_CHECK + if (xio_addr != 0x20000000) +#endif + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tmapped via direct32 offset 0x%x\n" + "\twill DMA via pci addr 0x%x\n" + "\tnew flags: 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr, offset, pci_addr, (uint64_t) flags); +#endif + return (pci_addr); + } + /* our flags conflict with Device(x). 
+ */ +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tUnable to set Device(x) bits for Direct-32\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + } + } + +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tno acceptable PCI address found or constructable\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + + return 0; +} + +/*ARGSUSED */ +alenlist_t +pcibr_dmatrans_list(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + alenlist_t palenlist, + unsigned flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[pciio_slot]; + xwidgetnum_t xio_port; + + alenlist_t pciio_alenlist = 0; + alenlist_t xtalk_alenlist = 0; + + int inplace; + unsigned direct64; + unsigned al_flags; + + iopaddr_t xio_base; + alenaddr_t xio_addr; + size_t xio_size; + + size_t map_size; + iopaddr_t pci_base; + alenaddr_t pci_addr; + + unsigned relbits = 0; + + /* merge in forced flags */ + flags |= pcibr_soft->bs_dma_flags; + + inplace = flags & PCIIO_INPLACE; + direct64 = flags & PCIIO_DMA_A64; + al_flags = (flags & PCIIO_NOSLEEP) ? AL_NOSLEEP : 0; + + if (direct64) { + map_size = 1ull << 48; + xio_base = 0; + pci_base = slotp->bss_d64_base; + if ((pci_base != PCIBR_D64_BASE_UNSET) && + (flags == slotp->bss_d64_flags)) { + /* reuse previous base info */ + } else if (pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D64_BITS) < 0) { + /* DMA configuration conflict */ + goto fail; + } else { + relbits = BRIDGE_DEV_D64_BITS; + pci_base = + pcibr_flags_to_d64(flags, pcibr_soft); + } + } else { + xio_base = pcibr_soft->bs_dir_xbase; + map_size = 1ull << 31; + pci_base = slotp->bss_d32_base; + if ((pci_base != PCIBR_D32_BASE_UNSET) && + (flags == slotp->bss_d32_flags)) { + /* reuse previous base info */ + } else if (pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D32_BITS) < 0) { + /* DMA configuration conflict */ + goto fail; + } else { + relbits = BRIDGE_DEV_D32_BITS; + pci_base = PCI32_DIRECT_BASE; + } + } + + xtalk_alenlist = xtalk_dmatrans_list(xconn_vhdl, 0, palenlist, + flags & DMAMAP_FLAGS); + if (!xtalk_alenlist) + goto fail; + + alenlist_cursor_init(xtalk_alenlist, 0, NULL); + + if (inplace) { + pciio_alenlist = xtalk_alenlist; + } else { + pciio_alenlist = alenlist_create(al_flags); + if (!pciio_alenlist) + goto fail; + } + + while (ALENLIST_SUCCESS == + alenlist_get(xtalk_alenlist, NULL, 0, + &xio_addr, &xio_size, al_flags)) { + + /* + * find which XIO port this goes to. 
+ */ + if (XIO_PACKED(xio_addr)) { + if (xio_addr == XIO_NOWHERE) { +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + return 0; + } + xio_port = XIO_PORT(xio_addr); + xio_addr = XIO_ADDR(xio_addr); + } else + xio_port = pcibr_soft->bs_mxid; + + /* + * If this DMA comes back to us, + * return the PCI MEM address on + * which it would land, or NULL + * if the target is something + * on bridge other than PCI MEM. + */ + if (xio_port == pcibr_soft->bs_xid) { + pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, xio_size); + if ( (pci_addr == (alenaddr_t)NULL) ) + goto fail; + } else if (direct64) { + ASSERT(xio_port != 0); + pci_addr = pci_base | xio_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); + } else { + iopaddr_t offset = xio_addr - xio_base; + iopaddr_t endoff = xio_size + offset; + + if ((xio_size > map_size) || + (xio_addr < xio_base) || + (xio_port != pcibr_soft->bs_dir_xport) || + (endoff > map_size)) + goto fail; + + pci_addr = pci_base + (xio_addr - xio_base); + } + + /* write the PCI DMA address + * out to the scatter-gather list. + */ + if (inplace) { + if (ALENLIST_SUCCESS != + alenlist_replace(pciio_alenlist, NULL, + &pci_addr, &xio_size, al_flags)) + goto fail; + } else { + if (ALENLIST_SUCCESS != + alenlist_append(pciio_alenlist, + pci_addr, xio_size, al_flags)) + goto fail; + } + } + + if (relbits) { + if (direct64) { + slotp->bss_d64_flags = flags; + slotp->bss_d64_base = pci_base; + } else { + slotp->bss_d32_flags = flags; + slotp->bss_d32_base = pci_base; + } + } + if (!inplace) + alenlist_done(xtalk_alenlist); + + /* Reset the internal cursor of the alenlist to be returned back + * to the caller. + */ + alenlist_cursor_init(pciio_alenlist, 0, NULL); + return pciio_alenlist; + + fail: + if (relbits) + pcibr_release_device(pcibr_soft, pciio_slot, relbits); + if (pciio_alenlist && !inplace) + alenlist_destroy(pciio_alenlist); + return 0; +} + +void +pcibr_dmamap_drain(pcibr_dmamap_t map) +{ + xtalk_dmamap_drain(map->bd_xtalk); +} + +void +pcibr_dmaaddr_drain(devfs_handle_t pconn_vhdl, + paddr_t paddr, + size_t bytes) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + + xtalk_dmaaddr_drain(xconn_vhdl, paddr, bytes); +} + +void +pcibr_dmalist_drain(devfs_handle_t pconn_vhdl, + alenlist_t list) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + + xtalk_dmalist_drain(xconn_vhdl, list); +} + +/* + * Get the starting PCIbus address out of the given DMA map. + * This function is supposed to be used by a close friend of PCI bridge + * since it relies on the fact that the starting address of the map is fixed at + * the allocation time in the current implementation of PCI bridge. + */ +iopaddr_t +pcibr_dmamap_pciaddr_get(pcibr_dmamap_t pcibr_dmamap) +{ + return (pcibr_dmamap->bd_pci_addr); +} + +/* + * There are end cases where a deadlock can occur if interrupt + * processing completes and the Bridge b_int_status bit is still set. + * + * One scenerio is if a second PCI interrupt occurs within 60ns of + * the previous interrupt being cleared. 
In this case the Bridge + * does not detect the transition, the Bridge b_int_status bit + * remains set, and because no transition was detected no interrupt + * packet is sent to the Hub/Heart. + * + * A second scenerio is possible when a b_int_status bit is being + * shared by multiple devices: + * Device #1 generates interrupt + * Bridge b_int_status bit set + * Device #2 generates interrupt + * interrupt processing begins + * ISR for device #1 runs and + * clears interrupt + * Device #1 generates interrupt + * ISR for device #2 runs and + * clears interrupt + * (b_int_status bit still set) + * interrupt processing completes + * + * Interrupt processing is now complete, but an interrupt is still + * outstanding for Device #1. But because there was no transition of + * the b_int_status bit, no interrupt packet will be generated and + * a deadlock will occur. + * + * To avoid these deadlock situations, this function is used + * to check if a specific Bridge b_int_status bit is set, and if so, + * cause the setting of the corresponding interrupt bit. + * + * On a XBridge (IP35), we do this by writing the appropriate Bridge Force + * Interrupt register. + */ +void +pcibr_force_interrupt(pcibr_intr_wrap_t wrap) +{ + unsigned bit; + pcibr_soft_t pcibr_soft = wrap->iw_soft; + bridge_t *bridge = pcibr_soft->bs_base; + cpuid_t cpuvertex_to_cpuid(devfs_handle_t vhdl); + + bit = wrap->iw_intr; + + if (pcibr_soft->bs_xbridge) { + bridge->b_force_pin[bit].intr = 1; + } else if ((1 << bit) & *wrap->iw_stat) { + cpuid_t cpu; + unsigned intr_bit; + xtalk_intr_t xtalk_intr = + pcibr_soft->bs_intr[bit].bsi_xtalk_intr; + + intr_bit = (short) xtalk_intr_vector_get(xtalk_intr); + cpu = cpuvertex_to_cpuid(xtalk_intr_cpu_get(xtalk_intr)); +#if defined(CONFIG_IA64_SGI_SN1) + REMOTE_CPU_SEND_INTR(cpu, intr_bit); +#endif + } +} + +/* ===================================================================== + * INTERRUPT MANAGEMENT + */ + +static unsigned +pcibr_intr_bits(pciio_info_t info, + pciio_intr_line_t lines) +{ + pciio_slot_t slot = pciio_info_slot_get(info); + unsigned bbits = 0; + + /* + * Currently favored mapping from PCI + * slot number and INTA/B/C/D to Bridge + * PCI Interrupt Bit Number: + * + * SLOT A B C D + * 0 0 4 0 4 + * 1 1 5 1 5 + * 2 2 6 2 6 + * 3 3 7 3 7 + * 4 4 0 4 0 + * 5 5 1 5 1 + * 6 6 2 6 2 + * 7 7 3 7 3 + */ + + if (slot < 8) { + if (lines & (PCIIO_INTR_LINE_A| PCIIO_INTR_LINE_C)) + bbits |= 1 << slot; + if (lines & (PCIIO_INTR_LINE_B| PCIIO_INTR_LINE_D)) + bbits |= 1 << (slot ^ 4); + } + return bbits; +} + + +/*ARGSUSED */ +pcibr_intr_t +pcibr_intr_alloc(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + pciio_intr_line_t lines, + devfs_handle_t owner_dev) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pcibr_info->f_slot; + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + bridge_t *bridge = pcibr_soft->bs_base; + int is_threaded = 0; + int thread_swlevel; + + xtalk_intr_t *xtalk_intr_p; + pcibr_intr_t *pcibr_intr_p; + pcibr_intr_list_t *intr_list_p; + + unsigned pcibr_int_bits; + unsigned pcibr_int_bit; + xtalk_intr_t xtalk_intr = (xtalk_intr_t)0; + hub_intr_t hub_intr; + pcibr_intr_t pcibr_intr; + pcibr_intr_list_t intr_entry; + pcibr_intr_list_t intr_list; + bridgereg_t int_dev; + +#if DEBUG && INTR_DEBUG + printk("%v: pcibr_intr_alloc\n" + "%v:%s%s%s%s%s\n", + owner_dev, pconn_vhdl, + !(lines & 15) ? " No INTs?" : "", + lines & 1 ? " INTA" : "", + lines & 2 ? 
" INTB" : "", + lines & 4 ? " INTC" : "", + lines & 8 ? " INTD" : ""); +#endif + + NEW(pcibr_intr); + if (!pcibr_intr) + return NULL; + + if (dev_desc) { + cpuid_t intr_target_from_desc(device_desc_t, int); + } else { + extern int default_intr_pri; + + is_threaded = 1; /* PCI interrupts are threaded, by default */ + thread_swlevel = default_intr_pri; + } + + pcibr_intr->bi_dev = pconn_vhdl; + pcibr_intr->bi_lines = lines; + pcibr_intr->bi_soft = pcibr_soft; + pcibr_intr->bi_ibits = 0; /* bits will be added below */ + pcibr_intr->bi_flags = is_threaded ? 0 : PCIIO_INTR_NOTHREAD; + pcibr_intr->bi_mustruncpu = CPU_NONE; + mutex_spinlock_init(&pcibr_intr->bi_ibuf.ib_lock); + + pcibr_int_bits = pcibr_soft->bs_intr_bits((pciio_info_t)pcibr_info, lines); + + + /* + * For each PCI interrupt line requested, figure + * out which Bridge PCI Interrupt Line it maps + * to, and make sure there are xtalk resources + * allocated for it. + */ +#if DEBUG && INTR_DEBUG + printk("pcibr_int_bits: 0x%X\n", pcibr_int_bits); +#endif + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit ++) { + if (pcibr_int_bits & (1 << pcibr_int_bit)) { + xtalk_intr_p = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; + + xtalk_intr = *xtalk_intr_p; + + if (xtalk_intr == NULL) { + /* + * This xtalk_intr_alloc is constrained for two reasons: + * 1) Normal interrupts and error interrupts need to be delivered + * through a single xtalk target widget so that there aren't any + * ordering problems with DMA, completion interrupts, and error + * interrupts. (Use of xconn_vhdl forces this.) + * + * 2) On IP35, addressing constraints on IP35 and Bridge force + * us to use a single PI number for all interrupts from a + * single Bridge. (IP35-specific code forces this, and we + * verify in pcibr_setwidint.) + */ + + /* + * All code dealing with threaded PCI interrupt handlers + * is located at the pcibr level. Because of this, + * we always want the lower layers (hub/heart_intr_alloc, + * intr_level_connect) to treat us as non-threaded so we + * don't set up a duplicate threaded environment. We make + * this happen by calling a special xtalk interface. + */ + xtalk_intr = xtalk_intr_alloc_nothd(xconn_vhdl, dev_desc, + owner_dev); +#if DEBUG && INTR_DEBUG + printk("%v: xtalk_intr=0x%X\n", xconn_vhdl, xtalk_intr); +#endif + + /* both an assert and a runtime check on this: + * we need to check in non-DEBUG kernels, and + * the ASSERT gets us more information when + * we use DEBUG kernels. + */ + ASSERT(xtalk_intr != NULL); + if (xtalk_intr == NULL) { + /* it is quite possible that our + * xtalk_intr_alloc failed because + * someone else got there first, + * and we can find their results + * in xtalk_intr_p. + */ + if (!*xtalk_intr_p) { +#ifdef SUPPORT_PRINTING_V_FORMAT + printk(KERN_ALERT + "pcibr_intr_alloc %v: unable to get xtalk interrupt resources", + xconn_vhdl); +#else + printk(KERN_ALERT + "pcibr_intr_alloc 0x%p: unable to get xtalk interrupt resources", + (void *)xconn_vhdl); +#endif + /* yes, we leak resources here. */ + return 0; + } + } else if (compare_and_swap_ptr((void **) xtalk_intr_p, NULL, xtalk_intr)) { + /* + * now tell the bridge which slot is + * using this interrupt line. 
+ */ + int_dev = bridge->b_int_device; + int_dev &= ~BRIDGE_INT_DEV_MASK(pcibr_int_bit); + int_dev |= pciio_slot << BRIDGE_INT_DEV_SHFT(pcibr_int_bit); + bridge->b_int_device = int_dev; /* XXXMP */ + +#if DEBUG && INTR_DEBUG + printk("%v: bridge intr bit %d clears my wrb\n", + pconn_vhdl, pcibr_int_bit); +#endif + } else { + /* someone else got one allocated first; + * free the one we just created, and + * retrieve the one they allocated. + */ + xtalk_intr_free(xtalk_intr); + xtalk_intr = *xtalk_intr_p; +#if PARANOID + /* once xtalk_intr is set, we never clear it, + * so if the CAS fails above, this condition + * can "never happen" ... + */ + if (!xtalk_intr) { + printk(KERN_ALERT + "pcibr_intr_alloc %v: unable to set xtalk interrupt resources", + xconn_vhdl); + /* yes, we leak resources here. */ + return 0; + } +#endif + } + } + + pcibr_intr->bi_ibits |= 1 << pcibr_int_bit; + + NEW(intr_entry); + intr_entry->il_next = NULL; + intr_entry->il_intr = pcibr_intr; + intr_entry->il_wrbf = &(bridge->b_wr_req_buf[pciio_slot].reg); + intr_list_p = + &pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_list; +#if DEBUG && INTR_DEBUG +#if defined(SUPPORT_PRINTING_V_FORMAT) + printk("0x%x: Bridge bit %d wrap=0x%x\n", + pconn_vhdl, pcibr_int_bit, + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap); +#else + printk("%v: Bridge bit %d wrap=0x%x\n", + pconn_vhdl, pcibr_int_bit, + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap); +#endif +#endif + + if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { + /* we are the first interrupt on this bridge bit. + */ +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) allocated [FIRST]\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + continue; + } + intr_list = *intr_list_p; + pcibr_intr_p = &intr_list->il_intr; + if (compare_and_swap_ptr((void **) pcibr_intr_p, NULL, pcibr_intr)) { + /* first entry on list was erased, + * and we replaced it, so we + * don't need our intr_entry. + */ + DEL(intr_entry); +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) replaces erased first\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + continue; + } + intr_list_p = &intr_list->il_next; + if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { + /* we are the new second interrupt on this bit. + */ + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared = 1; +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) is new SECOND\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + continue; + } + while (1) { + pcibr_intr_p = &intr_list->il_intr; + if (compare_and_swap_ptr((void **) pcibr_intr_p, NULL, pcibr_intr)) { + /* an entry on list was erased, + * and we replaced it, so we + * don't need our intr_entry. 
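+			     * (The share list is maintained lock-free
+			     * with compare_and_swap_ptr: we reuse an
+			     * erased slot when we find one, otherwise
+			     * we append our entry at the tail.)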
+ */ + DEL(intr_entry); +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) replaces erased Nth\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + break; + } + intr_list_p = &intr_list->il_next; + if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { + /* entry appended to share list + */ +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) is new Nth\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + break; + } + /* step to next record in chain + */ + intr_list = *intr_list_p; + } + } + } + +#if DEBUG && INTR_DEBUG + printk("%v pcibr_intr_alloc complete\n", pconn_vhdl); +#endif + hub_intr = (hub_intr_t)xtalk_intr; + pcibr_intr->bi_irq = hub_intr->i_bit; + pcibr_intr->bi_cpu = hub_intr->i_cpuid; + return pcibr_intr; +} + +/*ARGSUSED */ +void +pcibr_intr_free(pcibr_intr_t pcibr_intr) +{ + unsigned pcibr_int_bits = pcibr_intr->bi_ibits; + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + unsigned pcibr_int_bit; + pcibr_intr_list_t intr_list; + int intr_shared; + xtalk_intr_t *xtalk_intrp; + + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) { + if (pcibr_int_bits & (1 << pcibr_int_bit)) { + for (intr_list = + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_list; + intr_list != NULL; + intr_list = intr_list->il_next) + if (compare_and_swap_ptr((void **) &intr_list->il_intr, + pcibr_intr, + NULL)) { +#if DEBUG && INTR_DEBUG + printk("%s: cleared a handler from bit %d\n", + pcibr_soft->bs_name, pcibr_int_bit); +#endif + } + /* If this interrupt line is not being shared between multiple + * devices release the xtalk interrupt resources. + */ + intr_shared = + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared; + xtalk_intrp = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; + + if ((!intr_shared) && (*xtalk_intrp)) { + + bridge_t *bridge = pcibr_soft->bs_base; + bridgereg_t int_dev; + + xtalk_intr_free(*xtalk_intrp); + *xtalk_intrp = 0; + + /* Clear the PCI device interrupt to bridge interrupt pin + * mapping. + */ + int_dev = bridge->b_int_device; + int_dev &= ~BRIDGE_INT_DEV_MASK(pcibr_int_bit); + bridge->b_int_device = int_dev; + + } + } + } + DEL(pcibr_intr); +} + +LOCAL void +pcibr_setpciint(xtalk_intr_t xtalk_intr) +{ + iopaddr_t addr = xtalk_intr_addr_get(xtalk_intr); + xtalk_intr_vector_t vect = xtalk_intr_vector_get(xtalk_intr); + bridgereg_t *int_addr = (bridgereg_t *) + xtalk_intr_sfarg_get(xtalk_intr); + +#ifdef CONFIG_IA64_SGI_SN2 + *int_addr = ((BRIDGE_INT_ADDR_HOST & (addr >> 26)) | + (BRIDGE_INT_ADDR_FLD & vect)); +#elif CONFIG_IA64_SGI_SN1 + *int_addr = ((BRIDGE_INT_ADDR_HOST & (addr >> 30)) | + (BRIDGE_INT_ADDR_FLD & vect)); +#endif +} + +/*ARGSUSED */ +int +pcibr_intr_connect(pcibr_intr_t pcibr_intr) +{ + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + bridge_t *bridge = pcibr_soft->bs_base; + unsigned pcibr_int_bits = pcibr_intr->bi_ibits; + unsigned pcibr_int_bit; + bridgereg_t b_int_enable; + unsigned long s; + + if (pcibr_intr == NULL) + return -1; + +#if DEBUG && INTR_DEBUG + printk("%v: pcibr_intr_connect\n", + pcibr_intr->bi_dev); +#endif + + *((volatile unsigned *)&pcibr_intr->bi_flags) |= PCIIO_INTR_CONNECTED; + + /* + * For each PCI interrupt line requested, figure + * out which Bridge PCI Interrupt Line it maps + * to, and make sure there are xtalk resources + * allocated for it. 
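+	 * (The xtalk resources themselves were allocated in
+	 * pcibr_intr_alloc(); here we only connect each one and
+	 * then enable the corresponding b_int_enable bits.)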
+ */ + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) + if (pcibr_int_bits & (1 << pcibr_int_bit)) { + xtalk_intr_t xtalk_intr; + + xtalk_intr = pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; + + /* + * If this interrupt line is being shared and the connect has + * already been done, no need to do it again. + */ + if (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected) + continue; + + + /* + * Use the pcibr wrapper function to handle all Bridge interrupts + * regardless of whether the interrupt line is shared or not. + */ + xtalk_intr_connect(xtalk_intr, (xtalk_intr_setfunc_t) pcibr_setpciint, + (void *)&(bridge->b_int_addr[pcibr_int_bit].addr)); + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected = 1; + +#if DEBUG && INTR_DEBUG + printk("%v bridge bit %d wrapper connected\n", + pcibr_intr->bi_dev, pcibr_int_bit); +#endif + } + s = pcibr_lock(pcibr_soft); + b_int_enable = bridge->b_int_enable; + b_int_enable |= pcibr_int_bits; + bridge->b_int_enable = b_int_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + pcibr_unlock(pcibr_soft, s); + + return 0; +} + +/*ARGSUSED */ +void +pcibr_intr_disconnect(pcibr_intr_t pcibr_intr) +{ + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + bridge_t *bridge = pcibr_soft->bs_base; + unsigned pcibr_int_bits = pcibr_intr->bi_ibits; + unsigned pcibr_int_bit; + bridgereg_t b_int_enable; + unsigned long s; + + /* Stop calling the function. Now. + */ + *((volatile unsigned *)&pcibr_intr->bi_flags) &= ~PCIIO_INTR_CONNECTED; + /* + * For each PCI interrupt line requested, figure + * out which Bridge PCI Interrupt Line it maps + * to, and disconnect the interrupt. + */ + + /* don't disable interrupts for lines that + * are shared between devices. + */ + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) + if ((pcibr_int_bits & (1 << pcibr_int_bit)) && + (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared)) + pcibr_int_bits &= ~(1 << pcibr_int_bit); + if (!pcibr_int_bits) + return; + + s = pcibr_lock(pcibr_soft); + b_int_enable = bridge->b_int_enable; + b_int_enable &= ~pcibr_int_bits; + bridge->b_int_enable = b_int_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + pcibr_unlock(pcibr_soft, s); + + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) + if (pcibr_int_bits & (1 << pcibr_int_bit)) { + /* if the interrupt line is now shared, + * do not disconnect it. + */ + if (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared) + continue; + + xtalk_intr_disconnect(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr); + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected = 0; + +#if DEBUG && INTR_DEBUG + printk("%s: xtalk disconnect done for Bridge bit %d\n", + pcibr_soft->bs_name, pcibr_int_bit); +#endif + + /* if we are sharing the interrupt line, + * connect us up; this closes the hole + * where the another pcibr_intr_alloc() + * was in progress as we disconnected. 
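+	     * (If iw_shared became set while we were busy
+	     * disconnecting, re-connect so the new sharer is
+	     * not left with a dead interrupt line.)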
+ */ + if (!pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared) + continue; + + xtalk_intr_connect(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr, + (xtalk_intr_setfunc_t)pcibr_setpciint, + (void *) &(bridge->b_int_addr[pcibr_int_bit].addr)); + } +} + +/*ARGSUSED */ +devfs_handle_t +pcibr_intr_cpu_get(pcibr_intr_t pcibr_intr) +{ + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + unsigned pcibr_int_bits = pcibr_intr->bi_ibits; + unsigned pcibr_int_bit; + + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) + if (pcibr_int_bits & (1 << pcibr_int_bit)) + return xtalk_intr_cpu_get(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr); + return 0; +} + +/* ===================================================================== + * INTERRUPT HANDLING + */ +LOCAL void +pcibr_clearwidint(bridge_t *bridge) +{ + bridge->b_wid_int_upper = 0; + bridge->b_wid_int_lower = 0; +} + +LOCAL void +pcibr_setwidint(xtalk_intr_t intr) +{ + xwidgetnum_t targ = xtalk_intr_target_get(intr); + iopaddr_t addr = xtalk_intr_addr_get(intr); + xtalk_intr_vector_t vect = xtalk_intr_vector_get(intr); + widgetreg_t NEW_b_wid_int_upper, NEW_b_wid_int_lower; + widgetreg_t OLD_b_wid_int_upper, OLD_b_wid_int_lower; + + bridge_t *bridge = (bridge_t *)xtalk_intr_sfarg_get(intr); + + NEW_b_wid_int_upper = ( (0x000F0000 & (targ << 16)) | + XTALK_ADDR_TO_UPPER(addr)); + NEW_b_wid_int_lower = XTALK_ADDR_TO_LOWER(addr); + + OLD_b_wid_int_upper = bridge->b_wid_int_upper; + OLD_b_wid_int_lower = bridge->b_wid_int_lower; + + /* Verify that all interrupts from this Bridge are using a single PI */ + if ((OLD_b_wid_int_upper != 0) && (OLD_b_wid_int_lower != 0)) { + /* + * Once set, these registers shouldn't change; they should + * be set multiple times with the same values. + * + * If we're attempting to change these registers, it means + * that our heuristics for allocating interrupts in a way + * appropriate for IP35 have failed, and the admin needs to + * explicitly direct some interrupts (or we need to make the + * heuristics more clever). + * + * In practice, we hope this doesn't happen very often, if + * at all. + */ + if ((OLD_b_wid_int_upper != NEW_b_wid_int_upper) || + (OLD_b_wid_int_lower != NEW_b_wid_int_lower)) { + printk(KERN_WARNING "Interrupt allocation is too complex.\n"); + printk(KERN_WARNING "Use explicit administrative interrupt targetting.\n"); + printk(KERN_WARNING "bridge=0x%lx targ=0x%x\n", (unsigned long)bridge, targ); + printk(KERN_WARNING "NEW=0x%x/0x%x OLD=0x%x/0x%x\n", + NEW_b_wid_int_upper, NEW_b_wid_int_lower, + OLD_b_wid_int_upper, OLD_b_wid_int_lower); + PRINT_PANIC("PCI Bridge interrupt targetting error\n"); + } + } + + bridge->b_wid_int_upper = NEW_b_wid_int_upper; + bridge->b_wid_int_lower = NEW_b_wid_int_lower; + bridge->b_int_host_err = vect; +} + +/* + * pcibr_intr_preset: called during mlreset time + * if the platform specific code needs to route + * one of the Bridge's xtalk interrupts before the + * xtalk infrastructure is available. + */ +void +pcibr_xintr_preset(void *which_widget, + int which_widget_intr, + xwidgetnum_t targ, + iopaddr_t addr, + xtalk_intr_vector_t vect) +{ + bridge_t *bridge = (bridge_t *) which_widget; + + if (which_widget_intr == -1) { + /* bridge widget error interrupt */ + bridge->b_wid_int_upper = ( (0x000F0000 & (targ << 16)) | + XTALK_ADDR_TO_UPPER(addr)); + bridge->b_wid_int_lower = XTALK_ADDR_TO_LOWER(addr); + bridge->b_int_host_err = vect; + + /* turn on all interrupts except + * the PCI interrupt requests, + * at least at heart. 
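+	 * (ORing in ~BRIDGE_IMR_INT_MSK enables every interrupt
+	 * source other than the per-device PCI INT bits, which
+	 * are enabled individually later.)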
+ */ + bridge->b_int_enable |= ~BRIDGE_IMR_INT_MSK; + + } else { + /* routing a PCI device interrupt. + * targ and low 38 bits of addr must + * be the same as the already set + * value for the widget error interrupt. + */ + bridge->b_int_addr[which_widget_intr].addr = + ((BRIDGE_INT_ADDR_HOST & (addr >> 30)) | + (BRIDGE_INT_ADDR_FLD & vect)); + /* + * now bridge can let it through; + * NB: still should be blocked at + * xtalk provider end, until the service + * function is set. + */ + bridge->b_int_enable |= 1 << vect; + } + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ +} + + +/* + * pcibr_intr_func() + * + * This is the pcibr interrupt "wrapper" function that is called, + * in interrupt context, to initiate the interrupt handler(s) registered + * (via pcibr_intr_alloc/connect) for the occuring interrupt. Non-threaded + * handlers will be called directly, and threaded handlers will have their + * thread woken up. + */ +void +pcibr_intr_func(intr_arg_t arg) +{ + pcibr_intr_wrap_t wrap = (pcibr_intr_wrap_t) arg; + reg_p wrbf; + pcibr_intr_t intr; + pcibr_intr_list_t list; + int clearit; + int do_nonthreaded = 1; + int is_threaded = 0; + int x = 0; + + /* + * If any handler is still running from a previous interrupt + * just return. If there's a need to call the handler(s) again, + * another interrupt will be generated either by the device or by + * pcibr_force_interrupt(). + */ + + if (wrap->iw_hdlrcnt) { + return; + } + + /* + * Call all interrupt handlers registered. + * First, the pcibr_intrd threads for any threaded handlers will be + * awoken, then any non-threaded handlers will be called sequentially. + */ + + clearit = 1; + while (do_nonthreaded) { + for (list = wrap->iw_list; list != NULL; list = list->il_next) { + if ((intr = list->il_intr) && + (intr->bi_flags & PCIIO_INTR_CONNECTED)) { + + /* + * This device may have initiated write + * requests since the bridge last saw + * an edge on this interrupt input; flushing + * the buffer prior to invoking the handler + * should help but may not be sufficient if we + * get more requests after the flush, followed + * by the card deciding it wants service, before + * the interrupt handler checks to see if things need + * to be done. + * + * There is a similar race condition if + * an interrupt handler loops around and + * notices further service is required. + * Perhaps we need to have an explicit + * call that interrupt handlers need to + * do between noticing that DMA to memory + * has completed, but before observing the + * contents of memory? + */ + + if ((do_nonthreaded) && (!is_threaded)) { + /* Non-threaded. + * Call the interrupt handler at interrupt level + */ + + /* Only need to flush write buffers if sharing */ + + if ((wrap->iw_shared) && (wrbf = list->il_wrbf)) { + if ((x = *wrbf)) /* write request buffer flush */ +#ifdef SUPPORT_PRINTING_V_FORMAT + printk(KERN_ALERT "pcibr_intr_func %v: \n" + "write buffer flush failed, wrbf=0x%x\n", + list->il_intr->bi_dev, wrbf); +#else + printk(KERN_ALERT "pcibr_intr_func %p: \n" + "write buffer flush failed, wrbf=0x%lx\n", + (void *)list->il_intr->bi_dev, (long) wrbf); +#endif + } + } + + clearit = 0; + } + } + + do_nonthreaded = 0; + /* + * If the non-threaded handler was the last to complete, + * (i.e., no threaded handlers still running) force an + * interrupt to avoid a potential deadlock situation. + */ + if (wrap->iw_hdlrcnt == 0) { + pcibr_force_interrupt(wrap); + } + } + + /* If there were no handlers, + * disable the interrupt and return. 
+ * It will get enabled again after + * a handler is connected. + * If we don't do this, we would + * sit here and spin through the + * list forever. + */ + if (clearit) { + pcibr_soft_t pcibr_soft = wrap->iw_soft; + bridge_t *bridge = pcibr_soft->bs_base; + bridgereg_t b_int_enable; + bridgereg_t mask = 1 << wrap->iw_intr; + unsigned long s; + + s = pcibr_lock(pcibr_soft); + b_int_enable = bridge->b_int_enable; + b_int_enable &= ~mask; + bridge->b_int_enable = b_int_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + pcibr_unlock(pcibr_soft, s); + return; + } +} + +/* ===================================================================== + * CONFIGURATION MANAGEMENT + */ +/*ARGSUSED */ +void +pcibr_provider_startup(devfs_handle_t pcibr) +{ +} + +/*ARGSUSED */ +void +pcibr_provider_shutdown(devfs_handle_t pcibr) +{ +} + +int +pcibr_reset(devfs_handle_t conn) +{ + pciio_info_t pciio_info = pciio_info_get(conn); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + bridgereg_t ctlreg; + unsigned cfgctl[8]; + unsigned long s; + int f, nf; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + int win; + + if (pcibr_soft->bs_slot[pciio_slot].has_host) { + pciio_slot = pcibr_soft->bs_slot[pciio_slot].host_slot; + pcibr_info = pcibr_soft->bs_slot[pciio_slot].bss_infos[0]; + } + if (pciio_slot < 4) { + s = pcibr_lock(pcibr_soft); + nf = pcibr_soft->bs_slot[pciio_slot].bss_ninfo; + pcibr_infoh = pcibr_soft->bs_slot[pciio_slot].bss_infos; + for (f = 0; f < nf; ++f) + if (pcibr_infoh[f]) + cfgctl[f] = bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_COMMAND / 4]; + + ctlreg = bridge->b_wid_control; + bridge->b_wid_control = ctlreg | BRIDGE_CTRL_RST(pciio_slot); + /* XXX delay? */ + bridge->b_wid_control = ctlreg; + /* XXX delay? */ + + for (f = 0; f < nf; ++f) + if ((pcibr_info = pcibr_infoh[f])) + for (win = 0; win < 6; ++win) + if (pcibr_info->f_window[win].w_base != 0) + bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_BASE_ADDR(win) / 4] = + pcibr_info->f_window[win].w_base; + for (f = 0; f < nf; ++f) + if (pcibr_infoh[f]) + bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_COMMAND / 4] = cfgctl[f]; + pcibr_unlock(pcibr_soft, s); + + return 0; + } +#ifdef SUPPORT_PRINTING_V_FORMAT + printk(KERN_WARNING "%v: pcibr_reset unimplemented for slot %d\n", + conn, pciio_slot); +#endif + return -1; +} + +pciio_endian_t +pcibr_endian_set(devfs_handle_t pconn_vhdl, + pciio_endian_t device_end, + pciio_endian_t desired_end) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridgereg_t devreg; + unsigned long s; + + /* + * Bridge supports hardware swapping; so we can always + * arrange for the caller's desired endianness. + */ + + s = pcibr_lock(pcibr_soft); + devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; + if (device_end != desired_end) + devreg |= BRIDGE_DEV_SWAP_BITS; + else + devreg &= ~BRIDGE_DEV_SWAP_BITS; + + /* NOTE- if we ever put SWAP bits + * onto the disabled list, we will + * have to change the logic here. 
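+     * The Device(x) register is rewritten below only when the
+     * shadow copy in bss_device actually changes, and the PIO
+     * is flushed via b_wid_tflush afterwards.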
+ */ + if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { + bridge_t *bridge = pcibr_soft->bs_base; + + bridge->b_device[pciio_slot].reg = devreg; + pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } + pcibr_unlock(pcibr_soft, s); + +#if DEBUG && PCIBR_DEV_DEBUG + printk("pcibr Device(%d): 0x%p\n", pciio_slot, bridge->b_device[pciio_slot].reg); +#endif + + return desired_end; +} + +/* This (re)sets the GBR and REALTIME bits and also keeps track of how + * many sets are outstanding. Reset succeeds only if the number of outstanding + * sets == 1. + */ +int +pcibr_priority_bits_set(pcibr_soft_t pcibr_soft, + pciio_slot_t pciio_slot, + pciio_priority_t device_prio) +{ + unsigned long s; + int *counter; + bridgereg_t rtbits = 0; + bridgereg_t devreg; + int rc = PRIO_SUCCESS; + + /* in dual-slot configurations, the host and the + * guest have separate DMA resources, so they + * have separate requirements for priority bits. + */ + + counter = &(pcibr_soft->bs_slot[pciio_slot].bss_pri_uctr); + + /* + * Bridge supports PCI notions of LOW and HIGH priority + * arbitration rings via a "REAL_TIME" bit in the per-device + * Bridge register. The "GBR" bit controls access to the GBR + * ring on the xbow. These two bits are (re)set together. + * + * XXX- Bug in Rev B Bridge Si: + * Symptom: Prefetcher starts operating incorrectly. This happens + * due to corruption of the address storage ram in the prefetcher + * when a non-real time PCI request is pulled and a real-time one is + * put in it's place. Workaround: Use only a single arbitration ring + * on PCI bus. GBR and RR can still be uniquely used per + * device. NETLIST MERGE DONE, WILL BE FIXED IN REV C. + */ + + if (pcibr_soft->bs_rev_num != BRIDGE_PART_REV_B) + rtbits |= BRIDGE_DEV_RT; + + /* NOTE- if we ever put DEV_RT or DEV_GBR on + * the disabled list, we will have to take + * it into account here. + */ + + s = pcibr_lock(pcibr_soft); + devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; + if (device_prio == PCI_PRIO_HIGH) { + if ((++*counter == 1)) { + if (rtbits) + devreg |= rtbits; + else + rc = PRIO_FAIL; + } + } else if (device_prio == PCI_PRIO_LOW) { + if (*counter <= 0) + rc = PRIO_FAIL; + else if (--*counter == 0) + if (rtbits) + devreg &= ~rtbits; + } + if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { + bridge_t *bridge = pcibr_soft->bs_base; + + bridge->b_device[pciio_slot].reg = devreg; + pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } + pcibr_unlock(pcibr_soft, s); + + return rc; +} + +pciio_priority_t +pcibr_priority_set(devfs_handle_t pconn_vhdl, + pciio_priority_t device_prio) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + (void) pcibr_priority_bits_set(pcibr_soft, pciio_slot, device_prio); + + return device_prio; +} + +/* + * Interfaces to allow special (e.g. SGI) drivers to set/clear + * Bridge-specific device flags. Many flags are modified through + * PCI-generic interfaces; we don't allow them to be directly + * manipulated here. Only flags that at this point seem pretty + * Bridge-specific can be set through these special interfaces. + * We may add more flags as the need arises, or remove flags and + * create PCI-generic interfaces as the need arises. 
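+ * Each PCIBR_xxx / PCIBR_NOxxx flag pair below simply sets or
+ * clears the corresponding bit in the slot's shadowed Device(x)
+ * register; the hardware register is rewritten only if the
+ * shadow value changes.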
+ * + * Returns 0 on failure, 1 on success + */ +int +pcibr_device_flags_set(devfs_handle_t pconn_vhdl, + pcibr_device_flags_t flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridgereg_t set = 0; + bridgereg_t clr = 0; + + ASSERT((flags & PCIBR_DEVICE_FLAGS) == flags); + + if (flags & PCIBR_WRITE_GATHER) + set |= BRIDGE_DEV_PMU_WRGA_EN; + if (flags & PCIBR_NOWRITE_GATHER) + clr |= BRIDGE_DEV_PMU_WRGA_EN; + + if (flags & PCIBR_WRITE_GATHER) + set |= BRIDGE_DEV_DIR_WRGA_EN; + if (flags & PCIBR_NOWRITE_GATHER) + clr |= BRIDGE_DEV_DIR_WRGA_EN; + + if (flags & PCIBR_PREFETCH) + set |= BRIDGE_DEV_PREF; + if (flags & PCIBR_NOPREFETCH) + clr |= BRIDGE_DEV_PREF; + + if (flags & PCIBR_PRECISE) + set |= BRIDGE_DEV_PRECISE; + if (flags & PCIBR_NOPRECISE) + clr |= BRIDGE_DEV_PRECISE; + + if (flags & PCIBR_BARRIER) + set |= BRIDGE_DEV_BARRIER; + if (flags & PCIBR_NOBARRIER) + clr |= BRIDGE_DEV_BARRIER; + + if (flags & PCIBR_64BIT) + set |= BRIDGE_DEV_DEV_SIZE; + if (flags & PCIBR_NO64BIT) + clr |= BRIDGE_DEV_DEV_SIZE; + + if (set || clr) { + bridgereg_t devreg; + unsigned long s; + + s = pcibr_lock(pcibr_soft); + devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; + devreg = (devreg & ~clr) | set; + if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { + bridge_t *bridge = pcibr_soft->bs_base; + + bridge->b_device[pciio_slot].reg = devreg; + pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } + pcibr_unlock(pcibr_soft, s); +#if DEBUG && PCIBR_DEV_DEBUG + printk("pcibr Device(%d): %R\n", pciio_slot, bridge->b_device[pciio_slot].regbridge->b_device[pciio_slot].reg, device_bits); +#endif + } + return (1); +} + +#ifdef LITTLE_ENDIAN +/* + * on sn-ia we need to twiddle the the addresses going out + * the pci bus because we use the unswizzled synergy space + * (the alternative is to use the swizzled synergy space + * and byte swap the data) + */ +#define CB(b,r) (((volatile uint8_t *) b)[((r)^4)]) +#define CS(b,r) (((volatile uint16_t *) b)[((r^4)/2)]) +#define CW(b,r) (((volatile uint32_t *) b)[((r^4)/4)]) +#else +#define CB(b,r) (((volatile uint8_t *) cfgbase)[(r)^3]) +#define CS(b,r) (((volatile uint16_t *) cfgbase)[((r)/2)^1]) +#define CW(b,r) (((volatile uint32_t *) cfgbase)[(r)/4]) +#endif /* LITTLE_ENDIAN */ + + +LOCAL cfg_p +pcibr_config_addr(devfs_handle_t conn, + unsigned reg) +{ + pcibr_info_t pcibr_info; + pciio_slot_t pciio_slot; + pciio_function_t pciio_func; + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + cfg_p cfgbase = (cfg_p)0; + + pcibr_info = pcibr_info_get(conn); + + pciio_slot = pcibr_info->f_slot; + if (pciio_slot == PCIIO_SLOT_NONE) + pciio_slot = PCI_TYPE1_SLOT(reg); + + pciio_func = pcibr_info->f_func; + if (pciio_func == PCIIO_FUNC_NONE) + pciio_func = PCI_TYPE1_FUNC(reg); + + pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; + + bridge = pcibr_soft->bs_base; + + cfgbase = bridge->b_type0_cfg_dev[pciio_slot].f[pciio_func].l; + + return cfgbase; +} + +uint64_t +pcibr_config_get(devfs_handle_t conn, + unsigned reg, + unsigned size) +{ + return do_pcibr_config_get(pcibr_config_addr(conn, reg), + PCI_TYPE1_REG(reg), size); +} + +LOCAL uint64_t +do_pcibr_config_get( + cfg_p cfgbase, + unsigned reg, + unsigned size) +{ + unsigned value; + + + value = CW(cfgbase, reg); + + if (reg & 3) + value >>= 8 * (reg & 3); + if (size < 4) + value &= (1 << (8 * size)) - 1; + 
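+    /* value now holds the requested byte(s), shifted down from
+     * the aligned 32-bit configuration dword read via CW().
+     */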
+ return value; +} + +void +pcibr_config_set(devfs_handle_t conn, + unsigned reg, + unsigned size, + uint64_t value) +{ + do_pcibr_config_set(pcibr_config_addr(conn, reg), + PCI_TYPE1_REG(reg), size, value); +} + +LOCAL void +do_pcibr_config_set(cfg_p cfgbase, + unsigned reg, + unsigned size, + uint64_t value) +{ + switch (size) { + case 1: + CB(cfgbase, reg) = value; + break; + case 2: + if (reg & 1) { + CB(cfgbase, reg) = value; + CB(cfgbase, reg + 1) = value >> 8; + } else + CS(cfgbase, reg) = value; + break; + case 3: + if (reg & 1) { + CB(cfgbase, reg) = value; + CS(cfgbase, (reg + 1)) = value >> 8; + } else { + CS(cfgbase, reg) = value; + CB(cfgbase, reg + 2) = value >> 16; + } + break; + + case 4: + CW(cfgbase, reg) = value; + break; + } +} + +pciio_provider_t pcibr_provider = +{ + (pciio_piomap_alloc_f *) pcibr_piomap_alloc, + (pciio_piomap_free_f *) pcibr_piomap_free, + (pciio_piomap_addr_f *) pcibr_piomap_addr, + (pciio_piomap_done_f *) pcibr_piomap_done, + (pciio_piotrans_addr_f *) pcibr_piotrans_addr, + (pciio_piospace_alloc_f *) pcibr_piospace_alloc, + (pciio_piospace_free_f *) pcibr_piospace_free, + + (pciio_dmamap_alloc_f *) pcibr_dmamap_alloc, + (pciio_dmamap_free_f *) pcibr_dmamap_free, + (pciio_dmamap_addr_f *) pcibr_dmamap_addr, + (pciio_dmamap_list_f *) pcibr_dmamap_list, + (pciio_dmamap_done_f *) pcibr_dmamap_done, + (pciio_dmatrans_addr_f *) pcibr_dmatrans_addr, + (pciio_dmatrans_list_f *) pcibr_dmatrans_list, + (pciio_dmamap_drain_f *) pcibr_dmamap_drain, + (pciio_dmaaddr_drain_f *) pcibr_dmaaddr_drain, + (pciio_dmalist_drain_f *) pcibr_dmalist_drain, + + (pciio_intr_alloc_f *) pcibr_intr_alloc, + (pciio_intr_free_f *) pcibr_intr_free, + (pciio_intr_connect_f *) pcibr_intr_connect, + (pciio_intr_disconnect_f *) pcibr_intr_disconnect, + (pciio_intr_cpu_get_f *) pcibr_intr_cpu_get, + + (pciio_provider_startup_f *) pcibr_provider_startup, + (pciio_provider_shutdown_f *) pcibr_provider_shutdown, + (pciio_reset_f *) pcibr_reset, + (pciio_write_gather_flush_f *) pcibr_write_gather_flush, + (pciio_endian_set_f *) pcibr_endian_set, + (pciio_priority_set_f *) pcibr_priority_set, + (pciio_config_get_f *) pcibr_config_get, + (pciio_config_set_f *) pcibr_config_set, + + (pciio_error_devenable_f *) 0, + (pciio_error_extract_f *) 0, + +#ifdef LATER + (pciio_driver_reg_callback_f *) pcibr_driver_reg_callback, + (pciio_driver_unreg_callback_f *) pcibr_driver_unreg_callback, +#else + (pciio_driver_reg_callback_f *) 0, + (pciio_driver_unreg_callback_f *) 0, +#endif + (pciio_device_unregister_f *) pcibr_device_unregister, + (pciio_dma_enabled_f *) pcibr_dma_enabled, +}; + +LOCAL pcibr_hints_t +pcibr_hints_get(devfs_handle_t xconn_vhdl, int alloc) +{ + arbitrary_info_t ainfo = 0; + graph_error_t rv; + pcibr_hints_t hint; + + rv = hwgraph_info_get_LBL(xconn_vhdl, INFO_LBL_PCIBR_HINTS, &ainfo); + + if (alloc && (rv != GRAPH_SUCCESS)) { + + NEW(hint); + hint->rrb_alloc_funct = NULL; + hint->ph_intr_bits = NULL; + rv = hwgraph_info_add_LBL(xconn_vhdl, + INFO_LBL_PCIBR_HINTS, + (arbitrary_info_t) hint); + if (rv != GRAPH_SUCCESS) + goto abnormal_exit; + + rv = hwgraph_info_get_LBL(xconn_vhdl, INFO_LBL_PCIBR_HINTS, &ainfo); + + if (rv != GRAPH_SUCCESS) + goto abnormal_exit; + + if (ainfo != (arbitrary_info_t) hint) + goto abnormal_exit; + } + return (pcibr_hints_t) ainfo; + +abnormal_exit: +#ifdef LATER + printf("SHOULD NOT BE HERE\n"); +#endif + DEL(hint); + return(NULL); + +} + +void +pcibr_hints_fix_some_rrbs(devfs_handle_t xconn_vhdl, unsigned mask) +{ + pcibr_hints_t hint = 
pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->ph_rrb_fixed = mask; +#if DEBUG + else + printk("pcibr_hints_fix_rrbs: pcibr_hints_get failed at\n" + "\t%p\n", xconn_vhdl); +#endif +} + +void +pcibr_hints_fix_rrbs(devfs_handle_t xconn_vhdl) +{ + pcibr_hints_fix_some_rrbs(xconn_vhdl, 0xFF); +} + +void +pcibr_hints_dualslot(devfs_handle_t xconn_vhdl, + pciio_slot_t host, + pciio_slot_t guest) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->ph_host_slot[guest] = host + 1; +#if DEBUG + else + printk("pcibr_hints_dualslot: pcibr_hints_get failed at\n" + "\t%p\n", xconn_vhdl); +#endif +} + +void +pcibr_hints_intr_bits(devfs_handle_t xconn_vhdl, + pcibr_intr_bits_f *xxx_intr_bits) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->ph_intr_bits = xxx_intr_bits; +#if DEBUG + else + printk("pcibr_hints_intr_bits: pcibr_hints_get failed at\n" + "\t%p\n", xconn_vhdl); +#endif +} + +void +pcibr_set_rrb_callback(devfs_handle_t xconn_vhdl, rrb_alloc_funct_t rrb_alloc_funct) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->rrb_alloc_funct = rrb_alloc_funct; +} + +void +pcibr_hints_handsoff(devfs_handle_t xconn_vhdl) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->ph_hands_off = 1; +#if DEBUG + else + printk("pcibr_hints_handsoff: pcibr_hints_get failed at\n" + "\t%p\n", xconn_vhdl); +#endif +} + +void +pcibr_hints_subdevs(devfs_handle_t xconn_vhdl, + pciio_slot_t slot, + uint64_t subdevs) +{ + arbitrary_info_t ainfo = 0; + char sdname[16]; + devfs_handle_t pconn_vhdl = GRAPH_VERTEX_NONE; + + sprintf(sdname, "pci/%d", slot); + (void) hwgraph_path_add(xconn_vhdl, sdname, &pconn_vhdl); + if (pconn_vhdl == GRAPH_VERTEX_NONE) { +#if DEBUG + printk("pcibr_hints_subdevs: hwgraph_path_create failed at\n" + "\t%p (seeking %s)\n", xconn_vhdl, sdname); +#endif + return; + } + hwgraph_info_get_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, &ainfo); + if (ainfo == 0) { + uint64_t *subdevp; + + NEW(subdevp); + if (!subdevp) { +#if DEBUG + printk("pcibr_hints_subdevs: subdev ptr alloc failed at\n" + "\t%p\n", pconn_vhdl); +#endif + return; + } + *subdevp = subdevs; + hwgraph_info_add_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, (arbitrary_info_t) subdevp); + hwgraph_info_get_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, &ainfo); + if (ainfo == (arbitrary_info_t) subdevp) + return; + DEL(subdevp); + if (ainfo == (arbitrary_info_t) NULL) { +#if DEBUG + printk("pcibr_hints_subdevs: null subdevs ptr at\n" + "\t%p\n", pconn_vhdl); +#endif + return; + } +#if DEBUG + printk("pcibr_subdevs_get: dup subdev add_LBL at\n" + "\t%p\n", pconn_vhdl); +#endif + } + *(uint64_t *) ainfo = subdevs; +} + + +#ifdef LATER + +#include +#include + +char *pci_space[] = {"NONE", + "ROM", + "IO", + "", + "MEM", + "MEM32", + "MEM64", + "CFG", + "WIN0", + "WIN1", + "WIN2", + "WIN3", + "WIN4", + "WIN5", + "", + "BAD"}; + +void +idbg_pss_func(pcibr_info_h pcibr_infoh, int func) +{ + pcibr_info_t pcibr_info = pcibr_infoh[func]; + char name[MAXDEVNAME]; + int win; + + if (!pcibr_info) + return; + qprintf("Per-slot Function Info\n"); +#ifdef SUPPORT_PRINTING_V_FORMAT + sprintf(name, "%v", pcibr_info->f_vertex); +#endif + qprintf("\tSlot Name : %s\n",name); + qprintf("\tPCI Bus : %d ",pcibr_info->f_bus); + qprintf("Slot : %d ", pcibr_info->f_slot); + qprintf("Function : %d ", pcibr_info->f_func); + qprintf("VendorId : 0x%x " , pcibr_info->f_vendor); + qprintf("DeviceId : 0x%x\n", pcibr_info->f_device); +#ifdef SUPPORT_PRINTING_V_FORMAT + sprintf(name, "%v", 
pcibr_info->f_master); +#endif + qprintf("\tBus provider : %s\n",name); + qprintf("\tProvider Fns : 0x%x ", pcibr_info->f_pops); + qprintf("Error Handler : 0x%x Arg 0x%x\n", + pcibr_info->f_efunc,pcibr_info->f_einfo); + for(win = 0 ; win < 6 ; win++) + qprintf("\tBase Reg #%d space %s base 0x%x size 0x%x\n", + win,pci_space[pcibr_info->f_window[win].w_space], + pcibr_info->f_window[win].w_base, + pcibr_info->f_window[win].w_size); + + qprintf("\tRom base 0x%x size 0x%x\n", + pcibr_info->f_rbase,pcibr_info->f_rsize); + + qprintf("\tInterrupt Bit Map\n"); + qprintf("\t\tPCI Int#\tBridge Pin#\n"); + for (win = 0 ; win < 4; win++) + qprintf("\t\tINT%c\t\t%d\n",win+'A',pcibr_info->f_ibit[win]); + qprintf("\n"); +} + + +void +idbg_pss_info(pcibr_soft_t pcibr_soft, pciio_slot_t slot) +{ + pcibr_soft_slot_t pss; + char slot_conn_name[MAXDEVNAME]; + int func; + + pss = &pcibr_soft->bs_slot[slot]; + qprintf("PCI INFRASTRUCTURAL INFO FOR SLOT %d\n", slot); + qprintf("\tHost Present ? %s ", pss->has_host ? "yes" : "no"); + qprintf("\tHost Slot : %d\n",pss->host_slot); + sprintf(slot_conn_name, "%v", pss->slot_conn); + qprintf("\tSlot Conn : %s\n",slot_conn_name); + qprintf("\t#Functions : %d\n",pss->bss_ninfo); + for (func = 0; func < pss->bss_ninfo; func++) + idbg_pss_func(pss->bss_infos,func); + qprintf("\tSpace : %s ",pci_space[pss->bss_devio.bssd_space]); + qprintf("\tBase : 0x%x ", pss->bss_devio.bssd_base); + qprintf("\tShadow Devreg : 0x%x\n", pss->bss_device); + qprintf("\tUsage counts : pmu %d d32 %d d64 %d\n", + pss->bss_pmu_uctr,pss->bss_d32_uctr,pss->bss_d64_uctr); + + qprintf("\tDirect Trans Info : d64_base 0x%x d64_flags 0x%x" + "d32_base 0x%x d32_flags 0x%x\n", + pss->bss_d64_base, pss->bss_d64_flags, + pss->bss_d32_base, pss->bss_d32_flags); + + qprintf("\tExt ATEs active ? %s", + atomic_read(&pss->bss_ext_ates_active) ? "yes" : "no"); + qprintf(" Command register : 0x%x ", pss->bss_cmd_pointer); + qprintf(" Shadow command val : 0x%x\n", pss->bss_cmd_shadow); + + qprintf("\tRRB Info : Valid %d+%d Reserved %d\n", + pcibr_soft->bs_rrb_valid[slot], + pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + +} + +int ips = 0; + +void +idbg_pss(pcibr_soft_t pcibr_soft) +{ + pciio_slot_t slot; + + + if (ips >= 0 && ips < 8) + idbg_pss_info(pcibr_soft,ips); + else if (ips < 0) + for (slot = 0; slot < 8; slot++) + idbg_pss_info(pcibr_soft,slot); + else + qprintf("Invalid ips %d\n",ips); +} + +#endif /* LATER */ + +int +pcibr_dma_enabled(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + + return xtalk_dma_enabled(pcibr_soft->bs_conn); +} diff -Nru a/arch/ia64/sn/io/sn2/bte_error.c b/arch/ia64/sn/io/sn2/bte_error.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/bte_error.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,190 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000,2002 Silicon Graphics, Inc. All rights reserved. 
+ */ + + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/************************************************************************ + * * + * BTE ERROR RECOVERY * + * * + * Given a BTE error, the node causing the error must do the following: * + * a) Clear all crbs relating to that BTE * + * 1) Read CRBA value for crb in question * + * 2) Mark CRB as VALID, store local physical * + * address known to be good in the address field * + * (bte_notification_targ is a known good local * + * address). * + * 3) Write CRBA * + * 4) Using ICCR, FLUSH the CRB, and wait for it to * + * complete. * + * ... BTE BUSY bit should now be clear (or at least * + * should be after ALL CRBs associated with the * + * transfer are complete. * + * * + * b) Re-enable BTE * + * 1) Write IMEM with BTE Enable + XXX bits + * 2) Write IECLR with BTE clear bits + * 3) Clear IIDSR INT_SENT bits. + * * + ************************************************************************/ + +#ifdef BTE_ERROR +// This routine is not called. Yet. It may be someday. It probably +// *should* be someday. Until then, ifdef it out. +bte_result_t +bte_error_handler(bte_handle_t *bh) +/* + * Function: bte_error_handler + * Purpose: Process a BTE error after a transfer has failed. + * Parameters: bh - bte handle of bte that failed. + * Returns: The BTE error type. + * Notes: + */ +{ + devfs_handle_t hub_v; + hubinfo_t hinfo; + int il; + hubreg_t iidsr, imem, ieclr; + hubreg_t bte_status; + + bh->bh_bte->bte_error_count++; + + /* + * Process any CRB logs - we know that the bte_context contains + * the BTE completion status, but to avoid a race with error + * processing, we force a call to pick up any CRB errors pending. + * After this call, we know that we have any CRB errors related to + * this BTE transfer in the context. + */ + hub_v = cnodeid_to_vertex(bh->bh_bte->bte_cnode); + hubinfo_get(hub_v, &hinfo); + (void)hubiio_crb_error_handler(hub_v, hinfo); + + /* Be sure BTE is stopped */ + + (void)BTE_LOAD(bh->bh_bte->bte_base, BTEOFF_CTRL); + + /* + * Now clear up the rest of the error - be sure to hold crblock + * to avoid race with other cpu on this node. 
+ */ + imem = REMOTE_HUB_L(hinfo->h_nasid, IIO_IMEM); + ieclr = REMOTE_HUB_L(hinfo->h_nasid, IIO_IECLR); + if (bh->bh_bte->bte_num == 0) { + imem |= IIO_IMEM_W0ESD | IIO_IMEM_B0ESD; + ieclr|= IECLR_BTE0; + } else { + imem |= IIO_IMEM_W0ESD | IIO_IMEM_B1ESD; + ieclr|= IECLR_BTE1; + } + + REMOTE_HUB_S(hinfo->h_nasid, IIO_IMEM, imem); + REMOTE_HUB_S(hinfo->h_nasid, IIO_IECLR, ieclr); + + iidsr = REMOTE_HUB_L(hinfo->h_nasid, IIO_IIDSR); + iidsr &= ~IIO_IIDSR_SENT_MASK; + iidsr |= IIO_IIDSR_ENB_MASK; + REMOTE_HUB_S(hinfo->h_nasid, IIO_IIDSR, iidsr); + mutex_spinunlock(&hinfo->h_crblock, il); + + bte_status = BTE_LOAD(bh->bh_bte->bte_base, BTEOFF_STAT); + BTE_STORE(bh->bh_bte->bte_base, BTEOFF_STAT, bte_status & ~IBLS_BUSY); + ASSERT(!BTE_IS_BUSY(BTE_LOAD(bh->bh_bte->bte_base, BTEOFF_STAT))); + + switch(bh->bh_error) { + case IIO_ICRB_ECODE_PERR: + return(BTEFAIL_POISON); + case IIO_ICRB_ECODE_WERR: + return(BTEFAIL_PROT); + case IIO_ICRB_ECODE_AERR: + return(BTEFAIL_ACCESS); + case IIO_ICRB_ECODE_TOUT: + return(BTEFAIL_TOUT); + case IIO_ICRB_ECODE_XTERR: + return(BTEFAIL_ERROR); + case IIO_ICRB_ECODE_DERR: + return(BTEFAIL_DIR); + case IIO_ICRB_ECODE_PWERR: + case IIO_ICRB_ECODE_PRERR: + /* NO BREAK */ + default: + printk("BTE failure (%d) unexpected\n", + bh->bh_error); + return(BTEFAIL_ERROR); + } +} +#endif // BTE_ERROR + +void +bte_crb_error_handler(devfs_handle_t hub_v, int btenum, + int crbnum, ioerror_t *ioe) +/* + * Function: bte_crb_error_handler + * Purpose: Process a CRB for a specific HUB/BTE + * Parameters: hub_v - vertex of hub in HW graph + * btenum - bte number on hub (0 == a, 1 == b) + * crbnum - crb number being processed + * Notes: + * This routine assumes serialization at a higher level. A CRB + * should not be processed more than once. The error recovery + * follows the following sequence - if you change this, be real + * sure about what you are doing. + * + */ +{ + hubinfo_t hinfo; + icrba_t crba; + icrbb_t crbb; + nasid_t n; + + hubinfo_get(hub_v, &hinfo); + + + n = hinfo->h_nasid; + + /* Step 1 */ + crba.ii_icrb0_a_regval = REMOTE_HUB_L(n, IIO_ICRB_A(crbnum)); + crbb.ii_icrb0_b_regval = REMOTE_HUB_L(n, IIO_ICRB_B(crbnum)); + + + /* Zero error and error code to prevent error_dump complaining + * about these CRBs. + */ + crbb.b_error=0; + crbb.b_ecode=0; + + /* Step 2 */ + REMOTE_HUB_S(n, IIO_ICRB_A(crbnum), crba.ii_icrb0_a_regval); + /* Step 3 */ + REMOTE_HUB_S(n, IIO_ICCR, + IIO_ICCR_PENDING | IIO_ICCR_CMD_FLUSH | crbnum); + while (REMOTE_HUB_L(n, IIO_ICCR) & IIO_ICCR_PENDING) + ; +} + diff -Nru a/arch/ia64/sn/io/sn2/ml_SN_intr.c b/arch/ia64/sn/io/sn2/ml_SN_intr.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/ml_SN_intr.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,469 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All Rights Reserved. + */ + +/* + * intr.c- + * This file contains all of the routines necessary to set up and + * handle interrupts on an IPXX board. 
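+ *	On SN2 this means programming the shub II_INT* and
+ *	LOCAL_INT* registers and tracking per-cpu interrupt
+ *	vector usage through the irqpda structures.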
+ */ + +#ident "$Revision: 1.167 $" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern irqpda_t *irqpdaindr[]; +extern cnodeid_t master_node_get(devfs_handle_t vhdl); +extern nasid_t master_nasid; + +// Initialize some shub registers for interrupts, both IO and error. + +void +intr_init_vecblk( nodepda_t *npda, + cnodeid_t node, + int sn) +{ + int nasid = cnodeid_to_nasid(node); + nasid_t console_nasid; + sh_ii_int0_config_u_t ii_int_config; + cpuid_t cpu; + cpuid_t cpu0, cpu1; + nodepda_t *lnodepda; + sh_ii_int0_enable_u_t ii_int_enable; + sh_local_int0_config_u_t local_int_config; + sh_local_int0_enable_u_t local_int_enable; + sh_fsb_system_agent_config_u_t fsb_system_agent; + sh_int_node_id_config_u_t node_id_config; + int is_console; + + console_nasid = get_console_nasid(); + if (console_nasid < 0) { + console_nasid = master_nasid; + } + + is_console = nasid == console_nasid; + + if (is_headless_node(node) ) { + int cnode; + struct ia64_sal_retval ret_stuff; + + // retarget all interrupts on this node to the master node. + node_id_config.sh_int_node_id_config_regval = 0; + node_id_config.sh_int_node_id_config_s.node_id = master_nasid; + node_id_config.sh_int_node_id_config_s.id_sel = 1; + HUB_S( (unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_INT_NODE_ID_CONFIG), + node_id_config.sh_int_node_id_config_regval); + cnode = nasid_to_cnodeid(master_nasid); + lnodepda = NODEPDA(cnode); + cpu = lnodepda->node_first_cpu; + cpu = cpu_physical_id(cpu); + SAL_CALL(ret_stuff, SN_SAL_REGISTER_CE, nasid, cpu, master_nasid,0,0,0,0); + if (ret_stuff.status < 0) { + printk("%s: SN_SAL_REGISTER_CE SAL_CALL failed\n",__FUNCTION__); + } + } else { + lnodepda = NODEPDA(node); + cpu = lnodepda->node_first_cpu; + cpu = cpu_physical_id(cpu); + } + + // Get the physical id's of the cpu's on this node. + cpu0 = id_eid_to_cpu_physical_id(nasid, 0); + cpu1 = id_eid_to_cpu_physical_id(nasid, 1); + + HUB_S( (unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_PI_ERROR_MASK), 0); + HUB_S( (unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_PI_CRBP_ERROR_MASK), 0); + + // The II_INT_CONFIG register for cpu 0. + ii_int_config.sh_ii_int0_config_s.type = 0; + ii_int_config.sh_ii_int0_config_s.agt = 0; + ii_int_config.sh_ii_int0_config_s.pid = cpu0; + ii_int_config.sh_ii_int0_config_s.base = 0; + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_II_INT0_CONFIG), + ii_int_config.sh_ii_int0_config_regval); + + // The II_INT_CONFIG register for cpu 1. + ii_int_config.sh_ii_int0_config_s.type = 0; + ii_int_config.sh_ii_int0_config_s.agt = 0; + ii_int_config.sh_ii_int0_config_s.pid = cpu1; + ii_int_config.sh_ii_int0_config_s.base = 0; + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_II_INT1_CONFIG), + ii_int_config.sh_ii_int0_config_regval); + + // Enable interrupts for II_INT0 and 1. + ii_int_enable.sh_ii_int0_enable_s.ii_enable = 1; + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_II_INT0_ENABLE), + ii_int_enable.sh_ii_int0_enable_regval); + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_II_INT1_ENABLE), + ii_int_enable.sh_ii_int0_enable_regval); + + // init error regs + // LOCAL_INT0 is for the UART only. 
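+	// It is routed to the cpu chosen above with vector
+	// SGI_UART_VECTOR, and is enabled further down only on
+	// the console nasid.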
+ + local_int_config.sh_local_int0_config_s.type = 0; + local_int_config.sh_local_int0_config_s.agt = 0; + local_int_config.sh_local_int0_config_s.pid = cpu; + local_int_config.sh_local_int0_config_s.base = 0; + local_int_config.sh_local_int0_config_s.idx = SGI_UART_VECTOR; + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT0_CONFIG), + local_int_config.sh_local_int0_config_regval); + + // LOCAL_INT1 is for all hardware errors. + // It will send a BERR, which will result in an MCA. + local_int_config.sh_local_int0_config_s.idx = 0; + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT1_CONFIG), + local_int_config.sh_local_int0_config_regval); + + // Clear the LOCAL_INT_ENABLE register. + local_int_enable.sh_local_int0_enable_regval = 0; + + if (is_console) { + // Enable the UART interrupt. Only applies to the console nasid. + local_int_enable.sh_local_int0_enable_s.uart_int = 1; + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT0_ENABLE), + local_int_enable.sh_local_int0_enable_regval); + } + + // Enable all the error interrupts. + local_int_enable.sh_local_int0_enable_s.uart_int = 0; + local_int_enable.sh_local_int0_enable_s.pi_hw_int = 1; + local_int_enable.sh_local_int0_enable_s.md_hw_int = 1; + local_int_enable.sh_local_int0_enable_s.xn_hw_int = 1; + local_int_enable.sh_local_int0_enable_s.lb_hw_int = 1; + local_int_enable.sh_local_int0_enable_s.ii_hw_int = 1; + local_int_enable.sh_local_int0_enable_s.pi_uce_int = 1; + local_int_enable.sh_local_int0_enable_s.md_uce_int = 1; + local_int_enable.sh_local_int0_enable_s.xn_uce_int = 1; + local_int_enable.sh_local_int0_enable_s.system_shutdown_int = 1; + local_int_enable.sh_local_int0_enable_s.l1_nmi_int = 1; + local_int_enable.sh_local_int0_enable_s.stop_clock = 1; + + + // Send BERR, rather than an interrupt, for shub errors. + local_int_config.sh_local_int0_config_s.agt = 1; + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT1_CONFIG), + local_int_config.sh_local_int0_config_regval); + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT1_ENABLE), + local_int_enable.sh_local_int0_enable_regval); + + // Make sure BERR is enabled. + fsb_system_agent.sh_fsb_system_agent_config_regval = + HUB_L( (unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_FSB_SYSTEM_AGENT_CONFIG) ); + fsb_system_agent.sh_fsb_system_agent_config_s.berr_assert_en = 1; + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_FSB_SYSTEM_AGENT_CONFIG), + fsb_system_agent.sh_fsb_system_agent_config_regval); + + // Set LOCAL_INT2 to field CEs + + local_int_enable.sh_local_int0_enable_regval = 0; + + local_int_config.sh_local_int0_config_s.agt = 0; + local_int_config.sh_local_int0_config_s.idx = SGI_SHUB_ERROR_VECTOR; + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT2_CONFIG), + local_int_config.sh_local_int0_config_regval); + + local_int_enable.sh_local_int0_enable_s.pi_ce_int = 1; + local_int_enable.sh_local_int0_enable_s.md_ce_int = 1; + local_int_enable.sh_local_int0_enable_s.xn_ce_int = 1; + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT2_ENABLE), + local_int_enable.sh_local_int0_enable_regval); + + // Make sure all the rest of the LOCAL_INT regs are disabled. 
+ local_int_enable.sh_local_int0_enable_regval = 0; + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT3_ENABLE), + local_int_enable.sh_local_int0_enable_regval); + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT4_ENABLE), + local_int_enable.sh_local_int0_enable_regval); + + HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT5_ENABLE), + local_int_enable.sh_local_int0_enable_regval); + +} + +// (Un)Reserve an irq on this cpu. + +static int +do_intr_reserve_level(cpuid_t cpu, + int bit, + int reserve) +{ + int i; + irqpda_t *irqs = irqpdaindr[cpu]; + + if (reserve) { + if (bit < 0) { + for (i = IA64_SN2_FIRST_DEVICE_VECTOR; i <= IA64_SN2_LAST_DEVICE_VECTOR; i++) { + if (irqs->irq_flags[i] == 0) { + bit = i; + break; + } + } + } + if (bit < 0) { + return -1; + } + if (irqs->irq_flags[bit] & SN2_IRQ_RESERVED) { + return -1; + } else { + irqs->num_irq_used++; + irqs->irq_flags[bit] |= SN2_IRQ_RESERVED; + return bit; + } + } else { + if (irqs->irq_flags[bit] & SN2_IRQ_RESERVED) { + irqs->num_irq_used--; + irqs->irq_flags[bit] &= ~SN2_IRQ_RESERVED; + return bit; + } else { + return -1; + } + } +} + +int +intr_reserve_level(cpuid_t cpu, + int bit, + int resflags, + devfs_handle_t owner_dev, + char *name) +{ + return(do_intr_reserve_level(cpu, bit, 1)); +} + +void +intr_unreserve_level(cpuid_t cpu, + int bit) +{ + (void)do_intr_reserve_level(cpu, bit, 0); +} + +// Mark an irq on this cpu as (dis)connected. + +static int +do_intr_connect_level(cpuid_t cpu, + int bit, + int connect) +{ + irqpda_t *irqs = irqpdaindr[cpu]; + + if (connect) { + if (irqs->irq_flags[bit] & SN2_IRQ_CONNECTED) { + return -1; + } else { + irqs->irq_flags[bit] |= SN2_IRQ_CONNECTED; + return bit; + } + } else { + if (irqs->irq_flags[bit] & SN2_IRQ_CONNECTED) { + irqs->irq_flags[bit] &= ~SN2_IRQ_CONNECTED; + return bit; + } else { + return -1; + } + } + return(bit); +} + +int +intr_connect_level(cpuid_t cpu, + int bit, + ilvl_t is, + intr_func_t intr_prefunc) +{ + return(do_intr_connect_level(cpu, bit, 1)); +} + +int +intr_disconnect_level(cpuid_t cpu, + int bit) +{ + return(do_intr_connect_level(cpu, bit, 0)); +} + +// Choose a cpu on this node. +// We choose the one with the least number of int's assigned to it. + +static cpuid_t +do_intr_cpu_choose(cnodeid_t cnode) { + cpuid_t cpu, best_cpu = CPU_NONE; + int slice, min_count = 1000; + irqpda_t *irqs; + + for (slice = 0; slice < CPUS_PER_NODE; slice++) { + int intrs; + + cpu = cnode_slice_to_cpuid(cnode, slice); + if (cpu == CPU_NONE) { + continue; + } + + if (!cpu_enabled(cpu)) { + continue; + } + + irqs = irqpdaindr[cpu]; + intrs = irqs->num_irq_used; + + if (min_count > intrs) { + min_count = intrs; + best_cpu = cpu; + } + } + return best_cpu; +} + +static cpuid_t +intr_cpu_choose_from_node(cnodeid_t cnode) +{ + return(do_intr_cpu_choose(cnode)); +} + +// See if we can use this cpu/vect. + +static cpuid_t +intr_bit_reserve_test(cpuid_t cpu, + int favor_subnode, + cnodeid_t cnode, + int req_bit, + int resflags, + devfs_handle_t owner_dev, + char *name, + int *resp_bit) +{ + ASSERT( (cpu == CPU_NONE) || (cnode == CNODEID_NONE) ); + + if (cnode != CNODEID_NONE) { + cpu = intr_cpu_choose_from_node(cnode); + } + + if (cpu != CPU_NONE) { + *resp_bit = do_intr_reserve_level(cpu, req_bit, 1); + if (*resp_bit >= 0) { + return(cpu); + } + } + return CPU_NONE; +} + +// Find the node to assign for this interrupt. 
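+// If the bridge already has an error interrupt assigned, the new
+// interrupt must go to that same node; otherwise the master node of
+// the requesting device is tried first, with a round-robin walk over
+// all nodes as a last resort.
+//
+// Usage sketch (hypothetical caller, names not from this file):
+//	int bit = -1;		/* let the heuristic pick a free vector */
+//	cpuid_t cpu = intr_heuristic(dev, 0, bit, 0, owner, "mydev", &bit);
+//	if (cpu == CPU_NONE)
+//		/* no vector could be reserved anywhere */ ;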
+ +cpuid_t +intr_heuristic(devfs_handle_t dev, + device_desc_t dev_desc, + int req_bit, + int resflags, + devfs_handle_t owner_dev, + char *name, + int *resp_bit) +{ + cpuid_t cpuid; + cnodeid_t candidate = -1; + devfs_handle_t pconn_vhdl; + pcibr_soft_t pcibr_soft; + +/* SN2 + pcibr addressing limitation */ +/* Due to this limitation, all interrupts from a given bridge must go to the name node.*/ +/* This limitation does not exist on PIC. */ + + if ( (hwgraph_edge_get(dev, EDGE_LBL_PCI, &pconn_vhdl) == GRAPH_SUCCESS) && + ( (pcibr_soft = pcibr_soft_get(pconn_vhdl) ) != NULL) ) { + if (pcibr_soft->bsi_err_intr) { + candidate = cpuid_to_cnodeid( ((hub_intr_t)pcibr_soft->bsi_err_intr)->i_cpuid); + } + } + + if (candidate >= 0) { + // The node was chosen already when we assigned the error interrupt. + cpuid = intr_bit_reserve_test(CPU_NONE, + 0, + candidate, + req_bit, + 0, + owner_dev, + name, + resp_bit); + } else { + // Need to choose one. Try the controlling c-brick first. + cpuid = intr_bit_reserve_test(CPU_NONE, + 0, + master_node_get(dev), + req_bit, + 0, + owner_dev, + name, + resp_bit); + } + + if (cpuid != CPU_NONE) { + return cpuid; + } + + if (candidate >= 0) { + printk("Cannot target interrupt to target node (%d).\n",candidate); + return CPU_NONE; + } else { + printk("Cannot target interrupt to closest node (%d) 0x%p\n", + master_node_get(dev), (void *)owner_dev); + } + + // We couldn't put it on the closest node. Try to find another one. + // Do a stupid round-robin assignment of the node. + + { + static cnodeid_t last_node = -1; + if (last_node >= numnodes) last_node = 0; + for (candidate = last_node + 1; candidate != last_node; candidate++) { + if (candidate == numnodes) candidate = 0; + cpuid = intr_bit_reserve_test(CPU_NONE, + 0, + candidate, + req_bit, + 0, + owner_dev, + name, + resp_bit); + if (cpuid != CPU_NONE) { + return cpuid; + } + } + } + + printk("cannot target interrupt: 0x%p\n",(void *)owner_dev); + return CPU_NONE; +} diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_ate.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_ate.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_ate.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,454 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef __ia64 +uint64_t atealloc(struct map *mp, size_t size); +void atefree(struct map *mp, size_t size, uint64_t a); +void atemapfree(struct map *mp); +struct map *atemapalloc(uint64_t mapsiz); + +#define rmallocmap atemapalloc +#define rmfreemap atemapfree +#define rmfree atefree +#define rmalloc atealloc +#endif + + +#ifdef LATER +#if (PCIBR_FREEZE_TIME) || PCIBR_ATE_DEBUG +LOCAL struct reg_desc ate_bits[] = +{ + {0xFFFF000000000000ull, -48, "RMF", "%x"}, + {~(IOPGSIZE - 1) & /* may trim off some low bits */ + 0x0000FFFFFFFFF000ull, 0, "XIO", "%x"}, + {0x0000000000000F00ull, -8, "port", "%x"}, + {0x0000000000000010ull, 0, "Barrier"}, + {0x0000000000000008ull, 0, "Prefetch"}, + {0x0000000000000004ull, 0, "Precise"}, + {0x0000000000000002ull, 0, "Coherent"}, + {0x0000000000000001ull, 0, "Valid"}, + {0} +}; +#endif +#endif /* LATER */ + +#ifndef LOCAL +#define LOCAL static +#endif + +/* + * functions + */ +int pcibr_init_ext_ate_ram(bridge_t *); +int pcibr_ate_alloc(pcibr_soft_t, int); +void pcibr_ate_free(pcibr_soft_t, int, int); +bridge_ate_t pcibr_flags_to_ate(unsigned); +bridge_ate_p pcibr_ate_addr(pcibr_soft_t, int); +unsigned ate_freeze(pcibr_dmamap_t pcibr_dmamap, +#if PCIBR_FREEZE_TIME + unsigned *freeze_time_ptr, +#endif + unsigned *cmd_regs); +void ate_write(bridge_ate_p ate_ptr, int ate_count, bridge_ate_t ate); +void ate_thaw(pcibr_dmamap_t pcibr_dmamap, + int ate_index, +#if PCIBR_FREEZE_TIME + bridge_ate_t ate, + int ate_total, + unsigned freeze_time_start, +#endif + unsigned *cmd_regs, + unsigned s); + + +/* Convert from ssram_bits in control register to number of SSRAM entries */ +#define ATE_NUM_ENTRIES(n) _ate_info[n] + +/* Possible choices for number of ATE entries in Bridge's SSRAM */ +LOCAL int _ate_info[] = +{ + 0, /* 0 entries */ + 8 * 1024, /* 8K entries */ + 16 * 1024, /* 16K entries */ + 64 * 1024 /* 64K entries */ +}; + +#define ATE_NUM_SIZES (sizeof(_ate_info) / sizeof(int)) +#define ATE_PROBE_VALUE 0x0123456789abcdefULL + +/* + * Determine the size of this bridge's external mapping SSRAM, and set + * the control register appropriately to reflect this size, and initialize + * the external SSRAM. + */ +int +pcibr_init_ext_ate_ram(bridge_t *bridge) +{ + int largest_working_size = 0; + int num_entries, entry; + int i, j; + bridgereg_t old_enable, new_enable; + int s; + + /* Probe SSRAM to determine its size. */ + old_enable = bridge->b_int_enable; + new_enable = old_enable & ~BRIDGE_IMR_PCI_MST_TIMEOUT; + bridge->b_int_enable = new_enable; + + for (i = 1; i < ATE_NUM_SIZES; i++) { + /* Try writing a value */ + bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(i) - 1] = ATE_PROBE_VALUE; + + /* Guard against wrap */ + for (j = 1; j < i; j++) + bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(j) - 1] = 0; + + /* See if value was written */ + if (bridge->b_ext_ate_ram[ATE_NUM_ENTRIES(i) - 1] == ATE_PROBE_VALUE) + largest_working_size = i; + } + bridge->b_int_enable = old_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + + /* + * ensure that we write and read without any interruption. 
+ * The read following the write is required for the Bridge war + */ + + s = splhi(); + bridge->b_wid_control = (bridge->b_wid_control + & ~BRIDGE_CTRL_SSRAM_SIZE_MASK) + | BRIDGE_CTRL_SSRAM_SIZE(largest_working_size); + bridge->b_wid_control; /* inval addr bug war */ + splx(s); + + num_entries = ATE_NUM_ENTRIES(largest_working_size); + +#if PCIBR_ATE_DEBUG + if (num_entries) + printk("bridge at 0x%x: clearing %d external ATEs\n", bridge, num_entries); + else + printk("bridge at 0x%x: no external ATE RAM found\n", bridge); +#endif + + /* Initialize external mapping entries */ + for (entry = 0; entry < num_entries; entry++) + bridge->b_ext_ate_ram[entry] = 0; + + return (num_entries); +} + +/* + * Allocate "count" contiguous Bridge Address Translation Entries + * on the specified bridge to be used for PCI to XTALK mappings. + * Indices in rm map range from 1..num_entries. Indicies returned + * to caller range from 0..num_entries-1. + * + * Return the start index on success, -1 on failure. + */ +int +pcibr_ate_alloc(pcibr_soft_t pcibr_soft, int count) +{ + int index = 0; + + index = (int) rmalloc(pcibr_soft->bs_int_ate_map, (size_t) count); + + if (!index && pcibr_soft->bs_ext_ate_map) + index = (int) rmalloc(pcibr_soft->bs_ext_ate_map, (size_t) count); + + /* rmalloc manages resources in the 1..n + * range, with 0 being failure. + * pcibr_ate_alloc manages resources + * in the 0..n-1 range, with -1 being failure. + */ + return index - 1; +} + +void +pcibr_ate_free(pcibr_soft_t pcibr_soft, int index, int count) +/* Who says there's no such thing as a free meal? :-) */ +{ + /* note the "+1" since rmalloc handles 1..n but + * we start counting ATEs at zero. + */ + rmfree((index < pcibr_soft->bs_int_ate_size) + ? pcibr_soft->bs_int_ate_map + : pcibr_soft->bs_ext_ate_map, + count, index + 1); +} + +/* + * Convert PCI-generic software flags and Bridge-specific software flags + * into Bridge-specific Address Translation Entry attribute bits. + */ +bridge_ate_t +pcibr_flags_to_ate(unsigned flags) +{ + bridge_ate_t attributes; + + /* default if nothing specified: + * NOBARRIER + * NOPREFETCH + * NOPRECISE + * COHERENT + * Plus the valid bit + */ + attributes = ATE_CO | ATE_V; + + /* Generic macro flags + */ + if (flags & PCIIO_DMA_DATA) { /* standard data channel */ + attributes &= ~ATE_BAR; /* no barrier */ + attributes |= ATE_PREF; /* prefetch on */ + } + if (flags & PCIIO_DMA_CMD) { /* standard command channel */ + attributes |= ATE_BAR; /* barrier bit on */ + attributes &= ~ATE_PREF; /* disable prefetch */ + } + /* Generic detail flags + */ + if (flags & PCIIO_PREFETCH) + attributes |= ATE_PREF; + if (flags & PCIIO_NOPREFETCH) + attributes &= ~ATE_PREF; + + /* Provider-specific flags + */ + if (flags & PCIBR_BARRIER) + attributes |= ATE_BAR; + if (flags & PCIBR_NOBARRIER) + attributes &= ~ATE_BAR; + + if (flags & PCIBR_PREFETCH) + attributes |= ATE_PREF; + if (flags & PCIBR_NOPREFETCH) + attributes &= ~ATE_PREF; + + if (flags & PCIBR_PRECISE) + attributes |= ATE_PREC; + if (flags & PCIBR_NOPRECISE) + attributes &= ~ATE_PREC; + + return (attributes); +} + +/* + * Setup an Address Translation Entry as specified. Use either the Bridge + * internal maps or the external map RAM, as appropriate. + */ +bridge_ate_p +pcibr_ate_addr(pcibr_soft_t pcibr_soft, + int ate_index) +{ + bridge_t *bridge = pcibr_soft->bs_base; + + return (ate_index < pcibr_soft->bs_int_ate_size) + ? 
&(bridge->b_int_ate_ram[ate_index].wr) + : &(bridge->b_ext_ate_ram[ate_index]); +} + +/* We are starting to get more complexity + * surrounding writing ATEs, so pull + * the writing code into this new function. + */ + +#if PCIBR_FREEZE_TIME +#define ATE_FREEZE() s = ate_freeze(pcibr_dmamap, &freeze_time, cmd_regs) +#else +#define ATE_FREEZE() s = ate_freeze(pcibr_dmamap, cmd_regs) +#endif + +unsigned +ate_freeze(pcibr_dmamap_t pcibr_dmamap, +#if PCIBR_FREEZE_TIME + unsigned *freeze_time_ptr, +#endif + unsigned *cmd_regs) +{ + pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; +#ifdef LATER + int dma_slot = pcibr_dmamap->bd_slot; +#endif + int ext_ates = pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM; + int slot; + + unsigned long s; + unsigned cmd_reg; + volatile unsigned *cmd_lwa; + unsigned cmd_lwd; + + if (!ext_ates) + return 0; + + /* Bridge Hardware Bug WAR #484930: + * Bridge can't handle updating External ATEs + * while DMA is occuring that uses External ATEs, + * even if the particular ATEs involved are disjoint. + */ + + /* need to prevent anyone else from + * unfreezing the grant while we + * are working; also need to prevent + * this thread from being interrupted + * to keep PCI grant freeze time + * at an absolute minimum. + */ + s = pcibr_lock(pcibr_soft); + +#ifdef LATER + /* just in case pcibr_dmamap_done was not called */ + if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_BUSY) { + pcibr_dmamap->bd_flags &= ~PCIBR_DMAMAP_BUSY; + if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM) + atomic_dec(&(pcibr_soft->bs_slot[dma_slot]. bss_ext_ates_active)); + xtalk_dmamap_done(pcibr_dmamap->bd_xtalk); + } +#endif /* LATER */ +#if PCIBR_FREEZE_TIME + *freeze_time_ptr = get_timestamp(); +#endif + + cmd_lwa = 0; + for (slot = 0; slot < 8; ++slot) + if (atomic_read(&pcibr_soft->bs_slot[slot].bss_ext_ates_active)) { + cmd_reg = pcibr_soft-> + bs_slot[slot]. + bss_cmd_shadow; + if (cmd_reg & PCI_CMD_BUS_MASTER) { + cmd_lwa = pcibr_soft-> + bs_slot[slot]. + bss_cmd_pointer; + cmd_lwd = cmd_reg ^ PCI_CMD_BUS_MASTER; + cmd_lwa[0] = cmd_lwd; + } + cmd_regs[slot] = cmd_reg; + } else + cmd_regs[slot] = 0; + + if (cmd_lwa) { + bridge_t *bridge = pcibr_soft->bs_base; + + /* Read the last master bit that has been cleared. This PIO read + * on the PCI bus is to ensure the completion of any DMAs that + * are due to bus requests issued by PCI devices before the + * clearing of master bits. + */ + cmd_lwa[0]; + + /* Flush all the write buffers in the bridge */ + for (slot = 0; slot < 8; ++slot) + if (atomic_read(&pcibr_soft->bs_slot[slot].bss_ext_ates_active)) { + /* Flush the write buffer associated with this + * PCI device which might be using dma map RAM. 
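+ * (Reading bridge->b_wr_req_buf[slot].reg is the usual Bridge idiom
+ * for draining that slot's posted-write buffer; the value read back
+ * is simply discarded.)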
+ */ + bridge->b_wr_req_buf[slot].reg; + } + } + return s; +} + +#define ATE_WRITE() ate_write(ate_ptr, ate_count, ate) + +void +ate_write(bridge_ate_p ate_ptr, + int ate_count, + bridge_ate_t ate) +{ + while (ate_count-- > 0) { + *ate_ptr++ = ate; + ate += IOPGSIZE; + } +} + +#if PCIBR_FREEZE_TIME +#define ATE_THAW() ate_thaw(pcibr_dmamap, ate_index, ate, ate_total, freeze_time, cmd_regs, s) +#else +#define ATE_THAW() ate_thaw(pcibr_dmamap, ate_index, cmd_regs, s) +#endif + +void +ate_thaw(pcibr_dmamap_t pcibr_dmamap, + int ate_index, +#if PCIBR_FREEZE_TIME + bridge_ate_t ate, + int ate_total, + unsigned freeze_time_start, +#endif + unsigned *cmd_regs, + unsigned s) +{ + pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; + int dma_slot = pcibr_dmamap->bd_slot; + int slot; + bridge_t *bridge = pcibr_soft->bs_base; + int ext_ates = pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM; + + unsigned cmd_reg; + +#if PCIBR_FREEZE_TIME + unsigned freeze_time; + static unsigned max_freeze_time = 0; + static unsigned max_ate_total; +#endif + + if (!ext_ates) + return; + + /* restore cmd regs */ + for (slot = 0; slot < 8; ++slot) + if ((cmd_reg = cmd_regs[slot]) & PCI_CMD_BUS_MASTER) + bridge->b_type0_cfg_dev[slot].l[PCI_CFG_COMMAND / 4] = cmd_reg; + + pcibr_dmamap->bd_flags |= PCIBR_DMAMAP_BUSY; + atomic_inc(&(pcibr_soft->bs_slot[dma_slot]. bss_ext_ates_active)); + +#if PCIBR_FREEZE_TIME + freeze_time = get_timestamp() - freeze_time_start; + + if ((max_freeze_time < freeze_time) || + (max_ate_total < ate_total)) { + if (max_freeze_time < freeze_time) + max_freeze_time = freeze_time; + if (max_ate_total < ate_total) + max_ate_total = ate_total; + pcibr_unlock(pcibr_soft, s); + printk("%s: pci freeze time %d usec for %d ATEs\n" + "\tfirst ate: %R\n", + pcibr_soft->bs_name, + freeze_time * 1000 / 1250, + ate_total, + ate, ate_bits); + } else +#endif + pcibr_unlock(pcibr_soft, s); +} diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_config.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_config.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_config.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,143 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern pcibr_info_t pcibr_info_get(devfs_handle_t); + +uint64_t pcibr_config_get(devfs_handle_t, unsigned, unsigned); +uint64_t do_pcibr_config_get(cfg_p, unsigned, unsigned); +void pcibr_config_set(devfs_handle_t, unsigned, unsigned, uint64_t); +void do_pcibr_config_set(cfg_p, unsigned, unsigned, uint64_t); + +#define CB(b,r) (((volatile uint8_t *) cfgbase)[(r)^3]) +#define CS(b,r) (((volatile uint16_t *) cfgbase)[((r)/2)^1]) +#define CW(b,r) (((volatile uint32_t *) cfgbase)[(r)/4]) + + +cfg_p +pcibr_config_addr(devfs_handle_t conn, + unsigned reg) +{ + pcibr_info_t pcibr_info; + pciio_slot_t pciio_slot; + pciio_function_t pciio_func; + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + cfg_p cfgbase = (cfg_p)0; + + pcibr_info = pcibr_info_get(conn); + + pciio_slot = pcibr_info->f_slot; + if (pciio_slot == PCIIO_SLOT_NONE) + pciio_slot = PCI_TYPE1_SLOT(reg); + + pciio_func = pcibr_info->f_func; + if (pciio_func == PCIIO_FUNC_NONE) + pciio_func = PCI_TYPE1_FUNC(reg); + + pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; + + bridge = pcibr_soft->bs_base; + + cfgbase = bridge->b_type0_cfg_dev[pciio_slot].f[pciio_func].l; + + return cfgbase; +} + +uint64_t +pcibr_config_get(devfs_handle_t conn, + unsigned reg, + unsigned size) +{ + return do_pcibr_config_get(pcibr_config_addr(conn, reg), + PCI_TYPE1_REG(reg), size); +} + +uint64_t +do_pcibr_config_get( + cfg_p cfgbase, + unsigned reg, + unsigned size) +{ + unsigned value; + + value = CW(cfgbase, reg); + + if (reg & 3) + value >>= 8 * (reg & 3); + if (size < 4) + value &= (1 << (8 * size)) - 1; + return value; +} + +void +pcibr_config_set(devfs_handle_t conn, + unsigned reg, + unsigned size, + uint64_t value) +{ + do_pcibr_config_set(pcibr_config_addr(conn, reg), + PCI_TYPE1_REG(reg), size, value); +} + +void +do_pcibr_config_set(cfg_p cfgbase, + unsigned reg, + unsigned size, + uint64_t value) +{ + switch (size) { + case 1: + CB(cfgbase, reg) = value; + break; + case 2: + if (reg & 1) { + CB(cfgbase, reg) = value; + CB(cfgbase, reg + 1) = value >> 8; + } else + CS(cfgbase, reg) = value; + break; + case 3: + if (reg & 1) { + CB(cfgbase, reg) = value; + CS(cfgbase, (reg + 1)) = value >> 8; + } else { + CS(cfgbase, reg) = value; + CB(cfgbase, reg + 2) = value >> 16; + } + break; + + case 4: + CW(cfgbase, reg) = value; + break; + } +} diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_dvr.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_dvr.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_dvr.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,4279 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef __ia64 +#define rmallocmap atemapalloc +#define rmfreemap atemapfree +#define rmfree atefree +#define rmalloc atealloc +#endif + +/* + * Macros related to the Lucent USS 302/312 usb timeout workaround. 
It + * appears that if the lucent part can get into a retry loop if it sees a + * DAC on the bus during a pio read retry. The loop is broken after about + * 1ms, so we need to set up bridges holding this part to allow at least + * 1ms for pio. + */ + +#define USS302_TIMEOUT_WAR + +#ifdef USS302_TIMEOUT_WAR +#define LUCENT_USBHC_VENDOR_ID_NUM 0x11c1 +#define LUCENT_USBHC302_DEVICE_ID_NUM 0x5801 +#define LUCENT_USBHC312_DEVICE_ID_NUM 0x5802 +#define USS302_BRIDGE_TIMEOUT_HLD 4 +#endif + +int pcibr_devflag = D_MP; + +/* + * This is the file operation table for the pcibr driver. + * As each of the functions are implemented, put the + * appropriate function name below. + */ +struct file_operations pcibr_fops = { + owner: THIS_MODULE, + llseek: NULL, + read: NULL, + write: NULL, + readdir: NULL, + poll: NULL, + ioctl: NULL, + mmap: NULL, + open: NULL, + flush: NULL, + release: NULL, + fsync: NULL, + fasync: NULL, + lock: NULL, + readv: NULL, + writev: NULL +}; + +#ifdef LATER + +#if PCIBR_ATE_DEBUG +static struct reg_values ssram_sizes[] = +{ + {BRIDGE_CTRL_SSRAM_512K, "512k"}, + {BRIDGE_CTRL_SSRAM_128K, "128k"}, + {BRIDGE_CTRL_SSRAM_64K, "64k"}, + {BRIDGE_CTRL_SSRAM_1K, "1k"}, + {0} +}; + +static struct reg_desc control_bits[] = +{ + {BRIDGE_CTRL_FLASH_WR_EN, 0, "FLASH_WR_EN"}, + {BRIDGE_CTRL_EN_CLK50, 0, "EN_CLK50"}, + {BRIDGE_CTRL_EN_CLK40, 0, "EN_CLK40"}, + {BRIDGE_CTRL_EN_CLK33, 0, "EN_CLK33"}, + {BRIDGE_CTRL_RST_MASK, -24, "RST", "%x"}, + {BRIDGE_CTRL_IO_SWAP, 0, "IO_SWAP"}, + {BRIDGE_CTRL_MEM_SWAP, 0, "MEM_SWAP"}, + {BRIDGE_CTRL_PAGE_SIZE, 0, "PAGE_SIZE"}, + {BRIDGE_CTRL_SS_PAR_BAD, 0, "SS_PAR_BAD"}, + {BRIDGE_CTRL_SS_PAR_EN, 0, "SS_PAR_EN"}, + {BRIDGE_CTRL_SSRAM_SIZE_MASK, 0, "SSRAM_SIZE", 0, ssram_sizes}, + {BRIDGE_CTRL_F_BAD_PKT, 0, "F_BAD_PKT"}, + {BRIDGE_CTRL_LLP_XBAR_CRD_MASK, -12, "LLP_XBAR_CRD", "%d"}, + {BRIDGE_CTRL_CLR_RLLP_CNT, 0, "CLR_RLLP_CNT"}, + {BRIDGE_CTRL_CLR_TLLP_CNT, 0, "CLR_TLLP_CNT"}, + {BRIDGE_CTRL_SYS_END, 0, "SYS_END"}, + + {BRIDGE_CTRL_BUS_SPEED_MASK, -4, "BUS_SPEED", "%d"}, + {BRIDGE_CTRL_WIDGET_ID_MASK, 0, "WIDGET_ID", "%x"}, + {0} +}; +#endif +#endif /* LATER */ + +/* kbrick widgetnum-to-bus layout */ +int p_busnum[MAX_PORT_NUM] = { /* widget# */ + 0, 0, 0, 0, 0, 0, 0, 0, /* 0x0 - 0x7 */ + 2, /* 0x8 */ + 1, /* 0x9 */ + 0, 0, /* 0xa - 0xb */ + 5, /* 0xc */ + 6, /* 0xd */ + 4, /* 0xe */ + 3, /* 0xf */ +}; + +/* + * Additional PIO spaces per slot are + * recorded in this structure. 
+ */ +struct pciio_piospace_s { + pciio_piospace_t next; /* another space for this device */ + char free; /* 1 if free, 0 if in use */ + pciio_space_t space; /* Which space is in use */ + iopaddr_t start; /* Starting address of the PIO space */ + size_t count; /* size of PIO space */ +}; + +#if PCIBR_SOFT_LIST +pcibr_list_p pcibr_list = 0; +#endif + +extern int hwgraph_vertex_name_get(devfs_handle_t vhdl, char *buf, uint buflen); +extern int hub_device_flags_set(devfs_handle_t widget_dev, hub_widget_flags_t flags); +extern long atoi(register char *p); +extern cnodeid_t nodevertex_to_cnodeid(devfs_handle_t vhdl); +extern void *swap_ptr(void **loc, void *new); +extern char *dev_to_name(devfs_handle_t dev, char *buf, uint buflen); +extern struct map *atemapalloc(uint64_t); +extern void atefree(struct map *, size_t, uint64_t); +extern void atemapfree(struct map *); +extern pciio_dmamap_t get_free_pciio_dmamap(devfs_handle_t); +extern void free_pciio_dmamap(pcibr_dmamap_t); + +#define ATE_WRITE() ate_write(ate_ptr, ate_count, ate) +#if PCIBR_FREEZE_TIME +#define ATE_FREEZE() s = ate_freeze(pcibr_dmamap, &freeze_time, cmd_regs) +#else +#define ATE_FREEZE() s = ate_freeze(pcibr_dmamap, cmd_regs) +#endif /* PCIBR_FREEZE_TIME */ + +#if PCIBR_FREEZE_TIME +#define ATE_THAW() ate_thaw(pcibr_dmamap, ate_index, ate, ate_total, freeze_time, cmd_regs, s) +#else +#define ATE_THAW() ate_thaw(pcibr_dmamap, ate_index, cmd_regs, s) +#endif + + +/* ===================================================================== + * Function Table of Contents + * + * The order of functions in this file has stopped + * making much sense. We might want to take a look + * at it some time and bring back some sanity, or + * perhaps bust this file into smaller chunks. + */ + +extern void do_pcibr_rrb_clear(bridge_t *, int); +extern void do_pcibr_rrb_flush(bridge_t *, int); +extern int do_pcibr_rrb_count_valid(bridge_t *, pciio_slot_t); +extern int do_pcibr_rrb_count_avail(bridge_t *, pciio_slot_t); +extern int do_pcibr_rrb_alloc(bridge_t *, pciio_slot_t, int); +extern int do_pcibr_rrb_free(bridge_t *, pciio_slot_t, int); + +extern void do_pcibr_rrb_autoalloc(pcibr_soft_t, int, int); + +extern int pcibr_wrb_flush(devfs_handle_t); +extern int pcibr_rrb_alloc(devfs_handle_t, int *, int *); +extern int pcibr_rrb_check(devfs_handle_t, int *, int *, int *, int *); +extern int pcibr_alloc_all_rrbs(devfs_handle_t, int, int, int, int, int, int, int, int, int); +extern void pcibr_rrb_flush(devfs_handle_t); + +static int pcibr_try_set_device(pcibr_soft_t, pciio_slot_t, unsigned, bridgereg_t); +void pcibr_release_device(pcibr_soft_t, pciio_slot_t, bridgereg_t); + +extern void pcibr_clearwidint(bridge_t *); +extern void pcibr_setwidint(xtalk_intr_t); + +void pcibr_init(void); +int pcibr_attach(devfs_handle_t); +int pcibr_detach(devfs_handle_t); +int pcibr_open(devfs_handle_t *, int, int, cred_t *); +int pcibr_close(devfs_handle_t, int, int, cred_t *); +int pcibr_map(devfs_handle_t, vhandl_t *, off_t, size_t, uint); +int pcibr_unmap(devfs_handle_t, vhandl_t *); +int pcibr_ioctl(devfs_handle_t, int, void *, int, struct cred *, int *); + +void pcibr_freeblock_sub(iopaddr_t *, iopaddr_t *, iopaddr_t, size_t); + +extern int pcibr_init_ext_ate_ram(bridge_t *); +extern int pcibr_ate_alloc(pcibr_soft_t, int); +extern void pcibr_ate_free(pcibr_soft_t, int, int); + +extern unsigned ate_freeze(pcibr_dmamap_t pcibr_dmamap, +#if PCIBR_FREEZE_TIME + unsigned *freeze_time_ptr, +#endif + unsigned *cmd_regs); +extern void ate_write(bridge_ate_p ate_ptr, int 
ate_count, bridge_ate_t ate); +extern void ate_thaw(pcibr_dmamap_t pcibr_dmamap, int ate_index, +#if PCIBR_FREEZE_TIME + bridge_ate_t ate, + int ate_total, + unsigned freeze_time_start, +#endif + unsigned *cmd_regs, + unsigned s); + +pcibr_info_t pcibr_info_get(devfs_handle_t); + +static iopaddr_t pcibr_addr_pci_to_xio(devfs_handle_t, pciio_slot_t, pciio_space_t, iopaddr_t, size_t, unsigned); + +pcibr_piomap_t pcibr_piomap_alloc(devfs_handle_t, device_desc_t, pciio_space_t, iopaddr_t, size_t, size_t, unsigned); +void pcibr_piomap_free(pcibr_piomap_t); +caddr_t pcibr_piomap_addr(pcibr_piomap_t, iopaddr_t, size_t); +void pcibr_piomap_done(pcibr_piomap_t); +caddr_t pcibr_piotrans_addr(devfs_handle_t, device_desc_t, pciio_space_t, iopaddr_t, size_t, unsigned); +iopaddr_t pcibr_piospace_alloc(devfs_handle_t, device_desc_t, pciio_space_t, size_t, size_t); +void pcibr_piospace_free(devfs_handle_t, pciio_space_t, iopaddr_t, size_t); + +static iopaddr_t pcibr_flags_to_d64(unsigned, pcibr_soft_t); +extern bridge_ate_t pcibr_flags_to_ate(unsigned); + +pcibr_dmamap_t pcibr_dmamap_alloc(devfs_handle_t, device_desc_t, size_t, unsigned); +void pcibr_dmamap_free(pcibr_dmamap_t); +extern bridge_ate_p pcibr_ate_addr(pcibr_soft_t, int); +static iopaddr_t pcibr_addr_xio_to_pci(pcibr_soft_t, iopaddr_t, size_t); +iopaddr_t pcibr_dmamap_addr(pcibr_dmamap_t, paddr_t, size_t); +alenlist_t pcibr_dmamap_list(pcibr_dmamap_t, alenlist_t, unsigned); +void pcibr_dmamap_done(pcibr_dmamap_t); +cnodeid_t pcibr_get_dmatrans_node(devfs_handle_t); +iopaddr_t pcibr_dmatrans_addr(devfs_handle_t, device_desc_t, paddr_t, size_t, unsigned); +alenlist_t pcibr_dmatrans_list(devfs_handle_t, device_desc_t, alenlist_t, unsigned); +void pcibr_dmamap_drain(pcibr_dmamap_t); +void pcibr_dmaaddr_drain(devfs_handle_t, paddr_t, size_t); +void pcibr_dmalist_drain(devfs_handle_t, alenlist_t); +iopaddr_t pcibr_dmamap_pciaddr_get(pcibr_dmamap_t); + +extern unsigned pcibr_intr_bits(pciio_info_t info, pciio_intr_line_t lines); +extern pcibr_intr_t pcibr_intr_alloc(devfs_handle_t, device_desc_t, pciio_intr_line_t, devfs_handle_t); +extern void pcibr_intr_free(pcibr_intr_t); +extern void pcibr_setpciint(xtalk_intr_t); +extern int pcibr_intr_connect(pcibr_intr_t); +extern void pcibr_intr_disconnect(pcibr_intr_t); + +extern devfs_handle_t pcibr_intr_cpu_get(pcibr_intr_t); +extern void pcibr_xintr_preset(void *, int, xwidgetnum_t, iopaddr_t, xtalk_intr_vector_t); +extern void pcibr_intr_func(intr_arg_t); + +extern void print_bridge_errcmd(uint32_t, char *); + +extern void pcibr_error_dump(pcibr_soft_t); +extern uint32_t pcibr_errintr_group(uint32_t); +extern void pcibr_pioerr_check(pcibr_soft_t); +extern void pcibr_error_intr_handler(intr_arg_t); + +extern int pcibr_addr_toslot(pcibr_soft_t, iopaddr_t, pciio_space_t *, iopaddr_t *, pciio_function_t *); +extern void pcibr_error_cleanup(pcibr_soft_t, int); +extern void pcibr_device_disable(pcibr_soft_t, int); +extern int pcibr_pioerror(pcibr_soft_t, int, ioerror_mode_t, ioerror_t *); +extern int pcibr_dmard_error(pcibr_soft_t, int, ioerror_mode_t, ioerror_t *); +extern int pcibr_dmawr_error(pcibr_soft_t, int, ioerror_mode_t, ioerror_t *); +extern int pcibr_error_handler(error_handler_arg_t, int, ioerror_mode_t, ioerror_t *); +extern int pcibr_error_devenable(devfs_handle_t, int); + +void pcibr_provider_startup(devfs_handle_t); +void pcibr_provider_shutdown(devfs_handle_t); + +int pcibr_reset(devfs_handle_t); +pciio_endian_t pcibr_endian_set(devfs_handle_t, pciio_endian_t, pciio_endian_t); +int 
pcibr_priority_bits_set(pcibr_soft_t, pciio_slot_t, pciio_priority_t); +pciio_priority_t pcibr_priority_set(devfs_handle_t, pciio_priority_t); +int pcibr_device_flags_set(devfs_handle_t, pcibr_device_flags_t); + +extern cfg_p pcibr_config_addr(devfs_handle_t, unsigned); +extern uint64_t pcibr_config_get(devfs_handle_t, unsigned, unsigned); +extern void pcibr_config_set(devfs_handle_t, unsigned, unsigned, uint64_t); +extern void do_pcibr_config_set(cfg_p, unsigned, unsigned, uint64_t); + +extern pcibr_hints_t pcibr_hints_get(devfs_handle_t, int); +extern void pcibr_hints_fix_rrbs(devfs_handle_t); +extern void pcibr_hints_dualslot(devfs_handle_t, pciio_slot_t, pciio_slot_t); +extern void pcibr_hints_intr_bits(devfs_handle_t, pcibr_intr_bits_f *); +extern void pcibr_set_rrb_callback(devfs_handle_t, rrb_alloc_funct_t); +extern void pcibr_hints_handsoff(devfs_handle_t); +extern void pcibr_hints_subdevs(devfs_handle_t, pciio_slot_t, uint64_t); + +#ifdef BRIDGE_B_DATACORR_WAR +extern int ql_bridge_rev_b_war(devfs_handle_t); +extern int bridge_rev_b_data_check_disable; +char *rev_b_datacorr_warning = +"***************************** WARNING! ******************************\n"; +char *rev_b_datacorr_mesg = +"UNRECOVERABLE IO LINK ERROR. CONTACT SERVICE PROVIDER\n"; +#endif + +extern int pcibr_slot_reset(devfs_handle_t,pciio_slot_t); +extern int pcibr_slot_info_init(devfs_handle_t,pciio_slot_t); +extern int pcibr_slot_info_free(devfs_handle_t,pciio_slot_t); +extern int pcibr_slot_addr_space_init(devfs_handle_t,pciio_slot_t); +extern int pcibr_slot_device_init(devfs_handle_t, pciio_slot_t); +extern int pcibr_slot_guest_info_init(devfs_handle_t,pciio_slot_t); +extern int pcibr_slot_call_device_attach(devfs_handle_t, pciio_slot_t, int); +extern int pcibr_slot_call_device_detach(devfs_handle_t, pciio_slot_t, int); +extern int pcibr_slot_attach(devfs_handle_t, pciio_slot_t, int, char *, int *); +extern int pcibr_slot_detach(devfs_handle_t, pciio_slot_t, int); +extern int pcibr_is_slot_sys_critical(devfs_handle_t, pciio_slot_t); + +#ifdef LATER +extern int pcibr_slot_startup(devfs_handle_t, pcibr_slot_req_t); +extern int pcibr_slot_shutdown(devfs_handle_t, pcibr_slot_req_t); +extern int pcibr_slot_query(devfs_handle_t, pcibr_slot_req_t); +#endif + +extern int pcibr_slot_initial_rrb_alloc(devfs_handle_t, pciio_slot_t); +extern int pcibr_initial_rrb(devfs_handle_t, pciio_slot_t, pciio_slot_t); + + + +/* ===================================================================== + * Device(x) register management + */ + +/* pcibr_try_set_device: attempt to modify Device(x) + * for the specified slot on the specified bridge + * as requested in flags, limited to the specified + * bits. Returns which BRIDGE bits were in conflict, + * or ZERO if everything went OK. + * + * Caller MUST hold pcibr_lock when calling this function. 
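+ *
+ * Conflict detection is a bit-mask exercise; roughly (an illustrative
+ * summary of the code below, not a second copy of it):
+ *
+ *	chg = old ^ new		bits this caller wants to flip
+ *	chg &= xmask		...restricted to the bits it manages
+ *	bad = chg & (bits other active PMU/D32/D64 users depend on)
+ *
+ * Conflicts on the PRECISE and BARRIER bits can be resolved by leaving
+ * those bits on, and conflicts on the WRITE_GATHER and PREFETCH bits
+ * by leaving them off; anything still left in "bad" is returned and
+ * the new DMA channel is refused.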
+ */ +static int +pcibr_try_set_device(pcibr_soft_t pcibr_soft, + pciio_slot_t slot, + unsigned flags, + bridgereg_t mask) +{ + bridge_t *bridge; + pcibr_soft_slot_t slotp; + bridgereg_t old; + bridgereg_t new; + bridgereg_t chg; + bridgereg_t bad; + bridgereg_t badpmu; + bridgereg_t badd32; + bridgereg_t badd64; + bridgereg_t fix; + unsigned long s; + bridgereg_t xmask; + + xmask = mask; + if (pcibr_soft->bs_xbridge) { + if (mask == BRIDGE_DEV_PMU_BITS) + xmask = XBRIDGE_DEV_PMU_BITS; + if (mask == BRIDGE_DEV_D64_BITS) + xmask = XBRIDGE_DEV_D64_BITS; + } + + slotp = &pcibr_soft->bs_slot[slot]; + + s = pcibr_lock(pcibr_soft); + + bridge = pcibr_soft->bs_base; + + old = slotp->bss_device; + + /* figure out what the desired + * Device(x) bits are based on + * the flags specified. + */ + + new = old; + + /* Currently, we inherit anything that + * the new caller has not specified in + * one way or another, unless we take + * action here to not inherit. + * + * This is needed for the "swap" stuff, + * since it could have been set via + * pcibr_endian_set -- altho note that + * any explicit PCIBR_BYTE_STREAM or + * PCIBR_WORD_VALUES will freely override + * the effect of that call (and vice + * versa, no protection either way). + * + * I want to get rid of pcibr_endian_set + * in favor of tracking DMA endianness + * using the flags specified when DMA + * channels are created. + */ + +#define BRIDGE_DEV_WRGA_BITS (BRIDGE_DEV_PMU_WRGA_EN | BRIDGE_DEV_DIR_WRGA_EN) +#define BRIDGE_DEV_SWAP_BITS (BRIDGE_DEV_SWAP_PMU | BRIDGE_DEV_SWAP_DIR) + + /* Do not use Barrier, Write Gather, + * or Prefetch unless asked. + * Leave everything else as it + * was from the last time. + */ + new = new + & ~BRIDGE_DEV_BARRIER + & ~BRIDGE_DEV_WRGA_BITS + & ~BRIDGE_DEV_PREF + ; + + /* Generic macro flags + */ + if (flags & PCIIO_DMA_DATA) { + new = (new + & ~BRIDGE_DEV_BARRIER) /* barrier off */ + | BRIDGE_DEV_PREF; /* prefetch on */ + + } + if (flags & PCIIO_DMA_CMD) { + new = ((new + & ~BRIDGE_DEV_PREF) /* prefetch off */ + & ~BRIDGE_DEV_WRGA_BITS) /* write gather off */ + | BRIDGE_DEV_BARRIER; /* barrier on */ + } + /* Generic detail flags + */ + if (flags & PCIIO_WRITE_GATHER) + new |= BRIDGE_DEV_WRGA_BITS; + if (flags & PCIIO_NOWRITE_GATHER) + new &= ~BRIDGE_DEV_WRGA_BITS; + + if (flags & PCIIO_PREFETCH) + new |= BRIDGE_DEV_PREF; + if (flags & PCIIO_NOPREFETCH) + new &= ~BRIDGE_DEV_PREF; + + if (flags & PCIBR_WRITE_GATHER) + new |= BRIDGE_DEV_WRGA_BITS; + if (flags & PCIBR_NOWRITE_GATHER) + new &= ~BRIDGE_DEV_WRGA_BITS; + + if (flags & PCIIO_BYTE_STREAM) + new |= (pcibr_soft->bs_xbridge) ? + BRIDGE_DEV_SWAP_DIR : BRIDGE_DEV_SWAP_BITS; + if (flags & PCIIO_WORD_VALUES) + new &= (pcibr_soft->bs_xbridge) ? + ~BRIDGE_DEV_SWAP_DIR : ~BRIDGE_DEV_SWAP_BITS; + + /* Provider-specific flags + */ + if (flags & PCIBR_PREFETCH) + new |= BRIDGE_DEV_PREF; + if (flags & PCIBR_NOPREFETCH) + new &= ~BRIDGE_DEV_PREF; + + if (flags & PCIBR_PRECISE) + new |= BRIDGE_DEV_PRECISE; + if (flags & PCIBR_NOPRECISE) + new &= ~BRIDGE_DEV_PRECISE; + + if (flags & PCIBR_BARRIER) + new |= BRIDGE_DEV_BARRIER; + if (flags & PCIBR_NOBARRIER) + new &= ~BRIDGE_DEV_BARRIER; + + if (flags & PCIBR_64BIT) + new |= BRIDGE_DEV_DEV_SIZE; + if (flags & PCIBR_NO64BIT) + new &= ~BRIDGE_DEV_DEV_SIZE; + + chg = old ^ new; /* what are we changing, */ + chg &= xmask; /* of the interesting bits */ + + if (chg) { + + badd32 = slotp->bss_d32_uctr ? (BRIDGE_DEV_D32_BITS & chg) : 0; + if (pcibr_soft->bs_xbridge) { + badpmu = slotp->bss_pmu_uctr ? 
(XBRIDGE_DEV_PMU_BITS & chg) : 0; + badd64 = slotp->bss_d64_uctr ? (XBRIDGE_DEV_D64_BITS & chg) : 0; + } else { + badpmu = slotp->bss_pmu_uctr ? (BRIDGE_DEV_PMU_BITS & chg) : 0; + badd64 = slotp->bss_d64_uctr ? (BRIDGE_DEV_D64_BITS & chg) : 0; + } + bad = badpmu | badd32 | badd64; + + if (bad) { + + /* some conflicts can be resolved by + * forcing the bit on. this may cause + * some performance degredation in + * the stream(s) that want the bit off, + * but the alternative is not allowing + * the new stream at all. + */ + if ( (fix = bad & (BRIDGE_DEV_PRECISE | + BRIDGE_DEV_BARRIER)) ){ + bad &= ~fix; + /* don't change these bits if + * they are already set in "old" + */ + chg &= ~(fix & old); + } + /* some conflicts can be resolved by + * forcing the bit off. this may cause + * some performance degredation in + * the stream(s) that want the bit on, + * but the alternative is not allowing + * the new stream at all. + */ + if ( (fix = bad & (BRIDGE_DEV_WRGA_BITS | + BRIDGE_DEV_PREF)) ) { + bad &= ~fix; + /* don't change these bits if + * we wanted to turn them on. + */ + chg &= ~(fix & new); + } + /* conflicts in other bits mean + * we can not establish this DMA + * channel while the other(s) are + * still present. + */ + if (bad) { + pcibr_unlock(pcibr_soft, s); +#if (DEBUG && PCIBR_DEV_DEBUG) + printk("pcibr_try_set_device: mod blocked by %R\n", bad, device_bits); +#endif + return bad; + } + } + } + if (mask == BRIDGE_DEV_PMU_BITS) + slotp->bss_pmu_uctr++; + if (mask == BRIDGE_DEV_D32_BITS) + slotp->bss_d32_uctr++; + if (mask == BRIDGE_DEV_D64_BITS) + slotp->bss_d64_uctr++; + + /* the value we want to write is the + * original value, with the bits for + * our selected changes flipped, and + * with any disabled features turned off. + */ + new = old ^ chg; /* only change what we want to change */ + + if (slotp->bss_device == new) { + pcibr_unlock(pcibr_soft, s); + return 0; + } + bridge->b_device[slot].reg = new; + slotp->bss_device = new; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + pcibr_unlock(pcibr_soft, s); +#if DEBUG && PCIBR_DEV_DEBUG + printk("pcibr Device(%d): 0x%p\n", slot, bridge->b_device[slot].reg); +#endif + + return 0; +} + +void +pcibr_release_device(pcibr_soft_t pcibr_soft, + pciio_slot_t slot, + bridgereg_t mask) +{ + pcibr_soft_slot_t slotp; + unsigned long s; + + slotp = &pcibr_soft->bs_slot[slot]; + + s = pcibr_lock(pcibr_soft); + + if (mask == BRIDGE_DEV_PMU_BITS) + slotp->bss_pmu_uctr--; + if (mask == BRIDGE_DEV_D32_BITS) + slotp->bss_d32_uctr--; + if (mask == BRIDGE_DEV_D64_BITS) + slotp->bss_d64_uctr--; + + pcibr_unlock(pcibr_soft, s); +} + +/* + * flush write gather buffer for slot + */ +static void +pcibr_device_write_gather_flush(pcibr_soft_t pcibr_soft, + pciio_slot_t slot) +{ + bridge_t *bridge; + unsigned long s; + volatile uint32_t wrf; + s = pcibr_lock(pcibr_soft); + bridge = pcibr_soft->bs_base; + wrf = bridge->b_wr_req_buf[slot].reg; + pcibr_unlock(pcibr_soft, s); +} + +/* ===================================================================== + * Bridge (pcibr) "Device Driver" entry points + */ + + +/* + * pcibr_init: called once during system startup or + * when a loadable driver is loaded. + * + * The driver_register function should normally + * be in _reg, not _init. But the pcibr driver is + * required by devinit before the _reg routines + * are called, so this is an exception. 
+ */ +void +pcibr_init(void) +{ +#if DEBUG && ATTACH_DEBUG + printk("pcibr_init\n"); +#endif + + xwidget_driver_register(XBRIDGE_WIDGET_PART_NUM, + XBRIDGE_WIDGET_MFGR_NUM, + "pcibr_", + 0); + xwidget_driver_register(BRIDGE_WIDGET_PART_NUM, + BRIDGE_WIDGET_MFGR_NUM, + "pcibr_", + 0); +} + +/* + * open/close mmap/munmap interface would be used by processes + * that plan to map the PCI bridge, and muck around with the + * registers. This is dangerous to do, and will be allowed + * to a select brand of programs. Typically these are + * diagnostics programs, or some user level commands we may + * write to do some weird things. + * To start with expect them to have root priveleges. + * We will ask for more later. + */ +/* ARGSUSED */ +int +pcibr_open(devfs_handle_t *devp, int oflag, int otyp, cred_t *credp) +{ + return 0; +} + +/*ARGSUSED */ +int +pcibr_close(devfs_handle_t dev, int oflag, int otyp, cred_t *crp) +{ + return 0; +} + +/*ARGSUSED */ +int +pcibr_map(devfs_handle_t dev, vhandl_t *vt, off_t off, size_t len, uint prot) +{ + int error; + devfs_handle_t vhdl = dev_to_vhdl(dev); + devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get(vhdl); + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge_t *bridge = pcibr_soft->bs_base; + + hwgraph_vertex_unref(pcibr_vhdl); + + ASSERT(pcibr_soft); + len = ctob(btoc(len)); /* Make len page aligned */ + error = v_mapphys(vt, (void *) ((__psunsigned_t) bridge + off), len); + + /* + * If the offset being mapped corresponds to the flash prom + * base, and if the mapping succeeds, and if the user + * has requested the protections to be WRITE, enable the + * flash prom to be written. + * + * XXX- deprecate this in favor of using the + * real flash driver ... + */ + if (!error && + ((off == BRIDGE_EXTERNAL_FLASH) || + (len > BRIDGE_EXTERNAL_FLASH))) { + int s; + + /* + * ensure that we write and read without any interruption. + * The read following the write is required for the Bridge war + */ + s = splhi(); + bridge->b_wid_control |= BRIDGE_CTRL_FLASH_WR_EN; + bridge->b_wid_control; /* inval addr bug war */ + splx(s); + } + + return error; +} + +/*ARGSUSED */ +int +pcibr_unmap(devfs_handle_t dev, vhandl_t *vt) +{ + devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get((devfs_handle_t) dev); + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge_t *bridge = pcibr_soft->bs_base; + + hwgraph_vertex_unref(pcibr_vhdl); + + /* + * If flashprom write was enabled, disable it, as + * this is the last unmap. + */ + if (bridge->b_wid_control & BRIDGE_CTRL_FLASH_WR_EN) { + int s; + + /* + * ensure that we write and read without any interruption. + * The read following the write is required for the Bridge war + */ + s = splhi(); + bridge->b_wid_control &= ~BRIDGE_CTRL_FLASH_WR_EN; + bridge->b_wid_control; /* inval addr bug war */ + splx(s); + } + return 0; +} + +/* This is special case code used by grio. There are plans to make + * this a bit more general in the future, but till then this should + * be sufficient. + */ +pciio_slot_t +pcibr_device_slot_get(devfs_handle_t dev_vhdl) +{ + char devname[MAXDEVNAME]; + devfs_handle_t tdev; + pciio_info_t pciio_info; + pciio_slot_t slot = PCIIO_SLOT_NONE; + + vertex_to_name(dev_vhdl, devname, MAXDEVNAME); + + /* run back along the canonical path + * until we find a PCI connection point. 
+ */ + tdev = hwgraph_connectpt_get(dev_vhdl); + while (tdev != GRAPH_VERTEX_NONE) { + pciio_info = pciio_info_chk(tdev); + if (pciio_info) { + slot = pciio_info_slot_get(pciio_info); + break; + } + hwgraph_vertex_unref(tdev); + tdev = hwgraph_connectpt_get(tdev); + } + hwgraph_vertex_unref(tdev); + + return slot; +} + +/*ARGSUSED */ +int +pcibr_ioctl(devfs_handle_t dev, + int cmd, + void *arg, + int flag, + struct cred *cr, + int *rvalp) +{ + devfs_handle_t pcibr_vhdl = hwgraph_connectpt_get((devfs_handle_t)dev); +#ifdef LATER + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); +#endif + int error = 0; + + hwgraph_vertex_unref(pcibr_vhdl); + + switch (cmd) { +#ifdef LATER + case GIOCSETBW: + { + grio_ioctl_info_t info; + pciio_slot_t slot = 0; + + if (!cap_able((uint64_t)CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + if (COPYIN(arg, &info, sizeof(grio_ioctl_info_t))) { + error = EFAULT; + break; + } +#ifdef GRIO_DEBUG + printk("pcibr:: prev_vhdl: %d reqbw: %lld\n", + info.prev_vhdl, info.reqbw); +#endif /* GRIO_DEBUG */ + + if ((slot = pcibr_device_slot_get(info.prev_vhdl)) == + PCIIO_SLOT_NONE) { + error = EIO; + break; + } + if (info.reqbw) + pcibr_priority_bits_set(pcibr_soft, slot, PCI_PRIO_HIGH); + break; + } + + case GIOCRELEASEBW: + { + grio_ioctl_info_t info; + pciio_slot_t slot = 0; + + if (!cap_able(CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + if (COPYIN(arg, &info, sizeof(grio_ioctl_info_t))) { + error = EFAULT; + break; + } +#ifdef GRIO_DEBUG + printk("pcibr:: prev_vhdl: %d reqbw: %lld\n", + info.prev_vhdl, info.reqbw); +#endif /* GRIO_DEBUG */ + + if ((slot = pcibr_device_slot_get(info.prev_vhdl)) == + PCIIO_SLOT_NONE) { + error = EIO; + break; + } + if (info.reqbw) + pcibr_priority_bits_set(pcibr_soft, slot, PCI_PRIO_LOW); + break; + } + + case PCIBR_SLOT_STARTUP: + { + struct pcibr_slot_req_s req; + + if (!cap_able(CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + + if (COPYIN(arg, &req, sizeof(req))) { + error = EFAULT; + break; + } + + error = pcibr_slot_startup(pcibr_vhdl, &req); + break; + } + case PCIBR_SLOT_SHUTDOWN: + { + struct pcibr_slot_req_s req; + + if (!cap_able(CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + + if (COPYIN(arg, &req, sizeof(req))) { + error = EFAULT; + break; + } + + error = pcibr_slot_shutdown(pcibr_vhdl, &req); + break; + } + case PCIBR_SLOT_QUERY: + { + struct pcibr_slot_req_s req; + + if (!cap_able(CAP_DEVICE_MGT)) { + error = EPERM; + break; + } + + if (COPYIN(arg, &req, sizeof(req))) { + error = EFAULT; + break; + } + + error = pcibr_slot_query(pcibr_vhdl, &req); + break; + } +#endif /* LATER */ + default: + break; + + } + + return error; +} + +void +pcibr_freeblock_sub(iopaddr_t *free_basep, + iopaddr_t *free_lastp, + iopaddr_t base, + size_t size) +{ + iopaddr_t free_base = *free_basep; + iopaddr_t free_last = *free_lastp; + iopaddr_t last = base + size - 1; + + if ((last < free_base) || (base > free_last)); /* free block outside arena */ + + else if ((base <= free_base) && (last >= free_last)) + /* free block contains entire arena */ + *free_basep = *free_lastp = 0; + + else if (base <= free_base) + /* free block is head of arena */ + *free_basep = last + 1; + + else if (last >= free_last) + /* free block is tail of arena */ + *free_lastp = base - 1; + + /* + * We are left with two regions: the free area + * in the arena "below" the block, and the free + * area in the arena "above" the block. Keep + * the one that is bigger. 
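+ *
+ * Worked example (illustrative numbers only): with a free arena of
+ * [0x1000, 0x8fff] and a block at base 0x2000 with size 0x2000 (so
+ * last == 0x3fff), the lower remainder is 0x1000 bytes and the upper
+ * remainder is 0x5000 bytes; the upper chunk wins, so *free_basep
+ * becomes 0x4000 while *free_lastp stays 0x8fff.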
+ */ + + else if ((base - free_base) > (free_last - last)) + *free_lastp = base - 1; /* keep lower chunk */ + else + *free_basep = last + 1; /* keep upper chunk */ +} + +pcibr_info_t +pcibr_info_get(devfs_handle_t vhdl) +{ + return (pcibr_info_t) pciio_info_get(vhdl); +} + +pcibr_info_t +pcibr_device_info_new( + pcibr_soft_t pcibr_soft, + pciio_slot_t slot, + pciio_function_t rfunc, + pciio_vendor_id_t vendor, + pciio_device_id_t device) +{ + pcibr_info_t pcibr_info; + pciio_function_t func; + int ibit; + + func = (rfunc == PCIIO_FUNC_NONE) ? 0 : rfunc; + + NEW(pcibr_info); + + pciio_device_info_new(&pcibr_info->f_c, + pcibr_soft->bs_vhdl, + slot, rfunc, + vendor, device); + +/* pfg - this is new ..... */ + /* Set PCI bus number */ + pcibr_info->f_bus = io_path_map_widget(pcibr_soft->bs_vhdl); + + if (slot != PCIIO_SLOT_NONE) { + + /* + * Currently favored mapping from PCI + * slot number and INTA/B/C/D to Bridge + * PCI Interrupt Bit Number: + * + * SLOT A B C D + * 0 0 4 0 4 + * 1 1 5 1 5 + * 2 2 6 2 6 + * 3 3 7 3 7 + * 4 4 0 4 0 + * 5 5 1 5 1 + * 6 6 2 6 2 + * 7 7 3 7 3 + * + * XXX- allow pcibr_hints to override default + * XXX- allow ADMIN to override pcibr_hints + */ + for (ibit = 0; ibit < 4; ++ibit) + pcibr_info->f_ibit[ibit] = + (slot + 4 * ibit) & 7; + + /* + * Record the info in the sparse func info space. + */ + if (func < pcibr_soft->bs_slot[slot].bss_ninfo) + pcibr_soft->bs_slot[slot].bss_infos[func] = pcibr_info; + } + return pcibr_info; +} + + +/* FIXME: for now this is needed by both pcibr.c and + * pcibr_slot.c. Need to find a better way, the least + * of which would be to move it to pcibr_private.h + */ + +/* + * PCI_ADDR_SPACE_LIMITS_STORE + * Sets the current values of + * pci io base, + * pci io last, + * pci low memory base, + * pci low memory last, + * pci high memory base, + * pci high memory last + */ +#define PCI_ADDR_SPACE_LIMITS_STORE() \ + pcibr_soft->bs_spinfo.pci_io_base = pci_io_fb; \ + pcibr_soft->bs_spinfo.pci_io_last = pci_io_fl; \ + pcibr_soft->bs_spinfo.pci_swin_base = pci_lo_fb; \ + pcibr_soft->bs_spinfo.pci_swin_last = pci_lo_fl; \ + pcibr_soft->bs_spinfo.pci_mem_base = pci_hi_fb; \ + pcibr_soft->bs_spinfo.pci_mem_last = pci_hi_fl; + + +/* + * pcibr_device_unregister + * This frees up any hardware resources reserved for this PCI device + * and removes any PCI infrastructural information setup for it. + * This is usually used at the time of shutting down of the PCI card. 
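+ *
+ * The teardown below runs in this order: shut down the xtalk widget
+ * resources for the slot, flush its RRBs and (if the boot-time default
+ * is known) restore the default RRB allocation, flush the slot's
+ * posted writes, and finally free the per-slot software state.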
+ */ +int +pcibr_device_unregister(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info; + devfs_handle_t pcibr_vhdl; + pciio_slot_t slot; + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + int count_vchan0, count_vchan1; + unsigned s; + int error_call; + int error = 0; + + pciio_info = pciio_info_get(pconn_vhdl); + + pcibr_vhdl = pciio_info_master_get(pciio_info); + slot = pciio_info_slot_get(pciio_info); + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge = pcibr_soft->bs_base; + + /* Clear all the hardware xtalk resources for this device */ + xtalk_widgetdev_shutdown(pcibr_soft->bs_conn, slot); + + /* Flush all the rrbs */ + pcibr_rrb_flush(pconn_vhdl); + + /* + * If the RRB configuration for this slot has changed, set it + * back to the boot-time default + */ + if (pcibr_soft->bs_rrb_valid_dflt[slot] >= 0) { + + s = pcibr_lock(pcibr_soft); + + /* Free the rrbs allocated to this slot */ + error_call = do_pcibr_rrb_free(bridge, slot, + pcibr_soft->bs_rrb_valid[slot] + + pcibr_soft->bs_rrb_valid[slot + + PCIBR_RRB_SLOT_VIRTUAL]); + + if (error_call) + error = ERANGE; + + pcibr_soft->bs_rrb_res[slot] = pcibr_soft->bs_rrb_res[slot] + + pcibr_soft->bs_rrb_valid[slot] + + pcibr_soft->bs_rrb_valid[slot + + PCIBR_RRB_SLOT_VIRTUAL]; + + count_vchan0 = pcibr_soft->bs_rrb_valid_dflt[slot]; + count_vchan1 = pcibr_soft->bs_rrb_valid_dflt[slot + + PCIBR_RRB_SLOT_VIRTUAL]; + + pcibr_unlock(pcibr_soft, s); + + pcibr_rrb_alloc(pconn_vhdl, &count_vchan0, &count_vchan1); + + } + + /* Flush the write buffers !! */ + error_call = pcibr_wrb_flush(pconn_vhdl); + + if (error_call) + error = error_call; + + /* Clear the information specific to the slot */ + error_call = pcibr_slot_info_free(pcibr_vhdl, slot); + + if (error_call) + error = error_call; + + return(error); + +} + +/* + * pcibr_driver_reg_callback + * CDL will call this function for each device found in the PCI + * registry that matches the vendor/device IDs supported by + * the driver being registered. The device's connection vertex + * and the driver's attach function return status enable the + * slot's device status to be set. + */ +void +pcibr_driver_reg_callback(devfs_handle_t pconn_vhdl, + int key1, int key2, int error) +{ + pciio_info_t pciio_info; + pcibr_info_t pcibr_info; + devfs_handle_t pcibr_vhdl; + pciio_slot_t slot; + pcibr_soft_t pcibr_soft; + + /* Do not set slot status for vendor/device ID wildcard drivers */ + if ((key1 == -1) || (key2 == -1)) + return; + + pciio_info = pciio_info_get(pconn_vhdl); + pcibr_info = pcibr_info_get(pconn_vhdl); + + pcibr_vhdl = pciio_info_master_get(pciio_info); + slot = pciio_info_slot_get(pciio_info); + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + /* This may be a loadable driver so lock out any pciconfig actions */ + mrlock(pcibr_soft->bs_bus_lock, MR_UPDATE, PZERO); + + pcibr_info->f_att_det_error = error; + + pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK; + + if (error) { + pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_INCMPLT; + } else { + pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_CMPLT; + } + + /* Release the bus lock */ + mrunlock(pcibr_soft->bs_bus_lock); + +} + +/* + * pcibr_driver_unreg_callback + * CDL will call this function for each device found in the PCI + * registry that matches the vendor/device IDs supported by + * the driver being unregistered. The device's connection vertex + * and the driver's detach function return status enable the + * slot's device status to be set. 
+ */ +void +pcibr_driver_unreg_callback(devfs_handle_t pconn_vhdl, + int key1, int key2, int error) +{ + pciio_info_t pciio_info; + pcibr_info_t pcibr_info; + devfs_handle_t pcibr_vhdl; + pciio_slot_t slot; + pcibr_soft_t pcibr_soft; + + /* Do not set slot status for vendor/device ID wildcard drivers */ + if ((key1 == -1) || (key2 == -1)) + return; + + pciio_info = pciio_info_get(pconn_vhdl); + pcibr_info = pcibr_info_get(pconn_vhdl); + + pcibr_vhdl = pciio_info_master_get(pciio_info); + slot = pciio_info_slot_get(pciio_info); + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + /* This may be a loadable driver so lock out any pciconfig actions */ + mrlock(pcibr_soft->bs_bus_lock, MR_UPDATE, PZERO); + + pcibr_info->f_att_det_error = error; + + pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK; + + if (error) { + pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_INCMPLT; + } else { + pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_CMPLT; + } + + /* Release the bus lock */ + mrunlock(pcibr_soft->bs_bus_lock); + +} + +/* + * build a convenience link path in the + * form of "...//bus/" + * + * returns 1 on success, 0 otherwise + * + * depends on hwgraph separator == '/' + */ +int +pcibr_bus_cnvlink(devfs_handle_t f_c, int slot) +{ + char dst[MAXDEVNAME]; + char *dp = dst; + char *cp, *xp; + int widgetnum; + char pcibus[8]; + devfs_handle_t nvtx, svtx; + int rv; + +#if DEBUG + printk("pcibr_bus_cnvlink: slot= %d f_c= %p\n", + slot, f_c); + { + int pos; + char dname[256]; + pos = devfs_generate_path(f_c, dname, 256); + printk("%s : path= %s\n", __FUNCTION__, &dname[pos]); + } +#endif + + if (GRAPH_SUCCESS != hwgraph_vertex_name_get(f_c, dst, MAXDEVNAME)) + return 0; + + /* dst example == /hw/module/001c02/Pbrick/xtalk/8/pci/direct */ + + /* find the widget number */ + xp = strstr(dst, "/"EDGE_LBL_XTALK"/"); + if (xp == NULL) + return 0; + widgetnum = atoi(xp+7); + if (widgetnum < XBOW_PORT_8 || widgetnum > XBOW_PORT_F) + return 0; + + /* remove "/pci/direct" from path */ + cp = strstr(dst, "/" EDGE_LBL_PCI "/" "direct"); + if (cp == NULL) + return 0; + *cp = (char)NULL; + + /* get the vertex for the widget */ + if (GRAPH_SUCCESS != hwgraph_traverse(NULL, dp, &svtx)) + return 0; + + *xp = (char)NULL; /* remove "/xtalk/..." from path */ + + /* dst example now == /hw/module/001c02/Pbrick */ + + /* get the bus number */ + strcat(dst, "/bus"); + sprintf(pcibus, "%d", p_busnum[widgetnum]); + + /* link to bus to widget */ + rv = hwgraph_path_add(NULL, dp, &nvtx); + if (GRAPH_SUCCESS == rv) + rv = hwgraph_edge_add(nvtx, svtx, pcibus); + + return (rv == GRAPH_SUCCESS); +} + + +/* + * pcibr_attach: called every time the crosstalk + * infrastructure is asked to initialize a widget + * that matches the part number we handed to the + * registration routine above. 
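+ *
+ * In outline, attach: puts the widget's PRBs into conveyor-belt mode,
+ * maps the Bridge registers, creates the "pci" vertex and a controller
+ * char device under it, allocates and fills the pcibr_soft state,
+ * initializes the internal (and any external SSRAM) ATE maps, wires up
+ * the Bridge error interrupt, registers the pciio provider, and builds
+ * the initial connection points, including the "no-slot" vertex.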
+ */ +/*ARGSUSED */ +int +pcibr_attach(devfs_handle_t xconn_vhdl) +{ + /* REFERENCED */ + graph_error_t rc; + devfs_handle_t pcibr_vhdl; + devfs_handle_t ctlr_vhdl; + bridge_t *bridge = NULL; + bridgereg_t id; + int rev; + pcibr_soft_t pcibr_soft; + pcibr_info_t pcibr_info; + xwidget_info_t info; + xtalk_intr_t xtalk_intr; + device_desc_t dev_desc = (device_desc_t)0; + int slot; + int ibit; + devfs_handle_t noslot_conn; + char devnm[MAXDEVNAME], *s; + pcibr_hints_t pcibr_hints; + bridgereg_t b_int_enable; + unsigned rrb_fixed = 0; + + iopaddr_t pci_io_fb, pci_io_fl; + iopaddr_t pci_lo_fb, pci_lo_fl; + iopaddr_t pci_hi_fb, pci_hi_fl; + + int spl_level; +#ifdef LATER + char *nicinfo = (char *)0; +#endif + +#if PCI_FBBE + int fast_back_to_back_enable; +#endif + l1sc_t *scp; + nasid_t nasid; + + async_attach_t aa = NULL; + + aa = async_attach_get_info(xconn_vhdl); + +#if DEBUG && ATTACH_DEBUG + printk("pcibr_attach: xconn_vhdl= %p\n", xconn_vhdl); + { + int pos; + char dname[256]; + pos = devfs_generate_path(xconn_vhdl, dname, 256); + printk("%s : path= %s \n", __FUNCTION__, &dname[pos]); + } +#endif + + /* Setup the PRB for the bridge in CONVEYOR BELT + * mode. PRBs are setup in default FIRE-AND-FORGET + * mode during the initialization. + */ + hub_device_flags_set(xconn_vhdl, HUB_PIO_CONVEYOR); + + bridge = (bridge_t *) + xtalk_piotrans_addr(xconn_vhdl, NULL, + 0, sizeof(bridge_t), 0); + + /* + * Create the vertex for the PCI bus, which we + * will also use to hold the pcibr_soft and + * which will be the "master" vertex for all the + * pciio connection points we will hang off it. + * This needs to happen before we call nic_bridge_vertex_info + * as we are some of the *_vmc functions need access to the edges. + * + * Opening this vertex will provide access to + * the Bridge registers themselves. + */ + rc = hwgraph_path_add(xconn_vhdl, EDGE_LBL_PCI, &pcibr_vhdl); + ASSERT(rc == GRAPH_SUCCESS); + + ctlr_vhdl = NULL; + ctlr_vhdl = hwgraph_register(pcibr_vhdl, EDGE_LBL_CONTROLLER, + 0, DEVFS_FL_AUTO_DEVNUM, + 0, 0, + S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0, + &pcibr_fops, NULL); + + ASSERT(ctlr_vhdl != NULL); + + /* + * decode the nic, and hang its stuff off our + * connection point where other drivers can get + * at it. + */ +#ifdef LATER + nicinfo = BRIDGE_VERTEX_MFG_INFO(xconn_vhdl, (nic_data_t) & bridge->b_nic); +#endif + + /* + * Get the hint structure; if some NIC callback + * marked this vertex as "hands-off" then we + * just return here, before doing anything else. + */ + pcibr_hints = pcibr_hints_get(xconn_vhdl, 0); + + if (pcibr_hints && pcibr_hints->ph_hands_off) + return -1; /* generic operations disabled */ + + id = bridge->b_wid_id; + rev = XWIDGET_PART_REV_NUM(id); + + hwgraph_info_add_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, (arbitrary_info_t) rev); + + /* + * allocate soft state structure, fill in some + * fields, and hook it up to our vertex. 
+ */ + NEW(pcibr_soft); + BZERO(pcibr_soft, sizeof *pcibr_soft); + pcibr_soft_set(pcibr_vhdl, pcibr_soft); + + pcibr_soft->bs_conn = xconn_vhdl; + pcibr_soft->bs_vhdl = pcibr_vhdl; + pcibr_soft->bs_base = bridge; + pcibr_soft->bs_rev_num = rev; + pcibr_soft->bs_intr_bits = pcibr_intr_bits; + if (is_xbridge(bridge)) { + pcibr_soft->bs_int_ate_size = XBRIDGE_INTERNAL_ATES; + pcibr_soft->bs_xbridge = 1; + } else { + pcibr_soft->bs_int_ate_size = BRIDGE_INTERNAL_ATES; + pcibr_soft->bs_xbridge = 0; + } + + nasid = NASID_GET(bridge); + scp = &NODEPDA( NASID_TO_COMPACT_NODEID(nasid) )->module->elsc; + pcibr_soft->bs_l1sc = scp; + pcibr_soft->bs_moduleid = iobrick_module_get(scp); + pcibr_soft->bsi_err_intr = 0; + + /* Bridges up through REV C + * are unable to set the direct + * byteswappers to BYTE_STREAM. + */ + if (pcibr_soft->bs_rev_num <= BRIDGE_PART_REV_C) { + pcibr_soft->bs_pio_end_io = PCIIO_WORD_VALUES; + pcibr_soft->bs_pio_end_mem = PCIIO_WORD_VALUES; + } +#if PCIBR_SOFT_LIST + { + pcibr_list_p self; + + NEW(self); + self->bl_soft = pcibr_soft; + self->bl_vhdl = pcibr_vhdl; + self->bl_next = pcibr_list; + self->bl_next = swap_ptr((void **) &pcibr_list, (void *)self); + } +#endif + + /* + * get the name of this bridge vertex and keep the info. Use this + * only where it is really needed now: like error interrupts. + */ + s = dev_to_name(pcibr_vhdl, devnm, MAXDEVNAME); + pcibr_soft->bs_name = kmalloc(strlen(s) + 1, GFP_KERNEL); + strcpy(pcibr_soft->bs_name, s); + +#if SHOW_REVS || DEBUG +#if !DEBUG + if (kdebug) +#endif + printk("%sBridge ASIC: rev %s (code=0x%x) at %s\n", + is_xbridge(bridge) ? "X" : "", + (rev == BRIDGE_PART_REV_A) ? "A" : + (rev == BRIDGE_PART_REV_B) ? "B" : + (rev == BRIDGE_PART_REV_C) ? "C" : + (rev == BRIDGE_PART_REV_D) ? "D" : + (rev == XBRIDGE_PART_REV_A) ? "A" : + (rev == XBRIDGE_PART_REV_B) ? "B" : + "unknown", + rev, pcibr_soft->bs_name); +#endif + + info = xwidget_info_get(xconn_vhdl); + pcibr_soft->bs_xid = xwidget_info_id_get(info); + pcibr_soft->bs_master = xwidget_info_master_get(info); + pcibr_soft->bs_mxid = xwidget_info_masterid_get(info); + + /* + * Init bridge lock. + */ + spin_lock_init(&pcibr_soft->bs_lock); + + /* + * If we have one, process the hints structure. 
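+ * (ph_host_slot[] is 1-based here: a value of zero means the slot is
+ * its own host; otherwise the value is the hosting slot number plus
+ * one.)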
+ */ + if (pcibr_hints) { + rrb_fixed = pcibr_hints->ph_rrb_fixed; + + pcibr_soft->bs_rrb_fixed = rrb_fixed; + + if (pcibr_hints->ph_intr_bits) + pcibr_soft->bs_intr_bits = pcibr_hints->ph_intr_bits; + + for (slot = 0; slot < 8; ++slot) { + int hslot = pcibr_hints->ph_host_slot[slot] - 1; + + if (hslot < 0) { + pcibr_soft->bs_slot[slot].host_slot = slot; + } else { + pcibr_soft->bs_slot[slot].has_host = 1; + pcibr_soft->bs_slot[slot].host_slot = hslot; + } + } + } + /* + * set up initial values for state fields + */ + for (slot = 0; slot < 8; ++slot) { + pcibr_soft->bs_slot[slot].bss_devio.bssd_space = PCIIO_SPACE_NONE; + pcibr_soft->bs_slot[slot].bss_d64_base = PCIBR_D64_BASE_UNSET; + pcibr_soft->bs_slot[slot].bss_d32_base = PCIBR_D32_BASE_UNSET; + pcibr_soft->bs_slot[slot].bss_ext_ates_active = ATOMIC_INIT(0); + } + + for (ibit = 0; ibit < 8; ++ibit) { + pcibr_soft->bs_intr[ibit].bsi_xtalk_intr = 0; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_soft = pcibr_soft; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_list = NULL; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_stat = + &(bridge->b_int_status); + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_hdlrcnt = 0; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_shared = 0; + pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_connected = 0; + } + + /* + * Initialize various Bridge registers. + */ + + /* + * On pre-Rev.D bridges, set the PCI_RETRY_CNT + * to zero to avoid dropping stores. (#475347) + */ + if (rev < BRIDGE_PART_REV_D) + bridge->b_bus_timeout &= ~BRIDGE_BUS_PCI_RETRY_MASK; + + /* + * Clear all pending interrupts. + */ + bridge->b_int_rst_stat = (BRIDGE_IRR_ALL_CLR); + + /* + * Until otherwise set up, + * assume all interrupts are + * from slot 7. + */ + bridge->b_int_device = (uint32_t) 0xffffffff; + + { + bridgereg_t dirmap; + paddr_t paddr; + iopaddr_t xbase; + xwidgetnum_t xport; + iopaddr_t offset; + int num_entries = 0; + int entry; + cnodeid_t cnodeid; + nasid_t nasid; + + /* Set the Bridge's 32-bit PCI to XTalk + * Direct Map register to the most useful + * value we can determine. Note that we + * must use a single xid for all of: + * direct-mapped 32-bit DMA accesses + * direct-mapped 64-bit DMA accesses + * DMA accesses through the PMU + * interrupts + * This is the only way to guarantee that + * completion interrupts will reach a CPU + * after all DMA data has reached memory. + * (Of course, there may be a few special + * drivers/controlers that explicitly manage + * this ordering problem.) + */ + + cnodeid = 0; /* default node id */ + nasid = COMPACT_TO_NASID_NODEID(cnodeid); + paddr = NODE_OFFSET(nasid) + 0; + + /* currently, we just assume that if we ask + * for a DMA mapping to "zero" the XIO + * host will transmute this into a request + * for the lowest hunk of memory. + */ + xbase = xtalk_dmatrans_addr(xconn_vhdl, 0, + paddr, _PAGESZ, 0); + + if (xbase != XIO_NOWHERE) { + if (XIO_PACKED(xbase)) { + xport = XIO_PORT(xbase); + xbase = XIO_ADDR(xbase); + } else + xport = pcibr_soft->bs_mxid; + + offset = xbase & ((1ull << BRIDGE_DIRMAP_OFF_ADDRSHFT) - 1ull); + xbase >>= BRIDGE_DIRMAP_OFF_ADDRSHFT; + + dirmap = xport << BRIDGE_DIRMAP_W_ID_SHFT; + + if (xbase) + dirmap |= BRIDGE_DIRMAP_OFF & xbase; + else if (offset >= (512 << 20)) + dirmap |= BRIDGE_DIRMAP_ADD512; + + bridge->b_dir_map = dirmap; + } + /* + * Set bridge's idea of page size according to the system's + * idea of "IO page size". TBD: The idea of IO page size + * should really go away. 
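+ * (With BRIDGE_CTRL_PAGE_SIZE clear the Bridge uses 4K I/O pages and
+ * with it set 16K pages, matching the IOPGSIZE test just below.)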
+ */ + /* + * ensure that we write and read without any interruption. + * The read following the write is required for the Bridge war + */ + spl_level = splhi(); +#if IOPGSIZE == 4096 + bridge->b_wid_control &= ~BRIDGE_CTRL_PAGE_SIZE; +#elif IOPGSIZE == 16384 + bridge->b_wid_control |= BRIDGE_CTRL_PAGE_SIZE; +#else + <<>>; +#endif + bridge->b_wid_control; /* inval addr bug war */ + splx(spl_level); + + /* Initialize internal mapping entries */ + for (entry = 0; entry < pcibr_soft->bs_int_ate_size; entry++) { + bridge->b_int_ate_ram[entry].wr = 0; + } + + /* + * Determine if there's external mapping SSRAM on this + * bridge. Set up Bridge control register appropriately, + * inititlize SSRAM, and set software up to manage RAM + * entries as an allocatable resource. + * + * Currently, we just use the rm* routines to manage ATE + * allocation. We should probably replace this with a + * Best Fit allocator. + * + * For now, if we have external SSRAM, avoid using + * the internal ssram: we can't turn PREFETCH on + * when we use the internal SSRAM; and besides, + * this also guarantees that no allocation will + * straddle the internal/external line, so we + * can increment ATE write addresses rather than + * recomparing against BRIDGE_INTERNAL_ATES every + * time. + */ + if (is_xbridge(bridge)) + num_entries = 0; + else + num_entries = pcibr_init_ext_ate_ram(bridge); + + /* we always have 128 ATEs (512 for Xbridge) inside the chip + * even if disabled for debugging. + */ + pcibr_soft->bs_int_ate_map = rmallocmap(pcibr_soft->bs_int_ate_size); + pcibr_ate_free(pcibr_soft, 0, pcibr_soft->bs_int_ate_size); +#if PCIBR_ATE_DEBUG + printk("pcibr_attach: %d INTERNAL ATEs\n", pcibr_soft->bs_int_ate_size); +#endif + + if (num_entries > pcibr_soft->bs_int_ate_size) { +#if PCIBR_ATE_NOTBOTH /* for debug -- forces us to use external ates */ + printk("pcibr_attach: disabling internal ATEs.\n"); + pcibr_ate_alloc(pcibr_soft, pcibr_soft->bs_int_ate_size); +#endif + pcibr_soft->bs_ext_ate_map = rmallocmap(num_entries); + pcibr_ate_free(pcibr_soft, pcibr_soft->bs_int_ate_size, + num_entries - pcibr_soft->bs_int_ate_size); +#if PCIBR_ATE_DEBUG + printk("pcibr_attach: %d EXTERNAL ATEs\n", + num_entries - pcibr_soft->bs_int_ate_size); +#endif + } + } + + { + bridgereg_t dirmap; + iopaddr_t xbase; + + /* + * now figure the *real* xtalk base address + * that dirmap sends us to. + */ + dirmap = bridge->b_dir_map; + if (dirmap & BRIDGE_DIRMAP_OFF) + xbase = (iopaddr_t)(dirmap & BRIDGE_DIRMAP_OFF) + << BRIDGE_DIRMAP_OFF_ADDRSHFT; + else if (dirmap & BRIDGE_DIRMAP_ADD512) + xbase = 512 << 20; + else + xbase = 0; + + pcibr_soft->bs_dir_xbase = xbase; + + /* it is entirely possible that we may, at this + * point, have our dirmap pointing somewhere + * other than our "master" port. + */ + pcibr_soft->bs_dir_xport = + (dirmap & BRIDGE_DIRMAP_W_ID) >> BRIDGE_DIRMAP_W_ID_SHFT; + } + + /* pcibr sources an error interrupt; + * figure out where to send it. + * + * If any interrupts are enabled in bridge, + * then the prom set us up and our interrupt + * has already been reconnected in mlreset + * above. + * + * Need to set the D_INTR_ISERR flag + * in the dev_desc used for allocating the + * error interrupt, so our interrupt will + * be properly routed and prioritized. + * + * If our crosstalk provider wants to + * fix widget error interrupts to specific + * destinations, D_INTR_ISERR is how it + * knows to do this. 
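+ *
+ * The code below allocates the xtalk interrupt, clears any stale
+ * widget interrupt state with pcibr_clearwidint(), connects
+ * pcibr_setwidint() as the setfunc, and only then enables
+ * BRIDGE_ISR_ERRORS in b_int_enable.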
+ */ + + xtalk_intr = xtalk_intr_alloc(xconn_vhdl, dev_desc, pcibr_vhdl); + ASSERT(xtalk_intr != NULL); + + pcibr_soft->bsi_err_intr = xtalk_intr; + + /* + * On IP35 with XBridge, we do some extra checks in pcibr_setwidint + * in order to work around some addressing limitations. In order + * for that fire wall to work properly, we need to make sure we + * start from a known clean state. + */ + pcibr_clearwidint(bridge); + + xtalk_intr_connect(xtalk_intr, (xtalk_intr_setfunc_t)pcibr_setwidint, (void *)bridge); + + /* + * now we can start handling error interrupts; + * enable all of them. + * NOTE: some PCI ints may already be enabled. + */ + b_int_enable = bridge->b_int_enable | BRIDGE_ISR_ERRORS; + + + bridge->b_int_enable = b_int_enable; + bridge->b_int_mode = 0; /* do not send "clear interrupt" packets */ + + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + + /* + * Depending on the rev of bridge, disable certain features. + * Easiest way seems to be to force the PCIBR_NOwhatever + * flag to be on for all DMA calls, which overrides any + * PCIBR_whatever flag or even the setting of whatever + * from the PCIIO_DMA_class flags (or even from the other + * PCIBR flags, since NO overrides YES). + */ + pcibr_soft->bs_dma_flags = 0; + + /* PREFETCH: + * Always completely disabled for REV.A; + * at "pcibr_prefetch_enable_rev", anyone + * asking for PCIIO_PREFETCH gets it. + * Between these two points, you have to ask + * for PCIBR_PREFETCH, which promises that + * your driver knows about known Bridge WARs. + */ + if (pcibr_soft->bs_rev_num < BRIDGE_PART_REV_B) + pcibr_soft->bs_dma_flags |= PCIBR_NOPREFETCH; + else if (pcibr_soft->bs_rev_num < + (BRIDGE_WIDGET_PART_NUM << 4 | pcibr_prefetch_enable_rev)) + pcibr_soft->bs_dma_flags |= PCIIO_NOPREFETCH; + + /* WRITE_GATHER: + * Disabled up to but not including the + * rev number in pcibr_wg_enable_rev. There + * is no "WAR range" as with prefetch. + */ + if (pcibr_soft->bs_rev_num < + (BRIDGE_WIDGET_PART_NUM << 4 | pcibr_wg_enable_rev)) + pcibr_soft->bs_dma_flags |= PCIBR_NOWRITE_GATHER; + + pciio_provider_register(pcibr_vhdl, &pcibr_provider); + pciio_provider_startup(pcibr_vhdl); + + pci_io_fb = 0x00000004; /* I/O FreeBlock Base */ + pci_io_fl = 0xFFFFFFFF; /* I/O FreeBlock Last */ + + pci_lo_fb = 0x00000010; /* Low Memory FreeBlock Base */ + pci_lo_fl = 0x001FFFFF; /* Low Memory FreeBlock Last */ + + pci_hi_fb = 0x00200000; /* High Memory FreeBlock Base */ + pci_hi_fl = 0x3FFFFFFF; /* High Memory FreeBlock Last */ + + + PCI_ADDR_SPACE_LIMITS_STORE(); + + /* build "no-slot" connection point + */ + pcibr_info = pcibr_device_info_new + (pcibr_soft, PCIIO_SLOT_NONE, PCIIO_FUNC_NONE, + PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE); + noslot_conn = pciio_device_info_register + (pcibr_vhdl, &pcibr_info->f_c); + + /* Remember the no slot connection point info for tearing it + * down during detach. + */ + pcibr_soft->bs_noslot_conn = noslot_conn; + pcibr_soft->bs_noslot_info = pcibr_info; +#if PCI_FBBE + fast_back_to_back_enable = 1; +#endif + +#if PCI_FBBE + if (fast_back_to_back_enable) { + /* + * All devices on the bus are capable of fast back to back, so + * we need to set the fast back to back bit in all devices on + * the bus that are capable of doing such accesses. + */ + } +#endif + +#ifdef LATER + /* If the bridge has been reset then there is no need to reset + * the individual PCI slots. 
+ */ + for (slot = 0; slot < 8; ++slot) + /* Reset all the slots */ + (void)pcibr_slot_reset(pcibr_vhdl, slot); +#endif + + for (slot = 0; slot < 8; ++slot) + /* Find out what is out there */ + (void)pcibr_slot_info_init(pcibr_vhdl,slot); + + for (slot = 0; slot < 8; ++slot) + /* Set up the address space for this slot in the pci land */ + (void)pcibr_slot_addr_space_init(pcibr_vhdl,slot); + + for (slot = 0; slot < 8; ++slot) + /* Setup the device register */ + (void)pcibr_slot_device_init(pcibr_vhdl, slot); + + for (slot = 0; slot < 8; ++slot) + /* Setup host/guest relations */ + (void)pcibr_slot_guest_info_init(pcibr_vhdl,slot); + + for (slot = 0; slot < 8; ++slot) + /* Initial RRB management */ + (void)pcibr_slot_initial_rrb_alloc(pcibr_vhdl,slot); + + /* driver attach routines should be called out from generic linux code */ + for (slot = 0; slot < 8; ++slot) + /* Call the device attach */ + (void)pcibr_slot_call_device_attach(pcibr_vhdl, slot, 0); + + /* + * Each Pbrick PCI bus only has slots 1 and 2. Similarly for + * widget 0xe on Ibricks. Allocate RRB's accordingly. + */ + if (pcibr_soft->bs_moduleid > 0) { + switch (MODULE_GET_BTCHAR(pcibr_soft->bs_moduleid)) { + case 'p': /* Pbrick */ + do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); + do_pcibr_rrb_autoalloc(pcibr_soft, 2, 8); + break; + case 'i': /* Ibrick */ + /* port 0xe on the Ibrick only has slots 1 and 2 */ + if (pcibr_soft->bs_xid == 0xe) { + do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); + do_pcibr_rrb_autoalloc(pcibr_soft, 2, 8); + } + else { + /* allocate one RRB for the serial port */ + do_pcibr_rrb_autoalloc(pcibr_soft, 0, 1); + } + break; + } /* switch */ + } + +#ifdef LATER + if (strstr(nicinfo, XTALK_PCI_PART_NUM)) { + do_pcibr_rrb_autoalloc(pcibr_soft, 1, 8); +#if PCIBR_RRB_DEBUG + printf("\n\nFound XTALK_PCI (030-1275) at %v\n", xconn_vhdl); + + printf("pcibr_attach: %v Shoebox RRB MANAGEMENT: %d+%d free\n", + pcibr_vhdl, + pcibr_soft->bs_rrb_avail[0], + pcibr_soft->bs_rrb_avail[1]); + + for (slot = 0; slot < 8; ++slot) + printf("\t%d+%d+%d", + 0xFFF & pcibr_soft->bs_rrb_valid[slot], + 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + + printf("\n"); +#endif + } +#else + FIXME("pcibr_attach: Call do_pcibr_rrb_autoalloc nicinfo\n"); +#endif + + if (aa) + async_attach_add_info(noslot_conn, aa); + + pciio_device_attach(noslot_conn, 0); + + + /* + * Tear down pointer to async attach info -- async threads for + * bridge's descendants may be running but the bridge's work is done. + */ + if (aa) + async_attach_del_info(xconn_vhdl); + + return 0; +} +/* + * pcibr_detach: + * Detach the bridge device from the hwgraph after cleaning out all the + * underlying vertices. 
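+ *
+ * Teardown is roughly the reverse of pcibr_attach: bridge
+ * interrupts are disabled, each of the eight slots is detached,
+ * the no-slot connection point is unregistered, the ATE maps and
+ * the error interrupt are released, and finally the soft state
+ * and the hwgraph edges are removed.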
+ */ +int +pcibr_detach(devfs_handle_t xconn) +{ + pciio_slot_t slot; + devfs_handle_t pcibr_vhdl; + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + + /* Get the bridge vertex from its xtalk connection point */ + if (hwgraph_traverse(xconn, EDGE_LBL_PCI, &pcibr_vhdl) != GRAPH_SUCCESS) + return(1); + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge = pcibr_soft->bs_base; + + /* Disable the interrupts from the bridge */ + bridge->b_int_enable = 0; + + /* Detach all the PCI devices talking to this bridge */ + for(slot = 0; slot < 8; slot++) { +#ifdef DEBUG + printk("pcibr_device_detach called for %p/%d\n", + pcibr_vhdl,slot); +#endif + pcibr_slot_detach(pcibr_vhdl, slot, 0); + } + + /* Unregister the no-slot connection point */ + pciio_device_info_unregister(pcibr_vhdl, + &(pcibr_soft->bs_noslot_info->f_c)); + + spin_lock_destroy(&pcibr_soft->bs_lock); + kfree(pcibr_soft->bs_name); + + /* Error handler gets unregistered when the widget info is + * cleaned + */ + /* Free the soft ATE maps */ + if (pcibr_soft->bs_int_ate_map) + rmfreemap(pcibr_soft->bs_int_ate_map); + if (pcibr_soft->bs_ext_ate_map) + rmfreemap(pcibr_soft->bs_ext_ate_map); + + /* Disconnect the error interrupt and free the xtalk resources + * associated with it. + */ + xtalk_intr_disconnect(pcibr_soft->bsi_err_intr); + xtalk_intr_free(pcibr_soft->bsi_err_intr); + + /* Clear the software state maintained by the bridge driver for this + * bridge. + */ + DEL(pcibr_soft); + /* Remove the Bridge revision labelled info */ + (void)hwgraph_info_remove_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, NULL); + /* Remove the character device associated with this bridge */ + (void)hwgraph_edge_remove(pcibr_vhdl, EDGE_LBL_CONTROLLER, NULL); + /* Remove the PCI bridge vertex */ + (void)hwgraph_edge_remove(xconn, EDGE_LBL_PCI, NULL); + + return(0); +} + +int +pcibr_asic_rev(devfs_handle_t pconn_vhdl) +{ + devfs_handle_t pcibr_vhdl; + arbitrary_info_t ainfo; + + if (GRAPH_SUCCESS != + hwgraph_traverse(pconn_vhdl, EDGE_LBL_MASTER, &pcibr_vhdl)) + return -1; + + if (GRAPH_SUCCESS != + hwgraph_info_get_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, &ainfo)) + return -1; + + return (int) ainfo; +} + +int +pcibr_write_gather_flush(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + pciio_slot_t slot; + slot = pciio_info_slot_get(pciio_info); + pcibr_device_write_gather_flush(pcibr_soft, slot); + return 0; +} + +/* ===================================================================== + * PIO MANAGEMENT + */ + +static iopaddr_t +pcibr_addr_pci_to_xio(devfs_handle_t pconn_vhdl, + pciio_slot_t slot, + pciio_space_t space, + iopaddr_t pci_addr, + size_t req_size, + unsigned flags) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pciio_info_t pciio_info = &pcibr_info->f_c; + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + + unsigned bar; /* which BASE reg on device is decoding */ + iopaddr_t xio_addr = XIO_NOWHERE; + + pciio_space_t wspace; /* which space device is decoding */ + iopaddr_t wbase; /* base of device decode on PCI */ + size_t wsize; /* size of device decode on PCI */ + + int try; /* DevIO(x) window scanning order control */ + int win; /* which DevIO(x) window is being used */ + pciio_space_t mspace; /* target space for devio(x) register */ + iopaddr_t mbase; /* base of devio(x) mapped area on PCI */ + size_t msize; /* size of devio(x) mapped area on PCI */ + 
size_t mmask; /* addr bits stored in Device(x) */ + + unsigned long s; + + s = pcibr_lock(pcibr_soft); + + if (pcibr_soft->bs_slot[slot].has_host) { + slot = pcibr_soft->bs_slot[slot].host_slot; + pcibr_info = pcibr_soft->bs_slot[slot].bss_infos[0]; + } + if (space == PCIIO_SPACE_NONE) + goto done; + + if (space == PCIIO_SPACE_CFG) { + /* + * Usually, the first mapping + * established to a PCI device + * is to its config space. + * + * In any case, we definitely + * do NOT need to worry about + * PCI BASE registers, and + * MUST NOT attempt to point + * the DevIO(x) window at + * this access ... + */ + if (((flags & PCIIO_BYTE_STREAM) == 0) && + ((pci_addr + req_size) <= BRIDGE_TYPE0_CFG_FUNC_OFF)) + xio_addr = pci_addr + BRIDGE_TYPE0_CFG_DEV(slot); + + goto done; + } + if (space == PCIIO_SPACE_ROM) { + /* PIO to the Expansion Rom. + * Driver is responsible for + * enabling and disabling + * decodes properly. + */ + wbase = pcibr_info->f_rbase; + wsize = pcibr_info->f_rsize; + + /* + * While the driver should know better + * than to attempt to map more space + * than the device is decoding, he might + * do it; better to bail out here. + */ + if ((pci_addr + req_size) > wsize) + goto done; + + pci_addr += wbase; + space = PCIIO_SPACE_MEM; + } + /* + * reduce window mappings to raw + * space mappings (maybe allocating + * windows), and try for DevIO(x) + * usage (setting it if it is available). + */ + bar = space - PCIIO_SPACE_WIN0; + if (bar < 6) { + wspace = pcibr_info->f_window[bar].w_space; + if (wspace == PCIIO_SPACE_NONE) + goto done; + + /* get PCI base and size */ + wbase = pcibr_info->f_window[bar].w_base; + wsize = pcibr_info->f_window[bar].w_size; + + /* + * While the driver should know better + * than to attempt to map more space + * than the device is decoding, he might + * do it; better to bail out here. + */ + if ((pci_addr + req_size) > wsize) + goto done; + + /* shift from window relative to + * decoded space relative. + */ + pci_addr += wbase; + space = wspace; + } else + bar = -1; + + /* Scan all the DevIO(x) windows twice looking for one + * that can satisfy our request. The first time through, + * only look at assigned windows; the second time, also + * look at PCIIO_SPACE_NONE windows. Arrange the order + * so we always look at our own window first. + * + * We will not attempt to satisfy a single request + * by concatinating multiple windows. + */ + for (try = 0; try < 16; ++try) { + bridgereg_t devreg; + unsigned offset; + + win = (try + slot) % 8; + + /* If this DevIO(x) mapping area can provide + * a mapping to this address, use it. + */ + msize = (win < 2) ? 0x200000 : 0x100000; + mmask = -msize; + if (space != PCIIO_SPACE_IO) + mmask &= 0x3FFFFFFF; + + offset = pci_addr & (msize - 1); + + /* If this window can't possibly handle that request, + * go on to the next window. + */ + if (((pci_addr & (msize - 1)) + req_size) > msize) + continue; + + devreg = pcibr_soft->bs_slot[win].bss_device; + + /* Is this window "nailed down"? + * If not, maybe we can use it. + * (only check this the second time through) + */ + mspace = pcibr_soft->bs_slot[win].bss_devio.bssd_space; + if ((try > 7) && (mspace == PCIIO_SPACE_NONE)) { + + /* If this is the primary DevIO(x) window + * for some other device, skip it. + */ + if ((win != slot) && + (PCIIO_VENDOR_ID_NONE != + pcibr_soft->bs_slot[win].bss_vendor_id)) + continue; + + /* It's a free window, and we fit in it. + * Set up Device(win) to our taste. 
+ */
+ mbase = pci_addr & mmask;
+
+ /* check that we would really get from
+ * here to there.
+ */
+ if ((mbase | offset) != pci_addr)
+ continue;
+
+ devreg &= ~BRIDGE_DEV_OFF_MASK;
+ if (space != PCIIO_SPACE_IO)
+ devreg |= BRIDGE_DEV_DEV_IO_MEM;
+ else
+ devreg &= ~BRIDGE_DEV_DEV_IO_MEM;
+ devreg |= (mbase >> 20) & BRIDGE_DEV_OFF_MASK;
+
+ /* default is WORD_VALUES.
+ * if you specify both,
+ * operation is undefined.
+ */
+ if (flags & PCIIO_BYTE_STREAM)
+ devreg |= BRIDGE_DEV_DEV_SWAP;
+ else
+ devreg &= ~BRIDGE_DEV_DEV_SWAP;
+
+ if (pcibr_soft->bs_slot[win].bss_device != devreg) {
+ bridge->b_device[win].reg = devreg;
+ pcibr_soft->bs_slot[win].bss_device = devreg;
+ bridge->b_wid_tflush; /* wait until Bridge PIO complete */
+
+#if DEBUG && PCI_DEBUG
+ printk("pcibr Device(%d): 0x%lx\n", win, bridge->b_device[win].reg);
+#endif
+ }
+ pcibr_soft->bs_slot[win].bss_devio.bssd_space = space;
+ pcibr_soft->bs_slot[win].bss_devio.bssd_base = mbase;
+ xio_addr = BRIDGE_DEVIO(win) + (pci_addr - mbase);
+
+#if DEBUG && PCI_DEBUG
+ printk("%s LINE %d map to space %d space desc 0x%x[%lx..%lx] for slot %d allocates DevIO(%d) devreg 0x%x\n",
+ __FUNCTION__, __LINE__, space, space_desc,
+ pci_addr, pci_addr + req_size - 1,
+ slot, win, devreg);
+#endif
+
+ goto done;
+ } /* endif DevIO(x) not pointed */
+ mbase = pcibr_soft->bs_slot[win].bss_devio.bssd_base;
+
+ /* Now check for request incompat with DevIO(x)
+ */
+ if ((mspace != space) ||
+ (pci_addr < mbase) ||
+ ((pci_addr + req_size) > (mbase + msize)) ||
+ ((flags & PCIIO_BYTE_STREAM) && !(devreg & BRIDGE_DEV_DEV_SWAP)) ||
+ (!(flags & PCIIO_BYTE_STREAM) && (devreg & BRIDGE_DEV_DEV_SWAP)))
+ continue;
+
+ /* DevIO(x) window is pointed at PCI space
+ * that includes our target. Calculate the
+ * final XIO address, release the lock and
+ * return.
+ */
+ xio_addr = BRIDGE_DEVIO(win) + (pci_addr - mbase);
+
+#if DEBUG && PCI_DEBUG
+ printk("%s LINE %d map to space %d [0x%p..0x%p] for slot %d uses DevIO(%d)\n",
+ __FUNCTION__, __LINE__, space, pci_addr, pci_addr + req_size - 1, slot, win);
+#endif
+ goto done;
+ }
+
+ switch (space) {
+ /*
+ * Accesses to device decode
+ * areas that do not fit
+ * within the DevIO(x) space are
+ * modified to be accesses via
+ * the direct mapping areas.
+ *
+ * If necessary, drivers can
+ * explicitly ask for mappings
+ * into these address spaces,
+ * but this should never be needed.
+ */
+ case PCIIO_SPACE_MEM: /* "mem space" */
+ case PCIIO_SPACE_MEM32: /* "mem, use 32-bit-wide bus" */
+ if ((pci_addr + BRIDGE_PCI_MEM32_BASE + req_size - 1) <=
+ BRIDGE_PCI_MEM32_LIMIT)
+ xio_addr = pci_addr + BRIDGE_PCI_MEM32_BASE;
+ break;
+
+ case PCIIO_SPACE_MEM64: /* "mem, use 64-bit-wide bus" */
+ if ((pci_addr + BRIDGE_PCI_MEM64_BASE + req_size - 1) <=
+ BRIDGE_PCI_MEM64_LIMIT)
+ xio_addr = pci_addr + BRIDGE_PCI_MEM64_BASE;
+ break;
+
+ case PCIIO_SPACE_IO: /* "i/o space" */
+ /* Bridge Hardware Bug WAR #482741:
+ * The 4G area that maps directly from
+ * XIO space to PCI I/O space is busted
+ * until Bridge Rev D.
+ */
+ if ((pcibr_soft->bs_rev_num > BRIDGE_PART_REV_C) &&
+ ((pci_addr + BRIDGE_PCI_IO_BASE + req_size - 1) <=
+ BRIDGE_PCI_IO_LIMIT))
+ xio_addr = pci_addr + BRIDGE_PCI_IO_BASE;
+ break;
+ }
+
+ /* Check that "Direct PIO" byteswapping matches,
+ * try to change it if it does not.
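+ * Three outcomes are possible: the swapper already matches and
+ * nothing needs to be written; it was previously set the other
+ * way, in which case the request fails with XIO_NOWHERE; or it
+ * is still unset, in which case BRIDGE_CTRL_IO_SWAP or
+ * BRIDGE_CTRL_MEM_SWAP is programmed and the choice recorded
+ * for later callers.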
+ */ + if (xio_addr != XIO_NOWHERE) { + unsigned bst; /* nonzero to set bytestream */ + unsigned *bfp; /* addr of record of how swapper is set */ + unsigned swb; /* which control bit to mung */ + unsigned bfo; /* current swapper setting */ + unsigned bfn; /* desired swapper setting */ + + bfp = ((space == PCIIO_SPACE_IO) + ? (&pcibr_soft->bs_pio_end_io) + : (&pcibr_soft->bs_pio_end_mem)); + + bfo = *bfp; + + bst = flags & PCIIO_BYTE_STREAM; + + bfn = bst ? PCIIO_BYTE_STREAM : PCIIO_WORD_VALUES; + + if (bfn == bfo) { /* we already match. */ + ; + } else if (bfo != 0) { /* we have a conflict. */ +#if DEBUG && PCI_DEBUG + printk("pcibr_addr_pci_to_xio: swap conflict in space %d , was%s%s, want%s%s\n", + space, + bfo & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", + bfo & PCIIO_WORD_VALUES ? " WORD_VALUES" : "", + bfn & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", + bfn & PCIIO_WORD_VALUES ? " WORD_VALUES" : ""); +#endif + xio_addr = XIO_NOWHERE; + } else { /* OK to make the change. */ + bridgereg_t octl, nctl; + + swb = (space == PCIIO_SPACE_IO) ? BRIDGE_CTRL_IO_SWAP : BRIDGE_CTRL_MEM_SWAP; + octl = bridge->b_wid_control; + nctl = bst ? octl | swb : octl & ~swb; + + if (octl != nctl) /* make the change if any */ + bridge->b_wid_control = nctl; + + *bfp = bfn; /* record the assignment */ + +#if DEBUG && PCI_DEBUG + printk("pcibr_addr_pci_to_xio: swap for space %d set to%s%s\n", + space, + bfn & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "", + bfn & PCIIO_WORD_VALUES ? " WORD_VALUES" : ""); +#endif + } + } + done: + pcibr_unlock(pcibr_soft, s); + return xio_addr; +} + +/*ARGSUSED6 */ +pcibr_piomap_t +pcibr_piomap_alloc(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + pciio_space_t space, + iopaddr_t pci_addr, + size_t req_size, + size_t req_size_max, + unsigned flags) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pciio_info_t pciio_info = &pcibr_info->f_c; + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + + pcibr_piomap_t *mapptr; + pcibr_piomap_t maplist; + pcibr_piomap_t pcibr_piomap; + iopaddr_t xio_addr; + xtalk_piomap_t xtalk_piomap; + unsigned long s; + + /* Make sure that the req sizes are non-zero */ + if ((req_size < 1) || (req_size_max < 1)) + return NULL; + + /* + * Code to translate slot/space/addr + * into xio_addr is common between + * this routine and pcibr_piotrans_addr. + */ + xio_addr = pcibr_addr_pci_to_xio(pconn_vhdl, pciio_slot, space, pci_addr, req_size, flags); + + if (xio_addr == XIO_NOWHERE) + return NULL; + + /* Check the piomap list to see if there is already an allocated + * piomap entry but not in use. If so use that one. 
Otherwise + * allocate a new piomap entry and add it to the piomap list + */ + mapptr = &(pcibr_info->f_piomap); + + s = pcibr_lock(pcibr_soft); + for (pcibr_piomap = *mapptr; + pcibr_piomap != NULL; + pcibr_piomap = pcibr_piomap->bp_next) { + if (pcibr_piomap->bp_mapsz == 0) + break; + } + + if (pcibr_piomap) + mapptr = NULL; + else { + pcibr_unlock(pcibr_soft, s); + NEW(pcibr_piomap); + } + + pcibr_piomap->bp_dev = pconn_vhdl; + pcibr_piomap->bp_slot = pciio_slot; + pcibr_piomap->bp_flags = flags; + pcibr_piomap->bp_space = space; + pcibr_piomap->bp_pciaddr = pci_addr; + pcibr_piomap->bp_mapsz = req_size; + pcibr_piomap->bp_soft = pcibr_soft; + pcibr_piomap->bp_toc[0] = ATOMIC_INIT(0); + + if (mapptr) { + s = pcibr_lock(pcibr_soft); + maplist = *mapptr; + pcibr_piomap->bp_next = maplist; + *mapptr = pcibr_piomap; + } + pcibr_unlock(pcibr_soft, s); + + + if (pcibr_piomap) { + xtalk_piomap = + xtalk_piomap_alloc(xconn_vhdl, 0, + xio_addr, + req_size, req_size_max, + flags & PIOMAP_FLAGS); + if (xtalk_piomap) { + pcibr_piomap->bp_xtalk_addr = xio_addr; + pcibr_piomap->bp_xtalk_pio = xtalk_piomap; + } else { + pcibr_piomap->bp_mapsz = 0; + pcibr_piomap = 0; + } + } + return pcibr_piomap; +} + +/*ARGSUSED */ +void +pcibr_piomap_free(pcibr_piomap_t pcibr_piomap) +{ + xtalk_piomap_free(pcibr_piomap->bp_xtalk_pio); + pcibr_piomap->bp_xtalk_pio = 0; + pcibr_piomap->bp_mapsz = 0; +} + +/*ARGSUSED */ +caddr_t +pcibr_piomap_addr(pcibr_piomap_t pcibr_piomap, + iopaddr_t pci_addr, + size_t req_size) +{ + return xtalk_piomap_addr(pcibr_piomap->bp_xtalk_pio, + pcibr_piomap->bp_xtalk_addr + + pci_addr - pcibr_piomap->bp_pciaddr, + req_size); +} + +/*ARGSUSED */ +void +pcibr_piomap_done(pcibr_piomap_t pcibr_piomap) +{ + xtalk_piomap_done(pcibr_piomap->bp_xtalk_pio); +} + +/*ARGSUSED */ +caddr_t +pcibr_piotrans_addr(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + pciio_space_t space, + iopaddr_t pci_addr, + size_t req_size, + unsigned flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + + iopaddr_t xio_addr; + + xio_addr = pcibr_addr_pci_to_xio(pconn_vhdl, pciio_slot, space, pci_addr, req_size, flags); + + if (xio_addr == XIO_NOWHERE) + return NULL; + + return xtalk_piotrans_addr(xconn_vhdl, 0, xio_addr, req_size, flags & PIOMAP_FLAGS); +} + +/* + * PIO Space allocation and management. + * Allocate and Manage the PCI PIO space (mem and io space) + * This routine is pretty simplistic at this time, and + * does pretty trivial management of allocation and freeing.. + * The current scheme is prone for fragmentation.. + * Change the scheme to use bitmaps. + */ + +/*ARGSUSED */ +iopaddr_t +pcibr_piospace_alloc(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + pciio_space_t space, + size_t req_size, + size_t alignment) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pciio_info_t pciio_info = &pcibr_info->f_c; + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + pciio_piospace_t piosp; + unsigned long s; + + iopaddr_t *pciaddr, *pcilast; + iopaddr_t start_addr; + size_t align_mask; + + /* + * Check for proper alignment + */ + ASSERT(alignment >= NBPP); + ASSERT((alignment & (alignment - 1)) == 0); + + align_mask = alignment - 1; + s = pcibr_lock(pcibr_soft); + + /* + * First look if a previously allocated chunk exists. 
+ */ + if ((piosp = pcibr_info->f_piospace)) { + /* + * Look through the list for a right sized free chunk. + */ + do { + if (piosp->free && + (piosp->space == space) && + (piosp->count >= req_size) && + !(piosp->start & align_mask)) { + piosp->free = 0; + pcibr_unlock(pcibr_soft, s); + return piosp->start; + } + piosp = piosp->next; + } while (piosp); + } + ASSERT(!piosp); + + switch (space) { + case PCIIO_SPACE_IO: + pciaddr = &pcibr_soft->bs_spinfo.pci_io_base; + pcilast = &pcibr_soft->bs_spinfo.pci_io_last; + break; + case PCIIO_SPACE_MEM: + case PCIIO_SPACE_MEM32: + pciaddr = &pcibr_soft->bs_spinfo.pci_mem_base; + pcilast = &pcibr_soft->bs_spinfo.pci_mem_last; + break; + default: + ASSERT(0); + pcibr_unlock(pcibr_soft, s); + return 0; + } + + start_addr = *pciaddr; + + /* + * Align start_addr. + */ + if (start_addr & align_mask) + start_addr = (start_addr + align_mask) & ~align_mask; + + if ((start_addr + req_size) > *pcilast) { + /* + * If too big a request, reject it. + */ + pcibr_unlock(pcibr_soft, s); + return 0; + } + *pciaddr = (start_addr + req_size); + + NEW(piosp); + piosp->free = 0; + piosp->space = space; + piosp->start = start_addr; + piosp->count = req_size; + piosp->next = pcibr_info->f_piospace; + pcibr_info->f_piospace = piosp; + + pcibr_unlock(pcibr_soft, s); + return start_addr; +} + +/*ARGSUSED */ +void +pcibr_piospace_free(devfs_handle_t pconn_vhdl, + pciio_space_t space, + iopaddr_t pciaddr, + size_t req_size) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; + + pciio_piospace_t piosp; + unsigned long s; + char name[1024]; + + /* + * Look through the bridge data structures for the pciio_piospace_t + * structure corresponding to 'pciaddr' + */ + s = pcibr_lock(pcibr_soft); + piosp = pcibr_info->f_piospace; + while (piosp) { + /* + * Piospace free can only be for the complete + * chunk and not parts of it.. + */ + if (piosp->start == pciaddr) { + if (piosp->count == req_size) + break; + /* + * Improper size passed for freeing.. + * Print a message and break; + */ + hwgraph_vertex_name_get(pconn_vhdl, name, 1024); + printk(KERN_WARNING "pcibr_piospace_free: error"); + printk(KERN_WARNING "Device %s freeing size (0x%lx) different than allocated (0x%lx)", + name, req_size, piosp->count); + printk(KERN_WARNING "Freeing 0x%lx instead", piosp->count); + break; + } + piosp = piosp->next; + } + + if (!piosp) { + printk(KERN_WARNING + "pcibr_piospace_free: Address 0x%lx size 0x%lx - No match\n", + pciaddr, req_size); + pcibr_unlock(pcibr_soft, s); + return; + } + piosp->free = 1; + pcibr_unlock(pcibr_soft, s); + return; +} + +/* ===================================================================== + * DMA MANAGEMENT + * + * The Bridge ASIC provides three methods of doing + * DMA: via a "direct map" register available in + * 32-bit PCI space (which selects a contiguous 2G + * address space on some other widget), via + * "direct" addressing via 64-bit PCI space (all + * destination information comes from the PCI + * address, including transfer attributes), and via + * a "mapped" region that allows a bunch of + * different small mappings to be established with + * the PMU. + * + * For efficiency, we most prefer to use the 32-bit + * direct mapping facility, since it requires no + * resource allocations. 
The advantage of using the + * PMU over the 64-bit direct is that single-cycle + * PCI addressing can be used; the advantage of + * using 64-bit direct over PMU addressing is that + * we do not have to allocate entries in the PMU. + */ + +/* + * Convert PCI-generic software flags and Bridge-specific software flags + * into Bridge-specific Direct Map attribute bits. + */ +static iopaddr_t +pcibr_flags_to_d64(unsigned flags, pcibr_soft_t pcibr_soft) +{ + iopaddr_t attributes = 0; + + /* Sanity check: Bridge only allows use of VCHAN1 via 64-bit addrs */ +#ifdef LATER + ASSERT_ALWAYS(!(flags & PCIBR_VCHAN1) || (flags & PCIIO_DMA_A64)); +#endif + + /* Generic macro flags + */ + if (flags & PCIIO_DMA_DATA) { /* standard data channel */ + attributes &= ~PCI64_ATTR_BAR; /* no barrier bit */ + attributes |= PCI64_ATTR_PREF; /* prefetch on */ + } + if (flags & PCIIO_DMA_CMD) { /* standard command channel */ + attributes |= PCI64_ATTR_BAR; /* barrier bit on */ + attributes &= ~PCI64_ATTR_PREF; /* disable prefetch */ + } + /* Generic detail flags + */ + if (flags & PCIIO_PREFETCH) + attributes |= PCI64_ATTR_PREF; + if (flags & PCIIO_NOPREFETCH) + attributes &= ~PCI64_ATTR_PREF; + + /* the swap bit is in the address attributes for xbridge */ + if (pcibr_soft->bs_xbridge) { + if (flags & PCIIO_BYTE_STREAM) + attributes |= PCI64_ATTR_SWAP; + if (flags & PCIIO_WORD_VALUES) + attributes &= ~PCI64_ATTR_SWAP; + } + + /* Provider-specific flags + */ + if (flags & PCIBR_BARRIER) + attributes |= PCI64_ATTR_BAR; + if (flags & PCIBR_NOBARRIER) + attributes &= ~PCI64_ATTR_BAR; + + if (flags & PCIBR_PREFETCH) + attributes |= PCI64_ATTR_PREF; + if (flags & PCIBR_NOPREFETCH) + attributes &= ~PCI64_ATTR_PREF; + + if (flags & PCIBR_PRECISE) + attributes |= PCI64_ATTR_PREC; + if (flags & PCIBR_NOPRECISE) + attributes &= ~PCI64_ATTR_PREC; + + if (flags & PCIBR_VCHAN1) + attributes |= PCI64_ATTR_VIRTUAL; + if (flags & PCIBR_VCHAN0) + attributes &= ~PCI64_ATTR_VIRTUAL; + + return (attributes); +} + +/*ARGSUSED */ +pcibr_dmamap_t +pcibr_dmamap_alloc(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + size_t req_size_max, + unsigned flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + pciio_slot_t slot; + xwidgetnum_t xio_port; + + xtalk_dmamap_t xtalk_dmamap; + pcibr_dmamap_t pcibr_dmamap; + int ate_count; + int ate_index; + + /* merge in forced flags */ + flags |= pcibr_soft->bs_dma_flags; + + /* + * On SNIA64, these maps are pre-allocated because pcibr_dmamap_alloc() + * can be called within an interrupt thread. 
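+ * get_free_pciio_dmamap() below hands out one of those
+ * pre-allocated maps; free_pciio_dmamap() returns it on the
+ * failure paths.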
+ */
+ pcibr_dmamap = (pcibr_dmamap_t)get_free_pciio_dmamap(pcibr_soft->bs_vhdl);
+
+ if (!pcibr_dmamap)
+ return 0;
+
+ xtalk_dmamap = xtalk_dmamap_alloc(xconn_vhdl, dev_desc, req_size_max,
+ flags & DMAMAP_FLAGS);
+ if (!xtalk_dmamap) {
+#if PCIBR_ATE_DEBUG
+ printk("pcibr_attach: xtalk_dmamap_alloc failed\n");
+#endif
+ free_pciio_dmamap(pcibr_dmamap);
+ return 0;
+ }
+ xio_port = pcibr_soft->bs_mxid;
+ slot = pciio_info_slot_get(pciio_info);
+
+ pcibr_dmamap->bd_dev = pconn_vhdl;
+ pcibr_dmamap->bd_slot = slot;
+ pcibr_dmamap->bd_soft = pcibr_soft;
+ pcibr_dmamap->bd_xtalk = xtalk_dmamap;
+ pcibr_dmamap->bd_max_size = req_size_max;
+ pcibr_dmamap->bd_xio_port = xio_port;
+
+ if (flags & PCIIO_DMA_A64) {
+ if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_D64_BITS)) {
+ iopaddr_t pci_addr;
+ int have_rrbs;
+ int min_rrbs;
+
+ /* Device is capable of A64 operations,
+ * and the attributes of the DMA are
+ * consistent with any previous DMA
+ * mappings using shared resources.
+ */
+
+ pci_addr = pcibr_flags_to_d64(flags, pcibr_soft);
+
+ pcibr_dmamap->bd_flags = flags;
+ pcibr_dmamap->bd_xio_addr = 0;
+ pcibr_dmamap->bd_pci_addr = pci_addr;
+
+ /* Make sure we have an RRB (or two).
+ */
+ if (!(pcibr_soft->bs_rrb_fixed & (1 << slot))) {
+ if (flags & PCIBR_VCHAN1)
+ slot += PCIBR_RRB_SLOT_VIRTUAL;
+ have_rrbs = pcibr_soft->bs_rrb_valid[slot];
+ if (have_rrbs < 2) {
+ if (pci_addr & PCI64_ATTR_PREF)
+ min_rrbs = 2;
+ else
+ min_rrbs = 1;
+ if (have_rrbs < min_rrbs)
+ do_pcibr_rrb_autoalloc(pcibr_soft, slot, min_rrbs - have_rrbs);
+ }
+ }
+#if PCIBR_ATE_DEBUG
+ printk("pcibr_dmamap_alloc: using direct64\n");
+#endif
+ return pcibr_dmamap;
+ }
+#if PCIBR_ATE_DEBUG
+ printk("pcibr_dmamap_alloc: unable to use direct64\n");
+#endif
+ flags &= ~PCIIO_DMA_A64;
+ }
+ if (flags & PCIIO_FIXED) {
+ /* warning: mappings may fail later,
+ * if direct32 can't get to the address.
+ */
+ if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_D32_BITS)) {
+ /* User desires DIRECT A32 operations,
+ * and the attributes of the DMA are
+ * consistent with any previous DMA
+ * mappings using shared resources.
+ * Mapping calls may fail if target
+ * is outside the direct32 range.
+ */
+#if PCIBR_ATE_DEBUG
+ printk("pcibr_dmamap_alloc: using direct32\n");
+#endif
+ pcibr_dmamap->bd_flags = flags;
+ pcibr_dmamap->bd_xio_addr = pcibr_soft->bs_dir_xbase;
+ pcibr_dmamap->bd_pci_addr = PCI32_DIRECT_BASE;
+ return pcibr_dmamap;
+ }
+#if PCIBR_ATE_DEBUG
+ printk("pcibr_dmamap_alloc: unable to use direct32\n");
+#endif
+ /* If the user demands FIXED and we can't
+ * give it to him, fail.
+ */
+ xtalk_dmamap_free(xtalk_dmamap);
+ free_pciio_dmamap(pcibr_dmamap);
+ return 0;
+ }
+ /*
+ * Allocate Address Translation Entries from the mapping RAM.
+ * Unless the PCIBR_NO_ATE_ROUNDUP flag is specified,
+ * the maximum number of ATEs is based on the worst-case
+ * scenario, where the requested target is in the
+ * last byte of an ATE; thus, mapping IOPGSIZE+2
+ * does end up requiring three ATEs.
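+ *
+ * For example, a request of IOPGSIZE+2 bytes rounds up as
+ *     IOPG((IOPGSIZE - 1) + (IOPGSIZE + 2) - 1) + 1
+ *         = IOPG(2 * IOPGSIZE) + 1 = 3,
+ * since a transfer starting on the last byte of one page touches
+ * that page, all of the next, and the first byte of a third.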
+ */ + if (!(flags & PCIBR_NO_ATE_ROUNDUP)) { + ate_count = IOPG((IOPGSIZE - 1) /* worst case start offset */ + +req_size_max /* max mapping bytes */ + - 1) + 1; /* round UP */ + } else { /* assume requested target is page aligned */ + ate_count = IOPG(req_size_max /* max mapping bytes */ + - 1) + 1; /* round UP */ + } + + ate_index = pcibr_ate_alloc(pcibr_soft, ate_count); + + if (ate_index != -1) { + if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_PMU_BITS)) { + bridge_ate_t ate_proto; + int have_rrbs; + int min_rrbs; + +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: using PMU\n"); +#endif + + ate_proto = pcibr_flags_to_ate(flags); + + pcibr_dmamap->bd_flags = flags; + pcibr_dmamap->bd_pci_addr = + PCI32_MAPPED_BASE + IOPGSIZE * ate_index; + /* + * for xbridge the byte-swap bit == bit 29 of PCI address + */ + if (pcibr_soft->bs_xbridge) { + if (flags & PCIIO_BYTE_STREAM) + ATE_SWAP_ON(pcibr_dmamap->bd_pci_addr); + /* + * If swap was set in bss_device in pcibr_endian_set() + * we need to change the address bit. + */ + if (pcibr_soft->bs_slot[slot].bss_device & + BRIDGE_DEV_SWAP_PMU) + ATE_SWAP_ON(pcibr_dmamap->bd_pci_addr); + if (flags & PCIIO_WORD_VALUES) + ATE_SWAP_OFF(pcibr_dmamap->bd_pci_addr); + } + pcibr_dmamap->bd_xio_addr = 0; + pcibr_dmamap->bd_ate_ptr = pcibr_ate_addr(pcibr_soft, ate_index); + pcibr_dmamap->bd_ate_index = ate_index; + pcibr_dmamap->bd_ate_count = ate_count; + pcibr_dmamap->bd_ate_proto = ate_proto; + + /* Make sure we have an RRB (or two). + */ + if (!(pcibr_soft->bs_rrb_fixed & (1 << slot))) { + have_rrbs = pcibr_soft->bs_rrb_valid[slot]; + if (have_rrbs < 2) { + if (ate_proto & ATE_PREF) + min_rrbs = 2; + else + min_rrbs = 1; + if (have_rrbs < min_rrbs) + do_pcibr_rrb_autoalloc(pcibr_soft, slot, min_rrbs - have_rrbs); + } + } + if (ate_index >= pcibr_soft->bs_int_ate_size && + !pcibr_soft->bs_xbridge) { + bridge_t *bridge = pcibr_soft->bs_base; + volatile unsigned *cmd_regp; + unsigned cmd_reg; + unsigned long s; + + pcibr_dmamap->bd_flags |= PCIBR_DMAMAP_SSRAM; + + s = pcibr_lock(pcibr_soft); + cmd_regp = &(bridge-> + b_type0_cfg_dev[slot]. + l[PCI_CFG_COMMAND / 4]); + cmd_reg = *cmd_regp; + pcibr_soft->bs_slot[slot].bss_cmd_pointer = cmd_regp; + pcibr_soft->bs_slot[slot].bss_cmd_shadow = cmd_reg; + pcibr_unlock(pcibr_soft, s); + } + return pcibr_dmamap; + } +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: unable to use PMU\n"); +#endif + pcibr_ate_free(pcibr_soft, ate_index, ate_count); + } + /* total failure: sorry, you just can't + * get from here to there that way. + */ +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_alloc: complete failure.\n"); +#endif + xtalk_dmamap_free(xtalk_dmamap); + free_pciio_dmamap(pcibr_dmamap); + return 0; +} + +/*ARGSUSED */ +void +pcibr_dmamap_free(pcibr_dmamap_t pcibr_dmamap) +{ + pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft; + pciio_slot_t slot = pcibr_dmamap->bd_slot; + + unsigned flags = pcibr_dmamap->bd_flags; + + /* Make sure that bss_ext_ates_active + * is properly kept up to date. + */ + + if (PCIBR_DMAMAP_BUSY & flags) + if (PCIBR_DMAMAP_SSRAM & flags) + atomic_dec(&(pcibr_soft->bs_slot[slot]. 
bss_ext_ates_active)); + + xtalk_dmamap_free(pcibr_dmamap->bd_xtalk); + + if (pcibr_dmamap->bd_flags & PCIIO_DMA_A64) { + pcibr_release_device(pcibr_soft, slot, BRIDGE_DEV_D64_BITS); + } + if (pcibr_dmamap->bd_ate_count) { + pcibr_ate_free(pcibr_dmamap->bd_soft, + pcibr_dmamap->bd_ate_index, + pcibr_dmamap->bd_ate_count); + pcibr_release_device(pcibr_soft, slot, BRIDGE_DEV_PMU_BITS); + } + + free_pciio_dmamap(pcibr_dmamap); +} + +/* + * pcibr_addr_xio_to_pci: given a PIO range, hand + * back the corresponding base PCI MEM address; + * this is used to short-circuit DMA requests that + * loop back onto this PCI bus. + */ +static iopaddr_t +pcibr_addr_xio_to_pci(pcibr_soft_t soft, + iopaddr_t xio_addr, + size_t req_size) +{ + iopaddr_t xio_lim = xio_addr + req_size - 1; + iopaddr_t pci_addr; + pciio_slot_t slot; + + if ((xio_addr >= BRIDGE_PCI_MEM32_BASE) && + (xio_lim <= BRIDGE_PCI_MEM32_LIMIT)) { + pci_addr = xio_addr - BRIDGE_PCI_MEM32_BASE; + return pci_addr; + } + if ((xio_addr >= BRIDGE_PCI_MEM64_BASE) && + (xio_lim <= BRIDGE_PCI_MEM64_LIMIT)) { + pci_addr = xio_addr - BRIDGE_PCI_MEM64_BASE; + return pci_addr; + } + for (slot = 0; slot < 8; ++slot) + if ((xio_addr >= BRIDGE_DEVIO(slot)) && + (xio_lim < BRIDGE_DEVIO(slot + 1))) { + bridgereg_t dev; + + dev = soft->bs_slot[slot].bss_device; + pci_addr = dev & BRIDGE_DEV_OFF_MASK; + pci_addr <<= BRIDGE_DEV_OFF_ADDR_SHFT; + pci_addr += xio_addr - BRIDGE_DEVIO(slot); + return (dev & BRIDGE_DEV_DEV_IO_MEM) ? pci_addr : PCI_NOWHERE; + } + return 0; +} + +/*ARGSUSED */ +iopaddr_t +pcibr_dmamap_addr(pcibr_dmamap_t pcibr_dmamap, + paddr_t paddr, + size_t req_size) +{ + pcibr_soft_t pcibr_soft; + iopaddr_t xio_addr; + xwidgetnum_t xio_port; + iopaddr_t pci_addr; + unsigned flags; + + ASSERT(pcibr_dmamap != NULL); + ASSERT(req_size > 0); + ASSERT(req_size <= pcibr_dmamap->bd_max_size); + + pcibr_soft = pcibr_dmamap->bd_soft; + + flags = pcibr_dmamap->bd_flags; + + xio_addr = xtalk_dmamap_addr(pcibr_dmamap->bd_xtalk, paddr, req_size); + if (XIO_PACKED(xio_addr)) { + xio_port = XIO_PORT(xio_addr); + xio_addr = XIO_ADDR(xio_addr); + } else + xio_port = pcibr_dmamap->bd_xio_port; + + /* If this DMA is to an address that + * refers back to this Bridge chip, + * reduce it back to the correct + * PCI MEM address. + */ + if (xio_port == pcibr_soft->bs_xid) { + pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, req_size); + } else if (flags & PCIIO_DMA_A64) { + /* A64 DMA: + * always use 64-bit direct mapping, + * which always works. + * Device(x) was set up during + * dmamap allocation. + */ + + /* attributes are already bundled up into bd_pci_addr. + */ + pci_addr = pcibr_dmamap->bd_pci_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT) + | xio_addr; + + /* Bridge Hardware WAR #482836: + * If the transfer is not cache aligned + * and the Bridge Rev is <= B, force + * prefetch to be off. + */ + if (flags & PCIBR_NOPREFETCH) + pci_addr &= ~PCI64_ATTR_PREF; + +#if DEBUG && PCIBR_DMA_DEBUG + printk("pcibr_dmamap_addr (direct64):\n" + "\twanted paddr [0x%x..0x%x]\n" + "\tXIO port 0x%x offset 0x%x\n" + "\treturning PCI 0x%x\n", + paddr, paddr + req_size - 1, + xio_port, xio_addr, pci_addr); +#endif + } else if (flags & PCIIO_FIXED) { + /* A32 direct DMA: + * always use 32-bit direct mapping, + * which may fail. + * Device(x) was set up during + * dmamap allocation. 
+ */ + + if (xio_port != pcibr_soft->bs_dir_xport) + pci_addr = 0; /* wrong DIDN */ + else if (xio_addr < pcibr_dmamap->bd_xio_addr) + pci_addr = 0; /* out of range */ + else if ((xio_addr + req_size) > + (pcibr_dmamap->bd_xio_addr + BRIDGE_DMA_DIRECT_SIZE)) + pci_addr = 0; /* out of range */ + else + pci_addr = pcibr_dmamap->bd_pci_addr + + xio_addr - pcibr_dmamap->bd_xio_addr; + +#if DEBUG && PCIBR_DMA_DEBUG + printk("pcibr_dmamap_addr (direct32):\n" + "\twanted paddr [0x%x..0x%x]\n" + "\tXIO port 0x%x offset 0x%x\n" + "\treturning PCI 0x%x\n", + paddr, paddr + req_size - 1, + xio_port, xio_addr, pci_addr); +#endif + } else { + bridge_t *bridge = pcibr_soft->bs_base; + iopaddr_t offset = IOPGOFF(xio_addr); + bridge_ate_t ate_proto = pcibr_dmamap->bd_ate_proto; + int ate_count = IOPG(offset + req_size - 1) + 1; + + int ate_index = pcibr_dmamap->bd_ate_index; + unsigned cmd_regs[8]; + unsigned s; + +#if PCIBR_FREEZE_TIME + int ate_total = ate_count; + unsigned freeze_time; +#endif + +#if PCIBR_ATE_DEBUG + bridge_ate_t ate_cmp; + bridge_ate_p ate_cptr; + unsigned ate_lo, ate_hi; + int ate_bad = 0; + int ate_rbc = 0; +#endif + bridge_ate_p ate_ptr = pcibr_dmamap->bd_ate_ptr; + bridge_ate_t ate; + + /* Bridge Hardware WAR #482836: + * If the transfer is not cache aligned + * and the Bridge Rev is <= B, force + * prefetch to be off. + */ + if (flags & PCIBR_NOPREFETCH) + ate_proto &= ~ATE_PREF; + + ate = ate_proto + | (xio_port << ATE_TIDSHIFT) + | (xio_addr - offset); + + pci_addr = pcibr_dmamap->bd_pci_addr + offset; + + /* Fill in our mapping registers + * with the appropriate xtalk data, + * and hand back the PCI address. + */ + + ASSERT(ate_count > 0); + if (ate_count <= pcibr_dmamap->bd_ate_count) { + ATE_FREEZE(); + ATE_WRITE(); + ATE_THAW(); + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } else { + /* The number of ATE's required is greater than the number + * allocated for this map. One way this can happen is if + * pcibr_dmamap_alloc() was called with the PCIBR_NO_ATE_ROUNDUP + * flag, and then when that map is used (right now), the + * target address tells us we really did need to roundup. + * The other possibility is that the map is just plain too + * small to handle the requested target area. + */ +#if PCIBR_ATE_DEBUG + printk(KERN_WARNING "pcibr_dmamap_addr :\n" + "\twanted paddr [0x%x..0x%x]\n" + "\tate_count 0x%x bd_ate_count 0x%x\n" + "\tATE's required > number allocated\n", + paddr, paddr + req_size - 1, + ate_count, pcibr_dmamap->bd_ate_count); +#endif + pci_addr = 0; + } + + } + return pci_addr; +} + +/*ARGSUSED */ +alenlist_t +pcibr_dmamap_list(pcibr_dmamap_t pcibr_dmamap, + alenlist_t palenlist, + unsigned flags) +{ + pcibr_soft_t pcibr_soft; + bridge_t *bridge=NULL; + + unsigned al_flags = (flags & PCIIO_NOSLEEP) ? 
AL_NOSLEEP : 0; + int inplace = flags & PCIIO_INPLACE; + + alenlist_t pciio_alenlist = 0; + alenlist_t xtalk_alenlist; + size_t length; + iopaddr_t offset; + unsigned direct64; + int ate_index = 0; + int ate_count = 0; + int ate_total = 0; + bridge_ate_p ate_ptr = (bridge_ate_p)0; + bridge_ate_t ate_proto = (bridge_ate_t)0; + bridge_ate_t ate_prev; + bridge_ate_t ate; + alenaddr_t xio_addr; + xwidgetnum_t xio_port; + iopaddr_t pci_addr; + alenaddr_t new_addr; + unsigned cmd_regs[8]; + unsigned s = 0; + +#if PCIBR_FREEZE_TIME + unsigned freeze_time; +#endif + int ate_freeze_done = 0; /* To pair ATE_THAW + * with an ATE_FREEZE + */ + + pcibr_soft = pcibr_dmamap->bd_soft; + + xtalk_alenlist = xtalk_dmamap_list(pcibr_dmamap->bd_xtalk, palenlist, + flags & DMAMAP_FLAGS); + if (!xtalk_alenlist) + goto fail; + + alenlist_cursor_init(xtalk_alenlist, 0, NULL); + + if (inplace) { + pciio_alenlist = xtalk_alenlist; + } else { + pciio_alenlist = alenlist_create(al_flags); + if (!pciio_alenlist) + goto fail; + } + + direct64 = pcibr_dmamap->bd_flags & PCIIO_DMA_A64; + if (!direct64) { + bridge = pcibr_soft->bs_base; + ate_ptr = pcibr_dmamap->bd_ate_ptr; + ate_index = pcibr_dmamap->bd_ate_index; + ate_proto = pcibr_dmamap->bd_ate_proto; + ATE_FREEZE(); + ate_freeze_done = 1; /* Remember that we need to do an ATE_THAW */ + } + pci_addr = pcibr_dmamap->bd_pci_addr; + + ate_prev = 0; /* matches no valid ATEs */ + while (ALENLIST_SUCCESS == + alenlist_get(xtalk_alenlist, NULL, 0, + &xio_addr, &length, al_flags)) { + if (XIO_PACKED(xio_addr)) { + xio_port = XIO_PORT(xio_addr); + xio_addr = XIO_ADDR(xio_addr); + } else + xio_port = pcibr_dmamap->bd_xio_port; + + if (xio_port == pcibr_soft->bs_xid) { + new_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, length); + if (new_addr == PCI_NOWHERE) + goto fail; + } else if (direct64) { + new_addr = pci_addr | xio_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); + + /* Bridge Hardware WAR #482836: + * If the transfer is not cache aligned + * and the Bridge Rev is <= B, force + * prefetch to be off. + */ + if (flags & PCIBR_NOPREFETCH) + new_addr &= ~PCI64_ATTR_PREF; + + } else { + /* calculate the ate value for + * the first address. If it + * matches the previous + * ATE written (ie. we had + * multiple blocks in the + * same IOPG), then back up + * and reuse that ATE. + * + * We are NOT going to + * aggressively try to + * reuse any other ATEs. + */ + offset = IOPGOFF(xio_addr); + ate = ate_proto + | (xio_port << ATE_TIDSHIFT) + | (xio_addr - offset); + if (ate == ate_prev) { +#if PCIBR_ATE_DEBUG + printk("pcibr_dmamap_list: ATE share\n"); +#endif + ate_ptr--; + ate_index--; + pci_addr -= IOPGSIZE; + } + new_addr = pci_addr + offset; + + /* Fill in the hardware ATEs + * that contain this block. + */ + ate_count = IOPG(offset + length - 1) + 1; + ate_total += ate_count; + + /* Ensure that this map contains enough ATE's */ + if (ate_total > pcibr_dmamap->bd_ate_count) { +#if PCIBR_ATE_DEBUG + printk(KERN_WARNING "pcibr_dmamap_list :\n" + "\twanted xio_addr [0x%x..0x%x]\n" + "\tate_total 0x%x bd_ate_count 0x%x\n" + "\tATE's required > number allocated\n", + xio_addr, xio_addr + length - 1, + ate_total, pcibr_dmamap->bd_ate_count); +#endif + goto fail; + } + + ATE_WRITE(); + + ate_index += ate_count; + ate_ptr += ate_count; + + ate_count <<= IOPFNSHIFT; + ate += ate_count; + pci_addr += ate_count; + } + + /* write the PCI DMA address + * out to the scatter-gather list. 
+ */ + if (inplace) { + if (ALENLIST_SUCCESS != + alenlist_replace(pciio_alenlist, NULL, + &new_addr, &length, al_flags)) + goto fail; + } else { + if (ALENLIST_SUCCESS != + alenlist_append(pciio_alenlist, + new_addr, length, al_flags)) + goto fail; + } + } + if (!inplace) + alenlist_done(xtalk_alenlist); + + /* Reset the internal cursor of the alenlist to be returned back + * to the caller. + */ + alenlist_cursor_init(pciio_alenlist, 0, NULL); + + + /* In case an ATE_FREEZE was done do the ATE_THAW to unroll all the + * changes that ATE_FREEZE has done to implement the external SSRAM + * bug workaround. + */ + if (ate_freeze_done) { + ATE_THAW(); + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } + return pciio_alenlist; + + fail: + /* There are various points of failure after doing an ATE_FREEZE + * We need to do an ATE_THAW. Otherwise the ATEs are locked forever. + * The decision to do an ATE_THAW needs to be based on whether a + * an ATE_FREEZE was done before. + */ + if (ate_freeze_done) { + ATE_THAW(); + bridge->b_wid_tflush; + } + if (pciio_alenlist && !inplace) + alenlist_destroy(pciio_alenlist); + return 0; +} + +/*ARGSUSED */ +void +pcibr_dmamap_done(pcibr_dmamap_t pcibr_dmamap) +{ + /* + * We could go through and invalidate ATEs here; + * for performance reasons, we don't. + * We also don't enforce the strict alternation + * between _addr/_list and _done, but Hub does. + */ + + if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_BUSY) { + pcibr_dmamap->bd_flags &= ~PCIBR_DMAMAP_BUSY; + + if (pcibr_dmamap->bd_flags & PCIBR_DMAMAP_SSRAM) + atomic_dec(&(pcibr_dmamap->bd_soft->bs_slot[pcibr_dmamap->bd_slot]. bss_ext_ates_active)); + } + xtalk_dmamap_done(pcibr_dmamap->bd_xtalk); +} + + +/* + * For each bridge, the DIR_OFF value in the Direct Mapping Register + * determines the PCI to Crosstalk memory mapping to be used for all + * 32-bit Direct Mapping memory accesses. This mapping can be to any + * node in the system. This function will return that compact node id. + */ + +/*ARGSUSED */ +cnodeid_t +pcibr_get_dmatrans_node(devfs_handle_t pconn_vhdl) +{ + + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + return(NASID_TO_COMPACT_NODEID(NASID_GET(pcibr_soft->bs_dir_xbase))); +} + +/*ARGSUSED */ +iopaddr_t +pcibr_dmatrans_addr(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + paddr_t paddr, + size_t req_size, + unsigned flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[pciio_slot]; + + xwidgetnum_t xio_port; + iopaddr_t xio_addr; + iopaddr_t pci_addr; + + int have_rrbs; + int min_rrbs; + + /* merge in forced flags */ + flags |= pcibr_soft->bs_dma_flags; + + xio_addr = xtalk_dmatrans_addr(xconn_vhdl, 0, paddr, req_size, + flags & DMAMAP_FLAGS); + + if (!xio_addr) { +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + return 0; + } + /* + * find which XIO port this goes to. 
+ */ + if (XIO_PACKED(xio_addr)) { + if (xio_addr == XIO_NOWHERE) { +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + return 0; + } + xio_port = XIO_PORT(xio_addr); + xio_addr = XIO_ADDR(xio_addr); + + } else + xio_port = pcibr_soft->bs_mxid; + + /* + * If this DMA comes back to us, + * return the PCI MEM address on + * which it would land, or NULL + * if the target is something + * on bridge other than PCI MEM. + */ + if (xio_port == pcibr_soft->bs_xid) { + pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, req_size); + return pci_addr; + } + /* If the caller can use A64, try to + * satisfy the request with the 64-bit + * direct map. This can fail if the + * configuration bits in Device(x) + * conflict with our flags. + */ + + if (flags & PCIIO_DMA_A64) { + pci_addr = slotp->bss_d64_base; + if (!(flags & PCIBR_VCHAN1)) + flags |= PCIBR_VCHAN0; + if ((pci_addr != PCIBR_D64_BASE_UNSET) && + (flags == slotp->bss_d64_flags)) { + + pci_addr |= xio_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); + +#if DEBUG && PCIBR_DMA_DEBUG +#if HWG_PERF_CHECK + if (xio_addr != 0x20000000) +#endif + printk("pcibr_dmatrans_addr: [reuse]\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tdirect 64bit address is 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr, pci_addr); +#endif + return (pci_addr); + } + if (!pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D64_BITS)) { + pci_addr = pcibr_flags_to_d64(flags, pcibr_soft); + slotp->bss_d64_flags = flags; + slotp->bss_d64_base = pci_addr; + pci_addr |= xio_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); + + /* Make sure we have an RRB (or two). + */ + if (!(pcibr_soft->bs_rrb_fixed & (1 << pciio_slot))) { + if (flags & PCIBR_VCHAN1) + pciio_slot += PCIBR_RRB_SLOT_VIRTUAL; + have_rrbs = pcibr_soft->bs_rrb_valid[pciio_slot]; + if (have_rrbs < 2) { + if (pci_addr & PCI64_ATTR_PREF) + min_rrbs = 2; + else + min_rrbs = 1; + if (have_rrbs < min_rrbs) + do_pcibr_rrb_autoalloc(pcibr_soft, pciio_slot, min_rrbs - have_rrbs); + } + } +#if PCIBR_DMA_DEBUG +#if HWG_PERF_CHECK + if (xio_addr != 0x20000000) +#endif + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tdirect 64bit address is 0x%x\n" + "\tnew flags: 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr, pci_addr, (uint64_t) flags); +#endif + return (pci_addr); + } + /* our flags conflict with Device(x). + */ + flags = flags + & ~PCIIO_DMA_A64 + & ~PCIBR_VCHAN0 + ; + +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tUnable to set Device(x) bits for Direct-64\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + } + /* Try to satisfy the request with the 32-bit direct + * map. This can fail if the configuration bits in + * Device(x) conflict with our flags, or if the + * target address is outside where DIR_OFF points. 
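+ *
+ * Concretely, the transfer must target the widget that DIR_OFF
+ * points at (bs_dir_xport) and must fall entirely inside the
+ * 2 GB window starting at bs_dir_xbase; anything outside that
+ * range is rejected below.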
+ */ + { + size_t map_size = 1ULL << 31; + iopaddr_t xio_base = pcibr_soft->bs_dir_xbase; + iopaddr_t offset = xio_addr - xio_base; + iopaddr_t endoff = req_size + offset; + + if ((req_size > map_size) || + (xio_addr < xio_base) || + (xio_port != pcibr_soft->bs_dir_xport) || + (endoff > map_size)) { +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\txio region outside direct32 target\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + } else { + pci_addr = slotp->bss_d32_base; + if ((pci_addr != PCIBR_D32_BASE_UNSET) && + (flags == slotp->bss_d32_flags)) { + + pci_addr |= offset; + +#if DEBUG && PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr: [reuse]\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tmapped via direct32 offset 0x%x\n" + "\twill DMA via pci addr 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr, offset, pci_addr); +#endif + return (pci_addr); + } + if (!pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D32_BITS)) { + + pci_addr = PCI32_DIRECT_BASE; + slotp->bss_d32_flags = flags; + slotp->bss_d32_base = pci_addr; + pci_addr |= offset; + + /* Make sure we have an RRB (or two). + */ + if (!(pcibr_soft->bs_rrb_fixed & (1 << pciio_slot))) { + have_rrbs = pcibr_soft->bs_rrb_valid[pciio_slot]; + if (have_rrbs < 2) { + if (slotp->bss_device & BRIDGE_DEV_PREF) + min_rrbs = 2; + else + min_rrbs = 1; + if (have_rrbs < min_rrbs) + do_pcibr_rrb_autoalloc(pcibr_soft, pciio_slot, min_rrbs - have_rrbs); + } + } +#if PCIBR_DMA_DEBUG +#if HWG_PERF_CHECK + if (xio_addr != 0x20000000) +#endif + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tmapped via direct32 offset 0x%x\n" + "\twill DMA via pci addr 0x%x\n" + "\tnew flags: 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr, offset, pci_addr, (uint64_t) flags); +#endif + return (pci_addr); + } + /* our flags conflict with Device(x). 
+ */ +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tUnable to set Device(x) bits for Direct-32\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + } + } + +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n" + "\tno acceptable PCI address found or constructable\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + + return 0; +} + +/*ARGSUSED */ +alenlist_t +pcibr_dmatrans_list(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + alenlist_t palenlist, + unsigned flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[pciio_slot]; + xwidgetnum_t xio_port; + + alenlist_t pciio_alenlist = 0; + alenlist_t xtalk_alenlist = 0; + + int inplace; + unsigned direct64; + unsigned al_flags; + + iopaddr_t xio_base; + alenaddr_t xio_addr; + size_t xio_size; + + size_t map_size; + iopaddr_t pci_base; + alenaddr_t pci_addr; + + unsigned relbits = 0; + + /* merge in forced flags */ + flags |= pcibr_soft->bs_dma_flags; + + inplace = flags & PCIIO_INPLACE; + direct64 = flags & PCIIO_DMA_A64; + al_flags = (flags & PCIIO_NOSLEEP) ? AL_NOSLEEP : 0; + + if (direct64) { + map_size = 1ull << 48; + xio_base = 0; + pci_base = slotp->bss_d64_base; + if ((pci_base != PCIBR_D64_BASE_UNSET) && + (flags == slotp->bss_d64_flags)) { + /* reuse previous base info */ + } else if (pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D64_BITS) < 0) { + /* DMA configuration conflict */ + goto fail; + } else { + relbits = BRIDGE_DEV_D64_BITS; + pci_base = + pcibr_flags_to_d64(flags, pcibr_soft); + } + } else { + xio_base = pcibr_soft->bs_dir_xbase; + map_size = 1ull << 31; + pci_base = slotp->bss_d32_base; + if ((pci_base != PCIBR_D32_BASE_UNSET) && + (flags == slotp->bss_d32_flags)) { + /* reuse previous base info */ + } else if (pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D32_BITS) < 0) { + /* DMA configuration conflict */ + goto fail; + } else { + relbits = BRIDGE_DEV_D32_BITS; + pci_base = PCI32_DIRECT_BASE; + } + } + + xtalk_alenlist = xtalk_dmatrans_list(xconn_vhdl, 0, palenlist, + flags & DMAMAP_FLAGS); + if (!xtalk_alenlist) + goto fail; + + alenlist_cursor_init(xtalk_alenlist, 0, NULL); + + if (inplace) { + pciio_alenlist = xtalk_alenlist; + } else { + pciio_alenlist = alenlist_create(al_flags); + if (!pciio_alenlist) + goto fail; + } + + while (ALENLIST_SUCCESS == + alenlist_get(xtalk_alenlist, NULL, 0, + &xio_addr, &xio_size, al_flags)) { + + /* + * find which XIO port this goes to. 
+ */ + if (XIO_PACKED(xio_addr)) { + if (xio_addr == XIO_NOWHERE) { +#if PCIBR_DMA_DEBUG + printk("pcibr_dmatrans_addr:\n" + "\tpciio connection point %v\n" + "\txtalk connection point %v\n" + "\twanted paddr [0x%x..0x%x]\n" + "\txtalk_dmatrans_addr returned 0x%x\n", + pconn_vhdl, xconn_vhdl, + paddr, paddr + req_size - 1, + xio_addr); +#endif + return 0; + } + xio_port = XIO_PORT(xio_addr); + xio_addr = XIO_ADDR(xio_addr); + } else + xio_port = pcibr_soft->bs_mxid; + + /* + * If this DMA comes back to us, + * return the PCI MEM address on + * which it would land, or NULL + * if the target is something + * on bridge other than PCI MEM. + */ + if (xio_port == pcibr_soft->bs_xid) { + pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, xio_size); + if ( (pci_addr == (alenaddr_t)NULL) ) + goto fail; + } else if (direct64) { + ASSERT(xio_port != 0); + pci_addr = pci_base | xio_addr + | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT); + } else { + iopaddr_t offset = xio_addr - xio_base; + iopaddr_t endoff = xio_size + offset; + + if ((xio_size > map_size) || + (xio_addr < xio_base) || + (xio_port != pcibr_soft->bs_dir_xport) || + (endoff > map_size)) + goto fail; + + pci_addr = pci_base + (xio_addr - xio_base); + } + + /* write the PCI DMA address + * out to the scatter-gather list. + */ + if (inplace) { + if (ALENLIST_SUCCESS != + alenlist_replace(pciio_alenlist, NULL, + &pci_addr, &xio_size, al_flags)) + goto fail; + } else { + if (ALENLIST_SUCCESS != + alenlist_append(pciio_alenlist, + pci_addr, xio_size, al_flags)) + goto fail; + } + } + + if (relbits) { + if (direct64) { + slotp->bss_d64_flags = flags; + slotp->bss_d64_base = pci_base; + } else { + slotp->bss_d32_flags = flags; + slotp->bss_d32_base = pci_base; + } + } + if (!inplace) + alenlist_done(xtalk_alenlist); + + /* Reset the internal cursor of the alenlist to be returned back + * to the caller. + */ + alenlist_cursor_init(pciio_alenlist, 0, NULL); + return pciio_alenlist; + + fail: + if (relbits) + pcibr_release_device(pcibr_soft, pciio_slot, relbits); + if (pciio_alenlist && !inplace) + alenlist_destroy(pciio_alenlist); + return 0; +} + +void +pcibr_dmamap_drain(pcibr_dmamap_t map) +{ + xtalk_dmamap_drain(map->bd_xtalk); +} + +void +pcibr_dmaaddr_drain(devfs_handle_t pconn_vhdl, + paddr_t paddr, + size_t bytes) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + + xtalk_dmaaddr_drain(xconn_vhdl, paddr, bytes); +} + +void +pcibr_dmalist_drain(devfs_handle_t pconn_vhdl, + alenlist_t list) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + + xtalk_dmalist_drain(xconn_vhdl, list); +} + +/* + * Get the starting PCIbus address out of the given DMA map. + * This function is supposed to be used by a close friend of PCI bridge + * since it relies on the fact that the starting address of the map is fixed at + * the allocation time in the current implementation of PCI bridge. 
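+ *
+ * A hypothetical use (the pcibr_dmamap_alloc arguments are deliberately
+ * elided; only the accessor defined below is shown):
+ *
+ *	pcibr_dmamap_t map = pcibr_dmamap_alloc(...);
+ *	iopaddr_t pci_base = pcibr_dmamap_pciaddr_get(map);
+ *	-- pci_base remains valid for the life of the map, per the note above.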
+ */ +iopaddr_t +pcibr_dmamap_pciaddr_get(pcibr_dmamap_t pcibr_dmamap) +{ + return (pcibr_dmamap->bd_pci_addr); +} + +/* ===================================================================== + * CONFIGURATION MANAGEMENT + */ +/*ARGSUSED */ +void +pcibr_provider_startup(devfs_handle_t pcibr) +{ +} + +/*ARGSUSED */ +void +pcibr_provider_shutdown(devfs_handle_t pcibr) +{ +} + +int +pcibr_reset(devfs_handle_t conn) +{ + pciio_info_t pciio_info = pciio_info_get(conn); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + bridgereg_t ctlreg; + unsigned cfgctl[8]; + unsigned long s; + int f, nf; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + int win; + + if (pcibr_soft->bs_slot[pciio_slot].has_host) { + pciio_slot = pcibr_soft->bs_slot[pciio_slot].host_slot; + pcibr_info = pcibr_soft->bs_slot[pciio_slot].bss_infos[0]; + } + if (pciio_slot < 4) { + s = pcibr_lock(pcibr_soft); + nf = pcibr_soft->bs_slot[pciio_slot].bss_ninfo; + pcibr_infoh = pcibr_soft->bs_slot[pciio_slot].bss_infos; + for (f = 0; f < nf; ++f) + if (pcibr_infoh[f]) + cfgctl[f] = bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_COMMAND / 4]; + + ctlreg = bridge->b_wid_control; + bridge->b_wid_control = ctlreg | BRIDGE_CTRL_RST(pciio_slot); + /* XXX delay? */ + bridge->b_wid_control = ctlreg; + /* XXX delay? */ + + for (f = 0; f < nf; ++f) + if ((pcibr_info = pcibr_infoh[f])) + for (win = 0; win < 6; ++win) + if (pcibr_info->f_window[win].w_base != 0) + bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_BASE_ADDR(win) / 4] = + pcibr_info->f_window[win].w_base; + for (f = 0; f < nf; ++f) + if (pcibr_infoh[f]) + bridge->b_type0_cfg_dev[pciio_slot].f[f].l[PCI_CFG_COMMAND / 4] = cfgctl[f]; + pcibr_unlock(pcibr_soft, s); + + return 0; + } +#ifdef SUPPORT_PRINTING_V_FORMAT + printk(KERN_WARNING "%v: pcibr_reset unimplemented for slot %d\n", + conn, pciio_slot); +#endif + return -1; +} + +pciio_endian_t +pcibr_endian_set(devfs_handle_t pconn_vhdl, + pciio_endian_t device_end, + pciio_endian_t desired_end) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridgereg_t devreg; + unsigned long s; + + /* + * Bridge supports hardware swapping; so we can always + * arrange for the caller's desired endianness. + */ + + s = pcibr_lock(pcibr_soft); + devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; + if (device_end != desired_end) + devreg |= BRIDGE_DEV_SWAP_BITS; + else + devreg &= ~BRIDGE_DEV_SWAP_BITS; + + /* NOTE- if we ever put SWAP bits + * onto the disabled list, we will + * have to change the logic here. + */ + if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { + bridge_t *bridge = pcibr_soft->bs_base; + + bridge->b_device[pciio_slot].reg = devreg; + pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } + pcibr_unlock(pcibr_soft, s); + +#if DEBUG && PCIBR_DEV_DEBUG + printk("pcibr Device(%d): 0x%p\n", pciio_slot, bridge->b_device[pciio_slot].reg); +#endif + + return desired_end; +} + +/* This (re)sets the GBR and REALTIME bits and also keeps track of how + * many sets are outstanding. Reset succeeds only if the number of outstanding + * sets == 1. 
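+ *
+ * Expected call pairing, in sketch form (inferred from the counter logic
+ * below, not a separately documented contract):
+ *
+ *	pcibr_priority_set(pconn_vhdl, PCI_PRIO_HIGH);	-- 0 -> 1: RT/GBR bits set
+ *	...
+ *	pcibr_priority_set(pconn_vhdl, PCI_PRIO_LOW);	-- 1 -> 0: RT/GBR bits cleared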
+ */ +int +pcibr_priority_bits_set(pcibr_soft_t pcibr_soft, + pciio_slot_t pciio_slot, + pciio_priority_t device_prio) +{ + unsigned long s; + int *counter; + bridgereg_t rtbits = 0; + bridgereg_t devreg; + int rc = PRIO_SUCCESS; + + /* in dual-slot configurations, the host and the + * guest have separate DMA resources, so they + * have separate requirements for priority bits. + */ + + counter = &(pcibr_soft->bs_slot[pciio_slot].bss_pri_uctr); + + /* + * Bridge supports PCI notions of LOW and HIGH priority + * arbitration rings via a "REAL_TIME" bit in the per-device + * Bridge register. The "GBR" bit controls access to the GBR + * ring on the xbow. These two bits are (re)set together. + * + * XXX- Bug in Rev B Bridge Si: + * Symptom: Prefetcher starts operating incorrectly. This happens + * due to corruption of the address storage ram in the prefetcher + * when a non-real time PCI request is pulled and a real-time one is + * put in it's place. Workaround: Use only a single arbitration ring + * on PCI bus. GBR and RR can still be uniquely used per + * device. NETLIST MERGE DONE, WILL BE FIXED IN REV C. + */ + + if (pcibr_soft->bs_rev_num != BRIDGE_PART_REV_B) + rtbits |= BRIDGE_DEV_RT; + + /* NOTE- if we ever put DEV_RT or DEV_GBR on + * the disabled list, we will have to take + * it into account here. + */ + + s = pcibr_lock(pcibr_soft); + devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; + if (device_prio == PCI_PRIO_HIGH) { + if ((++*counter == 1)) { + if (rtbits) + devreg |= rtbits; + else + rc = PRIO_FAIL; + } + } else if (device_prio == PCI_PRIO_LOW) { + if (*counter <= 0) + rc = PRIO_FAIL; + else if (--*counter == 0) + if (rtbits) + devreg &= ~rtbits; + } + if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { + bridge_t *bridge = pcibr_soft->bs_base; + + bridge->b_device[pciio_slot].reg = devreg; + pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } + pcibr_unlock(pcibr_soft, s); + + return rc; +} + +pciio_priority_t +pcibr_priority_set(devfs_handle_t pconn_vhdl, + pciio_priority_t device_prio) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + (void) pcibr_priority_bits_set(pcibr_soft, pciio_slot, device_prio); + + return device_prio; +} + +/* + * Interfaces to allow special (e.g. SGI) drivers to set/clear + * Bridge-specific device flags. Many flags are modified through + * PCI-generic interfaces; we don't allow them to be directly + * manipulated here. Only flags that at this point seem pretty + * Bridge-specific can be set through these special interfaces. + * We may add more flags as the need arises, or remove flags and + * create PCI-generic interfaces as the need arises. 
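+ *
+ * An illustrative call, for a driver that wants write gathering and
+ * prefetching enabled on its slot (flag names are the ones decoded below):
+ *
+ *	pcibr_device_flags_set(pconn_vhdl, PCIBR_WRITE_GATHER | PCIBR_PREFETCH);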
+ * + * Returns 0 on failure, 1 on success + */ +int +pcibr_device_flags_set(devfs_handle_t pconn_vhdl, + pcibr_device_flags_t flags) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridgereg_t set = 0; + bridgereg_t clr = 0; + + ASSERT((flags & PCIBR_DEVICE_FLAGS) == flags); + + if (flags & PCIBR_WRITE_GATHER) + set |= BRIDGE_DEV_PMU_WRGA_EN; + if (flags & PCIBR_NOWRITE_GATHER) + clr |= BRIDGE_DEV_PMU_WRGA_EN; + + if (flags & PCIBR_WRITE_GATHER) + set |= BRIDGE_DEV_DIR_WRGA_EN; + if (flags & PCIBR_NOWRITE_GATHER) + clr |= BRIDGE_DEV_DIR_WRGA_EN; + + if (flags & PCIBR_PREFETCH) + set |= BRIDGE_DEV_PREF; + if (flags & PCIBR_NOPREFETCH) + clr |= BRIDGE_DEV_PREF; + + if (flags & PCIBR_PRECISE) + set |= BRIDGE_DEV_PRECISE; + if (flags & PCIBR_NOPRECISE) + clr |= BRIDGE_DEV_PRECISE; + + if (flags & PCIBR_BARRIER) + set |= BRIDGE_DEV_BARRIER; + if (flags & PCIBR_NOBARRIER) + clr |= BRIDGE_DEV_BARRIER; + + if (flags & PCIBR_64BIT) + set |= BRIDGE_DEV_DEV_SIZE; + if (flags & PCIBR_NO64BIT) + clr |= BRIDGE_DEV_DEV_SIZE; + + if (set || clr) { + bridgereg_t devreg; + unsigned long s; + + s = pcibr_lock(pcibr_soft); + devreg = pcibr_soft->bs_slot[pciio_slot].bss_device; + devreg = (devreg & ~clr) | set; + if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) { + bridge_t *bridge = pcibr_soft->bs_base; + + bridge->b_device[pciio_slot].reg = devreg; + pcibr_soft->bs_slot[pciio_slot].bss_device = devreg; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + } + pcibr_unlock(pcibr_soft, s); +#if DEBUG && PCIBR_DEV_DEBUG + printk("pcibr Device(%d): %R\n", pciio_slot, bridge->b_device[pciio_slot].regbridge->b_device[pciio_slot].reg, device_bits); +#endif + } + return (1); +} + +pciio_provider_t pcibr_provider = +{ + (pciio_piomap_alloc_f *) pcibr_piomap_alloc, + (pciio_piomap_free_f *) pcibr_piomap_free, + (pciio_piomap_addr_f *) pcibr_piomap_addr, + (pciio_piomap_done_f *) pcibr_piomap_done, + (pciio_piotrans_addr_f *) pcibr_piotrans_addr, + (pciio_piospace_alloc_f *) pcibr_piospace_alloc, + (pciio_piospace_free_f *) pcibr_piospace_free, + + (pciio_dmamap_alloc_f *) pcibr_dmamap_alloc, + (pciio_dmamap_free_f *) pcibr_dmamap_free, + (pciio_dmamap_addr_f *) pcibr_dmamap_addr, + (pciio_dmamap_list_f *) pcibr_dmamap_list, + (pciio_dmamap_done_f *) pcibr_dmamap_done, + (pciio_dmatrans_addr_f *) pcibr_dmatrans_addr, + (pciio_dmatrans_list_f *) pcibr_dmatrans_list, + (pciio_dmamap_drain_f *) pcibr_dmamap_drain, + (pciio_dmaaddr_drain_f *) pcibr_dmaaddr_drain, + (pciio_dmalist_drain_f *) pcibr_dmalist_drain, + + (pciio_intr_alloc_f *) pcibr_intr_alloc, + (pciio_intr_free_f *) pcibr_intr_free, + (pciio_intr_connect_f *) pcibr_intr_connect, + (pciio_intr_disconnect_f *) pcibr_intr_disconnect, + (pciio_intr_cpu_get_f *) pcibr_intr_cpu_get, + + (pciio_provider_startup_f *) pcibr_provider_startup, + (pciio_provider_shutdown_f *) pcibr_provider_shutdown, + (pciio_reset_f *) pcibr_reset, + (pciio_write_gather_flush_f *) pcibr_write_gather_flush, + (pciio_endian_set_f *) pcibr_endian_set, + (pciio_priority_set_f *) pcibr_priority_set, + (pciio_config_get_f *) pcibr_config_get, + (pciio_config_set_f *) pcibr_config_set, + + (pciio_error_devenable_f *) 0, + (pciio_error_extract_f *) 0, + +#ifdef LATER + (pciio_driver_reg_callback_f *) pcibr_driver_reg_callback, + (pciio_driver_unreg_callback_f *) pcibr_driver_unreg_callback, +#else + (pciio_driver_reg_callback_f 
*) 0, + (pciio_driver_unreg_callback_f *) 0, +#endif + (pciio_device_unregister_f *) pcibr_device_unregister, + (pciio_dma_enabled_f *) pcibr_dma_enabled, +}; + +int +pcibr_dma_enabled(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + + return xtalk_dma_enabled(pcibr_soft->bs_conn); +} diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_error.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_error.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_error.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,1737 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef __ia64 +#define rmallocmap atemapalloc +#define rmfreemap atemapfree +#define rmfree atefree +#define rmalloc atealloc +#endif + +extern int hubii_check_widget_disabled(nasid_t, int); + +/* ===================================================================== + * ERROR HANDLING + */ + +#ifdef DEBUG +#ifdef ERROR_DEBUG +#define BRIDGE_PIOERR_TIMEOUT 100 /* Timeout with ERROR_DEBUG defined */ +#else +#define BRIDGE_PIOERR_TIMEOUT 40 /* Timeout in debug mode */ +#endif +#else +#define BRIDGE_PIOERR_TIMEOUT 1 /* Timeout in non-debug mode */ +#endif + +#ifdef DEBUG +#ifdef ERROR_DEBUG +bridgereg_t bridge_errors_to_dump = ~BRIDGE_ISR_INT_MSK; +#else +bridgereg_t bridge_errors_to_dump = BRIDGE_ISR_ERROR_DUMP; +#endif +#else +bridgereg_t bridge_errors_to_dump = BRIDGE_ISR_ERROR_FATAL | + BRIDGE_ISR_PCIBUS_PIOERR; +#endif + +#if defined (PCIBR_LLP_CONTROL_WAR) +int pcibr_llp_control_war_cnt; +#endif /* PCIBR_LLP_CONTROL_WAR */ + +/* FIXME: can these arrays be local ? 
*/ + +#ifdef LATER + +struct reg_values xio_cmd_pactyp[] = +{ + {0x0, "RdReq"}, + {0x1, "RdResp"}, + {0x2, "WrReqWithResp"}, + {0x3, "WrResp"}, + {0x4, "WrReqNoResp"}, + {0x5, "Reserved(5)"}, + {0x6, "FetchAndOp"}, + {0x7, "Reserved(7)"}, + {0x8, "StoreAndOp"}, + {0x9, "Reserved(9)"}, + {0xa, "Reserved(a)"}, + {0xb, "Reserved(b)"}, + {0xc, "Reserved(c)"}, + {0xd, "Reserved(d)"}, + {0xe, "SpecialReq"}, + {0xf, "SpecialResp"}, + {0} +}; + +struct reg_desc xio_cmd_bits[] = +{ + {WIDGET_DIDN, -28, "DIDN", "%x"}, + {WIDGET_SIDN, -24, "SIDN", "%x"}, + {WIDGET_PACTYP, -20, "PACTYP", 0, xio_cmd_pactyp}, + {WIDGET_TNUM, -15, "TNUM", "%x"}, + {WIDGET_COHERENT, 0, "COHERENT"}, + {WIDGET_DS, 0, "DS"}, + {WIDGET_GBR, 0, "GBR"}, + {WIDGET_VBPM, 0, "VBPM"}, + {WIDGET_ERROR, 0, "ERROR"}, + {WIDGET_BARRIER, 0, "BARRIER"}, + {0} +}; + +#define F(s,n) { 1l<<(s),-(s), n } + +struct reg_desc bridge_int_status_desc[] = +{ + F(31, "MULTI_ERR"), + F(30, "PMU_ESIZE_EFAULT"), + F(29, "UNEXPECTED_RESP"), + F(28, "BAD_XRESP_PACKET"), + F(27, "BAD_XREQ_PACKET"), + F(26, "RESP_XTALK_ERROR"), + F(25, "REQ_XTALK_ERROR"), + F(24, "INVALID_ADDRESS"), + F(23, "UNSUPPORTED_XOP"), + F(22, "XREQ_FIFO_OFLOW"), + F(21, "LLP_REC_SNERROR"), + F(20, "LLP_REC_CBERROR"), + F(19, "LLP_RCTY"), + F(18, "LLP_TX_RETRY"), + F(17, "LLP_TCTY"), + F(16, "SSRAM_PERR"), + F(15, "PCI_ABORT"), + F(14, "PCI_PARITY"), + F(13, "PCI_SERR"), + F(12, "PCI_PERR"), + F(11, "PCI_MASTER_TOUT"), + F(10, "PCI_RETRY_CNT"), + F(9, "XREAD_REQ_TOUT"), + F(8, "GIO_BENABLE_ERR"), + F(7, "INT7"), + F(6, "INT6"), + F(5, "INT5"), + F(4, "INT4"), + F(3, "INT3"), + F(2, "INT2"), + F(1, "INT1"), + F(0, "INT0"), + {0} +}; + +struct reg_values space_v[] = +{ + {PCIIO_SPACE_NONE, "none"}, + {PCIIO_SPACE_ROM, "ROM"}, + {PCIIO_SPACE_IO, "I/O"}, + {PCIIO_SPACE_MEM, "MEM"}, + {PCIIO_SPACE_MEM32, "MEM(32)"}, + {PCIIO_SPACE_MEM64, "MEM(64)"}, + {PCIIO_SPACE_CFG, "CFG"}, + {PCIIO_SPACE_WIN(0), "WIN(0)"}, + {PCIIO_SPACE_WIN(1), "WIN(1)"}, + {PCIIO_SPACE_WIN(2), "WIN(2)"}, + {PCIIO_SPACE_WIN(3), "WIN(3)"}, + {PCIIO_SPACE_WIN(4), "WIN(4)"}, + {PCIIO_SPACE_WIN(5), "WIN(5)"}, + {PCIIO_SPACE_BAD, "BAD"}, + {0} +}; +struct reg_desc space_desc[] = +{ + {0xFF, 0, "space", 0, space_v}, + {0} +}; +#define device_desc device_bits +struct reg_desc device_bits[] = +{ + {BRIDGE_DEV_ERR_LOCK_EN, 0, "ERR_LOCK_EN"}, + {BRIDGE_DEV_PAGE_CHK_DIS, 0, "PAGE_CHK_DIS"}, + {BRIDGE_DEV_FORCE_PCI_PAR, 0, "FORCE_PCI_PAR"}, + {BRIDGE_DEV_VIRTUAL_EN, 0, "VIRTUAL_EN"}, + {BRIDGE_DEV_PMU_WRGA_EN, 0, "PMU_WRGA_EN"}, + {BRIDGE_DEV_DIR_WRGA_EN, 0, "DIR_WRGA_EN"}, + {BRIDGE_DEV_DEV_SIZE, 0, "DEV_SIZE"}, + {BRIDGE_DEV_RT, 0, "RT"}, + {BRIDGE_DEV_SWAP_PMU, 0, "SWAP_PMU"}, + {BRIDGE_DEV_SWAP_DIR, 0, "SWAP_DIR"}, + {BRIDGE_DEV_PREF, 0, "PREF"}, + {BRIDGE_DEV_PRECISE, 0, "PRECISE"}, + {BRIDGE_DEV_COH, 0, "COH"}, + {BRIDGE_DEV_BARRIER, 0, "BARRIER"}, + {BRIDGE_DEV_GBR, 0, "GBR"}, + {BRIDGE_DEV_DEV_SWAP, 0, "DEV_SWAP"}, + {BRIDGE_DEV_DEV_IO_MEM, 0, "DEV_IO_MEM"}, + {BRIDGE_DEV_OFF_MASK, BRIDGE_DEV_OFF_ADDR_SHFT, "DEV_OFF", "%x"}, + {0} +}; + +#endif /* LATER */ + +void +print_bridge_errcmd(uint32_t cmdword, char *errtype) +{ + printk( + "\t Bridge %s Error Command Word Register %R\n", + errtype, cmdword, xio_cmd_bits); +} + +char *pcibr_isr_errs[] = +{ + "", "", "", "", "", "", "", "", + "08: GIO non-contiguous byte enable in crosstalk packet", + "09: PCI to Crosstalk read request timeout", + "10: PCI retry operation count exhausted.", + "11: PCI bus device select timeout", + "12: PCI device reported parity error", + "13: 
PCI Address/Cmd parity error ", + "14: PCI Bridge detected parity error", + "15: PCI abort condition", + "16: SSRAM parity error", + "17: LLP Transmitter Retry count wrapped", + "18: LLP Transmitter side required Retry", + "19: LLP Receiver retry count wrapped", + "20: LLP Receiver check bit error", + "21: LLP Receiver sequence number error", + "22: Request packet overflow", + "23: Request operation not supported by bridge", + "24: Request packet has invalid address for bridge widget", + "25: Incoming request xtalk command word error bit set or invalid sideband", + "26: Incoming response xtalk command word error bit set or invalid sideband", + "27: Framing error, request cmd data size does not match actual", + "28: Framing error, response cmd data size does not match actual", + "29: Unexpected response arrived", + "30: PMU Access Fault", + "31: Multiple errors occurred", +}; + +#define BEM_ADD_STR(s) printk("%s", (s)) +#define BEM_ADD_VAR(v) printk("\t%20s: 0x%x\n", #v, (v)) +#define BEM_ADD_REG(r) printk("\t%20s: %R\n", #r, (r), r ## _desc) +#define BEM_ADD_NSPC(n,s) printk("\t%20s: %R\n", n, s, space_desc) +#define BEM_ADD_SPC(s) BEM_ADD_NSPC(#s, s) + +/* + * display memory directory state + */ +void +pcibr_show_dir_state(paddr_t paddr, char *prefix) +{ + int state; + uint64_t vec_ptr; + hubreg_t elo; + extern char *dir_state_str[]; + extern void get_dir_ent(paddr_t, int *, uint64_t *, hubreg_t *); + + get_dir_ent(paddr, &state, &vec_ptr, &elo); + + printf("%saddr 0x%x: state 0x%x owner 0x%x (%s)\n", + prefix, paddr, state, vec_ptr, dir_state_str[state]); +} + + +/* + * Dump relevant error information for Bridge error interrupts. + */ +/*ARGSUSED */ +void +pcibr_error_dump(pcibr_soft_t pcibr_soft) +{ + bridge_t *bridge = pcibr_soft->bs_base; + bridgereg_t int_status; + bridgereg_t mult_int; + int bit; + int i; + char *reg_desc; + paddr_t addr; + + int_status = (bridge->b_int_status & ~BRIDGE_ISR_INT_MSK); + if (!int_status) { + /* No error bits set */ + return; + } + + /* Check if dumping the same error information multiple times */ + if (test_and_set_int((int *) &pcibr_soft->bs_errinfo.bserr_intstat, + int_status) == int_status) { + return; + } + + printk(KERN_ALERT "PCI BRIDGE ERROR: int_status is 0x%X for %s\n" + " Dumping relevant %sBridge registers for each bit set...\n", + int_status, pcibr_soft->bs_name, + (is_xbridge(bridge) ? "X" : "")); + + for (i = PCIBR_ISR_ERR_START; i < PCIBR_ISR_MAX_ERRS; i++) { + bit = 1 << i; + + /* + * A number of int_status bits are only defined for Bridge. + * Ignore them in the case of an XBridge. 
+ */ + if (is_xbridge(bridge) && ((bit == BRIDGE_ISR_MULTI_ERR) || + (bit == BRIDGE_ISR_SSRAM_PERR) || + (bit == BRIDGE_ISR_GIO_B_ENBL_ERR))) { + continue; + } + + if (int_status & bit) { + printk("\t%s\n", pcibr_isr_errs[i]); + + switch (bit) { + case BRIDGE_ISR_PAGE_FAULT: /* PMU_PAGE_FAULT (XBridge) */ +/* case BRIDGE_ISR_PMU_ESIZE_FAULT: PMU_ESIZE_FAULT (Bridge) */ + if (is_xbridge(bridge)) + reg_desc = "Map Fault Address"; + else + reg_desc = "SSRAM Parity Error"; + + printk("\t %s Register: 0x%x\n", reg_desc, + bridge->b_ram_perr_or_map_fault); + break; + + case BRIDGE_ISR_UNEXP_RESP: /* UNEXPECTED_RESP */ + print_bridge_errcmd(bridge->b_wid_aux_err, "Aux"); + break; + + case BRIDGE_ISR_BAD_XRESP_PKT: /* BAD_RESP_PACKET */ + case BRIDGE_ISR_RESP_XTLK_ERR: /* RESP_XTALK_ERROR */ + case BRIDGE_ISR_XREAD_REQ_TIMEOUT: /* XREAD_REQ_TOUT */ + + addr = (((uint64_t) (bridge->b_wid_resp_upper & 0xFFFF) << 32) + | bridge->b_wid_resp_lower); + printk( + "\t Bridge Response Buffer Error Upper Address Register: 0x%x\n" + "\t Bridge Response Buffer Error Lower Address Register: 0x%x\n" + "\t dev-num %d buff-num %d addr 0x%x\n", + bridge->b_wid_resp_upper, bridge->b_wid_resp_lower, + ((bridge->b_wid_resp_upper >> 20) & 0x3), + ((bridge->b_wid_resp_upper >> 16) & 0xF), + addr); + if (bit == BRIDGE_ISR_RESP_XTLK_ERR) { + /* display memory directory associated with cacheline */ + pcibr_show_dir_state(addr, "\t "); + } + break; + + case BRIDGE_ISR_BAD_XREQ_PKT: /* BAD_XREQ_PACKET */ + case BRIDGE_ISR_REQ_XTLK_ERR: /* REQ_XTALK_ERROR */ + case BRIDGE_ISR_INVLD_ADDR: /* INVALID_ADDRESS */ + case BRIDGE_ISR_UNSUPPORTED_XOP: /* UNSUPPORTED_XOP */ + print_bridge_errcmd(bridge->b_wid_aux_err, ""); + printk("\t Bridge Error Upper Address Register: 0x%x\n" + "\t Bridge Error Lower Address Register: 0x%x\n" + "\t Bridge Error Address: 0x%x\n", + (uint64_t) bridge->b_wid_err_upper, + (uint64_t) bridge->b_wid_err_lower, + (((uint64_t) bridge->b_wid_err_upper << 32) | + bridge->b_wid_err_lower)); + break; + + case BRIDGE_ISR_SSRAM_PERR: /* SSRAM_PERR */ + if (!is_xbridge(bridge)) { /* only defined on Bridge */ + printk( + "\t Bridge SSRAM Parity Error Register: 0x%x\n", + bridge->b_ram_perr); + } + break; + + case BRIDGE_ISR_PCI_ABORT: /* PCI_ABORT */ + case BRIDGE_ISR_PCI_PARITY: /* PCI_PARITY */ + case BRIDGE_ISR_PCI_SERR: /* PCI_SERR */ + case BRIDGE_ISR_PCI_PERR: /* PCI_PERR */ + case BRIDGE_ISR_PCI_MST_TIMEOUT: /* PCI_MASTER_TOUT */ + case BRIDGE_ISR_PCI_RETRY_CNT: /* PCI_RETRY_CNT */ + case BRIDGE_ISR_GIO_B_ENBL_ERR: /* GIO BENABLE_ERR */ + printk("\t PCI Error Upper Address Register: 0x%x\n" + "\t PCI Error Lower Address Register: 0x%x\n" + "\t PCI Error Address: 0x%x\n", + (uint64_t) bridge->b_pci_err_upper, + (uint64_t) bridge->b_pci_err_lower, + (((uint64_t) bridge->b_pci_err_upper << 32) | + bridge->b_pci_err_lower)); + break; + } + } + } + + if (is_xbridge(bridge) && (bridge->b_mult_int & ~BRIDGE_ISR_INT_MSK)) { + mult_int = bridge->b_mult_int; + printk(" XBridge Multiple Interrupt Register is 0x%x\n", + mult_int); + for (i = PCIBR_ISR_ERR_START; i < PCIBR_ISR_MAX_ERRS; i++) { + if (mult_int & (1 << i)) + printk("\t%s\n", pcibr_isr_errs[i]); + } + } +} + +#define PCIBR_ERRINTR_GROUP(error) \ + (( error & (BRIDGE_IRR_PCI_GRP|BRIDGE_IRR_GIO_GRP) + +uint32_t +pcibr_errintr_group(uint32_t error) +{ + uint32_t group = BRIDGE_IRR_MULTI_CLR; + + if (error & BRIDGE_IRR_PCI_GRP) + group |= BRIDGE_IRR_PCI_GRP_CLR; + if (error & BRIDGE_IRR_SSRAM_GRP) + group |= BRIDGE_IRR_SSRAM_GRP_CLR; + if (error & 
BRIDGE_IRR_LLP_GRP)
+ group |= BRIDGE_IRR_LLP_GRP_CLR;
+ if (error & BRIDGE_IRR_REQ_DSP_GRP)
+ group |= BRIDGE_IRR_REQ_DSP_GRP_CLR;
+ if (error & BRIDGE_IRR_RESP_BUF_GRP)
+ group |= BRIDGE_IRR_RESP_BUF_GRP_CLR;
+ if (error & BRIDGE_IRR_CRP_GRP)
+ group |= BRIDGE_IRR_CRP_GRP_CLR;
+
+ return group;
+
+}
+
+
+/* pcibr_pioerr_check():
+ * Check to see if this pcibr has a PCI PIO
+ * TIMEOUT error; if so, bump the timeout-count
+ * on any piomaps that could cover the address.
+ */
+static void
+pcibr_pioerr_check(pcibr_soft_t soft)
+{
+ bridge_t *bridge;
+ bridgereg_t b_int_status;
+ bridgereg_t b_pci_err_lower;
+ bridgereg_t b_pci_err_upper;
+ iopaddr_t pci_addr;
+ pciio_slot_t slot;
+ pcibr_piomap_t map;
+ iopaddr_t base;
+ size_t size;
+ unsigned win;
+ int func;
+
+ bridge = soft->bs_base;
+ b_int_status = bridge->b_int_status;
+ if (b_int_status & BRIDGE_ISR_PCIBUS_PIOERR) {
+ b_pci_err_lower = bridge->b_pci_err_lower;
+ b_pci_err_upper = bridge->b_pci_err_upper;
+ b_int_status = bridge->b_int_status;
+ if (b_int_status & BRIDGE_ISR_PCIBUS_PIOERR) {
+
+ pci_addr = b_pci_err_upper & BRIDGE_ERRUPPR_ADDRMASK;
+ pci_addr = (pci_addr << 32) | b_pci_err_lower;
+
+ slot = 8;
+ while (slot-- > 0) {
+ int nfunc = soft->bs_slot[slot].bss_ninfo;
+ pcibr_info_h pcibr_infoh = soft->bs_slot[slot].bss_infos;
+
+ for (func = 0; func < nfunc; func++) {
+ pcibr_info_t pcibr_info = pcibr_infoh[func];
+
+ if (!pcibr_info)
+ continue;
+
+ for (map = pcibr_info->f_piomap;
+ map != NULL; map = map->bp_next) {
+ base = map->bp_pciaddr;
+ size = map->bp_mapsz;
+ win = map->bp_space - PCIIO_SPACE_WIN(0);
+ if (win < 6)
+ base +=
+ soft->bs_slot[slot].bss_window[win].bssw_base;
+ else if (map->bp_space == PCIIO_SPACE_ROM)
+ base += pcibr_info->f_rbase;
+ if ((pci_addr >= base) && (pci_addr < (base + size)))
+ atomicAddInt(map->bp_toc, 1);
+ }
+ }
+ }
+ }
+ }
+}
+
+/*
+ * PCI Bridge Error interrupt handler.
+ * This gets invoked whenever a PCI bridge sends an error interrupt.
+ * Primarily this serves two purposes.
+ * - If an error can be handled (typically a PIO read/write error),
+ * we try to handle it silently.
+ * - If an error cannot be handled, we die violently.
+ * Interrupt due to PIO errors:
+ * - Bridge sends an interrupt whenever a PCI operation
+ * done by the bridge as the master fails. Operations could
+ * be either a PIO read or a PIO write.
+ * A PIO read error also triggers a bus error, so we primarily
+ * ignore this interrupt in that context and let the bus error
+ * path handle it.
+ * For PIO write errors, this interrupt is the only indication,
+ * and we have to handle the error with the information available here.
+ *
+ * So, there is no way to distinguish whether an interrupt is
+ * due to a read or a write error.
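+ *
+ * In sketch form (an editorial summary of the above, not code):
+ *
+ *	PIO read timeout  ->  bus error + error interrupt  (bus-error path recovers)
+ *	PIO write error   ->  error interrupt only         (must be logged from here)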
+ */ + + +void +pcibr_error_intr_handler(intr_arg_t arg) +{ + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + bridgereg_t int_status; + bridgereg_t err_status; + int i; + + /* REFERENCED */ + bridgereg_t disable_errintr_mask = 0; + int rv; + int error_code = IOECODE_DMA | IOECODE_READ; + ioerror_mode_t mode = MODE_DEVERROR; + ioerror_t ioe; + nasid_t nasid; + +#if PCIBR_SOFT_LIST + { + extern pcibr_list_p pcibr_list; + pcibr_list_p entry; + + entry = pcibr_list; + while (1) { + if (entry == NULL) { + PRINT_PANIC( + "pcibr_error_intr_handler:\n" + "\tmy parameter (0x%x) is not a pcibr_soft!", + arg); + } + if ((intr_arg_t) entry->bl_soft == arg) + break; + entry = entry->bl_next; + } + } +#endif + pcibr_soft = (pcibr_soft_t) arg; + bridge = pcibr_soft->bs_base; + + /* + * pcibr_error_intr_handler gets invoked whenever bridge encounters + * an error situation, and the interrupt for that error is enabled. + * This routine decides if the error is fatal or not, and takes + * action accordingly. + * + * In the case of PIO read/write timeouts, there is no way + * to know if it was a read or write request that timed out. + * If the error was due to a "read", a bus error will also occur + * and the bus error handling code takes care of it. + * If the error is due to a "write", the error is currently logged + * by this routine. For SN1 and SN0, if fire-and-forget mode is + * disabled, a write error response xtalk packet will be sent to + * the II, which will cause an II error interrupt. No write error + * recovery actions of any kind currently take place at the pcibr + * layer! (e.g., no panic on unrecovered write error) + * + * Prior to reading the Bridge int_status register we need to ensure + * that there are no error bits set in the lower layers (hubii) + * that have disabled PIO access to the widget. If so, there is nothing + * we can do until the bits clear, so we setup a timeout and try again + * later. + */ + + nasid = NASID_GET(bridge); + if (hubii_check_widget_disabled(nasid, pcibr_soft->bs_xid)) { + timeout(pcibr_error_intr_handler, pcibr_soft, BRIDGE_PIOERR_TIMEOUT); + pcibr_soft->bs_errinfo.bserr_toutcnt++; + return; + } + + /* int_status is which bits we have to clear; + * err_status is the bits we haven't handled yet. + */ + + int_status = bridge->b_int_status & ~BRIDGE_ISR_INT_MSK; + err_status = int_status & ~BRIDGE_ISR_MULTI_ERR; + + if (!(int_status & ~BRIDGE_ISR_INT_MSK)) { + /* + * No error bit set!!. + */ + return; + } + /* + * If we have a PCIBUS_PIOERR, hand it to the logger. + */ + if (int_status & BRIDGE_ISR_PCIBUS_PIOERR) { + pcibr_pioerr_check(pcibr_soft); + } + + if (err_status) { + struct bs_errintr_stat_s *bs_estat = pcibr_soft->bs_errintr_stat; + + for (i = PCIBR_ISR_ERR_START; i < PCIBR_ISR_MAX_ERRS; i++, bs_estat++) { + if (err_status & (1 << i)) { + uint32_t errrate = 0; + uint32_t errcount = 0; + uint32_t errinterval = 0, current_tick = 0; + int llp_tx_retry_errors = 0; + int is_llp_tx_retry_intr = 0; + + bs_estat->bs_errcount_total++; + + current_tick = lbolt; + errinterval = (current_tick - bs_estat->bs_lasterr_timestamp); + errcount = (bs_estat->bs_errcount_total - + bs_estat->bs_lasterr_snapshot); + + is_llp_tx_retry_intr = (BRIDGE_ISR_LLP_TX_RETRY == (1 << i)); + + /* Check for the divide by zero condition while + * calculating the error rates. 
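+ * (Worked example: 300 errors since the last snapshot spread over 3 ticks
+ * gives errrate = 100/tick; a burst that arrives within a single tick
+ * leaves errinterval == 0, which is why the errcount-only test below
+ * exists.)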
+ */ + + if (errinterval) { + errrate = errcount / errinterval; + /* If able to calculate error rate + * on a LLP transmitter retry interrupt, check + * if the error rate is nonzero and we have seen + * a certain minimum number of errors. + * + * NOTE : errcount is being compared to + * PCIBR_ERRTIME_THRESHOLD to make sure that we are not + * seeing cases like x error interrupts per y ticks for + * very low x ,y (x > y ) which could result in a + * rate > 100/tick. + */ + if (is_llp_tx_retry_intr && + errrate && + (errcount >= PCIBR_ERRTIME_THRESHOLD)) { + llp_tx_retry_errors = 1; + } + } else { + errrate = 0; + /* Since we are not able to calculate the + * error rate check if we exceeded a certain + * minimum number of errors for LLP transmitter + * retries. Note that this can only happen + * within the first tick after the last snapshot. + */ + if (is_llp_tx_retry_intr && + (errcount >= PCIBR_ERRINTR_DISABLE_LEVEL)) { + llp_tx_retry_errors = 1; + } + } + + /* + * If a non-zero error rate (which is equivalent to + * to 100 errors/tick at least) for the LLP transmitter + * retry interrupt was seen, check if we should print + * a warning message. + */ + + if (llp_tx_retry_errors) { + static uint32_t last_printed_rate; + + if (errrate > last_printed_rate) { + last_printed_rate = errrate; + /* Print the warning only if the error rate + * for the transmitter retry interrupt + * exceeded the previously printed rate. + */ + printk(KERN_WARNING + "%s: %s, Excessive error interrupts : %d/tick\n", + pcibr_soft->bs_name, + pcibr_isr_errs[i], + errrate); + + } + /* + * Update snapshot, and time + */ + bs_estat->bs_lasterr_timestamp = current_tick; + bs_estat->bs_lasterr_snapshot = + bs_estat->bs_errcount_total; + + } + /* + * If the error rate is high enough, print the error rate. + */ + if (errinterval > PCIBR_ERRTIME_THRESHOLD) { + + if (errrate > PCIBR_ERRRATE_THRESHOLD) { + printk(KERN_NOTICE "%s: %s, Error rate %d/tick", + pcibr_soft->bs_name, + pcibr_isr_errs[i], + errrate); + /* + * Update snapshot, and time + */ + bs_estat->bs_lasterr_timestamp = current_tick; + bs_estat->bs_lasterr_snapshot = + bs_estat->bs_errcount_total; + } + } + if (bs_estat->bs_errcount_total > PCIBR_ERRINTR_DISABLE_LEVEL) { + /* + * We have seen a fairly large number of errors of + * this type. Let's disable the interrupt. But flash + * a message about the interrupt being disabled. + */ + printk(KERN_NOTICE + "%s Disabling error interrupt type %s. Error count %d", + pcibr_soft->bs_name, + pcibr_isr_errs[i], + bs_estat->bs_errcount_total); + disable_errintr_mask |= (1 << i); + } + } + } + } + + if (disable_errintr_mask) { + /* + * Disable some high frequency errors as they + * could eat up too much cpu time. + */ + bridge->b_int_enable &= ~disable_errintr_mask; + } + /* + * If we leave the PROM cacheable, T5 might + * try to do a cache line sized writeback to it, + * which will cause a BRIDGE_ISR_INVLD_ADDR. + */ + if ((err_status & BRIDGE_ISR_INVLD_ADDR) && + (0x00000000 == bridge->b_wid_err_upper) && + (0x00C00000 == (0xFFC00000 & bridge->b_wid_err_lower)) && + (0x00402000 == (0x00F07F00 & bridge->b_wid_err_cmdword))) { + err_status &= ~BRIDGE_ISR_INVLD_ADDR; + } +#if defined (PCIBR_LLP_CONTROL_WAR) + /* + * The bridge bug, where the llp_config or control registers + * need to be read back after being written, affects an MP + * system since there could be small windows between writing + * the register and reading it back on one cpu while another + * cpu is fielding an interrupt. 
If we run into this scenario, + * workaround the problem by ignoring the error. (bug 454474) + * pcibr_llp_control_war_cnt keeps an approximate number of + * times we saw this problem on a system. + */ + + if ((err_status & BRIDGE_ISR_INVLD_ADDR) && + ((((uint64_t) bridge->b_wid_err_upper << 32) | (bridge->b_wid_err_lower)) + == (BRIDGE_INT_RST_STAT & 0xff0))) { +#if 0 + if (kdebug) + printk(KERN_NOTICE "%s bridge: ignoring llp/control address interrupt", + pcibr_soft->bs_name); +#endif + pcibr_llp_control_war_cnt++; + err_status &= ~BRIDGE_ISR_INVLD_ADDR; + } +#endif /* PCIBR_LLP_CONTROL_WAR */ + +#ifdef EHE_ENABLE + /* Check if this is the RESP_XTALK_ERROR interrupt. + * This can happen due to a failed DMA READ operation. + */ + if (err_status & BRIDGE_ISR_RESP_XTLK_ERR) { + /* Phase 1 : Look at the error state in the bridge and further + * down in the device layers. + */ + (void)error_state_set(pcibr_soft->bs_conn, ERROR_STATE_LOOKUP); + IOERROR_SETVALUE(&ioe, widgetnum, pcibr_soft->bs_xid); + (void)pcibr_error_handler((error_handler_arg_t)pcibr_soft, + error_code, + mode, + &ioe); + /* Phase 2 : Perform the action agreed upon in phase 1. + */ + (void)error_state_set(pcibr_soft->bs_conn, ERROR_STATE_ACTION); + rv = pcibr_error_handler((error_handler_arg_t)pcibr_soft, + error_code, + mode, + &ioe); + } + if (rv != IOERROR_HANDLED) { +#endif /* EHE_ENABLE */ + + /* Dump/Log Bridge error interrupt info */ + if (err_status & bridge_errors_to_dump) { + printk("BRIDGE ERR_STATUS 0x%x\n", err_status); + pcibr_error_dump(pcibr_soft); + } + + if (err_status & BRIDGE_ISR_ERROR_FATAL) { + machine_error_dump(""); + cmn_err_tag(14, CE_PANIC, "PCI Bridge Error interrupt killed the system"); + /*NOTREACHED */ + } + +#ifdef EHE_ENABLE + } +#endif + + /* + * We can't return without re-enabling the interrupt, since + * it would cause problems for devices like IOC3 (Lost + * interrupts ?.). So, just cleanup the interrupt, and + * use saved values later.. + */ + bridge->b_int_rst_stat = pcibr_errintr_group(int_status); + + /* Zero out bserr_intstat field */ + test_and_set_int((int *) &pcibr_soft->bs_errinfo.bserr_intstat, 0); +} + +/* + * pcibr_addr_toslot + * Given the 'pciaddr' find out which slot this address is + * allocated to, and return the slot number. + * While we have the info handy, construct the + * function number, space code and offset as well. + * + * NOTE: if this routine is called, we don't know whether + * the address is in CFG, MEM, or I/O space. We have to guess. + * This will be the case on PIO stores, where the only way + * we have of getting the address is to check the Bridge, which + * stores the PCI address but not the space and not the xtalk + * address (from which we could get it). 
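+ *
+ * Hypothetical usage (pcibr_error_extract() below does essentially this
+ * with the Bridge PCI error address registers):
+ *
+ *	pciio_space_t space;
+ *	iopaddr_t offset;
+ *	pciio_function_t func;
+ *	int slot = pcibr_addr_toslot(pcibr_soft, pciaddr, &space, &offset, &func);
+ *	if (slot == PCIIO_SLOT_NONE)
+ *		-- no slot matched; space/offset still hold our best guess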
+ */ +int +pcibr_addr_toslot(pcibr_soft_t pcibr_soft, + iopaddr_t pciaddr, + pciio_space_t *spacep, + iopaddr_t *offsetp, + pciio_function_t *funcp) +{ + int s, f, w; + iopaddr_t base; + size_t size; + pciio_piospace_t piosp; + + /* + * Check if the address is in config space + */ + + if ((pciaddr >= BRIDGE_CONFIG_BASE) && (pciaddr < BRIDGE_CONFIG_END)) { + + if (pciaddr >= BRIDGE_CONFIG1_BASE) + pciaddr -= BRIDGE_CONFIG1_BASE; + else + pciaddr -= BRIDGE_CONFIG_BASE; + + s = pciaddr / BRIDGE_CONFIG_SLOT_SIZE; + pciaddr %= BRIDGE_CONFIG_SLOT_SIZE; + + if (funcp) { + f = pciaddr / 0x100; + pciaddr %= 0x100; + } + if (spacep) + *spacep = PCIIO_SPACE_CFG; + if (offsetp) + *offsetp = pciaddr; + if (funcp) + *funcp = f; + + return s; + } + for (s = 0; s < 8; s++) { + int nf = pcibr_soft->bs_slot[s].bss_ninfo; + pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[s].bss_infos; + + for (f = 0; f < nf; f++) { + pcibr_info_t pcibr_info = pcibr_infoh[f]; + + if (!pcibr_info) + continue; + for (w = 0; w < 6; w++) { + if (pcibr_info->f_window[w].w_space + == PCIIO_SPACE_NONE) { + continue; + } + base = pcibr_info->f_window[w].w_base; + size = pcibr_info->f_window[w].w_size; + + if ((pciaddr >= base) && (pciaddr < (base + size))) { + if (spacep) + *spacep = PCIIO_SPACE_WIN(w); + if (offsetp) + *offsetp = pciaddr - base; + if (funcp) + *funcp = f; + return s; + } /* endif match */ + } /* next window */ + } /* next func */ + } /* next slot */ + + /* + * Check if the address was allocated as part of the + * pcibr_piospace_alloc calls. + */ + for (s = 0; s < 8; s++) { + int nf = pcibr_soft->bs_slot[s].bss_ninfo; + pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[s].bss_infos; + + for (f = 0; f < nf; f++) { + pcibr_info_t pcibr_info = pcibr_infoh[f]; + + if (!pcibr_info) + continue; + piosp = pcibr_info->f_piospace; + while (piosp) { + if ((piosp->start <= pciaddr) && + ((piosp->count + piosp->start) > pciaddr)) { + if (spacep) + *spacep = piosp->space; + if (offsetp) + *offsetp = pciaddr - piosp->start; + return s; + } /* endif match */ + piosp = piosp->next; + } /* next piosp */ + } /* next func */ + } /* next slot */ + + /* + * Some other random address on the PCI bus ... + * we have no way of knowing whether this was + * a MEM or I/O access; so, for now, we just + * assume that the low 1G is MEM, the next + * 3G is I/O, and anything above the 4G limit + * is obviously MEM. + */ + + if (spacep) + *spacep = ((pciaddr < (1ul << 30)) ? PCIIO_SPACE_MEM : + (pciaddr < (4ul << 30)) ? PCIIO_SPACE_IO : + PCIIO_SPACE_MEM); + if (offsetp) + *offsetp = pciaddr; + + return PCIIO_SLOT_NONE; + +} + +void +pcibr_error_cleanup(pcibr_soft_t pcibr_soft, int error_code) +{ + bridge_t *bridge = pcibr_soft->bs_base; + + ASSERT(error_code & IOECODE_PIO); + error_code = error_code; + + bridge->b_int_rst_stat = + (BRIDGE_IRR_PCI_GRP_CLR | BRIDGE_IRR_MULTI_CLR); + (void) bridge->b_wid_tflush; /* flushbus */ +} + +/* + * pcibr_error_extract + * Given the 'pcibr vertex handle' find out which slot + * the bridge status error address (from pcibr_soft info + * hanging off the vertex) + * allocated to, and return the slot number. + * While we have the info handy, construct the + * space code and offset as well. + * + * NOTE: if this routine is called, we don't know whether + * the address is in CFG, MEM, or I/O space. We have to guess. 
+ * This will be the case on PIO stores, where the only way + * we have of getting the address is to check the Bridge, which + * stores the PCI address but not the space and not the xtalk + * address (from which we could get it). + * + * XXX- this interface has no way to return the function + * number on a multifunction card, even though that data + * is available. + */ + +pciio_slot_t +pcibr_error_extract(devfs_handle_t pcibr_vhdl, + pciio_space_t *spacep, + iopaddr_t *offsetp) +{ + pcibr_soft_t pcibr_soft = 0; + iopaddr_t bserr_addr; + bridge_t *bridge; + pciio_slot_t slot = PCIIO_SLOT_NONE; + arbitrary_info_t rev; + + /* Do a sanity check as to whether we really got a + * bridge vertex handle. + */ + if (hwgraph_info_get_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, &rev) != + GRAPH_SUCCESS) + return(slot); + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + if (pcibr_soft) { + bridge = pcibr_soft->bs_base; + bserr_addr = + bridge->b_pci_err_lower | + ((uint64_t) (bridge->b_pci_err_upper & + BRIDGE_ERRUPPR_ADDRMASK) << 32); + + slot = pcibr_addr_toslot(pcibr_soft, bserr_addr, + spacep, offsetp, NULL); + } + return slot; +} + +/*ARGSUSED */ +void +pcibr_device_disable(pcibr_soft_t pcibr_soft, int devnum) +{ + /* + * XXX + * Device failed to handle error. Take steps to + * disable this device ? HOW TO DO IT ? + * + * If there are any Read response buffers associated + * with this device, it's time to get them back!! + * + * We can disassociate any interrupt level associated + * with this device, and disable that interrupt level + * + * For now it's just a place holder + */ +} + +/* + * pcibr_pioerror + * Handle PIO error that happened at the bridge pointed by pcibr_soft. + * + * Queries the Bus interface attached to see if the device driver + * mapping the device-number that caused error can handle the + * situation. If so, it will clean up any error, and return + * indicating the error was handled. If the device driver is unable + * to handle the error, it expects the bus-interface to disable that + * device, and takes any steps needed here to take away any resources + * associated with this device. + */ + +#define BEM_ADD_STR(s) printk("%s", (s)) +#define BEM_ADD_VAR(v) printk("\t%20s: 0x%x\n", #v, (v)) +#define BEM_ADD_REG(r) printk("\t%20s: %R\n", #r, (r), r ## _desc) + +#define BEM_ADD_NSPC(n,s) printk("\t%20s: %R\n", n, s, space_desc) +#define BEM_ADD_SPC(s) BEM_ADD_NSPC(#s, s) + +/* BEM_ADD_IOE doesn't dump the whole ioerror, it just + * decodes the PCI specific portions -- we count on our + * callers to dump the raw IOE data. 
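+ * (Typical use, as in pcibr_pioerror() further down: once the subsidiary
+ * handlers return IOERROR_UNHANDLED, the caller prints its own banner and
+ * then does
+ *
+ *	BEM_ADD_IOE(ioe);
+ *
+ * to decode the slot/function/space/offset portion of the ioerror.)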
+ */ +#define BEM_ADD_IOE(ioe) \ + do { \ + if (IOERROR_FIELDVALID(ioe, busspace)) { \ + unsigned spc; \ + unsigned win; \ + \ + spc = IOERROR_GETVALUE(ioe, busspace); \ + win = spc - PCIIO_SPACE_WIN(0); \ + \ + switch (spc) { \ + case PCIIO_SPACE_CFG: \ + printk( \ + "\tPCI Slot %d Func %d CFG space Offset 0x%x\n", \ + pciio_widgetdev_slot_get(IOERROR_GETVALUE(ioe, widgetdev)), \ + pciio_widgetdev_func_get(IOERROR_GETVALUE(ioe, widgetdev)), \ + IOERROR_GETVALUE(ioe, busaddr)); \ + break; \ + case PCIIO_SPACE_IO: \ + printk( \ + "\tPCI I/O space Offset 0x%x\n", \ + IOERROR_GETVALUE(ioe, busaddr)); \ + break; \ + case PCIIO_SPACE_MEM: \ + case PCIIO_SPACE_MEM32: \ + case PCIIO_SPACE_MEM64: \ + printk( \ + "\tPCI MEM space Offset 0x%x\n", \ + IOERROR_GETVALUE(ioe, busaddr)); \ + break; \ + default: \ + if (win < 6) { \ + printk( \ + "\tPCI Slot %d Func %d Window %d Offset 0x%x\n",\ + pciio_widgetdev_slot_get(IOERROR_GETVALUE(ioe, widgetdev)), \ + pciio_widgetdev_func_get(IOERROR_GETVALUE(ioe, widgetdev)), \ + win, \ + IOERROR_GETVALUE(ioe, busaddr)); \ + } \ + break; \ + } \ + } \ + } while (0) + +/*ARGSUSED */ +int +pcibr_pioerror( + pcibr_soft_t pcibr_soft, + int error_code, + ioerror_mode_t mode, + ioerror_t *ioe) +{ + int retval = IOERROR_HANDLED; + + devfs_handle_t pcibr_vhdl = pcibr_soft->bs_vhdl; + bridge_t *bridge = pcibr_soft->bs_base; + + iopaddr_t bad_xaddr; + + pciio_space_t raw_space; /* raw PCI space */ + iopaddr_t raw_paddr; /* raw PCI address */ + + pciio_space_t space; /* final PCI space */ + pciio_slot_t slot; /* final PCI slot, if appropriate */ + pciio_function_t func; /* final PCI func, if appropriate */ + iopaddr_t offset; /* final PCI offset */ + + int cs, cw, cf; + pciio_space_t wx; + iopaddr_t wb; + size_t ws; + iopaddr_t wl; + + + /* + * We expect to have an "xtalkaddr" coming in, + * and need to construct the slot/space/offset. + */ + + bad_xaddr = IOERROR_GETVALUE(ioe, xtalkaddr); + + slot = PCIIO_SLOT_NONE; + func = PCIIO_FUNC_NONE; + raw_space = PCIIO_SPACE_NONE; + raw_paddr = 0; + + if ((bad_xaddr >= BRIDGE_TYPE0_CFG_DEV0) && + (bad_xaddr < BRIDGE_TYPE1_CFG)) { + raw_paddr = bad_xaddr - BRIDGE_TYPE0_CFG_DEV0; + slot = raw_paddr / BRIDGE_TYPE0_CFG_SLOT_OFF; + raw_paddr = raw_paddr % BRIDGE_TYPE0_CFG_SLOT_OFF; + raw_space = PCIIO_SPACE_CFG; + } + if ((bad_xaddr >= BRIDGE_TYPE1_CFG) && + (bad_xaddr < (BRIDGE_TYPE1_CFG + 0x1000))) { + /* Type 1 config space: + * slot and function numbers not known. + * Perhaps we can read them back? + */ + raw_paddr = bad_xaddr - BRIDGE_TYPE1_CFG; + raw_space = PCIIO_SPACE_CFG; + } + if ((bad_xaddr >= BRIDGE_DEVIO0) && + (bad_xaddr < BRIDGE_DEVIO(BRIDGE_DEV_CNT))) { + int x; + + raw_paddr = bad_xaddr - BRIDGE_DEVIO0; + x = raw_paddr / BRIDGE_DEVIO_OFF; + raw_paddr %= BRIDGE_DEVIO_OFF; + /* first two devio windows are double-sized */ + if ((x == 1) || (x == 3)) + raw_paddr += BRIDGE_DEVIO_OFF; + if (x > 0) + x--; + if (x > 1) + x--; + /* x is which devio reg; no guarantee + * PCI slot x will be responding. + * still need to figure out who decodes + * space/offset on the bus. + */ + raw_space = pcibr_soft->bs_slot[x].bss_devio.bssd_space; + if (raw_space == PCIIO_SPACE_NONE) { + /* Someone got an error because they + * accessed the PCI bus via a DevIO(x) + * window that pcibr has not yet assigned + * to any specific PCI address. It is + * quite possible that the Device(x) + * register has been changed since they + * made their access, but we will give it + * our best decode shot. 
+ */ + raw_space = pcibr_soft->bs_slot[x].bss_device + & BRIDGE_DEV_DEV_IO_MEM + ? PCIIO_SPACE_MEM + : PCIIO_SPACE_IO; + raw_paddr += + (pcibr_soft->bs_slot[x].bss_device & + BRIDGE_DEV_OFF_MASK) << + BRIDGE_DEV_OFF_ADDR_SHFT; + } else + raw_paddr += pcibr_soft->bs_slot[x].bss_devio.bssd_base; + } + if ((bad_xaddr >= BRIDGE_PCI_MEM32_BASE) && + (bad_xaddr <= BRIDGE_PCI_MEM32_LIMIT)) { + raw_space = PCIIO_SPACE_MEM32; + raw_paddr = bad_xaddr - BRIDGE_PCI_MEM32_BASE; + } + if ((bad_xaddr >= BRIDGE_PCI_MEM64_BASE) && + (bad_xaddr <= BRIDGE_PCI_MEM64_LIMIT)) { + raw_space = PCIIO_SPACE_MEM64; + raw_paddr = bad_xaddr - BRIDGE_PCI_MEM64_BASE; + } + if ((bad_xaddr >= BRIDGE_PCI_IO_BASE) && + (bad_xaddr <= BRIDGE_PCI_IO_LIMIT)) { + raw_space = PCIIO_SPACE_IO; + raw_paddr = bad_xaddr - BRIDGE_PCI_IO_BASE; + } + space = raw_space; + offset = raw_paddr; + + if ((slot == PCIIO_SLOT_NONE) && (space != PCIIO_SPACE_NONE)) { + /* we've got a space/offset but not which + * PCI slot decodes it. Check through our + * notions of which devices decode where. + * + * Yes, this "duplicates" some logic in + * pcibr_addr_toslot; the difference is, + * this code knows which space we are in, + * and can really really tell what is + * going on (no guessing). + */ + + for (cs = 0; (cs < 8) && (slot == PCIIO_SLOT_NONE); cs++) { + int nf = pcibr_soft->bs_slot[cs].bss_ninfo; + pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[cs].bss_infos; + + for (cf = 0; (cf < nf) && (slot == PCIIO_SLOT_NONE); cf++) { + pcibr_info_t pcibr_info = pcibr_infoh[cf]; + + if (!pcibr_info) + continue; + for (cw = 0; (cw < 6) && (slot == PCIIO_SLOT_NONE); ++cw) { + if (((wx = pcibr_info->f_window[cw].w_space) != PCIIO_SPACE_NONE) && + ((wb = pcibr_info->f_window[cw].w_base) != 0) && + ((ws = pcibr_info->f_window[cw].w_size) != 0) && + ((wl = wb + ws) > wb) && + ((wb <= offset) && (wl > offset))) { + /* MEM, MEM32 and MEM64 need to + * compare as equal ... + */ + if ((wx == space) || + (((wx == PCIIO_SPACE_MEM) || + (wx == PCIIO_SPACE_MEM32) || + (wx == PCIIO_SPACE_MEM64)) && + ((space == PCIIO_SPACE_MEM) || + (space == PCIIO_SPACE_MEM32) || + (space == PCIIO_SPACE_MEM64)))) { + slot = cs; + func = cf; + space = PCIIO_SPACE_WIN(cw); + offset -= wb; + } /* endif window space match */ + } /* endif window valid and addr match */ + } /* next window unless slot set */ + } /* next func unless slot set */ + } /* next slot unless slot set */ + /* XXX- if slot is still -1, no PCI devices are + * decoding here using their standard PCI BASE + * registers. This would be a really good place + * to cross-coordinate with the pciio PCI + * address space allocation routines, to find + * out if this address is "allocated" by any of + * our subsidiary devices. + */ + } + /* Scan all piomap records on this PCI bus to update + * the TimeOut Counters on all matching maps. If we + * don't already know the slot number, take it from + * the first matching piomap. Note that we have to + * compare maps against raw_space and raw_paddr + * since space and offset could already be + * window-relative. + * + * There is a chance that one CPU could update + * through this path, and another CPU could also + * update due to an interrupt. Closing this hole + * would only result in the possibility of some + * errors never getting logged at all, and since the + * use for bp_toc is as a logical test rather than a + * strict count, the excess counts are not a + * problem. 
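+ *
+ * The scan below first normalizes each piomap to raw bus terms
+ * (sketch, mirroring the code that follows):
+ *
+ *	WIN(w)        -> base += bss_window[w].bssw_base, space from the window
+ *	ROM           -> base += f_rbase, treated as MEM
+ *	MEM32 / MEM64 -> treated as MEM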
+ */ + for (cs = 0; cs < 8; ++cs) { + int nf = pcibr_soft->bs_slot[cs].bss_ninfo; + pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[cs].bss_infos; + + for (cf = 0; cf < nf; cf++) { + pcibr_info_t pcibr_info = pcibr_infoh[cf]; + pcibr_piomap_t map; + + if (!pcibr_info) + continue; + + for (map = pcibr_info->f_piomap; + map != NULL; map = map->bp_next) { + wx = map->bp_space; + wb = map->bp_pciaddr; + ws = map->bp_mapsz; + cw = wx - PCIIO_SPACE_WIN(0); + if (cw < 6) { + wb += pcibr_soft->bs_slot[cs].bss_window[cw].bssw_base; + wx = pcibr_soft->bs_slot[cs].bss_window[cw].bssw_space; + } + if (wx == PCIIO_SPACE_ROM) { + wb += pcibr_info->f_rbase; + wx = PCIIO_SPACE_MEM; + } + if ((wx == PCIIO_SPACE_MEM32) || + (wx == PCIIO_SPACE_MEM64)) + wx = PCIIO_SPACE_MEM; + wl = wb + ws; + if ((wx == raw_space) && (raw_paddr >= wb) && (raw_paddr < wl)) { + atomicAddInt(map->bp_toc, 1); + if (slot == PCIIO_SLOT_NONE) { + slot = cs; + space = map->bp_space; + if (cw < 6) + offset -= pcibr_soft->bs_slot[cs].bss_window[cw].bssw_base; + } + } + } + } + } + + if (space != PCIIO_SPACE_NONE) { + if (slot != PCIIO_SLOT_NONE) + if (func != PCIIO_FUNC_NONE) + IOERROR_SETVALUE(ioe, widgetdev, + pciio_widgetdev_create(slot,func)); + else + IOERROR_SETVALUE(ioe, widgetdev, + pciio_widgetdev_create(slot,0)); + + IOERROR_SETVALUE(ioe, busspace, space); + IOERROR_SETVALUE(ioe, busaddr, offset); + } + if (mode == MODE_DEVPROBE) { + /* + * During probing, we don't really care what the + * error is. Clean up the error in Bridge, notify + * subsidiary devices, and return success. + */ + pcibr_error_cleanup(pcibr_soft, error_code); + + /* if appropriate, give the error handler for this slot + * a shot at this probe access as well. + */ + return (slot == PCIIO_SLOT_NONE) ? IOERROR_HANDLED : + pciio_error_handler(pcibr_vhdl, error_code, mode, ioe); + } + /* + * If we don't know what "PCI SPACE" the access + * was targeting, we may have problems at the + * Bridge itself. Don't touch any bridge registers, + * and do complain loudly. + */ + + if (space == PCIIO_SPACE_NONE) { + printk("XIO Bus Error at %s\n" + "\taccess to XIO bus offset 0x%x\n" + "\tdoes not correspond to any PCI address\n", + pcibr_soft->bs_name, bad_xaddr); + + /* caller will dump contents of ioe struct */ + return IOERROR_XTALKLEVEL; + } + + /* + * Actual PCI Error handling situation. + * Typically happens when a user level process accesses + * PCI space, and it causes some error. + * + * Due to PCI Bridge implementation, we get two indication + * for a read error: an interrupt and a Bus error. + * We like to handle read error in the bus error context. + * But the interrupt comes and goes before bus error + * could make much progress. (NOTE: interrupd does + * come in _after_ bus error processing starts. But it's + * completed by the time bus error code reaches PCI PIO + * error handling. + * Similarly write error results in just an interrupt, + * and error handling has to be done at interrupt level. + * There is no way to distinguish at interrupt time, if an + * error interrupt is due to read/write error.. + */ + + /* We know the xtalk addr, the raw PCI bus space, + * the raw PCI bus address, the decoded PCI bus + * space, the offset within that space, and the + * decoded PCI slot (which may be "PCIIO_SLOT_NONE" if no slot + * is known to be involved). + */ + + /* + * Hand the error off to the handler registered + * for the slot that should have decoded the error, + * or to generic PCI handling (if pciio decides that + * such is appropriate). 
+ */ + retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe); + + if (retval != IOERROR_HANDLED) { + + /* Generate a generic message for IOERROR_UNHANDLED + * since the subsidiary handlers were silent, and + * did no recovery. + */ + if (retval == IOERROR_UNHANDLED) { + retval = IOERROR_PANIC; + + /* we may or may not want to print some of this, + * depending on debug level and which error code. + */ + + printk(KERN_ALERT + "PIO Error on PCI Bus %s", + pcibr_soft->bs_name); + /* this decodes part of the ioe; our caller + * will dump the raw details in DEBUG and + * kdebug kernels. + */ + BEM_ADD_IOE(ioe); + } +#if defined(FORCE_ERRORS) + if (0) { +#elif !DEBUG + if (kdebug) { +#endif + /* + * Dump raw data from Bridge/PCI layer. + */ + + BEM_ADD_STR("Raw info from Bridge/PCI layer:\n"); + if (bridge->b_int_status & BRIDGE_ISR_PCIBUS_PIOERR) + pcibr_error_dump(pcibr_soft); + BEM_ADD_SPC(raw_space); + BEM_ADD_VAR(raw_paddr); + if (IOERROR_FIELDVALID(ioe, widgetdev)) { + + slot = pciio_widgetdev_slot_get(IOERROR_GETVALUE(ioe, + widgetdev)); + func = pciio_widgetdev_func_get(IOERROR_GETVALUE(ioe, + widgetdev)); + if (slot < 8) { + bridgereg_t device = bridge->b_device[slot].reg; + + BEM_ADD_VAR(slot); + BEM_ADD_VAR(func); + BEM_ADD_REG(device); + } + } +#if !DEBUG || defined(FORCE_ERRORS) + } +#endif + + /* + * Since error could not be handled at lower level, + * error data logged has not been cleared. + * Clean up errors, and + * re-enable bridge to interrupt on error conditions. + * NOTE: Wheather we get the interrupt on PCI_ABORT or not is + * dependent on INT_ENABLE register. This write just makes sure + * that if the interrupt was enabled, we do get the interrupt. + * + * CAUTION: Resetting bit BRIDGE_IRR_PCI_GRP_CLR, acknowledges + * a group of interrupts. If while handling this error, + * some other error has occured, that would be + * implicitly cleared by this write. + * Need a way to ensure we don't inadvertently clear some + * other errors. + */ + if (IOERROR_FIELDVALID(ioe, widgetdev)) + pcibr_device_disable(pcibr_soft, + pciio_widgetdev_slot_get( + IOERROR_GETVALUE(ioe, widgetdev))); + + if (mode == MODE_DEVUSERERROR) + pcibr_error_cleanup(pcibr_soft, error_code); + } + return retval; +} + +/* + * bridge_dmaerror + * Some error was identified in a DMA transaction. + * This routine will identify the that caused the error, + * and try to invoke the appropriate bus service to handle this. + */ + +#define BRIDGE_DMA_READ_ERROR (BRIDGE_ISR_RESP_XTLK_ERR|BRIDGE_ISR_XREAD_REQ_TIMEOUT) + +int +pcibr_dmard_error( + pcibr_soft_t pcibr_soft, + int error_code, + ioerror_mode_t mode, + ioerror_t *ioe) +{ + devfs_handle_t pcibr_vhdl = pcibr_soft->bs_vhdl; + bridge_t *bridge = pcibr_soft->bs_base; + bridgereg_t bus_lowaddr, bus_uppraddr; + int retval = 0; + int bufnum; + + /* + * In case of DMA errors, bridge should have logged the + * address that caused the error. 
+ * Look up the address, in the bridge error registers, and + * take appropriate action + */ + ASSERT(IOERROR_GETVALUE(ioe, widgetnum) == pcibr_soft->bs_xid); + ASSERT(bridge); + + /* + * read error log registers + */ + bus_lowaddr = bridge->b_wid_resp_lower; + bus_uppraddr = bridge->b_wid_resp_upper; + + bufnum = BRIDGE_RESP_ERRUPPR_BUFNUM(bus_uppraddr); + IOERROR_SETVALUE(ioe, widgetdev, + pciio_widgetdev_create( + BRIDGE_RESP_ERRUPPR_DEVICE(bus_uppraddr), + 0)); + IOERROR_SETVALUE(ioe, busaddr, + (bus_lowaddr | + ((iopaddr_t) + (bus_uppraddr & + BRIDGE_ERRUPPR_ADDRMASK) << 32))); + + /* + * need to ensure that the xtalk adress in ioe + * maps to PCI error address read from bridge. + * How to convert PCI address back to Xtalk address ? + * (better idea: convert XTalk address to PCI address + * and then do the compare!) + */ + + retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe); + if (retval != IOERROR_HANDLED) + pcibr_device_disable(pcibr_soft, + pciio_widgetdev_slot_get( + IOERROR_GETVALUE(ioe,widgetdev))); + + /* + * Re-enable bridge to interrupt on BRIDGE_IRR_RESP_BUF_GRP_CLR + * NOTE: Wheather we get the interrupt on BRIDGE_IRR_RESP_BUF_GRP_CLR or + * not is dependent on INT_ENABLE register. This write just makes sure + * that if the interrupt was enabled, we do get the interrupt. + */ + bridge->b_int_rst_stat = BRIDGE_IRR_RESP_BUF_GRP_CLR; + + /* + * Also, release the "bufnum" back to buffer pool that could be re-used. + * This is done by "disabling" the buffer for a moment, then restoring + * the original assignment. + */ + + { + reg_p regp; + bridgereg_t regv; + bridgereg_t mask; + + regp = (bufnum & 1) + ? &bridge->b_odd_resp + : &bridge->b_even_resp; + + mask = 0xF << ((bufnum >> 1) * 4); + + regv = *regp; + *regp = regv & ~mask; + *regp = regv; + } + + return retval; +} + +/* + * pcibr_dmawr_error: + * Handle a dma write error caused by a device attached to this bridge. + * + * ioe has the widgetnum, widgetdev, and memaddr fields updated + * But we don't know the PCI address that corresponds to "memaddr" + * nor do we know which device driver is generating this address. + * + * There is no easy way to find out the PCI address(es) that map + * to a specific system memory address. Bus handling code is also + * of not much help, since they don't keep track of the DMA mapping + * that have been handed out. + * So it's a dead-end at this time. + * + * If translation is available, we could invoke the error handling + * interface of the device driver. + */ +/*ARGSUSED */ +int +pcibr_dmawr_error( + pcibr_soft_t pcibr_soft, + int error_code, + ioerror_mode_t mode, + ioerror_t *ioe) +{ + devfs_handle_t pcibr_vhdl = pcibr_soft->bs_vhdl; + int retval; + + retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe); + + if (retval != IOERROR_HANDLED) { + pcibr_device_disable(pcibr_soft, + pciio_widgetdev_slot_get( + IOERROR_GETVALUE(ioe, widgetdev))); + + } + return retval; +} + +/* + * Bridge error handler. + * Interface to handle all errors that involve bridge in some way. + * + * This normally gets called from xtalk error handler. + * ioe has different set of fields set depending on the error that + * was encountered. So, we have a bit field indicating which of the + * fields are valid. + * + * NOTE: This routine could be operating in interrupt context. So, + * don't try to sleep here (till interrupt threads work!!) 
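+ *
+ * error_code is a bit field: IOECODE_PIO errors are handed to
+ * pcibr_pioerror(), IOECODE_DMA|IOECODE_READ to pcibr_dmard_error(),
+ * and IOECODE_DMA|IOECODE_WRITE to pcibr_dmawr_error().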
+ */ +int +pcibr_error_handler( + error_handler_arg_t einfo, + int error_code, + ioerror_mode_t mode, + ioerror_t *ioe) +{ + pcibr_soft_t pcibr_soft; + int retval = IOERROR_BADERRORCODE; + +#ifdef EHE_ENABLE + devfs_handle_t xconn_vhdl,pcibr_vhdl; + error_state_t e_state; +#endif /* EHE_ENABLE */ + + pcibr_soft = (pcibr_soft_t) einfo; + +#ifdef EHE_ENABLE + xconn_vhdl = pcibr_soft->bs_conn; + pcibr_vhdl = pcibr_soft->bs_vhdl; + + e_state = error_state_get(xconn_vhdl); + + if (error_state_set(pcibr_vhdl, e_state) == + ERROR_RETURN_CODE_CANNOT_SET_STATE) + return(IOERROR_UNHANDLED); + + /* If we are in the action handling phase clean out the error state + * on the xswitch. + */ + if (e_state == ERROR_STATE_ACTION) + (void)error_state_set(xconn_vhdl, ERROR_STATE_NONE); +#endif /* EHE_ENABLE */ + +#if DEBUG && ERROR_DEBUG + printk("%s: pcibr_error_handler\n", pcibr_soft->bs_name); +#endif + + ASSERT(pcibr_soft != NULL); + + if (error_code & IOECODE_PIO) + retval = pcibr_pioerror(pcibr_soft, error_code, mode, ioe); + + if (error_code & IOECODE_DMA) { + if (error_code & IOECODE_READ) { + /* + * DMA read error occurs when a device attached to the bridge + * tries to read some data from system memory, and this + * either results in a timeout or access error. + * First case is indicated by the bit "XREAD_REQ_TOUT" + * and second case by "RESP_XTALK_ERROR" bit in bridge error + * interrupt status register. + * + * pcibr_error_intr_handler would get invoked first, and it has + * the responsibility of calling pcibr_error_handler with + * suitable parameters. + */ + + retval = pcibr_dmard_error(pcibr_soft, error_code, MODE_DEVERROR, ioe); + } + if (error_code & IOECODE_WRITE) { + /* + * A device attached to this bridge has been generating + * bad DMA writes. Find out the device attached, and + * slap on it's wrist. + */ + + retval = pcibr_dmawr_error(pcibr_soft, error_code, MODE_DEVERROR, ioe); + } + } + return retval; + +} + +/* + * Reenable a device after handling the error. + * This is called by the lower layers when they wish to be reenabled + * after an error. + * Note that each layer would be calling the previous layer to reenable + * first, before going ahead with their own re-enabling. + */ + +int +pcibr_error_devenable(devfs_handle_t pconn_vhdl, int error_code) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + + ASSERT(error_code & IOECODE_PIO); + + /* If the error is not known to be a write, + * we have to call devenable. + * write errors are isolated to the bridge. + */ + if (!(error_code & IOECODE_WRITE)) { + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + int rc; + + rc = xtalk_error_devenable(xconn_vhdl, pciio_slot, error_code); + if (rc != IOERROR_HANDLED) + return rc; + } + pcibr_error_cleanup(pcibr_soft, error_code); + return IOERROR_HANDLED; +} diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_hints.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_hints.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_hints.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,204 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +pcibr_hints_t pcibr_hints_get(devfs_handle_t, int); +void pcibr_hints_fix_rrbs(devfs_handle_t); +void pcibr_hints_dualslot(devfs_handle_t, pciio_slot_t, pciio_slot_t); +void pcibr_hints_intr_bits(devfs_handle_t, pcibr_intr_bits_f *); +void pcibr_set_rrb_callback(devfs_handle_t, rrb_alloc_funct_t); +void pcibr_hints_handsoff(devfs_handle_t); +void pcibr_hints_subdevs(devfs_handle_t, pciio_slot_t, uint64_t); + +pcibr_hints_t +pcibr_hints_get(devfs_handle_t xconn_vhdl, int alloc) +{ + arbitrary_info_t ainfo = 0; + graph_error_t rv; + pcibr_hints_t hint; + + rv = hwgraph_info_get_LBL(xconn_vhdl, INFO_LBL_PCIBR_HINTS, &ainfo); + + if (alloc && (rv != GRAPH_SUCCESS)) { + + NEW(hint); + hint->rrb_alloc_funct = NULL; + hint->ph_intr_bits = NULL; + rv = hwgraph_info_add_LBL(xconn_vhdl, + INFO_LBL_PCIBR_HINTS, + (arbitrary_info_t) hint); + if (rv != GRAPH_SUCCESS) + goto abnormal_exit; + + rv = hwgraph_info_get_LBL(xconn_vhdl, INFO_LBL_PCIBR_HINTS, &ainfo); + + if (rv != GRAPH_SUCCESS) + goto abnormal_exit; + + if (ainfo != (arbitrary_info_t) hint) + goto abnormal_exit; + } + return (pcibr_hints_t) ainfo; + +abnormal_exit: +#ifdef LATER + printf("SHOULD NOT BE HERE\n"); +#endif + DEL(hint); + return(NULL); + +} + +void +pcibr_hints_fix_some_rrbs(devfs_handle_t xconn_vhdl, unsigned mask) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->ph_rrb_fixed = mask; +#if DEBUG + else + printk("pcibr_hints_fix_rrbs: pcibr_hints_get failed at\n" + "\t%p\n", xconn_vhdl); +#endif +} + +void +pcibr_hints_fix_rrbs(devfs_handle_t xconn_vhdl) +{ + pcibr_hints_fix_some_rrbs(xconn_vhdl, 0xFF); +} + +void +pcibr_hints_dualslot(devfs_handle_t xconn_vhdl, + pciio_slot_t host, + pciio_slot_t guest) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->ph_host_slot[guest] = host + 1; +#if DEBUG + else + printk("pcibr_hints_dualslot: pcibr_hints_get failed at\n" + "\t%p\n", xconn_vhdl); +#endif +} + +void +pcibr_hints_intr_bits(devfs_handle_t xconn_vhdl, + pcibr_intr_bits_f *xxx_intr_bits) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->ph_intr_bits = xxx_intr_bits; +#if DEBUG + else + printk("pcibr_hints_intr_bits: pcibr_hints_get failed at\n" + "\t%p\n", xconn_vhdl); +#endif +} + +void +pcibr_set_rrb_callback(devfs_handle_t xconn_vhdl, rrb_alloc_funct_t rrb_alloc_funct) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->rrb_alloc_funct = rrb_alloc_funct; +} + +void +pcibr_hints_handsoff(devfs_handle_t xconn_vhdl) +{ + pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1); + + if (hint) + hint->ph_hands_off = 1; +#if DEBUG + else + printk("pcibr_hints_handsoff: pcibr_hints_get failed at\n" + "\t%p\n", xconn_vhdl); +#endif +} + +void +pcibr_hints_subdevs(devfs_handle_t xconn_vhdl, + pciio_slot_t slot, + uint64_t subdevs) +{ + arbitrary_info_t ainfo = 0; + char sdname[16]; + devfs_handle_t pconn_vhdl = GRAPH_VERTEX_NONE; + + sprintf(sdname, "pci/%d", slot); + (void) hwgraph_path_add(xconn_vhdl, sdname, &pconn_vhdl); + if (pconn_vhdl == GRAPH_VERTEX_NONE) { +#if DEBUG + printk("pcibr_hints_subdevs: hwgraph_path_create failed at\n" + "\t%p (seeking %s)\n", xconn_vhdl, sdname); +#endif + return; + } + hwgraph_info_get_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, &ainfo); + if (ainfo == 0) { 
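+ /* No subdevs label is attached to this vertex yet.  Allocate
+ * storage for the mask, publish it under INFO_LBL_SUBDEVS, then
+ * re-read the label: if another caller published first, free our
+ * copy and fall through to update the winner's storage instead.
+ */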
+ uint64_t *subdevp; + + NEW(subdevp); + if (!subdevp) { +#if DEBUG + printk("pcibr_hints_subdevs: subdev ptr alloc failed at\n" + "\t%p\n", pconn_vhdl); +#endif + return; + } + *subdevp = subdevs; + hwgraph_info_add_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, (arbitrary_info_t) subdevp); + hwgraph_info_get_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, &ainfo); + if (ainfo == (arbitrary_info_t) subdevp) + return; + DEL(subdevp); + if (ainfo == (arbitrary_info_t) NULL) { +#if DEBUG + printk("pcibr_hints_subdevs: null subdevs ptr at\n" + "\t%p\n", pconn_vhdl); +#endif + return; + } +#if DEBUG + printk("pcibr_subdevs_get: dup subdev add_LBL at\n" + "\t%p\n", pconn_vhdl); +#endif + } + *(uint64_t *) ainfo = subdevs; +} diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_idbg.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_idbg.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_idbg.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,147 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef LATER + +char *pci_space[] = {"NONE", + "ROM", + "IO", + "", + "MEM", + "MEM32", + "MEM64", + "CFG", + "WIN0", + "WIN1", + "WIN2", + "WIN3", + "WIN4", + "WIN5", + "", + "BAD"}; + +void +idbg_pss_func(pcibr_info_h pcibr_infoh, int func) +{ + pcibr_info_t pcibr_info = pcibr_infoh[func]; + char name[MAXDEVNAME]; + int win; + + if (!pcibr_info) + return; + qprintf("Per-slot Function Info\n"); + sprintf(name, "%v", pcibr_info->f_vertex); + qprintf("\tSlot Name : %s\n",name); + qprintf("\tPCI Bus : %d ",pcibr_info->f_bus); + qprintf("Slot : %d ", pcibr_info->f_slot); + qprintf("Function : %d ", pcibr_info->f_func); + qprintf("VendorId : 0x%x " , pcibr_info->f_vendor); + qprintf("DeviceId : 0x%x\n", pcibr_info->f_device); + sprintf(name, "%v", pcibr_info->f_master); + qprintf("\tBus provider : %s\n",name); + qprintf("\tProvider Fns : 0x%x ", pcibr_info->f_pops); + qprintf("Error Handler : 0x%x Arg 0x%x\n", + pcibr_info->f_efunc,pcibr_info->f_einfo); + for(win = 0 ; win < 6 ; win++) + qprintf("\tBase Reg #%d space %s base 0x%x size 0x%x\n", + win,pci_space[pcibr_info->f_window[win].w_space], + pcibr_info->f_window[win].w_base, + pcibr_info->f_window[win].w_size); + + qprintf("\tRom base 0x%x size 0x%x\n", + pcibr_info->f_rbase,pcibr_info->f_rsize); + + qprintf("\tInterrupt Bit Map\n"); + qprintf("\t\tPCI Int#\tBridge Pin#\n"); + for (win = 0 ; win < 4; win++) + qprintf("\t\tINT%c\t\t%d\n",win+'A',pcibr_info->f_ibit[win]); + qprintf("\n"); +} + + +void +idbg_pss_info(pcibr_soft_t pcibr_soft, pciio_slot_t slot) +{ + pcibr_soft_slot_t pss; + char slot_conn_name[MAXDEVNAME]; + int func; + + pss = &pcibr_soft->bs_slot[slot]; + qprintf("PCI INFRASTRUCTURAL INFO FOR SLOT %d\n", slot); + qprintf("\tHost Present ? %s ", pss->has_host ? 
"yes" : "no"); + qprintf("\tHost Slot : %d\n",pss->host_slot); + sprintf(slot_conn_name, "%v", pss->slot_conn); + qprintf("\tSlot Conn : %s\n",slot_conn_name); + qprintf("\t#Functions : %d\n",pss->bss_ninfo); + for (func = 0; func < pss->bss_ninfo; func++) + idbg_pss_func(pss->bss_infos,func); + qprintf("\tSpace : %s ",pci_space[pss->bss_devio.bssd_space]); + qprintf("\tBase : 0x%x ", pss->bss_devio.bssd_base); + qprintf("\tShadow Devreg : 0x%x\n", pss->bss_device); + qprintf("\tUsage counts : pmu %d d32 %d d64 %d\n", + pss->bss_pmu_uctr,pss->bss_d32_uctr,pss->bss_d64_uctr); + + qprintf("\tDirect Trans Info : d64_base 0x%x d64_flags 0x%x" + "d32_base 0x%x d32_flags 0x%x\n", + pss->bss_d64_base, pss->bss_d64_flags, + pss->bss_d32_base, pss->bss_d32_flags); + + qprintf("\tExt ATEs active ? %s", + pss->bss_ext_ates_active ? "yes" : "no"); + qprintf(" Command register : 0x%x ", pss->bss_cmd_pointer); + qprintf(" Shadow command val : 0x%x\n", pss->bss_cmd_shadow); + + qprintf("\tRRB Info : Valid %d+%d Reserved %d\n", + pcibr_soft->bs_rrb_valid[slot], + pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + +} + +int ips = 0; + +void +idbg_pss(pcibr_soft_t pcibr_soft) +{ + pciio_slot_t slot; + + + if (ips >= 0 && ips < 8) + idbg_pss_info(pcibr_soft,ips); + else if (ips < 0) + for (slot = 0; slot < 8; slot++) + idbg_pss_info(pcibr_soft,slot); + else + qprintf("Invalid ips %d\n",ips); +} +#endif /* LATER */ diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_intr.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_intr.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_intr.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,907 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001 Silicon Graphics, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef __ia64 +#define rmallocmap atemapalloc +#define rmfreemap atemapfree +#define rmfree atefree +#define rmalloc atealloc +#endif + +unsigned pcibr_intr_bits(pciio_info_t info, pciio_intr_line_t lines); +pcibr_intr_t pcibr_intr_alloc(devfs_handle_t, device_desc_t, pciio_intr_line_t, devfs_handle_t); +void pcibr_intr_free(pcibr_intr_t); +void pcibr_setpciint(xtalk_intr_t); +int pcibr_intr_connect(pcibr_intr_t); +void pcibr_intr_disconnect(pcibr_intr_t); + +devfs_handle_t pcibr_intr_cpu_get(pcibr_intr_t); +void pcibr_xintr_preset(void *, int, xwidgetnum_t, iopaddr_t, xtalk_intr_vector_t); +void pcibr_intr_func(intr_arg_t); + +extern pcibr_info_t pcibr_info_get(devfs_handle_t); + +/* ===================================================================== + * INTERRUPT MANAGEMENT + */ + +unsigned +pcibr_intr_bits(pciio_info_t info, + pciio_intr_line_t lines) +{ + pciio_slot_t slot = pciio_info_slot_get(info); + unsigned bbits = 0; + + /* + * Currently favored mapping from PCI + * slot number and INTA/B/C/D to Bridge + * PCI Interrupt Bit Number: + * + * SLOT A B C D + * 0 0 4 0 4 + * 1 1 5 1 5 + * 2 2 6 2 6 + * 3 3 7 3 7 + * 4 4 0 4 0 + * 5 5 1 5 1 + * 6 6 2 6 2 + * 7 7 3 7 3 + */ + + if (slot < 8) { + if (lines & (PCIIO_INTR_LINE_A| PCIIO_INTR_LINE_C)) + bbits |= 1 << slot; + if (lines & (PCIIO_INTR_LINE_B| PCIIO_INTR_LINE_D)) + bbits |= 1 << (slot ^ 4); + } + return bbits; +} + + +/* + * Get the next wrapper pointer queued in the interrupt circular buffer. + */ +pcibr_intr_wrap_t +pcibr_wrap_get(pcibr_intr_cbuf_t cbuf) +{ + pcibr_intr_wrap_t wrap; + + if (cbuf->ib_in == cbuf->ib_out) + PRINT_PANIC( "pcibr intr circular buffer empty, cbuf=0x%p, ib_in=ib_out=%d\n", + (void *)cbuf, cbuf->ib_out); + + wrap = cbuf->ib_cbuf[cbuf->ib_out++]; + cbuf->ib_out = cbuf->ib_out % IBUFSIZE; + return(wrap); +} + +/* + * Queue a wrapper pointer in the interrupt circular buffer. + */ +void +pcibr_wrap_put(pcibr_intr_wrap_t wrap, pcibr_intr_cbuf_t cbuf) +{ + int in; + int s; + + /* + * Multiple CPUs could be executing this code simultaneously + * if a handler has registered multiple interrupt lines and + * the interrupts are directed to different CPUs. + */ + s = mutex_spinlock(&cbuf->ib_lock); + in = (cbuf->ib_in + 1) % IBUFSIZE; + if (in == cbuf->ib_out) + PRINT_PANIC( "pcibr intr circular buffer full, cbuf=0x%p, ib_in=%d\n", + (void *)cbuf, cbuf->ib_in); + + cbuf->ib_cbuf[cbuf->ib_in] = wrap; + cbuf->ib_in = in; + mutex_spinunlock(&cbuf->ib_lock, s); + return; +} + +/* + * There are end cases where a deadlock can occur if interrupt + * processing completes and the Bridge b_int_status bit is still set. + * + * One scenerio is if a second PCI interrupt occurs within 60ns of + * the previous interrupt being cleared. In this case the Bridge + * does not detect the transition, the Bridge b_int_status bit + * remains set, and because no transition was detected no interrupt + * packet is sent to the Hub/Heart. 
+ * + * A second scenerio is possible when a b_int_status bit is being + * shared by multiple devices: + * Device #1 generates interrupt + * Bridge b_int_status bit set + * Device #2 generates interrupt + * interrupt processing begins + * ISR for device #1 runs and + * clears interrupt + * Device #1 generates interrupt + * ISR for device #2 runs and + * clears interrupt + * (b_int_status bit still set) + * interrupt processing completes + * + * Interrupt processing is now complete, but an interrupt is still + * outstanding for Device #1. But because there was no transition of + * the b_int_status bit, no interrupt packet will be generated and + * a deadlock will occur. + * + * To avoid these deadlock situations, this function is used + * to check if a specific Bridge b_int_status bit is set, and if so, + * cause the setting of the corresponding interrupt bit. + * + * On a XBridge (IP35), we do this by writing the appropriate Bridge Force + * Interrupt register. + */ +void +pcibr_force_interrupt(pcibr_intr_wrap_t wrap) +{ + unsigned bit; + pcibr_soft_t pcibr_soft = wrap->iw_soft; + bridge_t *bridge = pcibr_soft->bs_base; + cpuid_t cpuvertex_to_cpuid(devfs_handle_t vhdl); + + bit = wrap->iw_intr; + + if (pcibr_soft->bs_xbridge) { + bridge->b_force_pin[bit].intr = 1; + } else if ((1 << bit) & *wrap->iw_stat) { + cpuid_t cpu; + unsigned intr_bit; + xtalk_intr_t xtalk_intr = + pcibr_soft->bs_intr[bit].bsi_xtalk_intr; + + intr_bit = (short) xtalk_intr_vector_get(xtalk_intr); + cpu = cpuvertex_to_cpuid(xtalk_intr_cpu_get(xtalk_intr)); +#if defined(CONFIG_IA64_SGI_SN1) + REMOTE_CPU_SEND_INTR(cpu, intr_bit); +#endif + } +} + +/*ARGSUSED */ +pcibr_intr_t +pcibr_intr_alloc(devfs_handle_t pconn_vhdl, + device_desc_t dev_desc, + pciio_intr_line_t lines, + devfs_handle_t owner_dev) +{ + pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pcibr_info->f_slot; + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast; + devfs_handle_t xconn_vhdl = pcibr_soft->bs_conn; + bridge_t *bridge = pcibr_soft->bs_base; + int is_threaded = 0; + int thread_swlevel; + + xtalk_intr_t *xtalk_intr_p; + pcibr_intr_t *pcibr_intr_p; + pcibr_intr_list_t *intr_list_p; + + unsigned pcibr_int_bits; + unsigned pcibr_int_bit; + xtalk_intr_t xtalk_intr = (xtalk_intr_t)0; + hub_intr_t hub_intr; + pcibr_intr_t pcibr_intr; + pcibr_intr_list_t intr_entry; + pcibr_intr_list_t intr_list; + bridgereg_t int_dev; + +#if DEBUG && INTR_DEBUG + printk("%v: pcibr_intr_alloc\n" + "%v:%s%s%s%s%s\n", + owner_dev, pconn_vhdl, + !(lines & 15) ? " No INTs?" : "", + lines & 1 ? " INTA" : "", + lines & 2 ? " INTB" : "", + lines & 4 ? " INTC" : "", + lines & 8 ? " INTD" : ""); +#endif + + NEW(pcibr_intr); + if (!pcibr_intr) + return NULL; + + if (dev_desc) { + cpuid_t intr_target_from_desc(device_desc_t, int); + } else { + extern int default_intr_pri; + + is_threaded = 1; /* PCI interrupts are threaded, by default */ + thread_swlevel = default_intr_pri; + } + + pcibr_intr->bi_dev = pconn_vhdl; + pcibr_intr->bi_lines = lines; + pcibr_intr->bi_soft = pcibr_soft; + pcibr_intr->bi_ibits = 0; /* bits will be added below */ + pcibr_intr->bi_flags = is_threaded ? 
0 : PCIIO_INTR_NOTHREAD; + pcibr_intr->bi_mustruncpu = CPU_NONE; + pcibr_intr->bi_ibuf.ib_in = 0; + pcibr_intr->bi_ibuf.ib_out = 0; + mutex_spinlock_init(&pcibr_intr->bi_ibuf.ib_lock); + + pcibr_int_bits = pcibr_soft->bs_intr_bits((pciio_info_t)pcibr_info, lines); + + + /* + * For each PCI interrupt line requested, figure + * out which Bridge PCI Interrupt Line it maps + * to, and make sure there are xtalk resources + * allocated for it. + */ +#if DEBUG && INTR_DEBUG + printk("pcibr_int_bits: 0x%X\n", pcibr_int_bits); +#endif + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit ++) { + if (pcibr_int_bits & (1 << pcibr_int_bit)) { + xtalk_intr_p = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; + + xtalk_intr = *xtalk_intr_p; + + if (xtalk_intr == NULL) { + /* + * This xtalk_intr_alloc is constrained for two reasons: + * 1) Normal interrupts and error interrupts need to be delivered + * through a single xtalk target widget so that there aren't any + * ordering problems with DMA, completion interrupts, and error + * interrupts. (Use of xconn_vhdl forces this.) + * + * 2) On IP35, addressing constraints on IP35 and Bridge force + * us to use a single PI number for all interrupts from a + * single Bridge. (IP35-specific code forces this, and we + * verify in pcibr_setwidint.) + */ + + /* + * All code dealing with threaded PCI interrupt handlers + * is located at the pcibr level. Because of this, + * we always want the lower layers (hub/heart_intr_alloc, + * intr_level_connect) to treat us as non-threaded so we + * don't set up a duplicate threaded environment. We make + * this happen by calling a special xtalk interface. + */ + xtalk_intr = xtalk_intr_alloc_nothd(xconn_vhdl, dev_desc, + owner_dev); +#if DEBUG && INTR_DEBUG + printk("%v: xtalk_intr=0x%X\n", xconn_vhdl, xtalk_intr); +#endif + + /* both an assert and a runtime check on this: + * we need to check in non-DEBUG kernels, and + * the ASSERT gets us more information when + * we use DEBUG kernels. + */ + ASSERT(xtalk_intr != NULL); + if (xtalk_intr == NULL) { + /* it is quite possible that our + * xtalk_intr_alloc failed because + * someone else got there first, + * and we can find their results + * in xtalk_intr_p. + */ + if (!*xtalk_intr_p) { +#ifdef SUPPORT_PRINTING_V_FORMAT + printk(KERN_ALERT + "pcibr_intr_alloc %v: unable to get xtalk interrupt resources", + xconn_vhdl); +#else + printk(KERN_ALERT + "pcibr_intr_alloc 0x%p: unable to get xtalk interrupt resources", + (void *)xconn_vhdl); +#endif + /* yes, we leak resources here. */ + return 0; + } + } else if (compare_and_swap_ptr((void **) xtalk_intr_p, NULL, xtalk_intr)) { + /* + * now tell the bridge which slot is + * using this interrupt line. + */ + int_dev = bridge->b_int_device; + int_dev &= ~BRIDGE_INT_DEV_MASK(pcibr_int_bit); + int_dev |= pciio_slot << BRIDGE_INT_DEV_SHFT(pcibr_int_bit); + bridge->b_int_device = int_dev; /* XXXMP */ + +#if DEBUG && INTR_DEBUG + printk("%v: bridge intr bit %d clears my wrb\n", + pconn_vhdl, pcibr_int_bit); +#endif + } else { + /* someone else got one allocated first; + * free the one we just created, and + * retrieve the one they allocated. + */ + xtalk_intr_free(xtalk_intr); + xtalk_intr = *xtalk_intr_p; +#if PARANOID + /* once xtalk_intr is set, we never clear it, + * so if the CAS fails above, this condition + * can "never happen" ... + */ + if (!xtalk_intr) { + printk(KERN_ALERT + "pcibr_intr_alloc %v: unable to set xtalk interrupt resources", + xconn_vhdl); + /* yes, we leak resources here. 
*/ + return 0; + } +#endif + } + } + + pcibr_intr->bi_ibits |= 1 << pcibr_int_bit; + + NEW(intr_entry); + intr_entry->il_next = NULL; + intr_entry->il_intr = pcibr_intr; + intr_entry->il_wrbf = &(bridge->b_wr_req_buf[pciio_slot].reg); + intr_list_p = + &pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_list; +#if DEBUG && INTR_DEBUG +#if defined(SUPPORT_PRINTING_V_FORMAT) + printk("0x%x: Bridge bit %d wrap=0x%x\n", + pconn_vhdl, pcibr_int_bit, + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap); +#else + printk("%v: Bridge bit %d wrap=0x%x\n", + pconn_vhdl, pcibr_int_bit, + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap); +#endif +#endif + + if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { + /* we are the first interrupt on this bridge bit. + */ +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) allocated [FIRST]\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + continue; + } + intr_list = *intr_list_p; + pcibr_intr_p = &intr_list->il_intr; + if (compare_and_swap_ptr((void **) pcibr_intr_p, NULL, pcibr_intr)) { + /* first entry on list was erased, + * and we replaced it, so we + * don't need our intr_entry. + */ + DEL(intr_entry); +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) replaces erased first\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + continue; + } + intr_list_p = &intr_list->il_next; + if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { + /* we are the new second interrupt on this bit. + */ + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared = 1; +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) is new SECOND\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + continue; + } + while (1) { + pcibr_intr_p = &intr_list->il_intr; + if (compare_and_swap_ptr((void **) pcibr_intr_p, NULL, pcibr_intr)) { + /* an entry on list was erased, + * and we replaced it, so we + * don't need our intr_entry. 
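+ *
+ * (Entries are never unlinked from this share chain;
+ * pcibr_intr_free() only NULLs the il_intr field of an
+ * entry, and the compare_and_swap_ptr() calls here either
+ * reuse such an erased entry or append a new one at the
+ * end, which is what keeps this lock-free insertion safe.)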
+ */ + DEL(intr_entry); +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) replaces erased Nth\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + break; + } + intr_list_p = &intr_list->il_next; + if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) { + /* entry appended to share list + */ +#if DEBUG && INTR_DEBUG + printk("%v INT 0x%x (bridge bit %d) is new Nth\n", + pconn_vhdl, pcibr_int_bits, pcibr_int_bit); +#endif + break; + } + /* step to next record in chain + */ + intr_list = *intr_list_p; + } + } + } + +#if DEBUG && INTR_DEBUG + printk("%v pcibr_intr_alloc complete\n", pconn_vhdl); +#endif + hub_intr = (hub_intr_t)xtalk_intr; + pcibr_intr->bi_irq = hub_intr->i_bit; + pcibr_intr->bi_cpu = hub_intr->i_cpuid; + return pcibr_intr; +} + +/*ARGSUSED */ +void +pcibr_intr_free(pcibr_intr_t pcibr_intr) +{ + unsigned pcibr_int_bits = pcibr_intr->bi_ibits; + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + unsigned pcibr_int_bit; + pcibr_intr_list_t intr_list; + int intr_shared; + xtalk_intr_t *xtalk_intrp; + + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) { + if (pcibr_int_bits & (1 << pcibr_int_bit)) { + for (intr_list = + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_list; + intr_list != NULL; + intr_list = intr_list->il_next) + if (compare_and_swap_ptr((void **) &intr_list->il_intr, + pcibr_intr, + NULL)) { +#if DEBUG && INTR_DEBUG + printk("%s: cleared a handler from bit %d\n", + pcibr_soft->bs_name, pcibr_int_bit); +#endif + } + /* If this interrupt line is not being shared between multiple + * devices release the xtalk interrupt resources. + */ + intr_shared = + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared; + xtalk_intrp = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; + + if ((!intr_shared) && (*xtalk_intrp)) { + + bridge_t *bridge = pcibr_soft->bs_base; + bridgereg_t int_dev; + + xtalk_intr_free(*xtalk_intrp); + *xtalk_intrp = 0; + + /* Clear the PCI device interrupt to bridge interrupt pin + * mapping. + */ + int_dev = bridge->b_int_device; + int_dev &= ~BRIDGE_INT_DEV_MASK(pcibr_int_bit); + bridge->b_int_device = int_dev; + + } + } + } + DEL(pcibr_intr); +} + +void +pcibr_setpciint(xtalk_intr_t xtalk_intr) +{ + iopaddr_t addr = xtalk_intr_addr_get(xtalk_intr); + xtalk_intr_vector_t vect = xtalk_intr_vector_get(xtalk_intr); + bridgereg_t *int_addr = (bridgereg_t *) + xtalk_intr_sfarg_get(xtalk_intr); + + *int_addr = ((BRIDGE_INT_ADDR_HOST & (addr >> 30)) | + (BRIDGE_INT_ADDR_FLD & vect)); +} + +/*ARGSUSED */ +int +pcibr_intr_connect(pcibr_intr_t pcibr_intr) +{ + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + bridge_t *bridge = pcibr_soft->bs_base; + unsigned pcibr_int_bits = pcibr_intr->bi_ibits; + unsigned pcibr_int_bit; + bridgereg_t b_int_enable; + unsigned long s; + + if (pcibr_intr == NULL) + return -1; + +#if DEBUG && INTR_DEBUG + printk("%v: pcibr_intr_connect\n", + pcibr_intr->bi_dev); +#endif + + *((volatile unsigned *)&pcibr_intr->bi_flags) |= PCIIO_INTR_CONNECTED; + + /* + * For each PCI interrupt line requested, figure + * out which Bridge PCI Interrupt Line it maps + * to, and make sure there are xtalk resources + * allocated for it. + */ + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) + if (pcibr_int_bits & (1 << pcibr_int_bit)) { + xtalk_intr_t xtalk_intr; + + xtalk_intr = pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr; + + /* + * If this interrupt line is being shared and the connect has + * already been done, no need to do it again. 
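+ * Later sharers only need the enable bit, which is set for
+ * all requested lines after this loop.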
+ */ + if (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected) + continue; + + + /* + * Use the pcibr wrapper function to handle all Bridge interrupts + * regardless of whether the interrupt line is shared or not. + */ + xtalk_intr_connect(xtalk_intr, (xtalk_intr_setfunc_t) pcibr_setpciint, + (void *)&(bridge->b_int_addr[pcibr_int_bit].addr)); + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected = 1; + +#if DEBUG && INTR_DEBUG + printk("%v bridge bit %d wrapper connected\n", + pcibr_intr->bi_dev, pcibr_int_bit); +#endif + } + s = pcibr_lock(pcibr_soft); + b_int_enable = bridge->b_int_enable; + b_int_enable |= pcibr_int_bits; + bridge->b_int_enable = b_int_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + pcibr_unlock(pcibr_soft, s); + + return 0; +} + +/*ARGSUSED */ +void +pcibr_intr_disconnect(pcibr_intr_t pcibr_intr) +{ + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + bridge_t *bridge = pcibr_soft->bs_base; + unsigned pcibr_int_bits = pcibr_intr->bi_ibits; + unsigned pcibr_int_bit; + bridgereg_t b_int_enable; + unsigned long s; + + /* Stop calling the function. Now. + */ + *((volatile unsigned *)&pcibr_intr->bi_flags) &= ~PCIIO_INTR_CONNECTED; + + /* + * For each PCI interrupt line requested, figure + * out which Bridge PCI Interrupt Line it maps + * to, and disconnect the interrupt. + */ + + /* don't disable interrupts for lines that + * are shared between devices. + */ + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) + if ((pcibr_int_bits & (1 << pcibr_int_bit)) && + (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared)) + pcibr_int_bits &= ~(1 << pcibr_int_bit); + if (!pcibr_int_bits) + return; + + s = pcibr_lock(pcibr_soft); + b_int_enable = bridge->b_int_enable; + b_int_enable &= ~pcibr_int_bits; + bridge->b_int_enable = b_int_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + pcibr_unlock(pcibr_soft, s); + + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) + if (pcibr_int_bits & (1 << pcibr_int_bit)) { + /* if the interrupt line is now shared, + * do not disconnect it. + */ + if (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared) + continue; + + xtalk_intr_disconnect(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr); + pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected = 0; + +#if DEBUG && INTR_DEBUG + printk("%s: xtalk disconnect done for Bridge bit %d\n", + pcibr_soft->bs_name, pcibr_int_bit); +#endif + + /* if we are sharing the interrupt line, + * connect us up; this closes the hole + * where the another pcibr_intr_alloc() + * was in progress as we disconnected. 
+ */ + if (!pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared) + continue; + + xtalk_intr_connect(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr, + (xtalk_intr_setfunc_t)pcibr_setpciint, + (void *) &(bridge->b_int_addr[pcibr_int_bit].addr)); + } +} + +/*ARGSUSED */ +devfs_handle_t +pcibr_intr_cpu_get(pcibr_intr_t pcibr_intr) +{ + pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft; + unsigned pcibr_int_bits = pcibr_intr->bi_ibits; + unsigned pcibr_int_bit; + + for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) + if (pcibr_int_bits & (1 << pcibr_int_bit)) + return xtalk_intr_cpu_get(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr); + return 0; +} + +/* ===================================================================== + * INTERRUPT HANDLING + */ +void +pcibr_clearwidint(bridge_t *bridge) +{ + bridge->b_wid_int_upper = 0; + bridge->b_wid_int_lower = 0; +} + + +void +pcibr_setwidint(xtalk_intr_t intr) +{ + xwidgetnum_t targ = xtalk_intr_target_get(intr); + iopaddr_t addr = xtalk_intr_addr_get(intr); + xtalk_intr_vector_t vect = xtalk_intr_vector_get(intr); + widgetreg_t NEW_b_wid_int_upper, NEW_b_wid_int_lower; + widgetreg_t OLD_b_wid_int_upper, OLD_b_wid_int_lower; + + bridge_t *bridge = (bridge_t *)xtalk_intr_sfarg_get(intr); + + NEW_b_wid_int_upper = ( (0x000F0000 & (targ << 16)) | + XTALK_ADDR_TO_UPPER(addr)); + NEW_b_wid_int_lower = XTALK_ADDR_TO_LOWER(addr); + + OLD_b_wid_int_upper = bridge->b_wid_int_upper; + OLD_b_wid_int_lower = bridge->b_wid_int_lower; + + /* Verify that all interrupts from this Bridge are using a single PI */ + if ((OLD_b_wid_int_upper != 0) && (OLD_b_wid_int_lower != 0)) { + /* + * Once set, these registers shouldn't change; they should + * be set multiple times with the same values. + * + * If we're attempting to change these registers, it means + * that our heuristics for allocating interrupts in a way + * appropriate for IP35 have failed, and the admin needs to + * explicitly direct some interrupts (or we need to make the + * heuristics more clever). + * + * In practice, we hope this doesn't happen very often, if + * at all. + */ + if ((OLD_b_wid_int_upper != NEW_b_wid_int_upper) || + (OLD_b_wid_int_lower != NEW_b_wid_int_lower)) { + printk(KERN_WARNING "Interrupt allocation is too complex.\n"); + printk(KERN_WARNING "Use explicit administrative interrupt targetting.\n"); + printk(KERN_WARNING "bridge=0x%lx targ=0x%x\n", (unsigned long)bridge, targ); + printk(KERN_WARNING "NEW=0x%x/0x%x OLD=0x%x/0x%x\n", + NEW_b_wid_int_upper, NEW_b_wid_int_lower, + OLD_b_wid_int_upper, OLD_b_wid_int_lower); + PRINT_PANIC("PCI Bridge interrupt targetting error\n"); + } + } + + bridge->b_wid_int_upper = NEW_b_wid_int_upper; + bridge->b_wid_int_lower = NEW_b_wid_int_lower; + bridge->b_int_host_err = vect; +} + +/* + * pcibr_intr_preset: called during mlreset time + * if the platform specific code needs to route + * one of the Bridge's xtalk interrupts before the + * xtalk infrastructure is available. + */ +void +pcibr_xintr_preset(void *which_widget, + int which_widget_intr, + xwidgetnum_t targ, + iopaddr_t addr, + xtalk_intr_vector_t vect) +{ + bridge_t *bridge = (bridge_t *) which_widget; + + if (which_widget_intr == -1) { + /* bridge widget error interrupt */ + bridge->b_wid_int_upper = ( (0x000F0000 & (targ << 16)) | + XTALK_ADDR_TO_UPPER(addr)); + bridge->b_wid_int_lower = XTALK_ADDR_TO_LOWER(addr); + bridge->b_int_host_err = vect; + + /* turn on all interrupts except + * the PCI interrupt requests, + * at least at heart. 
+ */ + bridge->b_int_enable |= ~BRIDGE_IMR_INT_MSK; + + } else { + /* routing a PCI device interrupt. + * targ and low 38 bits of addr must + * be the same as the already set + * value for the widget error interrupt. + */ + bridge->b_int_addr[which_widget_intr].addr = + ((BRIDGE_INT_ADDR_HOST & (addr >> 30)) | + (BRIDGE_INT_ADDR_FLD & vect)); + /* + * now bridge can let it through; + * NB: still should be blocked at + * xtalk provider end, until the service + * function is set. + */ + bridge->b_int_enable |= 1 << vect; + } + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ +} + + +/* + * pcibr_intr_func() + * + * This is the pcibr interrupt "wrapper" function that is called, + * in interrupt context, to initiate the interrupt handler(s) registered + * (via pcibr_intr_alloc/connect) for the occuring interrupt. Non-threaded + * handlers will be called directly, and threaded handlers will have their + * thread woken up. + */ +void +pcibr_intr_func(intr_arg_t arg) +{ + pcibr_intr_wrap_t wrap = (pcibr_intr_wrap_t) arg; + reg_p wrbf; + pcibr_intr_t intr; + pcibr_intr_list_t list; + int clearit; + int do_nonthreaded = 1; + int is_threaded = 0; + int x = 0; + + /* + * If any handler is still running from a previous interrupt + * just return. If there's a need to call the handler(s) again, + * another interrupt will be generated either by the device or by + * pcibr_force_interrupt(). + */ + + if (wrap->iw_hdlrcnt) { + return; + } + + /* + * Call all interrupt handlers registered. + * First, the pcibr_intrd threads for any threaded handlers will be + * awoken, then any non-threaded handlers will be called sequentially. + */ + + clearit = 1; + while (do_nonthreaded) { + for (list = wrap->iw_list; list != NULL; list = list->il_next) { + if ((intr = list->il_intr) && + (intr->bi_flags & PCIIO_INTR_CONNECTED)) { + + /* + * This device may have initiated write + * requests since the bridge last saw + * an edge on this interrupt input; flushing + * the buffer prior to invoking the handler + * should help but may not be sufficient if we + * get more requests after the flush, followed + * by the card deciding it wants service, before + * the interrupt handler checks to see if things need + * to be done. + * + * There is a similar race condition if + * an interrupt handler loops around and + * notices further service is required. + * Perhaps we need to have an explicit + * call that interrupt handlers need to + * do between noticing that DMA to memory + * has completed, but before observing the + * contents of memory? + */ + + if ((do_nonthreaded) && (!is_threaded)) { + /* Non-threaded. + * Call the interrupt handler at interrupt level + */ + + /* Only need to flush write buffers if sharing */ + + if ((wrap->iw_shared) && (wrbf = list->il_wrbf)) { + if ((x = *wrbf)) /* write request buffer flush */ +#ifdef SUPPORT_PRINTING_V_FORMAT + printk(KERN_ALERT "pcibr_intr_func %v: \n" + "write buffer flush failed, wrbf=0x%x\n", + list->il_intr->bi_dev, wrbf); +#else + printk(KERN_ALERT "pcibr_intr_func %p: \n" + "write buffer flush failed, wrbf=0x%lx\n", + (void *)list->il_intr->bi_dev, (long) wrbf); +#endif + } + } + + clearit = 0; + } + } + + do_nonthreaded = 0; + /* + * If the non-threaded handler was the last to complete, + * (i.e., no threaded handlers still running) force an + * interrupt to avoid a potential deadlock situation. + */ + if (wrap->iw_hdlrcnt == 0) { + pcibr_force_interrupt(wrap); + } + } + + /* If there were no handlers, + * disable the interrupt and return. 
+ * It will get enabled again after + * a handler is connected. + * If we don't do this, we would + * sit here and spin through the + * list forever. + */ + if (clearit) { + pcibr_soft_t pcibr_soft = wrap->iw_soft; + bridge_t *bridge = pcibr_soft->bs_base; + bridgereg_t b_int_enable; + bridgereg_t mask = 1 << wrap->iw_intr; + unsigned long s; + + s = pcibr_lock(pcibr_soft); + b_int_enable = bridge->b_int_enable; + b_int_enable &= ~mask; + bridge->b_int_enable = b_int_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + pcibr_unlock(pcibr_soft, s); + return; + } +} diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_rrb.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_rrb.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_rrb.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,896 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +void do_pcibr_rrb_clear(bridge_t *, int); +void do_pcibr_rrb_flush(bridge_t *, int); +int do_pcibr_rrb_count_valid(bridge_t *, pciio_slot_t); +int do_pcibr_rrb_count_avail(bridge_t *, pciio_slot_t); +int do_pcibr_rrb_alloc(bridge_t *, pciio_slot_t, int); +int do_pcibr_rrb_free(bridge_t *, pciio_slot_t, int); + +void do_pcibr_rrb_autoalloc(pcibr_soft_t, int, int); + +int pcibr_wrb_flush(devfs_handle_t); +int pcibr_rrb_alloc(devfs_handle_t, int *, int *); +int pcibr_rrb_check(devfs_handle_t, int *, int *, int *, int *); +int pcibr_alloc_all_rrbs(devfs_handle_t, int, int, int, int, int, int, int, int, int); +void pcibr_rrb_flush(devfs_handle_t); +int pcibr_slot_initial_rrb_alloc(devfs_handle_t,pciio_slot_t); + +/* + * RRB Management + */ + +#define LSBIT(word) ((word) &~ ((word)-1)) + +void +do_pcibr_rrb_clear(bridge_t *bridge, int rrb) +{ + bridgereg_t status; + + /* bridge_lock must be held; + * this RRB must be disabled. + */ + + /* wait until RRB has no outstanduing XIO packets. */ + while ((status = bridge->b_resp_status) & BRIDGE_RRB_INUSE(rrb)) { + ; /* XXX- beats on bridge. bad idea? */ + } + + /* if the RRB has data, drain it. */ + if (status & BRIDGE_RRB_VALID(rrb)) { + bridge->b_resp_clear = BRIDGE_RRB_CLEAR(rrb); + + /* wait until RRB is no longer valid. */ + while ((status = bridge->b_resp_status) & BRIDGE_RRB_VALID(rrb)) { + ; /* XXX- beats on bridge. bad idea? */ + } + } +} + +void +do_pcibr_rrb_flush(bridge_t *bridge, int rrbn) +{ + reg_p rrbp = &bridge->b_rrb_map[rrbn & 1].reg; + bridgereg_t rrbv; + int shft = 4 * (rrbn >> 1); + unsigned ebit = BRIDGE_RRB_EN << shft; + + rrbv = *rrbp; + if (rrbv & ebit) + *rrbp = rrbv & ~ebit; + + do_pcibr_rrb_clear(bridge, rrbn); + + if (rrbv & ebit) + *rrbp = rrbv; +} + +/* + * pcibr_rrb_count_valid: count how many RRBs are + * marked valid for the specified PCI slot on this + * bridge. + * + * NOTE: The "slot" parameter for all pcibr_rrb + * management routines must include the "virtual" + * bit; when manageing both the normal and the + * virtual channel, separate calls to these + * routines must be made. To denote the virtual + * channel, add PCIBR_RRB_SLOT_VIRTUAL to the slot + * number. 
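+ * (For example, RRBs on slot 3's virtual channel are
+ * counted by passing 3 + PCIBR_RRB_SLOT_VIRTUAL rather
+ * than just 3.)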
+ * + * IMPL NOTE: The obvious algorithm is to iterate + * through the RRB fields, incrementing a count if + * the RRB is valid and matches the slot. However, + * it is much simpler to use an algorithm derived + * from the "partitioned add" idea. First, XOR in a + * pattern such that the fields that match this + * slot come up "all ones" and all other fields + * have zeros in the mismatching bits. Then AND + * together the bits in the field, so we end up + * with one bit turned on for each field that + * matched. Now we need to count these bits. This + * can be done either with a series of shift/add + * instructions or by using "tmp % 15"; I expect + * that the cascaded shift/add will be faster. + */ + +int +do_pcibr_rrb_count_valid(bridge_t *bridge, + pciio_slot_t slot) +{ + bridgereg_t tmp; + + tmp = bridge->b_rrb_map[slot & 1].reg; + tmp ^= 0x11111111 * (7 - slot / 2); + tmp &= (0xCCCCCCCC & tmp) >> 2; + tmp &= (0x22222222 & tmp) >> 1; + tmp += tmp >> 4; + tmp += tmp >> 8; + tmp += tmp >> 16; + return tmp & 15; +} + +/* + * do_pcibr_rrb_count_avail: count how many RRBs are + * available to be allocated for the specified slot. + * + * IMPL NOTE: similar to the above, except we are + * just counting how many fields have the valid bit + * turned off. + */ +int +do_pcibr_rrb_count_avail(bridge_t *bridge, + pciio_slot_t slot) +{ + bridgereg_t tmp; + + tmp = bridge->b_rrb_map[slot & 1].reg; + tmp = (0x88888888 & ~tmp) >> 3; + tmp += tmp >> 4; + tmp += tmp >> 8; + tmp += tmp >> 16; + return tmp & 15; +} + +/* + * do_pcibr_rrb_alloc: allocate some additional RRBs + * for the specified slot. Returns -1 if there were + * insufficient free RRBs to satisfy the request, + * or 0 if the request was fulfilled. + * + * Note that if a request can be partially filled, + * it will be, even if we return failure. + * + * IMPL NOTE: again we avoid iterating across all + * the RRBs; instead, we form up a word containing + * one bit for each free RRB, then peel the bits + * off from the low end. + */ +int +do_pcibr_rrb_alloc(bridge_t *bridge, + pciio_slot_t slot, + int more) +{ + int rv = 0; + bridgereg_t reg, tmp, bit; + + reg = bridge->b_rrb_map[slot & 1].reg; + tmp = (0x88888888 & ~reg) >> 3; + while (more-- > 0) { + bit = LSBIT(tmp); + if (!bit) { + rv = -1; + break; + } + tmp &= ~bit; + reg = ((reg & ~(bit * 15)) | (bit * (8 + slot / 2))); + } + bridge->b_rrb_map[slot & 1].reg = reg; + return rv; +} + +/* + * do_pcibr_rrb_free: release some of the RRBs that + * have been allocated for the specified + * slot. Returns zero for success, or negative if + * it was unable to free that many RRBs. + * + * IMPL NOTE: We form up a bit for each RRB + * allocated to the slot, aligned with the VALID + * bitfield this time; then we peel bits off one at + * a time, releasing the corresponding RRB. + */ +int +do_pcibr_rrb_free(bridge_t *bridge, + pciio_slot_t slot, + int less) +{ + int rv = 0; + bridgereg_t reg, tmp, clr, bit; + int i; + + clr = 0; + reg = bridge->b_rrb_map[slot & 1].reg; + + /* This needs to be done otherwise the rrb's on the virtual channel + * for this slot won't be freed !! 
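+ *
+ * The XOR / shifted-AND sequence below is the same "partitioned"
+ * trick described in the comment above do_pcibr_rrb_count_valid():
+ * the XOR turns every nibble that matches the wanted pattern into
+ * all-ones, the two ANDs collapse each all-ones nibble down to a
+ * single set bit, and LSBIT() then peels those bits off one RRB at
+ * a time.  As a generic sketch of the counting form of the idea
+ * (illustration only, not driver code; "word" and "pattern" are
+ * placeholders):
+ *
+ *	unsigned m = word ^ (0x11111111 * (0xF ^ pattern));
+ *	m &= (0xCCCCCCCC & m) >> 2;	(bit1 &= bit3, bit0 &= bit2)
+ *	m &= (0x22222222 & m) >> 1;	(nibble is 1 iff it matched)
+ *	m += m >> 4;			(cascaded add of the nibbles)
+ *	m += m >> 8;
+ *	m += m >> 16;
+ *	count = m & 15;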
+ */ + tmp = reg & 0xbbbbbbbb; + + tmp ^= (0x11111111 * (7 - slot / 2)); + tmp &= (0x33333333 & tmp) << 2; + tmp &= (0x44444444 & tmp) << 1; + while (less-- > 0) { + bit = LSBIT(tmp); + if (!bit) { + rv = -1; + break; + } + tmp &= ~bit; + reg &= ~bit; + clr |= bit; + } + bridge->b_rrb_map[slot & 1].reg = reg; + + for (i = 0; i < 8; i++) + if (clr & (8 << (4 * i))) + do_pcibr_rrb_clear(bridge, (2 * i) + (slot & 1)); + + return rv; +} + +void +do_pcibr_rrb_autoalloc(pcibr_soft_t pcibr_soft, + int slot, + int more_rrbs) +{ + bridge_t *bridge = pcibr_soft->bs_base; + int got; + + for (got = 0; got < more_rrbs; ++got) { + if (pcibr_soft->bs_rrb_res[slot & 7] > 0) + pcibr_soft->bs_rrb_res[slot & 7]--; + else if (pcibr_soft->bs_rrb_avail[slot & 1] > 0) + pcibr_soft->bs_rrb_avail[slot & 1]--; + else + break; + if (do_pcibr_rrb_alloc(bridge, slot, 1) < 0) + break; +#if PCIBR_RRB_DEBUG + printk("do_pcibr_rrb_autoalloc: add one to slot %d%s\n", + slot & 7, slot & 8 ? "v" : ""); +#endif + pcibr_soft->bs_rrb_valid[slot]++; + } +#if PCIBR_RRB_DEBUG + printk("%s: %d+%d free RRBs. Allocation list:\n", pcibr_soft->bs_name, + pcibr_soft->bs_rrb_avail[0], + pcibr_soft->bs_rrb_avail[1]); + for (slot = 0; slot < 8; ++slot) + printk("\t%d+%d+%d", + 0xFFF & pcibr_soft->bs_rrb_valid[slot], + 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + printk("\n"); +#endif +} + +/* + * Device driver interface to flush the write buffers for a specified + * device hanging off the bridge. + */ +int +pcibr_wrb_flush(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + volatile bridgereg_t *wrb_flush; + + wrb_flush = &(bridge->b_wr_req_buf[pciio_slot].reg); + while (*wrb_flush); + + return(0); +} + +/* + * Device driver interface to request RRBs for a specified device + * hanging off a Bridge. The driver requests the total number of + * RRBs it would like for the normal channel (vchan0) and for the + * "virtual channel" (vchan1). The actual number allocated to each + * channel is returned. + * + * If we cannot allocate at least one RRB to a channel that needs + * at least one, return -1 (failure). Otherwise, satisfy the request + * as best we can and return 0. + */ +int +pcibr_rrb_alloc(devfs_handle_t pconn_vhdl, + int *count_vchan0, + int *count_vchan1) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + int desired_vchan0; + int desired_vchan1; + int orig_vchan0; + int orig_vchan1; + int delta_vchan0; + int delta_vchan1; + int final_vchan0; + int final_vchan1; + int avail_rrbs; + int res_rrbs; + unsigned long s; + int error; + + /* + * TBD: temper request with admin info about RRB allocation, + * and according to demand from other devices on this Bridge. + * + * One way of doing this would be to allocate two RRBs + * for each device on the bus, before any drivers start + * asking for extras. This has the weakness that one + * driver might not give back an "extra" RRB until after + * another driver has already failed to get one that + * it wanted. 
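+ *
+ * As implemented below, the request is simply trimmed against
+ * this slot's reserved RRBs plus the even/odd free pool,
+ * shaving whichever channel would end up with more until the
+ * total fits.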
+ */ + + s = pcibr_lock(pcibr_soft); + + /* Save the boot-time RRB configuration for this slot */ + if (pcibr_soft->bs_rrb_valid_dflt[pciio_slot] < 0) { + pcibr_soft->bs_rrb_valid_dflt[pciio_slot] = + pcibr_soft->bs_rrb_valid[pciio_slot]; + pcibr_soft->bs_rrb_valid_dflt[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL] = + pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL]; + pcibr_soft->bs_rrb_res_dflt[pciio_slot] = + pcibr_soft->bs_rrb_res[pciio_slot]; + + } + + /* How many RRBs do we own? */ + orig_vchan0 = pcibr_soft->bs_rrb_valid[pciio_slot]; + orig_vchan1 = pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL]; + + /* How many RRBs do we want? */ + desired_vchan0 = count_vchan0 ? *count_vchan0 : orig_vchan0; + desired_vchan1 = count_vchan1 ? *count_vchan1 : orig_vchan1; + + /* How many RRBs are free? */ + avail_rrbs = pcibr_soft->bs_rrb_avail[pciio_slot & 1] + + pcibr_soft->bs_rrb_res[pciio_slot]; + + /* Figure desired deltas */ + delta_vchan0 = desired_vchan0 - orig_vchan0; + delta_vchan1 = desired_vchan1 - orig_vchan1; + + /* Trim back deltas to something + * that we can actually meet, by + * decreasing the ending allocation + * for whichever channel wants + * more RRBs. If both want the same + * number, cut the second channel. + * NOTE: do not change the allocation for + * a channel that was passed as NULL. + */ + while ((delta_vchan0 + delta_vchan1) > avail_rrbs) { + if (count_vchan0 && + (!count_vchan1 || + ((orig_vchan0 + delta_vchan0) > + (orig_vchan1 + delta_vchan1)))) + delta_vchan0--; + else + delta_vchan1--; + } + + /* Figure final RRB allocations + */ + final_vchan0 = orig_vchan0 + delta_vchan0; + final_vchan1 = orig_vchan1 + delta_vchan1; + + /* If either channel wants RRBs but our actions + * would leave it with none, declare an error, + * but DO NOT change any RRB allocations. + */ + if ((desired_vchan0 && !final_vchan0) || + (desired_vchan1 && !final_vchan1)) { + + error = -1; + + } else { + + /* Commit the allocations: free, then alloc. + */ + if (delta_vchan0 < 0) + (void) do_pcibr_rrb_free(bridge, pciio_slot, -delta_vchan0); + if (delta_vchan1 < 0) + (void) do_pcibr_rrb_free(bridge, PCIBR_RRB_SLOT_VIRTUAL + pciio_slot, -delta_vchan1); + + if (delta_vchan0 > 0) + (void) do_pcibr_rrb_alloc(bridge, pciio_slot, delta_vchan0); + if (delta_vchan1 > 0) + (void) do_pcibr_rrb_alloc(bridge, PCIBR_RRB_SLOT_VIRTUAL + pciio_slot, delta_vchan1); + + /* Return final values to caller. + */ + if (count_vchan0) + *count_vchan0 = final_vchan0; + if (count_vchan1) + *count_vchan1 = final_vchan1; + + /* prevent automatic changes to this slot's RRBs + */ + pcibr_soft->bs_rrb_fixed |= 1 << pciio_slot; + + /* Track the actual allocations, release + * any further reservations, and update the + * number of available RRBs. 
+ */ + + pcibr_soft->bs_rrb_valid[pciio_slot] = final_vchan0; + pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL] = final_vchan1; + pcibr_soft->bs_rrb_avail[pciio_slot & 1] = + pcibr_soft->bs_rrb_avail[pciio_slot & 1] + + pcibr_soft->bs_rrb_res[pciio_slot] + - delta_vchan0 + - delta_vchan1; + pcibr_soft->bs_rrb_res[pciio_slot] = 0; + + /* + * Reserve enough RRBs so this slot's RRB configuration can be + * reset to its boot-time default following a hot-plug shut-down + */ + res_rrbs = (pcibr_soft->bs_rrb_valid_dflt[pciio_slot] - + pcibr_soft->bs_rrb_valid[pciio_slot]) + + (pcibr_soft->bs_rrb_valid_dflt[pciio_slot + + PCIBR_RRB_SLOT_VIRTUAL] - + pcibr_soft->bs_rrb_valid[pciio_slot + + PCIBR_RRB_SLOT_VIRTUAL]) + + (pcibr_soft->bs_rrb_res_dflt[pciio_slot] - + pcibr_soft->bs_rrb_res[pciio_slot]); + + if (res_rrbs > 0) { + pcibr_soft->bs_rrb_res[pciio_slot] = res_rrbs; + pcibr_soft->bs_rrb_avail[pciio_slot & 1] = + pcibr_soft->bs_rrb_avail[pciio_slot & 1] + - res_rrbs; + } + +#if PCIBR_RRB_DEBUG + printk("pcibr_rrb_alloc: slot %d set to %d+%d; %d+%d free\n", + pciio_slot, final_vchan0, final_vchan1, + pcibr_soft->bs_rrb_avail[0], + pcibr_soft->bs_rrb_avail[1]); + for (pciio_slot = 0; pciio_slot < 8; ++pciio_slot) + printk("\t%d+%d+%d", + 0xFFF & pcibr_soft->bs_rrb_valid[pciio_slot], + 0xFFF & pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[pciio_slot]); + printk("\n"); +#endif + + error = 0; + } + + pcibr_unlock(pcibr_soft, s); + + return error; +} + +/* + * Device driver interface to check the current state + * of the RRB allocations. + * + * pconn_vhdl is your PCI connection point (specifies which + * PCI bus and which slot). + * + * count_vchan0 points to where to return the number of RRBs + * assigned to the primary DMA channel, used by all DMA + * that does not explicitly ask for the alternate virtual + * channel. + * + * count_vchan1 points to where to return the number of RRBs + * assigned to the secondary DMA channel, used when + * PCIBR_VCHAN1 and PCIIO_DMA_A64 are specified. + * + * count_reserved points to where to return the number of RRBs + * that have been automatically reserved for your device at + * startup, but which have not been assigned to a + * channel. RRBs must be assigned to a channel to be used; + * this can be done either with an explicit pcibr_rrb_alloc + * call, or automatically by the infrastructure when a DMA + * translation is constructed. Any call to pcibr_rrb_alloc + * will release any unassigned reserved RRBs back to the + * free pool. + * + * count_pool points to where to return the number of RRBs + * that are currently unassigned and unreserved. This + * number can (and will) change as other drivers make calls + * to pcibr_rrb_alloc, or automatically allocate RRBs for + * DMA beyond their initial reservation. + * + * NULL may be passed for any of the return value pointers + * the caller is not interested in. + * + * The return value is "0" if all went well, or "-1" if + * there is a problem. Additionally, if the wrong vertex + * is passed in, one of the subsidiary support functions + * could panic with a "bad pciio fingerprint." 
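+ *
+ * A driver might use this together with pcibr_rrb_alloc roughly
+ * as follows (an illustrative sketch only, error handling left
+ * out):
+ *
+ *	int vc0, vc1, res, pool;
+ *
+ *	pcibr_rrb_check(pconn_vhdl, &vc0, &vc1, &res, &pool);
+ *	vc0 = 4;	(wanted on the normal channel)
+ *	vc1 = 1;	(wanted on the virtual channel)
+ *	if (pcibr_rrb_alloc(pconn_vhdl, &vc0, &vc1) == 0) {
+ *		(vc0 and vc1 now hold what was actually granted)
+ *	}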
+ */ + +int +pcibr_rrb_check(devfs_handle_t pconn_vhdl, + int *count_vchan0, + int *count_vchan1, + int *count_reserved, + int *count_pool) +{ + pciio_info_t pciio_info; + pciio_slot_t pciio_slot; + pcibr_soft_t pcibr_soft; + unsigned long s; + int error = -1; + + if ((pciio_info = pciio_info_get(pconn_vhdl)) && + (pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info)) && + ((pciio_slot = pciio_info_slot_get(pciio_info)) < 8)) { + + s = pcibr_lock(pcibr_soft); + + if (count_vchan0) + *count_vchan0 = + pcibr_soft->bs_rrb_valid[pciio_slot]; + + if (count_vchan1) + *count_vchan1 = + pcibr_soft->bs_rrb_valid[pciio_slot + PCIBR_RRB_SLOT_VIRTUAL]; + + if (count_reserved) + *count_reserved = + pcibr_soft->bs_rrb_res[pciio_slot]; + + if (count_pool) + *count_pool = + pcibr_soft->bs_rrb_avail[pciio_slot & 1]; + + error = 0; + + pcibr_unlock(pcibr_soft, s); + } + return error; +} + +/* pcibr_alloc_all_rrbs allocates all the rrbs available in the quantities + * requested for each of the devices. The evn_odd argument indicates whether + * allocation is for the odd or even rrbs. The next group of four argument + * pairs indicate the amount of rrbs to be assigned to each device. The first + * argument of each pair indicate the total number of rrbs to allocate for that + * device. The second argument of each pair indicates how many rrb's from the + * first argument should be assigned to the virtual channel. The total of all + * of the first arguments should be <= 8. The second argument should be <= the + * first argument. + * if even_odd = 0 the devices in order are 0, 2, 4, 6 + * if even_odd = 1 the devices in order are 1, 3, 5, 7 + * returns 0 if no errors else returns -1 + */ + +int +pcibr_alloc_all_rrbs(devfs_handle_t vhdl, int even_odd, + int dev_1_rrbs, int virt1, int dev_2_rrbs, int virt2, + int dev_3_rrbs, int virt3, int dev_4_rrbs, int virt4) +{ + devfs_handle_t pcibr_vhdl; + pcibr_soft_t pcibr_soft = (pcibr_soft_t)0; + bridge_t *bridge = NULL; + + uint32_t rrb_setting = 0; + int rrb_shift = 7; + uint32_t cur_rrb; + int dev_rrbs[4]; + int virt[4]; + int i, j; + unsigned long s; + + if (GRAPH_SUCCESS == + hwgraph_traverse(vhdl, EDGE_LBL_PCI, &pcibr_vhdl)) { + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + if (pcibr_soft) + bridge = pcibr_soft->bs_base; + hwgraph_vertex_unref(pcibr_vhdl); + } + if (bridge == NULL) + bridge = (bridge_t *) xtalk_piotrans_addr + (vhdl, NULL, 0, sizeof(bridge_t), 0); + + even_odd &= 1; + + dev_rrbs[0] = dev_1_rrbs; + dev_rrbs[1] = dev_2_rrbs; + dev_rrbs[2] = dev_3_rrbs; + dev_rrbs[3] = dev_4_rrbs; + + virt[0] = virt1; + virt[1] = virt2; + virt[2] = virt3; + virt[3] = virt4; + + if ((dev_1_rrbs + dev_2_rrbs + dev_3_rrbs + dev_4_rrbs) > 8) { + return -1; + } + if ((dev_1_rrbs < 0) || (dev_2_rrbs < 0) || (dev_3_rrbs < 0) || (dev_4_rrbs < 0)) { + return -1; + } + /* walk through rrbs */ + for (i = 0; i < 4; i++) { + if (virt[i]) { + for( j = 0; j < virt[i]; j++) { + cur_rrb = i | 0xc; + cur_rrb = cur_rrb << (rrb_shift * 4); + rrb_shift--; + rrb_setting = rrb_setting | cur_rrb; + dev_rrbs[i] = dev_rrbs[i] - 1; + } + } + for (j = 0; j < dev_rrbs[i]; j++) { + cur_rrb = i | 0x8; + cur_rrb = cur_rrb << (rrb_shift * 4); + rrb_shift--; + rrb_setting = rrb_setting | cur_rrb; + } + } + + if (pcibr_soft) + s = pcibr_lock(pcibr_soft); + + bridge->b_rrb_map[even_odd].reg = rrb_setting; + + if (pcibr_soft) { + + pcibr_soft->bs_rrb_fixed |= 0x55 << even_odd; + + /* since we've "FIXED" the allocations + * for these slots, we probably can dispense + * with tracking avail/res/valid 
data, but + * keeping it up to date helps debugging. + */ + + pcibr_soft->bs_rrb_avail[even_odd] = + 8 - (dev_1_rrbs + dev_2_rrbs + dev_3_rrbs + dev_4_rrbs); + + pcibr_soft->bs_rrb_res[even_odd + 0] = 0; + pcibr_soft->bs_rrb_res[even_odd + 2] = 0; + pcibr_soft->bs_rrb_res[even_odd + 4] = 0; + pcibr_soft->bs_rrb_res[even_odd + 6] = 0; + + pcibr_soft->bs_rrb_valid[even_odd + 0] = dev_1_rrbs - virt1; + pcibr_soft->bs_rrb_valid[even_odd + 2] = dev_2_rrbs - virt2; + pcibr_soft->bs_rrb_valid[even_odd + 4] = dev_3_rrbs - virt3; + pcibr_soft->bs_rrb_valid[even_odd + 6] = dev_4_rrbs - virt4; + + pcibr_soft->bs_rrb_valid[even_odd + 0 + PCIBR_RRB_SLOT_VIRTUAL] = virt1; + pcibr_soft->bs_rrb_valid[even_odd + 2 + PCIBR_RRB_SLOT_VIRTUAL] = virt2; + pcibr_soft->bs_rrb_valid[even_odd + 4 + PCIBR_RRB_SLOT_VIRTUAL] = virt3; + pcibr_soft->bs_rrb_valid[even_odd + 6 + PCIBR_RRB_SLOT_VIRTUAL] = virt4; + + pcibr_unlock(pcibr_soft, s); + } + return 0; +} + +/* + * pcibr_rrb_flush: chase down all the RRBs assigned + * to the specified connection point, and flush + * them. + */ +void +pcibr_rrb_flush(devfs_handle_t pconn_vhdl) +{ + pciio_info_t pciio_info = pciio_info_get(pconn_vhdl); + pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info); + pciio_slot_t pciio_slot = pciio_info_slot_get(pciio_info); + bridge_t *bridge = pcibr_soft->bs_base; + unsigned long s; + reg_p rrbp; + unsigned rrbm; + int i; + int rrbn; + unsigned sval; + unsigned mask; + + sval = BRIDGE_RRB_EN | (pciio_slot >> 1); + mask = BRIDGE_RRB_EN | BRIDGE_RRB_PDEV; + rrbn = pciio_slot & 1; + rrbp = &bridge->b_rrb_map[rrbn].reg; + + s = pcibr_lock(pcibr_soft); + rrbm = *rrbp; + for (i = 0; i < 8; ++i) { + if ((rrbm & mask) == sval) + do_pcibr_rrb_flush(bridge, rrbn); + rrbm >>= 4; + rrbn += 2; + } + pcibr_unlock(pcibr_soft, s); +} + +/* + * pcibr_slot_initial_rrb_alloc + * Allocate a default number of rrbs for this slot on + * the two channels. This is dictated by the rrb allocation + * strategy routine defined per platform. + */ + +int +pcibr_slot_initial_rrb_alloc(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + bridge_t *bridge; + int c0, c1, r; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + bridge = pcibr_soft->bs_base; + + /* How may RRBs are on this slot? + */ + c0 = do_pcibr_rrb_count_valid(bridge, slot); + c1 = do_pcibr_rrb_count_valid(bridge, slot + PCIBR_RRB_SLOT_VIRTUAL); + +#if PCIBR_RRB_DEBUG + printk( + "pcibr_slot_initial_rrb_alloc: slot %d started with %d+%d\n", + slot, c0, c1); +#endif + + /* Do we really need any? 
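
A standalone sketch (plain user-space C, illustrative only) of the register packing performed by pcibr_alloc_all_rrbs() above: each RRB gets a 4-bit field, filled from the most significant nibble down, with 0x8 marking the RRB as assigned, 0x4 selecting the virtual channel, and the low two bits naming one of the four devices in the even or odd group. The sample counts in main() are arbitrary.

/*
 * Standalone user-space sketch of the nibble packing done by
 * pcibr_alloc_all_rrbs().  Per-device totals include the
 * virtual-channel RRBs, as in the function's arguments.
 */
#include <stdio.h>
#include <stdint.h>

static uint32_t pack_rrb_map(const int total[4], const int virt[4])
{
        uint32_t setting = 0;
        int shift = 7;                  /* most significant nibble first */
        int i, j;

        for (i = 0; i < 4; i++) {
                for (j = 0; j < virt[i]; j++)            /* virtual-channel RRBs */
                        setting |= (uint32_t)(i | 0xc) << (4 * shift--);
                for (j = 0; j < total[i] - virt[i]; j++) /* normal RRBs */
                        setting |= (uint32_t)(i | 0x8) << (4 * shift--);
        }
        return setting;
}

int main(void)
{
        int total[4] = { 2, 2, 2, 2 };  /* per-device RRB totals, sum <= 8 */
        int virt[4]  = { 1, 0, 0, 0 };  /* of those, virtual-channel RRBs  */

        printf("rrb_map = 0x%08x\n", pack_rrb_map(total, virt));  /* 0xc899aabb */
        return 0;
}
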
+ */ + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + pcibr_info = pcibr_infoh[0]; + if ((pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) && + !pcibr_soft->bs_slot[slot].has_host) { + if (c0 > 0) + do_pcibr_rrb_free(bridge, slot, c0); + if (c1 > 0) + do_pcibr_rrb_free(bridge, slot + PCIBR_RRB_SLOT_VIRTUAL, c1); + pcibr_soft->bs_rrb_valid[slot] = 0x1000; + pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL] = 0x1000; + return(ENODEV); + } + + pcibr_soft->bs_rrb_avail[slot & 1] -= c0 + c1; + pcibr_soft->bs_rrb_valid[slot] = c0; + pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL] = c1; + + pcibr_soft->bs_rrb_avail[0] = do_pcibr_rrb_count_avail(bridge, 0); + pcibr_soft->bs_rrb_avail[1] = do_pcibr_rrb_count_avail(bridge, 1); + + r = 3 - (c0 + c1); + + if (r > 0) { + pcibr_soft->bs_rrb_res[slot] = r; + pcibr_soft->bs_rrb_avail[slot & 1] -= r; + } + +#if PCIBR_RRB_DEBUG + printk("\t%d+%d+%d", + 0xFFF & pcibr_soft->bs_rrb_valid[slot], + 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + printk("\n"); +#endif + + return(0); +} + +/* + * pcibr_initial_rrb + * Assign an equal total number of RRBs to all candidate slots, + * where the total is the sum of the number of RRBs assigned to + * the normal channel, the number of RRBs assigned to the virtual + * channel, and the number of RRBs assigned as reserved. + * + * A candidate slot is a populated slot on a non-SN1 system or + * any existing (populated or empty) slot on an SN1 system. + * Empty SN1 slots need RRBs to support hot-plug operations. + */ + +int +pcibr_initial_rrb(devfs_handle_t pcibr_vhdl, + pciio_slot_t first, pciio_slot_t last) +{ + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge_t *bridge = pcibr_soft->bs_base; + pciio_slot_t slot; + int c0, c1; + int have[2][3]; + int res[2]; + int eo; + + have[0][0] = have[0][1] = have[0][2] = 0; + have[1][0] = have[1][1] = have[1][2] = 0; + res[0] = res[1] = 0; + + for (slot = 0; slot < 8; ++slot) { + /* Initial RRB management; give back RRBs in all non-existent slots */ + (void) pcibr_slot_initial_rrb_alloc(pcibr_vhdl, slot); + + /* Base calculations only on existing slots */ + if ((slot >= first) && (slot <= last)) { + c0 = pcibr_soft->bs_rrb_valid[slot]; + c1 = pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL]; + if ((c0 + c1) < 3) + have[slot & 1][c0 + c1]++; + } + } + + /* Initialize even/odd slot available RRB counts */ + pcibr_soft->bs_rrb_avail[0] = do_pcibr_rrb_count_avail(bridge, 0); + pcibr_soft->bs_rrb_avail[1] = do_pcibr_rrb_count_avail(bridge, 1); + + /* + * Calculate reserved RRBs for slots based on current RRB usage + */ + for (eo = 0; eo < 2; eo++) { + if ((3 * have[eo][0] + 2 * have[eo][1] + have[eo][2]) <= pcibr_soft->bs_rrb_avail[eo]) + res[eo] = 3; + else if ((2 * have[eo][0] + have[eo][1]) <= pcibr_soft->bs_rrb_avail[eo]) + res[eo] = 2; + else if (have[eo][0] <= pcibr_soft->bs_rrb_avail[eo]) + res[eo] = 1; + else + res[eo] = 0; + + } + + /* Assign reserved RRBs to existing slots */ + for (slot = first; slot <= last; ++slot) { + int r; + + c0 = pcibr_soft->bs_rrb_valid[slot]; + c1 = pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL]; + r = res[slot & 1] - (c0 + c1); + + if (r > 0) { + pcibr_soft->bs_rrb_res[slot] = r; + pcibr_soft->bs_rrb_avail[slot & 1] -= r; + } + } + +#if PCIBR_RRB_DEBUG + printk("%v RRB MANAGEMENT: %d+%d free\n", + pcibr_vhdl, + pcibr_soft->bs_rrb_avail[0], + pcibr_soft->bs_rrb_avail[1]); + for (slot = first; slot <= last; ++slot) + printk("\tslot %d: %d+%d+%d", slot, + 
0xFFF & pcibr_soft->bs_rrb_valid[slot], + 0xFFF & pcibr_soft->bs_rrb_valid[slot + PCIBR_RRB_SLOT_VIRTUAL], + pcibr_soft->bs_rrb_res[slot]); + printk("\n"); +#endif + + return 0; + +} + diff -Nru a/arch/ia64/sn/io/sn2/pcibr/pcibr_slot.c b/arch/ia64/sn/io/sn2/pcibr/pcibr_slot.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/pcibr/pcibr_slot.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,1692 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern pcibr_info_t pcibr_info_get(devfs_handle_t); +extern int pcibr_widget_to_bus(int); +extern pcibr_info_t pcibr_device_info_new(pcibr_soft_t, pciio_slot_t, pciio_function_t, pciio_vendor_id_t, pciio_device_id_t); +extern void pcibr_freeblock_sub(iopaddr_t *, iopaddr_t *, iopaddr_t, size_t); +extern int pcibr_slot_initial_rrb_alloc(devfs_handle_t,pciio_slot_t); +#if 0 +int pcibr_slot_reset(devfs_handle_t pcibr_vhdl, pciio_slot_t slot); +#endif + +int pcibr_slot_info_init(devfs_handle_t pcibr_vhdl, pciio_slot_t slot); +int pcibr_slot_info_free(devfs_handle_t pcibr_vhdl, pciio_slot_t slot); +int pcibr_slot_addr_space_init(devfs_handle_t pcibr_vhdl, pciio_slot_t slot); +int pcibr_slot_device_init(devfs_handle_t pcibr_vhdl, pciio_slot_t slot); +int pcibr_slot_guest_info_init(devfs_handle_t pcibr_vhdl, pciio_slot_t slot); +int pcibr_slot_call_device_attach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, int drv_flags); +int pcibr_slot_call_device_detach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, int drv_flags); +int pcibr_slot_detach(devfs_handle_t pcibr_vhdl, pciio_slot_t slot, int drv_flags); +int pcibr_is_slot_sys_critical(devfs_handle_t pcibr_vhdl, pciio_slot_t slot); +int pcibr_probe_slot(bridge_t *, cfg_p, unsigned int *); +void pcibr_device_info_free(devfs_handle_t, pciio_slot_t); +extern uint64_t do_pcibr_config_get(cfg_p, unsigned, unsigned); + +#ifdef LATER +int pcibr_slot_attach(devfs_handle_t pcibr_vhdl, pciio_slot_t slot, + int drv_flags, char *l1_msg, int *sub_errorp); +int pcibr_slot_pwr(devfs_handle_t, pciio_slot_t, int, char *); +int pcibr_slot_startup(devfs_handle_t, pcibr_slot_req_t); +int pcibr_slot_shutdown(devfs_handle_t, pcibr_slot_req_t); +void pcibr_slot_func_info_return(pcibr_info_h pcibr_infoh, int func, + pcibr_slot_func_info_resp_t funcp); +int pcibr_slot_info_return(pcibr_soft_t pcibr_soft, pciio_slot_t slot, + pcibr_slot_info_resp_t respp); +int pcibr_slot_query(devfs_handle_t, pcibr_slot_req_t); +#endif /* LATER */ + +extern devfs_handle_t baseio_pci_vhdl; +int scsi_ctlr_nums_add(devfs_handle_t, devfs_handle_t); + +/* For now .... */ +/* + * PCI Hot-Plug Capability Flags + */ +#define D_PCI_HOT_PLUG_ATTACH 0x200 /* Driver supports PCI hot-plug attach */ +#define D_PCI_HOT_PLUG_DETACH 0x400 /* Driver supports PCI hot-plug detach */ + + +/*========================================================================== + * BRIDGE PCI SLOT RELATED IOCTLs + */ + +#ifdef LATER + +/* + * pcibr_slot_startup + * Software start-up the PCI slot. 
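
The reservation pass in pcibr_initial_rrb() above picks, per even/odd group, the largest per-slot target (3, 2, or 1 RRBs) whose total top-up cost still fits in the free pool. Below is a standalone sketch of that arithmetic; the sample counts are made up.

/*
 * Standalone user-space sketch of the pcibr_initial_rrb() reservation
 * heuristic.  have[n] = number of candidate slots in the group currently
 * holding n RRBs (n = 0, 1, 2); slots already at 3 or more need nothing.
 */
#include <stdio.h>

static int pick_target(const int have[3], int avail)
{
        if (3 * have[0] + 2 * have[1] + have[2] <= avail)
                return 3;
        if (2 * have[0] + have[1] <= avail)
                return 2;
        if (have[0] <= avail)
                return 1;
        return 0;
}

int main(void)
{
        int have[3] = { 1, 2, 0 };      /* one empty slot, two slots with 1 RRB */
        int avail = 4;

        /* 3*1 + 2*2 + 0 = 7 > 4, but 2*1 + 2 = 4 <= 4, so the target is 2 */
        printf("reserve up to %d RRBs per slot\n", pick_target(have, avail));
        return 0;
}
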
+ */ +int +pcibr_slot_startup(devfs_handle_t pcibr_vhdl, pcibr_slot_req_t reqp) +{ + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + pciio_slot_t slot = reqp->req_slot; + int error = 0; + char l1_msg[BRL1_QSIZE+1]; + struct pcibr_slot_up_resp_s tmp_up_resp; + + /* Make sure that we are dealing with a bridge device vertex */ + if (!pcibr_soft) { + return(PCI_NOT_A_BRIDGE); + } + + /* Do not allow start-up of a slot in a shoehorn */ + if(nic_vertex_info_match(pcibr_soft->bs_conn, XTALK_PCI_PART_NUM)) { + return(PCI_SLOT_IN_SHOEHORN); + } + + /* Check for the valid slot */ + if (!PCIBR_VALID_SLOT(slot)) + return(PCI_NOT_A_SLOT); + + /* Acquire update access to the bus */ + mrlock(pcibr_soft->bs_bus_lock, MR_UPDATE, PZERO); + + if (pcibr_soft->bs_slot[slot].slot_status & SLOT_STARTUP_CMPLT) { + error = PCI_SLOT_ALREADY_UP; + goto startup_unlock; + } + + error = pcibr_slot_attach(pcibr_vhdl, slot, D_PCI_HOT_PLUG_ATTACH, + l1_msg, &tmp_up_resp.resp_sub_errno); + + strncpy(tmp_up_resp.resp_l1_msg, l1_msg, L1_QSIZE); + tmp_up_resp.resp_l1_msg[L1_QSIZE] = '\0'; + + if (COPYOUT(&tmp_up_resp, reqp->req_respp.up, reqp->req_size)) { + return(EFAULT); + } + + startup_unlock: + + /* Release the bus lock */ + mrunlock(pcibr_soft->bs_bus_lock); + + return(error); +} + +/* + * pcibr_slot_shutdown + * Software shut-down the PCI slot + */ +int +pcibr_slot_shutdown(devfs_handle_t pcibr_vhdl, pcibr_slot_req_t reqp) +{ + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + bridge_t *bridge; + pciio_slot_t slot = reqp->req_slot; + int error = 0; + char l1_msg[BRL1_QSIZE+1]; + struct pcibr_slot_down_resp_s tmp_down_resp; + pciio_slot_t tmp_slot; + + /* Make sure that we are dealing with a bridge device vertex */ + if (!pcibr_soft) { + return(PCI_NOT_A_BRIDGE); + } + + bridge = pcibr_soft->bs_base; + + /* Check for valid slot */ + if (!PCIBR_VALID_SLOT(slot)) + return(PCI_NOT_A_SLOT); + + /* Do not allow shut-down of a slot in a shoehorn */ + if(nic_vertex_info_match(pcibr_soft->bs_conn, XTALK_PCI_PART_NUM)) { + return(PCI_SLOT_IN_SHOEHORN); + } + + /* Acquire update access to the bus */ + mrlock(pcibr_soft->bs_bus_lock, MR_UPDATE, PZERO); + + if ((pcibr_soft->bs_slot[slot].slot_status & SLOT_SHUTDOWN_CMPLT) || + ((pcibr_soft->bs_slot[slot].slot_status & SLOT_STATUS_MASK) == 0)) { + error = PCI_SLOT_ALREADY_DOWN; + /* + * RJR - Should we invoke an L1 slot power-down command just in case + * a previous shut-down failed to power-down the slot? 
+ */ + goto shutdown_unlock; + } + + /* Do not allow the last 33 MHz card to be removed */ + if ((bridge->b_wid_control & BRIDGE_CTRL_BUS_SPEED_MASK) == + BRIDGE_CTRL_BUS_SPEED_33) { + for (tmp_slot = pcibr_soft->bs_first_slot; + tmp_slot <= pcibr_soft->bs_last_slot; tmp_slot++) + if (tmp_slot != slot) + if (pcibr_soft->bs_slot[tmp_slot].slot_status & SLOT_POWER_UP) { + error++; + break; + } + if (!error) { + error = PCI_EMPTY_33MHZ; + goto shutdown_unlock; + } + } + + error = pcibr_slot_detach(pcibr_vhdl, slot, D_PCI_HOT_PLUG_DETACH, + l1_msg, &tmp_down_resp.resp_sub_errno); + + strncpy(tmp_down_resp.resp_l1_msg, l1_msg, L1_QSIZE); + tmp_down_resp.resp_l1_msg[L1_QSIZE] = '\0'; + + if (COPYOUT(&tmp_down_resp, reqp->req_respp.down, reqp->req_size)) { + return(EFAULT); + } + + shutdown_unlock: + + /* Release the bus lock */ + mrunlock(pcibr_soft->bs_bus_lock); + + return(error); +} + +char *pci_space_name[] = {"NONE", + "ROM", + "IO", + "", + "MEM", + "MEM32", + "MEM64", + "CFG", + "WIN0", + "WIN1", + "WIN2", + "WIN3", + "WIN4", + "WIN5", + "", + "BAD"}; + +void +pcibr_slot_func_info_return(pcibr_info_h pcibr_infoh, + int func, + pcibr_slot_func_info_resp_t funcp) +{ + pcibr_info_t pcibr_info = pcibr_infoh[func]; + int win; + + funcp->resp_f_status = 0; + + if (!pcibr_info) { + return; + } + + funcp->resp_f_status |= FUNC_IS_VALID; + sprintf(funcp->resp_f_slot_name, "%v", pcibr_info->f_vertex); + + if(is_sys_critical_vertex(pcibr_info->f_vertex)) { + funcp->resp_f_status |= FUNC_IS_SYS_CRITICAL; + } + + funcp->resp_f_bus = pcibr_info->f_bus; + funcp->resp_f_slot = pcibr_info->f_slot; + funcp->resp_f_func = pcibr_info->f_func; + sprintf(funcp->resp_f_master_name, "%v", pcibr_info->f_master); + funcp->resp_f_pops = pcibr_info->f_pops; + funcp->resp_f_efunc = pcibr_info->f_efunc; + funcp->resp_f_einfo = pcibr_info->f_einfo; + + funcp->resp_f_vendor = pcibr_info->f_vendor; + funcp->resp_f_device = pcibr_info->f_device; + + for(win = 0 ; win < 6 ; win++) { + funcp->resp_f_window[win].resp_w_base = + pcibr_info->f_window[win].w_base; + funcp->resp_f_window[win].resp_w_size = + pcibr_info->f_window[win].w_size; + sprintf(funcp->resp_f_window[win].resp_w_space, + "%s", + pci_space_name[pcibr_info->f_window[win].w_space]); + } + + funcp->resp_f_rbase = pcibr_info->f_rbase; + funcp->resp_f_rsize = pcibr_info->f_rsize; + + for (win = 0 ; win < 4; win++) { + funcp->resp_f_ibit[win] = pcibr_info->f_ibit[win]; + } + + funcp->resp_f_att_det_error = pcibr_info->f_att_det_error; + +} + +int +pcibr_slot_info_return(pcibr_soft_t pcibr_soft, + pciio_slot_t slot, + pcibr_slot_info_resp_t respp) +{ + pcibr_soft_slot_t pss; + int func; + bridge_t *bridge = pcibr_soft->bs_base; + reg_p b_respp; + pcibr_slot_info_resp_t slotp; + pcibr_slot_func_info_resp_t funcp; + + slotp = kmem_zalloc(sizeof(*slotp), KM_SLEEP); + if (slotp == NULL) { + return(ENOMEM); + } + + pss = &pcibr_soft->bs_slot[slot]; + + slotp->resp_has_host = pss->has_host; + slotp->resp_host_slot = pss->host_slot; + sprintf(slotp->resp_slot_conn_name, "%v", pss->slot_conn); + slotp->resp_slot_status = pss->slot_status; + + slotp->resp_l1_bus_num = io_path_map_widget(pcibr_soft->bs_vhdl); + + if (is_sys_critical_vertex(pss->slot_conn)) { + slotp->resp_slot_status |= SLOT_IS_SYS_CRITICAL; + } + + slotp->resp_bss_ninfo = pss->bss_ninfo; + + for (func = 0; func < pss->bss_ninfo; func++) { + funcp = &(slotp->resp_func[func]); + pcibr_slot_func_info_return(pss->bss_infos, func, funcp); + } + + sprintf(slotp->resp_bss_devio_bssd_space, "%s", + 
pci_space_name[pss->bss_devio.bssd_space]); + slotp->resp_bss_devio_bssd_base = pss->bss_devio.bssd_base; + slotp->resp_bss_device = pss->bss_device; + + slotp->resp_bss_pmu_uctr = pss->bss_pmu_uctr; + slotp->resp_bss_d32_uctr = pss->bss_d32_uctr; + slotp->resp_bss_d64_uctr = pss->bss_d64_uctr; + + slotp->resp_bss_d64_base = pss->bss_d64_base; + slotp->resp_bss_d64_flags = pss->bss_d64_flags; + slotp->resp_bss_d32_base = pss->bss_d32_base; + slotp->resp_bss_d32_flags = pss->bss_d32_flags; + + slotp->resp_bss_ext_ates_active = pss->bss_ext_ates_active; + + slotp->resp_bss_cmd_pointer = pss->bss_cmd_pointer; + slotp->resp_bss_cmd_shadow = pss->bss_cmd_shadow; + + slotp->resp_bs_rrb_valid = pcibr_soft->bs_rrb_valid[slot]; + slotp->resp_bs_rrb_valid_v = pcibr_soft->bs_rrb_valid[slot + + PCIBR_RRB_SLOT_VIRTUAL]; + slotp->resp_bs_rrb_res = pcibr_soft->bs_rrb_res[slot]; + + if (slot & 1) { + b_respp = &bridge->b_odd_resp; + } else { + b_respp = &bridge->b_even_resp; + } + + slotp->resp_b_resp = *b_respp; + + slotp->resp_b_wid_control = bridge->b_wid_control; + slotp->resp_b_int_device = bridge->b_int_device; + slotp->resp_b_int_enable = bridge->b_int_enable; + slotp->resp_b_int_host = bridge->b_int_addr[slot].addr; + + if (COPYOUT(slotp, respp, sizeof(*respp))) { + return(EFAULT); + } + + kmem_free(slotp, sizeof(*slotp)); + + return(0); +} + +/* + * pcibr_slot_query + * Return information about the PCI slot maintained by the infrastructure. + * Information is requested in the request structure. + * + * Information returned in the response structure: + * Slot hwgraph name + * Vendor/Device info + * Base register info + * Interrupt mapping from device pins to the bridge pins + * Devio register + * Software RRB info + * RRB register info + * Host/Gues info + * PCI Bus #,slot #, function # + * Slot provider hwgraph name + * Provider Functions + * Error handler + * DMA mapping usage counters + * DMA direct translation info + * External SSRAM workaround info + */ +int +pcibr_slot_query(devfs_handle_t pcibr_vhdl, pcibr_slot_req_t reqp) +{ + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + pciio_slot_t slot = reqp->req_slot; + pciio_slot_t tmp_slot; + pcibr_slot_info_resp_t respp = reqp->req_respp.query; + int size = reqp->req_size; + int error; + + /* Make sure that we are dealing with a bridge device vertex */ + if (!pcibr_soft) { + return(PCI_NOT_A_BRIDGE); + } + + /* Make sure that we have a valid PCI slot number or PCIIO_SLOT_NONE */ + if ((!PCIBR_VALID_SLOT(slot)) && (slot != PCIIO_SLOT_NONE)) { + return(PCI_NOT_A_SLOT); + } + + /* Do not allow a query of a slot in a shoehorn */ + if(nic_vertex_info_match(pcibr_soft->bs_conn, XTALK_PCI_PART_NUM)) { + return(PCI_SLOT_IN_SHOEHORN); + } + + /* Return information for the requested PCI slot */ + if (slot != PCIIO_SLOT_NONE) { + if (size < sizeof(*respp)) { + return(PCI_RESP_AREA_TOO_SMALL); + } + + /* Acquire read access to the bus */ + mrlock(pcibr_soft->bs_bus_lock, MR_ACCESS, PZERO); + + error = pcibr_slot_info_return(pcibr_soft, slot, respp); + + /* Release the bus lock */ + mrunlock(pcibr_soft->bs_bus_lock); + + return(error); + } + + /* Return information for all the slots */ + for (tmp_slot = 0; tmp_slot < 8; tmp_slot++) { + + if (size < sizeof(*respp)) { + return(PCI_RESP_AREA_TOO_SMALL); + } + + /* Acquire read access to the bus */ + mrlock(pcibr_soft->bs_bus_lock, MR_ACCESS, PZERO); + + error = pcibr_slot_info_return(pcibr_soft, tmp_slot, respp); + + /* Release the bus lock */ + mrunlock(pcibr_soft->bs_bus_lock); + + if (error) { + 
return(error); + } + + ++respp; + size -= sizeof(*respp); + } + + return(error); +} +#endif /* LATER */ + +/* FIXME: there should be a better way to do this. + * pcibr_attach() needs PCI_ADDR_SPACE_LIMITS_STORE + */ + +/* + * PCI_ADDR_SPACE_LIMITS_LOAD + * Gets the current values of + * pci io base, + * pci io last, + * pci low memory base, + * pci low memory last, + * pci high memory base, + * pci high memory last + */ +#define PCI_ADDR_SPACE_LIMITS_LOAD() \ + pci_io_fb = pcibr_soft->bs_spinfo.pci_io_base; \ + pci_io_fl = pcibr_soft->bs_spinfo.pci_io_last; \ + pci_lo_fb = pcibr_soft->bs_spinfo.pci_swin_base; \ + pci_lo_fl = pcibr_soft->bs_spinfo.pci_swin_last; \ + pci_hi_fb = pcibr_soft->bs_spinfo.pci_mem_base; \ + pci_hi_fl = pcibr_soft->bs_spinfo.pci_mem_last; +/* + * PCI_ADDR_SPACE_LIMITS_STORE + * Sets the current values of + * pci io base, + * pci io last, + * pci low memory base, + * pci low memory last, + * pci high memory base, + * pci high memory last + */ +#define PCI_ADDR_SPACE_LIMITS_STORE() \ + pcibr_soft->bs_spinfo.pci_io_base = pci_io_fb; \ + pcibr_soft->bs_spinfo.pci_io_last = pci_io_fl; \ + pcibr_soft->bs_spinfo.pci_swin_base = pci_lo_fb; \ + pcibr_soft->bs_spinfo.pci_swin_last = pci_lo_fl; \ + pcibr_soft->bs_spinfo.pci_mem_base = pci_hi_fb; \ + pcibr_soft->bs_spinfo.pci_mem_last = pci_hi_fl; + +#define PCI_ADDR_SPACE_LIMITS_PRINT() \ + printf("+++++++++++++++++++++++\n" \ + "IO base 0x%x last 0x%x\n" \ + "SWIN base 0x%x last 0x%x\n" \ + "MEM base 0x%x last 0x%x\n" \ + "+++++++++++++++++++++++\n", \ + pcibr_soft->bs_spinfo.pci_io_base, \ + pcibr_soft->bs_spinfo.pci_io_last, \ + pcibr_soft->bs_spinfo.pci_swin_base, \ + pcibr_soft->bs_spinfo.pci_swin_last, \ + pcibr_soft->bs_spinfo.pci_mem_base, \ + pcibr_soft->bs_spinfo.pci_mem_last); + + +/* + * pcibr_slot_info_init + * Probe for this slot and see if it is populated. + * If it is populated initialize the generic PCI infrastructural + * information associated with this particular PCI device. + */ +int +pcibr_slot_info_init(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + bridge_t *bridge; + cfg_p cfgw; + unsigned idword; + unsigned pfail; + unsigned idwords[8]; + pciio_vendor_id_t vendor; + pciio_device_id_t device; + unsigned htype; + cfg_p wptr; + int win; + pciio_space_t space; + iopaddr_t pci_io_fb, pci_io_fl; + iopaddr_t pci_lo_fb, pci_lo_fl; + iopaddr_t pci_hi_fb, pci_hi_fl; + int nfunc; + pciio_function_t rfunc; + int func; + devfs_handle_t conn_vhdl; + pcibr_soft_slot_t slotp; + + /* Get the basic software information required to proceed */ + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + if (!pcibr_soft) + return(EINVAL); + + bridge = pcibr_soft->bs_base; + if (!PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + /* If we have a host slot (eg:- IOC3 has 2 PCI slots and the initialization + * is done by the host slot then we are done. 
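
pcibr_slot_info_init() below decides whether a slot is populated from the first config-space word: the low 16 bits are the vendor ID (0xFFFF means nothing answered, i.e. an empty slot), the high 16 bits are the device ID, and bit 7 of the header type marks a multifunction device. A standalone sketch of that decoding, using made-up sample values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t idword = 0x000310A9;   /* sample: device 0x0003, vendor 0x10A9 */
        uint8_t  htype  = 0x80;         /* sample header-type byte */

        uint16_t vendor = idword & 0xFFFF;
        uint16_t device = (idword >> 16) & 0xFFFF;

        if (vendor == 0xFFFF) {
                printf("slot empty\n");
                return 0;
        }
        printf("vendor 0x%04x device 0x%04x %s\n", vendor, device,
               (htype & 0x80) ? "(multifunction)" : "(single function)");
        return 0;
}
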
+ */ + if (pcibr_soft->bs_slot[slot].has_host) { + return(0); + } + + /* Check for a slot with any system critical functions */ + if (pcibr_is_slot_sys_critical(pcibr_vhdl, slot)) + return(EPERM); + + /* Load the current values of allocated PCI address spaces */ + PCI_ADDR_SPACE_LIMITS_LOAD(); + + /* Try to read the device-id/vendor-id from the config space */ + cfgw = bridge->b_type0_cfg_dev[slot].l; + + if (pcibr_probe_slot(bridge, cfgw, &idword)) + return(ENODEV); + + slotp = &pcibr_soft->bs_slot[slot]; + slotp->slot_status |= SLOT_POWER_UP; + + vendor = 0xFFFF & idword; + /* If the vendor id is not valid then the slot is not populated + * and we are done. + */ + if (vendor == 0xFFFF) + return(ENODEV); + + device = 0xFFFF & (idword >> 16); + htype = do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1); + + nfunc = 1; + rfunc = PCIIO_FUNC_NONE; + pfail = 0; + + /* NOTE: if a card claims to be multifunction + * but only responds to config space 0, treat + * it as a unifunction card. + */ + + if (htype & 0x80) { /* MULTIFUNCTION */ + for (func = 1; func < 8; ++func) { + cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; + if (pcibr_probe_slot(bridge, cfgw, &idwords[func])) { + pfail |= 1 << func; + continue; + } + vendor = 0xFFFF & idwords[func]; + if (vendor == 0xFFFF) { + pfail |= 1 << func; + continue; + } + nfunc = func + 1; + rfunc = 0; + } + cfgw = bridge->b_type0_cfg_dev[slot].l; + } + NEWA(pcibr_infoh, nfunc); + + pcibr_soft->bs_slot[slot].bss_ninfo = nfunc; + pcibr_soft->bs_slot[slot].bss_infos = pcibr_infoh; + + for (func = 0; func < nfunc; ++func) { + unsigned cmd_reg; + + if (func) { + if (pfail & (1 << func)) + continue; + + idword = idwords[func]; + cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; + + device = 0xFFFF & (idword >> 16); + htype = do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1); + rfunc = func; + } + htype &= 0x7f; + if (htype != 0x00) { + printk(KERN_WARNING "%s pcibr: pci slot %d func %d has strange header type 0x%x\n", + pcibr_soft->bs_name, slot, func, htype); + continue; + } +#if DEBUG && ATTACH_DEBUG + printk(KERN_NOTICE + "%s pcibr: pci slot %d func %d: vendor 0x%x device 0x%x", + pcibr_soft->bs_name, slot, func, vendor, device); +#endif + + pcibr_info = pcibr_device_info_new + (pcibr_soft, slot, rfunc, vendor, device); + conn_vhdl = pciio_device_info_register(pcibr_vhdl, &pcibr_info->f_c); + if (func == 0) + slotp->slot_conn = conn_vhdl; + +#ifdef LITTLE_ENDIAN + cmd_reg = cfgw[(PCI_CFG_COMMAND ^ 4) / 4]; +#else + cmd_reg = cfgw[PCI_CFG_COMMAND / 4]; +#endif + + wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4; + + for (win = 0; win < PCI_CFG_BASE_ADDRS; ++win) { + iopaddr_t base, mask, code; + size_t size; + + /* + * GET THE BASE & SIZE OF THIS WINDOW: + * + * The low two or four bits of the BASE register + * determines which address space we are in; the + * rest is a base address. BASE registers + * determine windows that are power-of-two sized + * and naturally aligned, so we can get the size + * of a window by writing all-ones to the + * register, reading it back, and seeing which + * bits are used for decode; the least + * significant nonzero bit is also the size of + * the window. + * + * WARNING: someone may already have allocated + * some PCI space to this window, and in fact + * PIO may be in process at this very moment + * from another processor (or even from this + * one, if we get interrupted)! So, if the BASE + * already has a nonzero address, be generous + * and use the LSBit of that address as the + * size; this could overstate the window size. 
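
A standalone sketch of the sizing trick just described: after the all-ones write, mask off the BAR type bits and keep the least significant decoded bit. The read-back value in main() is a made-up sample.

#include <stdio.h>
#include <stdint.h>

static uint32_t bar_size(uint32_t readback)
{
        uint32_t mask = (readback & 0x1) ? ~0x3u : ~0xFu;  /* I/O vs memory BAR */
        uint32_t bits = readback & mask;

        return bits & -bits;            /* least significant decoded bit */
}

int main(void)
{
        /* e.g. a memory BAR that reads back 0xFFFFF000 after the all-ones write */
        printf("window size = 0x%x bytes\n", bar_size(0xFFFFF000u));  /* 0x1000 */
        return 0;
}
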
+ * Usually, when one card is set up, all are set + * up; so, since we don't bitch about + * overlapping windows, we are ok. + * + * UNFORTUNATELY, some cards do not clear their + * BASE registers on reset. I have two heuristics + * that can detect such cards: first, if the + * decode enable is turned off for the space + * that the window uses, we can disregard the + * initial value. second, if the address is + * outside the range that we use, we can disregard + * it as well. + * + * This is looking very PCI generic. Except for + * knowing how many slots and where their config + * spaces are, this window loop and the next one + * could probably be shared with other PCI host + * adapters. It would be interesting to see if + * this could be pushed up into pciio, when we + * start supporting more PCI providers. + */ +#ifdef LITTLE_ENDIAN + base = wptr[((win*4)^4)/4]; +#else + base = wptr[win]; +#endif + + if (base & PCI_BA_IO_SPACE) { + /* BASE is in I/O space. */ + space = PCIIO_SPACE_IO; + mask = -4; + code = base & 3; + base = base & mask; + if (base == 0) { + ; /* not assigned */ + } else if (!(cmd_reg & PCI_CMD_IO_SPACE)) { + base = 0; /* decode not enabled */ + } + } else { + /* BASE is in MEM space. */ + space = PCIIO_SPACE_MEM; + mask = -16; + code = base & PCI_BA_MEM_LOCATION; /* extract BAR type */ + base = base & mask; + if (base == 0) { + ; /* not assigned */ + } else if (!(cmd_reg & PCI_CMD_MEM_SPACE)) { + base = 0; /* decode not enabled */ + } else if (base & 0xC0000000) { + base = 0; /* outside permissable range */ + } else if ((code == PCI_BA_MEM_64BIT) && +#ifdef LITTLE_ENDIAN + (wptr[(((win + 1)*4)^4)/4] != 0)) { +#else + (wptr[win + 1] != 0)) { +#endif /* LITTLE_ENDIAN */ + base = 0; /* outside permissable range */ + } + } + + if (base != 0) { /* estimate size */ + size = base & -base; + } else { /* calculate size */ +#ifdef LITTLE_ENDIAN + wptr[((win*4)^4)/4] = ~0; /* turn on all bits */ + size = wptr[((win*4)^4)/4]; /* get stored bits */ +#else + wptr[win] = ~0; /* turn on all bits */ + size = wptr[win]; /* get stored bits */ +#endif /* LITTLE_ENDIAN */ + size &= mask; /* keep addr */ + size &= -size; /* keep lsbit */ + if (size == 0) + continue; + } + + pcibr_info->f_window[win].w_space = space; + pcibr_info->f_window[win].w_base = base; + pcibr_info->f_window[win].w_size = size; + + /* + * If this window already has PCI space + * allocated for it, "subtract" that space from + * our running freeblocks. Don't worry about + * overlaps in existing allocated windows; we + * may be overstating their sizes anyway. + */ + + if (base && size) { + if (space == PCIIO_SPACE_IO) { + pcibr_freeblock_sub(&pci_io_fb, + &pci_io_fl, + base, size); + } else { + pcibr_freeblock_sub(&pci_lo_fb, + &pci_lo_fl, + base, size); + pcibr_freeblock_sub(&pci_hi_fb, + &pci_hi_fl, + base, size); + } + } +#if defined(IOC3_VENDOR_ID_NUM) && defined(IOC3_DEVICE_ID_NUM) + /* + * IOC3 BASE_ADDR* BUG WORKAROUND + * + + * If we write to BASE1 on the IOC3, the + * data in BASE0 is replaced. The + * original workaround was to remember + * the value of BASE0 and restore it + * when we ran off the end of the BASE + * registers; however, a later + * workaround was added (I think it was + * rev 1.44) to avoid setting up + * anything but BASE0, with the comment + * that writing all ones to BASE1 set + * the enable-parity-error test feature + * in IOC3's SCR bit 14. 
+ * + * So, unless we defer doing any PCI + * space allocation until drivers + * attach, and set up a way for drivers + * (the IOC3 in paricular) to tell us + * generically to keep our hands off + * BASE registers, we gotta "know" about + * the IOC3 here. + * + * Too bad the PCI folks didn't reserve the + * all-zero value for 'no BASE here' (it is a + * valid code for an uninitialized BASE in + * 32-bit PCI memory space). + */ + + if ((vendor == IOC3_VENDOR_ID_NUM) && + (device == IOC3_DEVICE_ID_NUM)) + break; +#endif + if (code == PCI_BA_MEM_64BIT) { + win++; /* skip upper half */ +#ifdef LITTLE_ENDIAN + wptr[((win*4)^4)/4] = 0; /* which must be zero */ +#else + wptr[win] = 0; /* which must be zero */ +#endif /* LITTLE_ENDIAN */ + } + } /* next win */ + } /* next func */ + + /* Store back the values for allocated PCI address spaces */ + PCI_ADDR_SPACE_LIMITS_STORE(); + return(0); +} + +/* + * pcibr_slot_info_free + * Remove all the PCI infrastructural information associated + * with a particular PCI device. + */ +int +pcibr_slot_info_free(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + int nfunc; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + + pcibr_device_info_free(pcibr_vhdl, slot); + + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + DELA(pcibr_infoh,nfunc); + pcibr_soft->bs_slot[slot].bss_ninfo = 0; + + return(0); +} + +int as_debug = 0; +/* + * pcibr_slot_addr_space_init + * Reserve chunks of PCI address space as required by + * the base registers in the card. + */ +int +pcibr_slot_addr_space_init(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + bridge_t *bridge; + iopaddr_t pci_io_fb, pci_io_fl; + iopaddr_t pci_lo_fb, pci_lo_fl; + iopaddr_t pci_hi_fb, pci_hi_fl; + size_t align; + iopaddr_t mask; + int nbars; + int nfunc; + int func; + int win; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + bridge = pcibr_soft->bs_base; + + /* Get the current values for the allocated PCI address spaces */ + PCI_ADDR_SPACE_LIMITS_LOAD(); + + if (as_debug) +#ifdef LATER + PCI_ADDR_SPACE_LIMITS_PRINT(); +#endif + + /* allocate address space, + * for windows that have not been + * previously assigned. + */ + if (pcibr_soft->bs_slot[slot].has_host) { + return(0); + } + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + if (nfunc < 1) + return(EINVAL); + + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + if (!pcibr_infoh) + return(EINVAL); + + /* + * Try to make the DevIO windows not + * overlap by pushing the "io" and "hi" + * allocation areas up to the next one + * or two megabyte bound. This also + * keeps them from being zero. + * + * DO NOT do this with "pci_lo" since + * the entire "lo" area is only a + * megabyte, total ... + */ + align = (slot < 2) ? 
0x200000 : 0x100000; + mask = -align; + pci_io_fb = (pci_io_fb + align - 1) & mask; + pci_hi_fb = (pci_hi_fb + align - 1) & mask; + + for (func = 0; func < nfunc; ++func) { + cfg_p cfgw; + cfg_p wptr; + pciio_space_t space; + iopaddr_t base; + size_t size; + cfg_p pci_cfg_cmd_reg_p; + unsigned pci_cfg_cmd_reg; + unsigned pci_cfg_cmd_reg_add = 0; + + pcibr_info = pcibr_infoh[func]; + + if (!pcibr_info) + continue; + + if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) + continue; + + cfgw = bridge->b_type0_cfg_dev[slot].f[func].l; + wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4; + + nbars = PCI_CFG_BASE_ADDRS; + + for (win = 0; win < nbars; ++win) { + + space = pcibr_info->f_window[win].w_space; + base = pcibr_info->f_window[win].w_base; + size = pcibr_info->f_window[win].w_size; + + if (size < 1) + continue; + + if (base >= size) { +#if DEBUG && PCI_DEBUG + printk("pcibr: slot %d func %d window %d is in %d[0x%x..0x%x], alloc by prom\n", + slot, func, win, space, base, base + size - 1); +#endif + continue; /* already allocated */ + } + align = size; /* ie. 0x00001000 */ + if (align < _PAGESZ) + align = _PAGESZ; /* ie. 0x00004000 */ + mask = -align; /* ie. 0xFFFFC000 */ + + switch (space) { + case PCIIO_SPACE_IO: + base = (pci_io_fb + align - 1) & mask; + if ((base + size) > pci_io_fl) { + base = 0; + break; + } + pci_io_fb = base + size; + break; + + case PCIIO_SPACE_MEM: +#ifdef LITTLE_ENDIAN + if ((wptr[((win*4)^4)/4] & PCI_BA_MEM_LOCATION) == +#else + if ((wptr[win] & PCI_BA_MEM_LOCATION) == +#endif /* LITTLE_ENDIAN */ + PCI_BA_MEM_1MEG) { + /* allocate from 20-bit PCI space */ + base = (pci_lo_fb + align - 1) & mask; + if ((base + size) > pci_lo_fl) { + base = 0; + break; + } + pci_lo_fb = base + size; + } else { + /* allocate from 32-bit or 64-bit PCI space */ + base = (pci_hi_fb + align - 1) & mask; + if ((base + size) > pci_hi_fl) { + base = 0; + break; + } + pci_hi_fb = base + size; + } + break; + + default: + base = 0; +#if DEBUG && PCI_DEBUG + printk("pcibr: slot %d window %d had bad space code %d\n", + slot, win, space); +#endif + } + pcibr_info->f_window[win].w_base = base; +#ifdef LITTLE_ENDIAN + wptr[((win*4)^4)/4] = base; +#if DEBUG && PCI_DEBUG + printk("Setting base address 0x%p base 0x%x\n", &(wptr[((win*4)^4)/4]), base); +#endif +#else + wptr[win] = base; +#endif /* LITTLE_ENDIAN */ + +#if DEBUG && PCI_DEBUG + if (base >= size) + printk("pcibr: slot %d func %d window %d is in %d [0x%x..0x%x], alloc by pcibr\n", + slot, func, win, space, base, base + size - 1); + else + printk("pcibr: slot %d func %d window %d, unable to alloc 0x%x in 0x%p\n", + slot, func, win, size, space); +#endif + } /* next base */ + + /* + * Allocate space for the EXPANSION ROM + * NOTE: DO NOT DO THIS ON AN IOC3, + * as it blows the system away. 
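
The window allocation above repeatedly rounds the free-space cursor up to the window's natural alignment with (addr + align - 1) & -align (equivalently & ~(align - 1)) and then bumps it past the window. A standalone sketch of that arithmetic; the base, limit, and page-size floor of 0x1000 are sample values.

#include <stdio.h>
#include <stdint.h>

static uint64_t freebase = 0x40001234;          /* sample free-space cursor */
static const uint64_t freelast = 0x7FFFFFFF;    /* sample end of the region */

static uint64_t alloc_window(uint64_t size)
{
        uint64_t align = size < 0x1000 ? 0x1000 : size;  /* at least a page */
        uint64_t base  = (freebase + align - 1) & ~(align - 1);

        if (base + size > freelast)
                return 0;               /* does not fit: leave the BAR unassigned */
        freebase = base + size;
        return base;
}

int main(void)
{
        printf("base = 0x%llx\n", (unsigned long long)alloc_window(0x2000)); /* 0x40002000 */
        printf("base = 0x%llx\n", (unsigned long long)alloc_window(0x100));  /* 0x40004000 */
        return 0;
}
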
+ */ + base = size = 0; + if ((pcibr_soft->bs_slot[slot].bss_vendor_id != IOC3_VENDOR_ID_NUM) || + (pcibr_soft->bs_slot[slot].bss_device_id != IOC3_DEVICE_ID_NUM)) { + + wptr = cfgw + PCI_EXPANSION_ROM / 4; +#ifdef LITTLE_ENDIAN + wptr[1] = 0xFFFFF000; + mask = wptr[1]; +#else + *wptr = 0xFFFFF000; + mask = *wptr; +#endif /* LITTLE_ENDIAN */ + if (mask & 0xFFFFF000) { + size = mask & -mask; + align = size; + if (align < _PAGESZ) + align = _PAGESZ; + mask = -align; + base = (pci_hi_fb + align - 1) & mask; + if ((base + size) > pci_hi_fl) + base = size = 0; + else { + pci_hi_fb = base + size; +#ifdef LITTLE_ENDIAN + wptr[1] = base; +#else + *wptr = base; +#endif /* LITTLE_ENDIAN */ +#if DEBUG && PCI_DEBUG + printk("%s/%d ROM in 0x%lx..0x%lx (alloc by pcibr)\n", + pcibr_soft->bs_name, slot, + base, base + size - 1); +#endif + } + } + } + pcibr_info->f_rbase = base; + pcibr_info->f_rsize = size; + + /* + * if necessary, update the board's + * command register to enable decoding + * in the windows we added. + * + * There are some bits we always want to + * be sure are set. + */ + pci_cfg_cmd_reg_add |= PCI_CMD_IO_SPACE; + + /* + * The Adaptec 1160 FC Controller WAR #767995: + * The part incorrectly ignores the upper 32 bits of a 64 bit + * address when decoding references to it's registers so to + * keep it from responding to a bus cycle that it shouldn't + * we only use I/O space to get at it's registers. Don't + * enable memory space accesses on that PCI device. + */ + #define FCADP_VENDID 0x9004 /* Adaptec Vendor ID from fcadp.h */ + #define FCADP_DEVID 0x1160 /* Adaptec 1160 Device ID from fcadp.h */ + + if ((pcibr_info->f_vendor != FCADP_VENDID) || + (pcibr_info->f_device != FCADP_DEVID)) + pci_cfg_cmd_reg_add |= PCI_CMD_MEM_SPACE; + + pci_cfg_cmd_reg_add |= PCI_CMD_BUS_MASTER; + + pci_cfg_cmd_reg_p = cfgw + PCI_CFG_COMMAND / 4; + pci_cfg_cmd_reg = *pci_cfg_cmd_reg_p; +#if PCI_FBBE /* XXX- check here to see if dev can do fast-back-to-back */ + if (!((pci_cfg_cmd_reg >> 16) & PCI_STAT_F_BK_BK_CAP)) + fast_back_to_back_enable = 0; +#endif + pci_cfg_cmd_reg &= 0xFFFF; + if (pci_cfg_cmd_reg_add & ~pci_cfg_cmd_reg) + *pci_cfg_cmd_reg_p = pci_cfg_cmd_reg | pci_cfg_cmd_reg_add; + + } /* next func */ + + /* Now that we have allocated new chunks of PCI address spaces to this + * card we need to update the bookkeeping values which indicate + * the current PCI address space allocations. + */ + PCI_ADDR_SPACE_LIMITS_STORE(); + return(0); +} + +/* + * pcibr_slot_device_init + * Setup the device register in the bridge for this PCI slot. + */ +int +pcibr_slot_device_init(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + bridge_t *bridge; + bridgereg_t devreg; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + bridge = pcibr_soft->bs_base; + + /* + * Adjustments to Device(x) + * and init of bss_device shadow + */ + devreg = bridge->b_device[slot].reg; + devreg &= ~BRIDGE_DEV_PAGE_CHK_DIS; + devreg |= BRIDGE_DEV_COH | BRIDGE_DEV_VIRTUAL_EN; +#ifdef LITTLE_ENDIAN + devreg |= BRIDGE_DEV_DEV_SWAP; +#endif + pcibr_soft->bs_slot[slot].bss_device = devreg; + bridge->b_device[slot].reg = devreg; + +#if DEBUG && PCI_DEBUG + printk("pcibr Device(%d): 0x%lx\n", slot, bridge->b_device[slot].reg); +#endif + +#if DEBUG && PCI_DEBUG + printk("pcibr: PCI space allocation done.\n"); +#endif + + return(0); +} + +/* + * pcibr_slot_guest_info_init + * Setup the host/guest relations for a PCI slot. 
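
The command-register update above collects the decode and bus-master bits the slot needs and writes the register back only when at least one of them is still clear. A standalone sketch of that check; the CMD_* values are the standard PCI command bits, defined locally for the demo, and the current value is a sample.

#include <stdio.h>
#include <stdint.h>

#define CMD_IO_SPACE   0x0001
#define CMD_MEM_SPACE  0x0002
#define CMD_BUS_MASTER 0x0004

int main(void)
{
        uint16_t cmd  = CMD_IO_SPACE;                   /* sample current value */
        uint16_t want = CMD_IO_SPACE | CMD_MEM_SPACE | CMD_BUS_MASTER;

        if (want & ~cmd) {              /* something we need is still off */
                cmd |= want;
                printf("rewriting command register: 0x%04x\n", cmd);
        } else {
                printf("command register already OK: 0x%04x\n", cmd);
        }
        return 0;
}
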
+ */ +int +pcibr_slot_guest_info_init(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + pcibr_soft_slot_t slotp; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + slotp = &pcibr_soft->bs_slot[slot]; + + /* create info and verticies for guest slots; + * for compatibilitiy macros, create info + * for even unpopulated slots (but do not + * build verticies for them). + */ + if (pcibr_soft->bs_slot[slot].bss_ninfo < 1) { + NEWA(pcibr_infoh, 1); + pcibr_soft->bs_slot[slot].bss_ninfo = 1; + pcibr_soft->bs_slot[slot].bss_infos = pcibr_infoh; + + pcibr_info = pcibr_device_info_new + (pcibr_soft, slot, PCIIO_FUNC_NONE, + PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE); + + if (pcibr_soft->bs_slot[slot].has_host) { + slotp->slot_conn = pciio_device_info_register + (pcibr_vhdl, &pcibr_info->f_c); + } + } + + /* generate host/guest relations + */ + if (pcibr_soft->bs_slot[slot].has_host) { + int host = pcibr_soft->bs_slot[slot].host_slot; + pcibr_soft_slot_t host_slotp = &pcibr_soft->bs_slot[host]; + + hwgraph_edge_add(slotp->slot_conn, + host_slotp->slot_conn, + EDGE_LBL_HOST); + + /* XXX- only gives us one guest edge per + * host. If/when we have a host with more than + * one guest, we will need to figure out how + * the host finds all its guests, and sorts + * out which one is which. + */ + hwgraph_edge_add(host_slotp->slot_conn, + slotp->slot_conn, + EDGE_LBL_GUEST); + } + + return(0); +} + + +/* + * pcibr_slot_call_device_attach + * This calls the associated driver attach routine for the PCI + * card in this slot. + */ +int +pcibr_slot_call_device_attach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, + int drv_flags) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + async_attach_t aa = NULL; + int func; + devfs_handle_t xconn_vhdl,conn_vhdl; + int nfunc; + int error_func; + int error_slot = 0; + int error = ENODEV; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + + if (pcibr_soft->bs_slot[slot].has_host) { + return(EPERM); + } + + xconn_vhdl = pcibr_soft->bs_conn; + aa = async_attach_get_info(xconn_vhdl); + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + + for (func = 0; func < nfunc; ++func) { + + pcibr_info = pcibr_infoh[func]; + + if (!pcibr_info) + continue; + + if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) + continue; + + conn_vhdl = pcibr_info->f_vertex; + +#ifdef LATER + /* + * Activate if and when we support cdl. + */ + if (aa) + async_attach_add_info(conn_vhdl, aa); +#endif /* LATER */ + + error_func = pciio_device_attach(conn_vhdl, drv_flags); + + pcibr_info->f_att_det_error = error_func; + + if (error_func) + error_slot = error_func; + + error = error_slot; + + } /* next func */ + + if (error) { + if ((error != ENODEV) && (error != EUNATCH)) + pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_INCMPLT; + } else { + pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_CMPLT; + } + + return(error); +} + +/* + * pcibr_slot_call_device_detach + * This calls the associated driver detach routine for the PCI + * card in this slot. 
+ */ +int +pcibr_slot_call_device_detach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, + int drv_flags) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + int func; + devfs_handle_t conn_vhdl = GRAPH_VERTEX_NONE; + int nfunc; + int error_func; + int error_slot = 0; + int error = ENODEV; + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(EINVAL); + + if (pcibr_soft->bs_slot[slot].has_host) + return(EPERM); + + /* Make sure that we do not detach a system critical function vertex */ + if(pcibr_is_slot_sys_critical(pcibr_vhdl, slot)) + return(EPERM); + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + + for (func = 0; func < nfunc; ++func) { + + pcibr_info = pcibr_infoh[func]; + + if (!pcibr_info) + continue; + + if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) + continue; + + conn_vhdl = pcibr_info->f_vertex; + + error_func = pciio_device_detach(conn_vhdl, drv_flags); + + pcibr_info->f_att_det_error = error_func; + + if (error_func) + error_slot = error_func; + + error = error_slot; + + } /* next func */ + + pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK; + + if (error) { + if ((error != ENODEV) && (error != EUNATCH)) + pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_INCMPLT; + } else { + if (conn_vhdl != GRAPH_VERTEX_NONE) + pcibr_device_unregister(conn_vhdl); + pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_CMPLT; + } + + return(error); +} + +#ifdef LATER + +/* + * pcibr_slot_attach + * This is a place holder routine to keep track of all the + * slot-specific initialization that needs to be done. + * This is usually called when we want to initialize a new + * PCI card on the bus. + */ +int +pcibr_slot_attach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, + int drv_flags, + char *l1_msg, + int *sub_errorp) +{ + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + timespec_t ts; + int error; + + if (!(pcibr_soft->bs_slot[slot].slot_status & SLOT_POWER_UP)) { + /* Power-up the slot */ + error = pcibr_slot_pwr(pcibr_vhdl, slot, L1_REQ_PCI_UP, l1_msg); + if (error) { + if (sub_errorp) + *sub_errorp = error; + return(PCI_L1_ERR); + } else { + pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_POWER_MASK; + pcibr_soft->bs_slot[slot].slot_status |= SLOT_POWER_UP; + } + +#ifdef LATER + /* + * Allow cards like the Alteon Gigabit Ethernet Adapter to complete + * on-card initialization following the slot reset + */ + ts.tv_sec = 0; /* 0 secs */ + ts.tv_nsec = 500 * (1000 * 1000); /* 500 msecs */ + nano_delay(&ts); +#else +#endif +#if 0 + /* Reset the slot */ + error = pcibr_slot_reset(pcibr_vhdl, slot) + if (error) { + if (sub_errorp) + *sub_errorp = error; + return(PCI_SLOT_RESET_ERR); + } +#endif + + /* Find out what is out there */ + error = pcibr_slot_info_init(pcibr_vhdl, slot); + if (error) { + if (sub_errorp) + *sub_errorp = error; + return(PCI_SLOT_INFO_INIT_ERR); + } + + /* Set up the address space for this slot in the PCI land */ + error = pcibr_slot_addr_space_init(pcibr_vhdl, slot); + if (error) { + if (sub_errorp) + *sub_errorp = error; + return(PCI_SLOT_ADDR_INIT_ERR); + } + + /* Setup the device register */ + error = pcibr_slot_device_init(pcibr_vhdl, slot); + if (error) { + if (sub_errorp) + *sub_errorp = error; + return(PCI_SLOT_DEV_INIT_ERR); + } + + /* Setup host/guest relations */ + error = pcibr_slot_guest_info_init(pcibr_vhdl, slot); + if (error) { + if (sub_errorp) + *sub_errorp = error; + 
return(PCI_SLOT_GUEST_INIT_ERR); + } + + /* Initial RRB management */ + error = pcibr_slot_initial_rrb_alloc(pcibr_vhdl, slot); + if (error) { + if (sub_errorp) + *sub_errorp = error; + return(PCI_SLOT_RRB_ALLOC_ERR); + } + + } + + /* Call the device attach */ + error = pcibr_slot_call_device_attach(pcibr_vhdl, slot, drv_flags); + if (error) { + if (sub_errorp) + *sub_errorp = error; + if (error == EUNATCH) + return(PCI_NO_DRIVER); + else + return(PCI_SLOT_DRV_ATTACH_ERR); + } + + return(0); +} +#endif /* LATER */ + +/* + * pcibr_slot_detach + * This is a place holder routine to keep track of all the + * slot-specific freeing that needs to be done. + */ +int +pcibr_slot_detach(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot, + int drv_flags) +{ + int error; + + /* Call the device detach function */ + error = (pcibr_slot_call_device_detach(pcibr_vhdl, slot, drv_flags)); + return (error); + +} + +/* + * pcibr_is_slot_sys_critical + * Check slot for any functions that are system critical. + * Return 1 if any are system critical or 0 otherwise. + * + * This function will always return 0 when called by + * pcibr_attach() because the system critical vertices + * have not yet been set in the hwgraph. + */ +int +pcibr_is_slot_sys_critical(devfs_handle_t pcibr_vhdl, + pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft; + pcibr_info_h pcibr_infoh; + pcibr_info_t pcibr_info; + devfs_handle_t conn_vhdl = GRAPH_VERTEX_NONE; + int nfunc; + int func; + boolean_t is_sys_critical_vertex(devfs_handle_t); + + pcibr_soft = pcibr_soft_get(pcibr_vhdl); + if (!pcibr_soft || !PCIBR_VALID_SLOT(slot)) + return(0); + + nfunc = pcibr_soft->bs_slot[slot].bss_ninfo; + pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos; + + for (func = 0; func < nfunc; ++func) { + + pcibr_info = pcibr_infoh[func]; + if (!pcibr_info) + continue; + + if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) + continue; + + conn_vhdl = pcibr_info->f_vertex; + if (is_sys_critical_vertex(conn_vhdl)) { +#if defined(SUPPORT_PRINTING_V_FORMAT) + printk(KERN_WARNING "%v is a system critical device vertex\n", conn_vhdl); +#else + printk(KERN_WARNING "%p is a system critical device vertex\n", (void *)conn_vhdl); +#endif + return(1); + } + + } + + return(0); +} + +/* + * pcibr_probe_slot: read a config space word + * while trapping any errors; reutrn zero if + * all went OK, or nonzero if there was an error. + * The value read, if any, is passed back + * through the valp parameter. + */ +int +pcibr_probe_slot(bridge_t *bridge, + cfg_p cfg, + unsigned *valp) +{ + int rv; + bridgereg_t old_enable, new_enable; + int badaddr_val(volatile void *, int, volatile void *); + + old_enable = bridge->b_int_enable; + new_enable = old_enable & ~BRIDGE_IMR_PCI_MST_TIMEOUT; + + bridge->b_int_enable = new_enable; + + /* + * The xbridge doesn't clear b_err_int_view unless + * multi-err is cleared... + */ + if (is_xbridge(bridge)) + if (bridge->b_err_int_view & BRIDGE_ISR_PCI_MST_TIMEOUT) { + bridge->b_int_rst_stat = BRIDGE_IRR_MULTI_CLR; + } + + if (bridge->b_int_status & BRIDGE_IRR_PCI_GRP) { + bridge->b_int_rst_stat = BRIDGE_IRR_PCI_GRP_CLR; + (void) bridge->b_wid_tflush; /* flushbus */ + } + rv = badaddr_val((void *) cfg, 4, valp); + + /* + * The xbridge doesn't set master timeout in b_int_status + * here. Fortunately it's in error_interrupt_view. 
+ */ + if (is_xbridge(bridge)) + if (bridge->b_err_int_view & BRIDGE_ISR_PCI_MST_TIMEOUT) { + bridge->b_int_rst_stat = BRIDGE_IRR_MULTI_CLR; + rv = 1; /* unoccupied slot */ + } + + bridge->b_int_enable = old_enable; + bridge->b_wid_tflush; /* wait until Bridge PIO complete */ + + return rv; +} + +void +pcibr_device_info_free(devfs_handle_t pcibr_vhdl, pciio_slot_t slot) +{ + pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl); + pcibr_info_t pcibr_info; + pciio_function_t func; + pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[slot]; + int nfunc = slotp->bss_ninfo; + int bar; + int devio_index; + int s; + + + for (func = 0; func < nfunc; func++) { + pcibr_info = slotp->bss_infos[func]; + + if (!pcibr_info) + continue; + + s = pcibr_lock(pcibr_soft); + + for (bar = 0; bar < PCI_CFG_BASE_ADDRS; bar++) { + if (pcibr_info->f_window[bar].w_space == PCIIO_SPACE_NONE) + continue; + + /* Get index of the DevIO(x) register used to access this BAR */ + devio_index = pcibr_info->f_window[bar].w_devio_index; + + + /* On last use, clear the DevIO(x) used to access this BAR */ + if (! --pcibr_soft->bs_slot[devio_index].bss_devio.bssd_ref_cnt) { + pcibr_soft->bs_slot[devio_index].bss_devio.bssd_space = + PCIIO_SPACE_NONE; + pcibr_soft->bs_slot[devio_index].bss_devio.bssd_base = + PCIBR_D32_BASE_UNSET; + pcibr_soft->bs_slot[devio_index].bss_device = 0; + } + } + + pcibr_unlock(pcibr_soft, s); + + slotp->bss_infos[func] = 0; + pciio_device_info_unregister(pcibr_vhdl, &pcibr_info->f_c); + pciio_device_info_free(&pcibr_info->f_c); + + DEL(pcibr_info); + } + + /* Reset the mapping usage counters */ + slotp->bss_pmu_uctr = 0; + slotp->bss_d32_uctr = 0; + slotp->bss_d64_uctr = 0; + + /* Clear the Direct translation info */ + slotp->bss_d64_base = PCIBR_D64_BASE_UNSET; + slotp->bss_d64_flags = 0; + slotp->bss_d32_base = PCIBR_D32_BASE_UNSET; + slotp->bss_d32_flags = 0; + + /* Clear out shadow info necessary for the external SSRAM workaround */ + slotp->bss_ext_ates_active = ATOMIC_INIT(0); + slotp->bss_cmd_pointer = 0; + slotp->bss_cmd_shadow = 0; + +} diff -Nru a/arch/ia64/sn/io/sn2/shub_intr.c b/arch/ia64/sn/io/sn2/shub_intr.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/shub_intr.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,215 @@ +/* $Id: shub_intr.c,v 1.2 2001/06/26 14:02:43 pfg Exp $ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All Rights Reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern void hub_device_desc_update(device_desc_t, ilvl_t, cpuid_t); + +/* ARGSUSED */ +void +hub_intr_init(devfs_handle_t hubv) +{ + extern void sn_cpei_handler(int, void *, struct pt_regs *); + extern void sn_init_cpei_timer(void); + + if (request_irq(SGI_SHUB_ERROR_VECTOR, sn_cpei_handler, 0, "SN hub error", NULL) ) { + printk("hub_intr_init: Couldn't register SGI_SHUB_ERROR_VECTOR = %x\n",SGI_SHUB_ERROR_VECTOR); + } + sn_init_cpei_timer(); +} + +xwidgetnum_t +hub_widget_id(nasid_t nasid) +{ + hubii_wcr_t ii_wcr; /* the control status register */ + + ii_wcr.wcr_reg_value = REMOTE_HUB_L(nasid,IIO_WCR); + + return ii_wcr.wcr_fields_s.wcr_widget_id; +} + +static hub_intr_t +do_hub_intr_alloc(devfs_handle_t dev, + device_desc_t dev_desc, + devfs_handle_t owner_dev, + int uncond_nothread) +{ + cpuid_t cpu = 0; + int vector; + hub_intr_t intr_hdl; + cnodeid_t cnode; + int cpuphys, slice; + int nasid; + iopaddr_t xtalk_addr; + struct xtalk_intr_s *xtalk_info; + xwidget_info_t xwidget_info; + ilvl_t intr_swlevel = 0; + + cpu = intr_heuristic(dev, dev_desc, -1, 0, owner_dev, NULL, &vector); + + if (cpu == CPU_NONE) { + printk("Unable to allocate interrupt for 0x%p\n", (void *)owner_dev); + return(0); + } + + cpuphys = cpu_physical_id(cpu); + slice = cpu_physical_id_to_slice(cpuphys); + nasid = cpu_physical_id_to_nasid(cpuphys); + cnode = cpuid_to_cnodeid(cpu); + + if (slice) { + xtalk_addr = SH_II_INT1 | GLOBAL_MMR_SPACE | + ((unsigned long)nasid << 36) | (1UL << 47); + } else { + xtalk_addr = SH_II_INT0 | GLOBAL_MMR_SPACE | + ((unsigned long)nasid << 36) | (1UL << 47); + } + + intr_hdl = snia_kmem_alloc_node(sizeof(struct hub_intr_s), KM_NOSLEEP, cnode); + ASSERT_ALWAYS(intr_hdl); + + xtalk_info = &intr_hdl->i_xtalk_info; + xtalk_info->xi_dev = dev; + xtalk_info->xi_vector = vector; + xtalk_info->xi_addr = xtalk_addr; + + xwidget_info = xwidget_info_get(dev); + if (xwidget_info) { + xtalk_info->xi_target = xwidget_info_masterid_get(xwidget_info); + } + + intr_hdl->i_swlevel = intr_swlevel; + intr_hdl->i_cpuid = cpu; + intr_hdl->i_bit = vector; + intr_hdl->i_flags |= HUB_INTR_IS_ALLOCED; + + hub_device_desc_update(dev_desc, intr_swlevel, cpu); + return(intr_hdl); +} + +hub_intr_t +hub_intr_alloc(devfs_handle_t dev, + device_desc_t dev_desc, + devfs_handle_t owner_dev) +{ + return(do_hub_intr_alloc(dev, dev_desc, owner_dev, 0)); +} + +hub_intr_t +hub_intr_alloc_nothd(devfs_handle_t dev, + device_desc_t dev_desc, + devfs_handle_t owner_dev) +{ + return(do_hub_intr_alloc(dev, dev_desc, owner_dev, 1)); +} + +void +hub_intr_free(hub_intr_t intr_hdl) +{ + cpuid_t cpu = intr_hdl->i_cpuid; + int vector = intr_hdl->i_bit; + xtalk_intr_t xtalk_info; + + if (intr_hdl->i_flags & HUB_INTR_IS_CONNECTED) { + xtalk_info = &intr_hdl->i_xtalk_info; + xtalk_info->xi_dev = NODEV; + xtalk_info->xi_vector = 0; + xtalk_info->xi_addr = 0; + hub_intr_disconnect(intr_hdl); + } + + if (intr_hdl->i_flags & HUB_INTR_IS_ALLOCED) { + kfree(intr_hdl); + } + intr_unreserve_level(cpu, vector); +} + +int +hub_intr_connect(hub_intr_t intr_hdl, + xtalk_intr_setfunc_t setfunc, + void *setfunc_arg) +{ + int rv; + cpuid_t cpu = intr_hdl->i_cpuid; + int vector = intr_hdl->i_bit; + + ASSERT(intr_hdl->i_flags & HUB_INTR_IS_ALLOCED); + + rv = intr_connect_level(cpu, vector, intr_hdl->i_swlevel, NULL); + + if (rv < 0) { + return rv; + } + + 
intr_hdl->i_xtalk_info.xi_setfunc = setfunc; + intr_hdl->i_xtalk_info.xi_sfarg = setfunc_arg; + + if (setfunc) { + (*setfunc)((xtalk_intr_t)intr_hdl); + } + + intr_hdl->i_flags |= HUB_INTR_IS_CONNECTED; + + return 0; +} + +/* + * Disassociate handler with the specified interrupt. + */ +void +hub_intr_disconnect(hub_intr_t intr_hdl) +{ + /*REFERENCED*/ + int rv; + cpuid_t cpu = intr_hdl->i_cpuid; + int bit = intr_hdl->i_bit; + xtalk_intr_setfunc_t setfunc; + + setfunc = intr_hdl->i_xtalk_info.xi_setfunc; + + /* TBD: send disconnected interrupts somewhere harmless */ + if (setfunc) (*setfunc)((xtalk_intr_t)intr_hdl); + + rv = intr_disconnect_level(cpu, bit); + ASSERT(rv == 0); + intr_hdl->i_flags &= ~HUB_INTR_IS_CONNECTED; +} + + +/* + * Return a hwgraph vertex that represents the CPU currently + * targeted by an interrupt. + */ +devfs_handle_t +hub_intr_cpu_get(hub_intr_t intr_hdl) +{ + cpuid_t cpuid = intr_hdl->i_cpuid; + + ASSERT(cpuid != CPU_NONE); + + return(cpuid_to_vertex(cpuid)); +} diff -Nru a/arch/ia64/sn/io/sn2/shuberror.c b/arch/ia64/sn/io/sn2/shuberror.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/io/sn2/shuberror.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,478 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000,2002 Silicon Graphics, Inc. All rights reserved. + */ + + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern void hubni_eint_init(cnodeid_t cnode); +extern void hubii_eint_init(cnodeid_t cnode); +extern void hubii_eint_handler (int irq, void *arg, struct pt_regs *ep); +int hubiio_crb_error_handler(devfs_handle_t hub_v, hubinfo_t hinfo); +int hubiio_prb_error_handler(devfs_handle_t hub_v, hubinfo_t hinfo); +extern void bte_crb_error_handler(devfs_handle_t hub_v, int btenum, int crbnum, ioerror_t *ioe); + +extern int maxcpus; + +#define HUB_ERROR_PERIOD (120 * HZ) /* 2 minutes */ + + +void +hub_error_clear(nasid_t nasid) +{ + int i; + hubreg_t idsr; + + /* + * Make sure spurious write response errors are cleared + * (values are from hub_set_prb()) + */ + for (i = 0; i <= HUB_WIDGET_ID_MAX - HUB_WIDGET_ID_MIN + 1; i++) { + iprb_t prb; + + prb.iprb_regval = REMOTE_HUB_L(nasid, IIO_IOPRB_0 + (i * sizeof(hubreg_t))); + + /* Clear out some fields */ + prb.iprb_ovflow = 1; + prb.iprb_bnakctr = 0; + prb.iprb_anakctr = 0; + + prb.iprb_xtalkctr = 3; /* approx. PIO credits for the widget */ + + REMOTE_HUB_S(nasid, IIO_IOPRB_0 + (i * sizeof(hubreg_t)), prb.iprb_regval); + } + + REMOTE_HUB_S(nasid, IIO_IO_ERR_CLR, -1); + idsr = REMOTE_HUB_L(nasid, IIO_IIDSR); + REMOTE_HUB_S(nasid, IIO_IIDSR, (idsr & ~(IIO_IIDSR_SENT_MASK))); + +} + + +/* + * Function : hub_error_init + * Purpose : initialize the error handling requirements for a given hub. + * Parameters : cnode, the compact nodeid. + * Assumptions : Called only once per hub, either by a local cpu. Or by a + * remote cpu, when this hub is headless.(cpuless) + * Returns : None + */ + +void +hub_error_init(cnodeid_t cnode) +{ + nasid_t nasid; + + nasid = cnodeid_to_nasid(cnode); + hub_error_clear(nasid); + + + /* + * Now setup the hub ii error interrupt handler. 
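
hub_intr_connect() above records the provider's setfunc callback and its argument in the interrupt handle and invokes it to program the initial routing; hub_intr_disconnect() re-invokes it when the interrupt is torn down. A standalone sketch of that pattern; the structure, function, and message names here are hypothetical stand-ins.

#include <stdio.h>

struct intr_handle;
typedef void (*setfunc_t)(struct intr_handle *);

struct intr_handle {
        int       target_cpu;
        setfunc_t setfunc;      /* reprograms the device's interrupt target */
        void     *sfarg;
        int       connected;
};

static void sample_setfunc(struct intr_handle *h)
{
        printf("programming device (%s) to interrupt CPU %d\n",
               (const char *)h->sfarg, h->target_cpu);
}

static void intr_connect(struct intr_handle *h, setfunc_t fn, void *arg)
{
        h->setfunc = fn;
        h->sfarg   = arg;
        if (h->setfunc)
                h->setfunc(h);          /* program the initial routing */
        h->connected = 1;
}

int main(void)
{
        struct intr_handle h = { .target_cpu = 3 };

        intr_connect(&h, sample_setfunc, "sample widget");
        return 0;
}
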
+ */ + + hubii_eint_init(cnode); + + return; +} + +/* + * Function : hubii_eint_init + * Parameters : cnode + * Purpose : to initialize the hub iio error interrupt. + * Assumptions : Called once per hub, by the cpu which will ultimately + * handle this interrupt. + * Returns : None. + */ + + +void +hubii_eint_init(cnodeid_t cnode) +{ + int bit, rv; + ii_iidsr_u_t hubio_eint; + hubinfo_t hinfo; + cpuid_t intr_cpu; + devfs_handle_t hub_v; + ii_ilcsr_u_t ilcsr; + int bit_pos_to_irq(int bit); + int synergy_intr_connect(int bit, int cpuid); + + + hub_v = (devfs_handle_t)cnodeid_to_vertex(cnode); + ASSERT_ALWAYS(hub_v); + hubinfo_get(hub_v, &hinfo); + + ASSERT(hinfo); + ASSERT(hinfo->h_cnodeid == cnode); + + ilcsr.ii_ilcsr_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_ILCSR); + + if ((ilcsr.ii_ilcsr_fld_s.i_llp_stat & 0x2) == 0) { + /* + * HUB II link is not up. + * Just disable LLP, and don't connect any interrupts. + */ + ilcsr.ii_ilcsr_fld_s.i_llp_en = 0; + REMOTE_HUB_S(hinfo->h_nasid, IIO_ILCSR, ilcsr.ii_ilcsr_regval); + return; + } + /* Select a possible interrupt target where there is a free interrupt + * bit and also reserve the interrupt bit for this IO error interrupt + */ + intr_cpu = intr_heuristic(hub_v,0,-1,0,hub_v, + "HUB IO error interrupt",&bit); + if (intr_cpu == CPU_NONE) { + printk("hubii_eint_init: intr_reserve_level failed, cnode %d", cnode); + return; + } + + rv = intr_connect_level(intr_cpu, bit, 0, NULL); + request_irq(bit + (intr_cpu << 8), hubii_eint_handler, 0, "SN hub error", (void *)hub_v); + ASSERT_ALWAYS(rv >= 0); + hubio_eint.ii_iidsr_regval = 0; + hubio_eint.ii_iidsr_fld_s.i_enable = 1; + hubio_eint.ii_iidsr_fld_s.i_level = bit;/* Take the least significant bits*/ + hubio_eint.ii_iidsr_fld_s.i_node = COMPACT_TO_NASID_NODEID(cnode); + hubio_eint.ii_iidsr_fld_s.i_pi_id = cpuid_to_subnode(intr_cpu); + REMOTE_HUB_S(hinfo->h_nasid, IIO_IIDSR, hubio_eint.ii_iidsr_regval); + +} + + +/*ARGSUSED*/ +void +hubii_eint_handler (int irq, void *arg, struct pt_regs *ep) +{ + devfs_handle_t hub_v; + hubinfo_t hinfo; + ii_wstat_u_t wstat; + hubreg_t idsr; + + + /* two levels of casting avoids compiler warning.!! */ + hub_v = (devfs_handle_t)(long)(arg); + ASSERT(hub_v); + + hubinfo_get(hub_v, &hinfo); + + /* + * Identify the reason for error. + */ + wstat.ii_wstat_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_WSTAT); + + if (wstat.ii_wstat_fld_s.w_crazy) { + char *reason; + /* + * We can do a couple of things here. + * Look at the fields TX_MX_RTY/XT_TAIL_TO/XT_CRD_TO to check + * which of these caused the CRAZY bit to be set. + * You may be able to check if the Link is up really. + */ + if (wstat.ii_wstat_fld_s.w_tx_mx_rty) + reason = "Micro Packet Retry Timeout"; + else if (wstat.ii_wstat_fld_s.w_xt_tail_to) + reason = "Crosstalk Tail Timeout"; + else if (wstat.ii_wstat_fld_s.w_xt_crd_to) + reason = "Crosstalk Credit Timeout"; + else { + hubreg_t hubii_imem; + /* + * Check if widget 0 has been marked as shutdown, or + * if BTE 0/1 has been marked. + */ + hubii_imem = REMOTE_HUB_L(hinfo->h_nasid, IIO_IMEM); + if (hubii_imem & IIO_IMEM_W0ESD) + reason = "Hub Widget 0 has been Shutdown"; + else if (hubii_imem & IIO_IMEM_B0ESD) + reason = "BTE 0 has been shutdown"; + else if (hubii_imem & IIO_IMEM_B1ESD) + reason = "BTE 1 has been shutdown"; + else reason = "Unknown"; + + } + /* + * Note: we may never be able to print this, if the II talking + * to Xbow which hosts the console is dead. 
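+		 * Even if the message is lost, the CRB/PRB handlers below
+		 * still run and the II error state is cleared before returning.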
+ */ + printk("Hub %d to Xtalk Link failed (II_ECRAZY) Reason: %s", + hinfo->h_cnodeid, reason); + } + + /* + * It's a toss as to which one among PRB/CRB to check first. + * Current decision is based on the severity of the errors. + * IO CRB errors tend to be more severe than PRB errors. + * + * It is possible for BTE errors to have been handled already, so we + * may not see any errors handled here. + */ + (void)hubiio_crb_error_handler(hub_v, hinfo); + (void)hubiio_prb_error_handler(hub_v, hinfo); + /* + * If we reach here, it indicates crb/prb handlers successfully + * handled the error. So, re-enable II to send more interrupt + * and return. + */ + REMOTE_HUB_S(hinfo->h_nasid, IIO_IECLR, 0xffffff); + idsr = REMOTE_HUB_L(hinfo->h_nasid, IIO_IIDSR) & ~IIO_IIDSR_SENT_MASK; + REMOTE_HUB_S(hinfo->h_nasid, IIO_IIDSR, idsr); +} + +/* + * Free the hub CRB "crbnum" which encountered an error. + * Assumption is, error handling was successfully done, + * and we now want to return the CRB back to Hub for normal usage. + * + * In order to free the CRB, all that's needed is to de-allocate it + * + * Assumption: + * No other processor is mucking around with the hub control register. + * So, upper layer has to single thread this. + */ +void +hubiio_crb_free(hubinfo_t hinfo, int crbnum) +{ + ii_icrb0_a_u_t icrba; + + /* + * The hardware does NOT clear the mark bit, so it must get cleared + * here to be sure the error is not processed twice. + */ + icrba.ii_icrb0_a_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_ICRB_A(crbnum)); + icrba.a_valid = 0; + REMOTE_HUB_S(hinfo->h_nasid, IIO_ICRB_A(crbnum), icrba.ii_icrb0_a_regval); + /* + * Deallocate the register. + */ + + REMOTE_HUB_S(hinfo->h_nasid, IIO_ICDR, (IIO_ICDR_PND | crbnum)); + + /* + * Wait till hub indicates it's done. + */ + while (REMOTE_HUB_L(hinfo->h_nasid, IIO_ICDR) & IIO_ICDR_PND) + us_delay(1); + +} + + +/* + * Array of error names that get logged in CRBs + */ +char *hubiio_crb_errors[] = { + "Directory Error", + "CRB Poison Error", + "I/O Write Error", + "I/O Access Error", + "I/O Partial Write Error", + "I/O Partial Read Error", + "I/O Timeout Error", + "Xtalk Error Packet" +}; + +/* + * hubiio_crb_error_handler + * + * This routine gets invoked when a hub gets an error + * interrupt. So, the routine is running in interrupt context + * at error interrupt level. + * Action: + * It's responsible for identifying ALL the CRBs that are marked + * with error, and process them. + * + * If you find the CRB that's marked with error, map this to the + * reason it caused error, and invoke appropriate error handler. + * + * XXX Be aware of the information in the context register. + * + * NOTE: + * Use REMOTE_HUB_* macro instead of LOCAL_HUB_* so that the interrupt + * handler can be run on any node. (not necessarily the node + * corresponding to the hub that encountered error). + */ + +int +hubiio_crb_error_handler(devfs_handle_t hub_v, hubinfo_t hinfo) +{ + cnodeid_t cnode; + nasid_t nasid; + ii_icrb0_a_u_t icrba; /* II CRB Register A */ + ii_icrb0_b_u_t icrbb; /* II CRB Register B */ + ii_icrb0_c_u_t icrbc; /* II CRB Register C */ + ii_icrb0_d_u_t icrbd; /* II CRB Register D */ + int i; + int num_errors = 0; /* Num of errors handled */ + ioerror_t ioerror; + + nasid = hinfo->h_nasid; + cnode = NASID_TO_COMPACT_NODEID(nasid); + + /* + * Scan through all CRBs in the Hub, and handle the errors + * in any of the CRBs marked. 
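+	 * BTE-initiated errors are handed to bte_crb_error_handler() and the
+	 * CRB is freed; anything else is treated as a crosstalk error and its
+	 * address/widget fields are recorded in the ioerror structure.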
+ */ + for (i = 0; i < IIO_NUM_CRBS; i++) { + icrba.ii_icrb0_a_regval = REMOTE_HUB_L(nasid, IIO_ICRB_A(i)); + + IOERROR_INIT(&ioerror); + + /* read other CRB error registers. */ + icrbb.ii_icrb0_b_regval = REMOTE_HUB_L(nasid, IIO_ICRB_B(i)); + icrbc.ii_icrb0_c_regval = REMOTE_HUB_L(nasid, IIO_ICRB_C(i)); + icrbd.ii_icrb0_d_regval = REMOTE_HUB_L(nasid, IIO_ICRB_D(i)); + + IOERROR_SETVALUE(&ioerror,errortype,icrbb.b_ecode); + /* Check if this error is due to BTE operation, + * and handle it separately. + */ + if (icrbd.d_bteop || + ((icrbb.b_initiator == IIO_ICRB_INIT_BTE0 || + icrbb.b_initiator == IIO_ICRB_INIT_BTE1) && + (icrbb.b_imsgtype == IIO_ICRB_IMSGT_BTE || + icrbb.b_imsgtype == IIO_ICRB_IMSGT_SN1NET))){ + + int bte_num; + + if (icrbd.d_bteop) + bte_num = icrbc.c_btenum; + else /* b_initiator bit 2 gives BTE number */ + bte_num = (icrbb.b_initiator & 0x4) >> 2; + + bte_crb_error_handler(hub_v, bte_num, + i, &ioerror); + hubiio_crb_free(hinfo, i); + num_errors++; + continue; + } + + /* + * XXX + * Assuming the only other error that would reach here is + * crosstalk errors. + * If CRB times out on a message from Xtalk, it changes + * the message type to CRB. + * + * If we get here due to other errors (SN0net/CRB) + * what's the action ? + */ + + /* + * Pick out the useful fields in CRB, and + * tuck them away into ioerror structure. + */ + IOERROR_SETVALUE(&ioerror,xtalkaddr,icrba.a_addr << IIO_ICRB_ADDR_SHFT); + IOERROR_SETVALUE(&ioerror,widgetnum,icrba.a_sidn); + + + if (icrba.a_iow){ + /* + * XXX We shouldn't really have BRIDGE-specific code + * here, but alas.... + * + * The BRIDGE (or XBRIDGE) sets the upper bit of TNUM + * to indicate a WRITE operation. It sets the next + * bit to indicate an INTERRUPT operation. The bottom + * 3 bits of TNUM indicate which device was responsible. + */ + IOERROR_SETVALUE(&ioerror,widgetdev, + TNUM_TO_WIDGET_DEV(icrba.a_tnum)); + + } + + } + return num_errors; +} + +/*ARGSUSED*/ +/* + * hubii_prb_handler + * Handle the error reported in the PRB for wiget number wnum. + * This typically happens on a PIO write error. + * There is nothing much we can do in this interrupt context for + * PIO write errors. For e.g. QL scsi controller has the + * habit of flaking out on PIO writes. + * Print a message and try to continue for now + * Cleanup involes freeing the PRB register + */ +static void +hubii_prb_handler(devfs_handle_t hub_v, hubinfo_t hinfo, int wnum) +{ + nasid_t nasid; + + nasid = hinfo->h_nasid; + /* + * Clear error bit by writing to IECLR register. + */ + REMOTE_HUB_S(nasid, IIO_IO_ERR_CLR, (1 << wnum)); + /* + * PIO Write to Widget 'i' got into an error. + * Invoke hubiio_error_handler with this information. + */ + printk( "Hub nasid %d got a PIO Write error from widget %d, cleaning up and continuing", + nasid, wnum); + /* + * XXX + * It may be necessary to adjust IO PRB counter + * to account for any lost credits. + */ +} + +int +hubiio_prb_error_handler(devfs_handle_t hub_v, hubinfo_t hinfo) +{ + int wnum; + nasid_t nasid; + int num_errors = 0; + iprb_t iprb; + + nasid = hinfo->h_nasid; + /* + * Check if IPRB0 has any error first. + */ + iprb.iprb_regval = REMOTE_HUB_L(nasid, IIO_IOPRB(0)); + if (iprb.iprb_error) { + num_errors++; + hubii_prb_handler(hub_v, hinfo, 0); + } + /* + * Look through PRBs 8 - F to see if any of them has error bit set. + * If true, invoke hub iio error handler for this widget. 
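+	 * hubii_prb_handler() only clears the error bit in the IECLR register
+	 * and logs a message; the failed PIO write itself is not retried.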
+ */ + for (wnum = HUB_WIDGET_ID_MIN; wnum <= HUB_WIDGET_ID_MAX; wnum++) { + iprb.iprb_regval = REMOTE_HUB_L(nasid, IIO_IOPRB(wnum)); + + if (!iprb.iprb_error) + continue; + + num_errors++; + hubii_prb_handler(hub_v, hinfo, wnum); + } + + return num_errors; +} + diff -Nru a/arch/ia64/sn/io/stubs.c b/arch/ia64/sn/io/stubs.c --- a/arch/ia64/sn/io/stubs.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/stubs.c Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #include @@ -21,6 +20,7 @@ #include #include #include +#include /****** ****** hack defines ...... @@ -61,45 +61,45 @@ } void * -kmem_alloc_node(register size_t size, register int flags, cnodeid_t node) +snia_kmem_alloc_node(register size_t size, register int flags, cnodeid_t node) { /* Allocates on node 'node' */ - FIXME("kmem_alloc_node : use kmalloc"); + FIXME("snia_kmem_alloc_node : use kmalloc"); return(kmalloc(size, GFP_KERNEL)); } void * -kmem_zalloc_node(register size_t size, register int flags, cnodeid_t node) +snia_kmem_zalloc_node(register size_t size, register int flags, cnodeid_t node) { - FIXME("kmem_zalloc_node : use kmalloc"); + FIXME("snia_kmem_zalloc_node : use kmalloc"); return(kmalloc(size, GFP_KERNEL)); } void -kmem_free(void *where, int size) +snia_kmem_free(void *where, int size) { - FIXME("kmem_free : use kfree"); + FIXME("snia_kmem_free : use kfree"); return(kfree(where)); } void * -kmem_zone_alloc(register zone_t *zone, int flags) +snia_kmem_zone_alloc(register zone_t *zone, int flags) { - FIXME("kmem_zone_alloc : return null"); + FIXME("snia_kmem_zone_alloc : return null"); return((void *)0); } void -kmem_zone_free(register zone_t *zone, void *ptr) +snia_kmem_zone_free(register zone_t *zone, void *ptr) { - FIXME("kmem_zone_free : no-op"); + FIXME("snia_kmem_zone_free : no-op"); } zone_t * -kmem_zone_init(register int size, char *zone_name) +snia_kmem_zone_init(register int size, char *zone_name) { - FIXME("kmem_zone_free : returns NULL"); + FIXME("snia_kmem_zone_free : returns NULL"); return((zone_t *)0); } diff -Nru a/arch/ia64/sn/io/xbow.c b/arch/ia64/sn/io/xbow.c --- a/arch/ia64/sn/io/xbow.c Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/io/xbow.c Tue Mar 12 13:58:14 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #include @@ -19,6 +18,7 @@ #include #include #include +#include /* #define DEBUG 1 */ /* #define XBOW_DEBUG 1 */ @@ -102,7 +102,6 @@ #ifdef LATER static void xbow_setwidint(xtalk_intr_t); static void xbow_errintr_handler(intr_arg_t); -static error_handler_f xbow_error_handler; #endif void xbow_intr_preset(void *, int, xwidgetnum_t, iopaddr_t, xtalk_intr_vector_t); @@ -281,7 +280,7 @@ /* &hcl_fops */ (void *)&vhdl, NULL); if (!vhdl) { printk(KERN_WARNING "xbow_attach: Unable to create char device for xbow conn %p\n", - conn); + (void *)conn); } /* @@ -306,7 +305,7 @@ /* * get the name of this xbow vertex and keep the info. - * This is needed during errors and interrupts, but as + * This is needed during errors and interupts, but as * long as we have it, we can use it elsewhere. 
*/ s = dev_to_name(vhdl, devnm, MAXDEVNAME); @@ -371,36 +370,9 @@ } /* - * attach the crossbow error interrupt. - */ -#ifdef LATER - dev_desc = device_desc_dup(vhdl); - device_desc_flags_set(dev_desc, - device_desc_flags_get(dev_desc) | D_INTR_ISERR); - device_desc_intr_name_set(dev_desc, "Crossbow error"); - - intr_hdl = xtalk_intr_alloc(conn, dev_desc, vhdl); - ASSERT(intr_hdl != NULL); - - xtalk_intr_connect(intr_hdl, - (intr_func_t) xbow_errintr_handler, - (intr_arg_t) soft, - (xtalk_intr_setfunc_t) xbow_setwidint, - (void *) xbow, - (void *) 0); - device_desc_free(dev_desc); - - xwidget_error_register(conn, xbow_error_handler, soft); - -#else - FIXME("xbow_attach: Fixme: we bypassed attaching xbow error interrupt.\n"); -#endif /* LATER */ - - /* * Enable xbow error interrupts */ - xbow->xb_wid_control = (XB_WID_CTRL_REG_ACC_IE | - XB_WID_CTRL_XTALK_IE); + xbow->xb_wid_control = (XB_WID_CTRL_REG_ACC_IE | XB_WID_CTRL_XTALK_IE); /* * take a census of the widgets present, @@ -918,460 +890,6 @@ return 1; } -/* - * xbow_errintr_handler will be called if the xbow - * sends an interrupt request to report an error. - */ - -#ifdef LATER -static void -xbow_errintr_handler(intr_arg_t arg) -{ - ioerror_t ioe[1]; - xbow_soft_t soft = (xbow_soft_t) arg; - xbow_t *xbow = soft->base; - xbowreg_t wid_control; - xbowreg_t wid_stat; - xbowreg_t wid_err_cmdword; - xbowreg_t wid_err_upper; - xbowreg_t wid_err_lower; - w_err_cmd_word_u wid_err; - uint64_t wid_err_addr; - - int fatal = 0; - int dump_ioe = 0; - - wid_control = xbow->xb_wid_control; - wid_stat = xbow->xb_wid_stat_clr; - wid_err_cmdword = xbow->xb_wid_err_cmdword; - wid_err_upper = xbow->xb_wid_err_upper; - wid_err_lower = xbow->xb_wid_err_lower; - xbow->xb_wid_err_cmdword = 0; - - wid_err_addr = - wid_err_lower - | (((iopaddr_t) wid_err_upper - & WIDGET_ERR_UPPER_ADDR_ONLY) - << 32); - - if (wid_stat & XB_WID_STAT_LINK_INTR_MASK) { - int port; - - wid_err.r = wid_err_cmdword; - - for (port = MAX_PORT_NUM - MAX_XBOW_PORTS; - port < MAX_PORT_NUM; port++) { - if (wid_stat & XB_WID_STAT_LINK_INTR(port)) { - xb_linkregs_t *link = &(xbow->xb_link(port)); - xbowreg_t link_control = link->link_control; - xbowreg_t link_status = link->link_status_clr; - xbowreg_t link_aux_status = link->link_aux_status; - xbowreg_t link_pend; - - link_pend = link_status & link_control & - (XB_STAT_ILLEGAL_DST_ERR - | XB_STAT_OALLOC_IBUF_ERR - | XB_STAT_RCV_CNT_OFLOW_ERR - | XB_STAT_XMT_CNT_OFLOW_ERR - | XB_STAT_XMT_MAX_RTRY_ERR - | XB_STAT_RCV_ERR - | XB_STAT_XMT_RTRY_ERR - | XB_STAT_MAXREQ_TOUT_ERR - | XB_STAT_SRC_TOUT_ERR - ); - - if (link_pend & XB_STAT_ILLEGAL_DST_ERR) { - if (wid_err.f.sidn == port) { - IOERROR_INIT(ioe); - IOERROR_SETVALUE(ioe, widgetnum, port); - IOERROR_SETVALUE(ioe, xtalkaddr, wid_err_addr); - if (IOERROR_HANDLED == - xbow_error_handler(soft, - IOECODE_DMA, - MODE_DEVERROR, - ioe)) { - link_pend &= ~XB_STAT_ILLEGAL_DST_ERR; - } else { - dump_ioe++; - } - } - } - /* Xbow/Bridge WAR: - * if the bridge signals an LLP Transmitter Retry, - * rewrite its control register. - * If someone else triggers this interrupt, - * ignore (and disable) the interrupt. 
- */ - if (link_pend & XB_STAT_XMT_RTRY_ERR) { - if (!xbow_xmit_retry_error(soft, port)) { - link_control &= ~XB_CTRL_XMT_RTRY_IE; - link->link_control = link_control; - link->link_control; /* stall until written */ - } - link_pend &= ~XB_STAT_XMT_RTRY_ERR; - } - if (link_pend) { - devfs_handle_t xwidget_vhdl; - char *xwidget_name; - - /* Get the widget name corresponding to the current - * xbow link. - */ - xwidget_vhdl = xbow_widget_lookup(soft->busv,port); - xwidget_name = xwidget_name_get(xwidget_vhdl); - -#ifdef LATER - printk("%s port %X[%s] XIO Bus Error", - soft->name, port, xwidget_name); - if (link_status & XB_STAT_MULTI_ERR) - XEM_ADD_STR("\tMultiple Errors\n"); - if (link_status & XB_STAT_ILLEGAL_DST_ERR) - XEM_ADD_STR("\tInvalid Packet Destination\n"); - if (link_status & XB_STAT_OALLOC_IBUF_ERR) - XEM_ADD_STR("\tInput Overallocation Error\n"); - if (link_status & XB_STAT_RCV_CNT_OFLOW_ERR) - XEM_ADD_STR("\tLLP receive error counter overflow\n"); - if (link_status & XB_STAT_XMT_CNT_OFLOW_ERR) - XEM_ADD_STR("\tLLP transmit retry counter overflow\n"); - if (link_status & XB_STAT_XMT_MAX_RTRY_ERR) - XEM_ADD_STR("\tLLP Max Transmitter Retry\n"); - if (link_status & XB_STAT_RCV_ERR) - XEM_ADD_STR("\tLLP Receiver error\n"); - if (link_status & XB_STAT_XMT_RTRY_ERR) - XEM_ADD_STR("\tLLP Transmitter Retry\n"); - if (link_status & XB_STAT_MAXREQ_TOUT_ERR) - XEM_ADD_STR("\tMaximum Request Timeout\n"); - if (link_status & XB_STAT_SRC_TOUT_ERR) - XEM_ADD_STR("\tSource Timeout Error\n"); -#endif /* LATER */ - { - int other_port; - - for (other_port = 8; other_port < 16; ++other_port) { - if (link_aux_status & (1 << other_port)) { - /* XXX- need to go to "other_port" - * and clean up after the timeout? - */ - XEM_ADD_VAR(other_port); - } - } - } - -#if !DEBUG - if (kdebug) { -#endif - XEM_ADD_VAR(link_control); - XEM_ADD_VAR(link_status); - XEM_ADD_VAR(link_aux_status); - - if (dump_ioe) { - XEM_ADD_IOE(); - dump_ioe = 0; - } -#if !DEBUG - } -#endif - fatal++; - } - } - } - } - if (wid_stat & wid_control & XB_WID_STAT_WIDGET0_INTR) { - /* we have a "widget zero" problem */ - - if (wid_stat & (XB_WID_STAT_MULTI_ERR - | XB_WID_STAT_XTALK_ERR - | XB_WID_STAT_REG_ACC_ERR)) { - - printk("%s Port 0 XIO Bus Error", - soft->name); - if (wid_stat & XB_WID_STAT_MULTI_ERR) - XEM_ADD_STR("\tMultiple Error\n"); - if (wid_stat & XB_WID_STAT_XTALK_ERR) - XEM_ADD_STR("\tXIO Error\n"); - if (wid_stat & XB_WID_STAT_REG_ACC_ERR) - XEM_ADD_STR("\tRegister Access Error\n"); - - fatal++; - } - } - if (fatal) { - XEM_ADD_VAR(wid_stat); - XEM_ADD_VAR(wid_control); - XEM_ADD_VAR(wid_err_cmdword); - XEM_ADD_VAR(wid_err_upper); - XEM_ADD_VAR(wid_err_lower); - XEM_ADD_VAR(wid_err_addr); - PRINT_PANIC("XIO Bus Error"); - } -} -#endif /* LATER */ - -/* - * XBOW ERROR Handling routines. - * These get invoked as part of walking down the error handling path - * from hub/heart towards the I/O device that caused the error. - */ - -/* - * xbow_error_handler - * XBow error handling dispatch routine. - * This is the primary interface used by external world to invoke - * in case of an error related to a xbow. - * Only functionality in this layer is to identify the widget handle - * given the widgetnum. Otherwise, xbow does not gathers any error - * data. 
- */ - -#ifdef LATER -static int -xbow_error_handler( - void *einfo, - int error_code, - ioerror_mode_t mode, - ioerror_t *ioerror) -{ - int retval = IOERROR_WIDGETLEVEL; - - xbow_soft_t soft = (xbow_soft_t) einfo; - int port; - devfs_handle_t conn; - devfs_handle_t busv; - - xbow_t *xbow = soft->base; - xbowreg_t wid_stat; - xbowreg_t wid_err_cmdword; - xbowreg_t wid_err_upper; - xbowreg_t wid_err_lower; - uint64_t wid_err_addr; - - xb_linkregs_t *link; - xbowreg_t link_control; - xbowreg_t link_status; - xbowreg_t link_aux_status; - - ASSERT(soft != 0); - busv = soft->busv; - -#if DEBUG && ERROR_DEBUG - printk("%s: xbow_error_handler\n", soft->name, busv); -#endif - - port = IOERROR_GETVALUE(ioerror, widgetnum); - - if (port == 0) { - /* error during access to xbow: - * do NOT attempt to access xbow regs. - */ - if (mode == MODE_DEVPROBE) - return IOERROR_HANDLED; - - if (error_code & IOECODE_DMA) { - PRINT_ALERT("DMA error blamed on Crossbow at %s\n" - "\tbut Crosbow never initiates DMA!", - soft->name); - } - if (error_code & IOECODE_PIO) { - PRINT_ALERt("PIO Error on XIO Bus %s\n" - "\tattempting to access XIO controller\n" - "\twith offset 0x%X", - soft->name, - IOERROR_GETVALUE(ioerror, xtalkaddr)); - } - /* caller will dump contents of ioerror - * in DEBUG and kdebug kernels. - */ - - return retval; - } - /* - * error not on port zero: - * safe to read xbow registers. - */ - wid_stat = xbow->xb_wid_stat; - wid_err_cmdword = xbow->xb_wid_err_cmdword; - wid_err_upper = xbow->xb_wid_err_upper; - wid_err_lower = xbow->xb_wid_err_lower; - - wid_err_addr = - wid_err_lower - | (((iopaddr_t) wid_err_upper - & WIDGET_ERR_UPPER_ADDR_ONLY) - << 32); - - if ((port < BASE_XBOW_PORT) || - (port >= MAX_PORT_NUM)) { - - if (mode == MODE_DEVPROBE) - return IOERROR_HANDLED; - - if (error_code & IOECODE_DMA) { - PRINT_ALERT("DMA error blamed on XIO port at %s/%d\n" - "\tbut Crossbow does not support that port", - soft->name, port); - } - if (error_code & IOECODE_PIO) { - PRINT_ALERT("PIO Error on XIO Bus %s\n" - "\tattempting to access XIO port %d\n" - "\t(which Crossbow does not support)" - "\twith offset 0x%X", - soft->name, port, - IOERROR_GETVALUE(ioerror, xtalkaddr)); - } -#if !DEBUG - if (kdebug) { -#endif - XEM_ADD_STR("Raw status values for Crossbow:\n"); - XEM_ADD_VAR(wid_stat); - XEM_ADD_VAR(wid_err_cmdword); - XEM_ADD_VAR(wid_err_upper); - XEM_ADD_VAR(wid_err_lower); - XEM_ADD_VAR(wid_err_addr); -#if !DEBUG - } -#endif - - /* caller will dump contents of ioerror - * in DEBUG and kdebug kernels. - */ - - return retval; - } - /* access to valid port: - * ok to check port status. - */ - - link = &(xbow->xb_link(port)); - link_control = link->link_control; - link_status = link->link_status; - link_aux_status = link->link_aux_status; - - /* Check that there is something present - * in that XIO port. - */ - if (!(link_aux_status & XB_AUX_STAT_PRESENT)) { - /* nobody connected. 
*/ - if (mode == MODE_DEVPROBE) - return IOERROR_HANDLED; - - if (error_code & IOECODE_DMA) { - PRINT_ALERT("DMA error blamed on XIO port at %s/%d\n" - "\tbut there is no device connected there.", - soft->name, port); - } - if (error_code & IOECODE_PIO) { - PRINT_ALERT("PIO Error on XIO Bus %s\n" - "\tattempting to access XIO port %d\n" - "\t(which has no device connected)" - "\twith offset 0x%X", - soft->name, port, - IOERROR_GETVALUE(ioerror, xtalkaddr)); - } -#if !DEBUG - if (kdebug) { -#endif - XEM_ADD_STR("Raw status values for Crossbow:\n"); - XEM_ADD_VAR(wid_stat); - XEM_ADD_VAR(wid_err_cmdword); - XEM_ADD_VAR(wid_err_upper); - XEM_ADD_VAR(wid_err_lower); - XEM_ADD_VAR(wid_err_addr); - XEM_ADD_VAR(port); - XEM_ADD_VAR(link_control); - XEM_ADD_VAR(link_status); - XEM_ADD_VAR(link_aux_status); -#if !DEBUG - } -#endif - return retval; - - } - /* Check that the link is alive. - */ - if (!(link_status & XB_STAT_LINKALIVE)) { - /* nobody connected. */ - if (mode == MODE_DEVPROBE) - return IOERROR_HANDLED; - - PRINT_ALERT("%s%sError on XIO Bus %s port %d", - (error_code & IOECODE_DMA) ? "DMA " : "", - (error_code & IOECODE_PIO) ? "PIO " : "", - soft->name, port); - - if ((error_code & IOECODE_PIO) && - (IOERROR_FIELDVALID(ioerror, xtalkaddr))) { - printk("\tAccess attempted to offset 0x%X\n", - IOERROR_GETVALUE(ioerror, xtalkaddr)); - } - if (link_aux_status & XB_AUX_LINKFAIL_RST_BAD) - XEM_ADD_STR("\tLink never came out of reset\n"); - else - XEM_ADD_STR("\tLink failed while transferring data\n"); - - } - /* get the connection point for the widget - * involved in this error; if it exists and - * is not our connectpoint, cycle back through - * xtalk_error_handler to deliver control to - * the proper handler (or to report a generic - * crosstalk error). - * - * If the downstream handler won't handle - * the problem, we let our upstream caller - * deal with it, after (in DEBUG and kdebug - * kernels) dumping the xbow state for this - * port. - */ - conn = xbow_widget_lookup(busv, port); - if ((conn != GRAPH_VERTEX_NONE) && - (conn != soft->conn)) { - retval = xtalk_error_handler(conn, error_code, mode, ioerror); - if (retval == IOERROR_HANDLED) - return IOERROR_HANDLED; - } - if (mode == MODE_DEVPROBE) - return IOERROR_HANDLED; - - if (retval == IOERROR_UNHANDLED) { - retval = IOERROR_PANIC; - - PRINT_ALERT("%s%sError on XIO Bus %s port %d", - (error_code & IOECODE_DMA) ? "DMA " : "", - (error_code & IOECODE_PIO) ? "PIO " : "", - soft->name, port); - - if ((error_code & IOECODE_PIO) && - (IOERROR_FIELDVALID(ioerror, xtalkaddr))) { - printk("\tAccess attempted to offset 0x%X\n", - IOERROR_GETVALUE(ioerror, xtalkaddr)); - } - } - -#if !DEBUG - if (kdebug) { -#endif - XEM_ADD_STR("Raw status values for Crossbow:\n"); - XEM_ADD_VAR(wid_stat); - XEM_ADD_VAR(wid_err_cmdword); - XEM_ADD_VAR(wid_err_upper); - XEM_ADD_VAR(wid_err_lower); - XEM_ADD_VAR(wid_err_addr); - XEM_ADD_VAR(port); - XEM_ADD_VAR(link_control); - XEM_ADD_VAR(link_status); - XEM_ADD_VAR(link_aux_status); -#if !DEBUG - } -#endif - /* caller will dump raw ioerror data - * in DEBUG and kdebug kernels. 
- */ - - return retval; -} - -#endif /* LATER */ - void xbow_update_perf_counters(devfs_handle_t vhdl) { @@ -1520,7 +1038,7 @@ if (lnk_sts.linkstatus & ~(XB_STAT_RCV_ERR | XB_STAT_XMT_RTRY_ERR | XB_STAT_LINKALIVE)) { #ifdef LATER - PRINT_WARNING("link %d[%s]: bad status 0x%x\n", + printk(KERN_WARNING "link %d[%s]: bad status 0x%x\n", link, xwidget_name, lnk_sts.linkstatus); #endif } diff -Nru a/arch/ia64/sn/io/xswitch.c b/arch/ia64/sn/io/xswitch.c --- a/arch/ia64/sn/io/xswitch.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/xswitch.c Tue Mar 12 13:58:15 2002 @@ -4,14 +4,13 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #include #include #include -#include +#include #include #include #include diff -Nru a/arch/ia64/sn/io/xtalk.c b/arch/ia64/sn/io/xtalk.c --- a/arch/ia64/sn/io/xtalk.c Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/sn/io/xtalk.c Tue Mar 12 13:58:15 2002 @@ -4,24 +4,22 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #include #include #include -#include +#include +#include #include #include #include #include #include - #include #include #include - #include /* @@ -41,7 +39,6 @@ cdl_p xtalk_registry = NULL; -#include #define DEV_FUNC(dev,func) hub_##func #define CAST_PIOMAP(x) ((hub_piomap_t)(x)) #define CAST_DMAMAP(x) ((hub_dmamap_t)(x)) @@ -72,7 +69,7 @@ xtalk_intr_t xtalk_intr_alloc(devfs_handle_t, device_desc_t, devfs_handle_t); xtalk_intr_t xtalk_intr_alloc_nothd(devfs_handle_t, device_desc_t, devfs_handle_t); void xtalk_intr_free(xtalk_intr_t); -int xtalk_intr_connect(xtalk_intr_t, intr_func_t, intr_arg_t, xtalk_intr_setfunc_t, void *, void *); +int xtalk_intr_connect(xtalk_intr_t, xtalk_intr_setfunc_t, void *); void xtalk_intr_disconnect(xtalk_intr_t); devfs_handle_t xtalk_intr_cpu_get(xtalk_intr_t); int xtalk_error_handler(devfs_handle_t, int, ioerror_mode_t, ioerror_t *); @@ -113,8 +110,6 @@ xwidgetnum_t, devfs_handle_t, xwidgetnum_t, async_attach_t); int xwidget_unregister(devfs_handle_t); -void xwidget_error_register(devfs_handle_t, error_handler_f *, - error_handler_arg_t); void xwidget_reset(devfs_handle_t); char *xwidget_name_get(devfs_handle_t); #if !defined(DEV_FUNC) @@ -472,14 +467,11 @@ */ int xtalk_intr_connect(xtalk_intr_t intr_hdl, /* xtalk intr resource handle */ - intr_func_t intr_func, /* xtalk intr handler */ - intr_arg_t intr_arg, /* arg to intr handler */ xtalk_intr_setfunc_t setfunc, /* func to set intr hw */ - void *setfunc_arg, /* arg to setfunc */ - void *thread) -{ /* intr thread to use */ + void *setfunc_arg) /* arg to setfunc */ +{ return INTR_FUNC(intr_hdl, intr_connect) - (CAST_INTR(intr_hdl), intr_func, intr_arg, setfunc, setfunc_arg, thread); + (CAST_INTR(intr_hdl), setfunc, setfunc_arg); } @@ -506,85 +498,6 @@ } -/* - * ===================================================================== - * ERROR MANAGEMENT - */ - -/* - * xtalk_error_handler: - * pass this error on to the handler registered - * at the specified xtalk connecdtion point, - * or complain about it here if there is no handler. 
- * - * This routine plays two roles during error delivery - * to most widgets: first, the external agent (heart, - * hub, or whatever) calls in with the error and the - * connect point representing the crosstalk switch, - * or whatever crosstalk device is directly connected - * to the agent. - * - * If there is a switch, it will generally look at the - * widget number stashed in the ioerror structure; and, - * if the error came from some widget other than the - * switch, it will call back into xtalk_error_handler - * with the connection point of the offending port. - */ -int -xtalk_error_handler( - devfs_handle_t xconn, - int error_code, - ioerror_mode_t mode, - ioerror_t *ioerror) -{ - xwidget_info_t xwidget_info; - -#if DEBUG && ERROR_DEBUG -#ifdef SUPPORT_PRINTING_V_FORMAT - printk("%v: xtalk_error_handler\n", xconn); -#else - printk("%x: xtalk_error_handler\n", xconn); -#endif -#endif - - xwidget_info = xwidget_info_get(xconn); - /* Make sure that xwidget_info is a valid pointer before derefencing it. - * We could come in here during very early initialization. - */ - if (xwidget_info && xwidget_info->w_efunc) - return xwidget_info->w_efunc - (xwidget_info->w_einfo, - error_code, mode, ioerror); - /* - * no error handler registered for - * the offending port. it's not clear - * what needs to be done, but reporting - * it would be a good thing, unless it - * is a mode that requires nothing. - */ - if ((mode == MODE_DEVPROBE) || (mode == MODE_DEVUSERERROR) || - (mode == MODE_DEVREENABLE)) - return IOERROR_HANDLED; - -#ifdef LATER -#ifdef SUPPORT_PRINTING_V_FORMAT - PRINT_WARNING("Xbow at %v encountered Fatal error", xconn); -#else - PRINT_WARNING("Xbow at %x encountered Fatal error", xconn); -#endif -#endif /* LATER */ - ioerror_dump("xtalk", error_code, mode, ioerror); - - return IOERROR_UNHANDLED; -} - -int -xtalk_error_devenable(devfs_handle_t xconn_vhdl, int devnum, int error_code) -{ - return DEV_FUNC(xconn_vhdl, error_devenable) (xconn_vhdl, devnum, error_code); -} - - /* ===================================================================== * CONFIGURATION MANAGEMENT */ @@ -977,7 +890,7 @@ widget_info->w_einfo = 0; /* * get the name of this xwidget vertex and keep the info. - * This is needed during errors and interrupts, but as + * This is needed during errors and interupts, but as * long as we have it, we can use it elsewhere. */ s = dev_to_name(widget,devnm,MAXDEVNAME); @@ -1038,19 +951,6 @@ return(0); } -void -xwidget_error_register(devfs_handle_t xwidget, - error_handler_f *efunc, - error_handler_arg_t einfo) -{ - xwidget_info_t xwidget_info; - - xwidget_info = xwidget_info_get(xwidget); - ASSERT(xwidget_info != NULL); - xwidget_info->w_efunc = efunc; - xwidget_info->w_einfo = einfo; -} - /* * Issue a link reset to a widget. */ @@ -1120,17 +1020,5 @@ xwidget_unregister(widget_vhdl); - return(0); -} -/* - * xtalk_device_inquiry - * Find out hardware information about the xtalk widget. - */ -int -xtalk_device_inquiry(devfs_handle_t xbus_vhdl, xwidgetnum_t widget) -{ - - extern void hub_device_inquiry(devfs_handle_t, xwidgetnum_t); - hub_device_inquiry(xbus_vhdl, widget); return(0); } diff -Nru a/arch/ia64/sn/kernel/Makefile b/arch/ia64/sn/kernel/Makefile --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/Makefile Tue Mar 12 13:58:15 2002 @@ -0,0 +1,62 @@ +# arch/ia64/sn/Makefile +# +# Copyright (C) 1999,2001-2002 Silicon Graphics, Inc. All Rights Reserved. 
+# +# This program is free software; you can redistribute it and/or modify it +# under the terms of version 2 of the GNU General Public License +# as published by the Free Software Foundation. +# +# This program is distributed in the hope that it would be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +# +# Further, this software is distributed without any warranty that it is +# free of the rightful claim of any third person regarding infringement +# or the like. Any license provided herein, whether implied or +# otherwise, applies only to this software file. Patent licenses, if +# any, provided herein do not apply to combinations of this program with +# other software, or any other product whatsoever. +# +# You should have received a copy of the GNU General Public +# License along with this program; if not, write the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. +# +# Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, +# Mountain View, CA 94043, or: +# +# http://www.sgi.com +# +# For further information regarding this notice, see: +# +# http://oss.sgi.com/projects/GenInfo/NoticeExplan +# + +EXTRA_CFLAGS := -DLITTLE_ENDIAN + +.S.s: + $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -o $*.s $< +.S.o: + $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $*.o $< + +O_TARGET = sn.o + +ifeq ($(CONFIG_MODULES),y) +export-objs = sn_ksyms.o +endif + +subdir-$(CONFIG_IA64_SGI_SN1) = sn1 +subdir-$(CONFIG_IA64_SGI_SN2) = sn2 + +obj-y = probe.o setup.o sn_asm.o sv.o bte.o +obj-$(CONFIG_IA64_SGI_SN1) += irq.o mca.o +obj-$(CONFIG_IA64_SGI_SN2) += irq.o mca.o + +obj-$(CONFIG_IA64_SGI_SN1) += sn1/sn1.a +obj-$(CONFIG_IA64_SGI_SN2) += sn2/sn2.a + +obj-$(CONFIG_IA64_SGI_AUTOTEST) += llsc4.o misctest.o +obj-$(CONFIG_IA64_GENERIC) += machvec.o +obj-$(CONFIG_MODULES) += sn_ksyms.o + + +include $(TOPDIR)/Rules.make diff -Nru a/arch/ia64/sn/kernel/bte.c b/arch/ia64/sn/kernel/bte.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/bte.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,244 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include + +#include +#include + +#include + +int bte_offsets[] = { IIO_IBLS0, IIO_IBLS1 }; + +/* + * bte_init_node(nodepda, cnode) + * + * Initialize the nodepda structure with BTE base addresses and + * spinlocks. + * + */ +void +bte_init_node(nodepda_t * mynodepda, cnodeid_t cNode) +{ + int i; + + /* + * Indicate that all the block transfer engines on this node + * are available. + */ + for (i = 0; i < BTES_PER_NODE; i++) { +#ifdef CONFIG_IA64_SGI_SN2 + /* >>> Don't know why the 0x1800000L is here. Robin */ + mynodepda->node_bte_info[i].bte_base_addr = + (char *)LOCAL_MMR_ADDR(bte_offsets[i] | 0x1800000L); +#elif CONFIG_IA64_SGI_SN1 + mynodepda->node_bte_info[i].bte_base_addr = + (char *)LOCAL_HUB_ADDR(bte_offsets[i]); +#else +#error BTE Not defined for this hardware platform. +#endif + +#ifdef CONFIG_IA64_SGI_BTE_LOCKING + /* Initialize the notification and spinlock */ + /* so the first transfer can occur. 
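+		 * mostRecentNotification starts out pointing at this BTE's own
+		 * notify word, which is zeroed here before any transfer is issued.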
*/ + mynodepda->node_bte_info[i].mostRecentNotification = + &(mynodepda->node_bte_info[i].notify); + mynodepda->node_bte_info[i].notify = 0L; + spin_lock_init(&mynodepda->node_bte_info[i].spinlock); +#endif /* CONFIG_IA64_SGI_BTE_LOCKING */ + + } +} + +/* + * bte_init_cpu() + * + * Initialize the cpupda structure with pointers to the + * nodepda bte blocks. + * + */ +void +bte_init_cpu(void) +{ + /* Called by setup.c as each cpu is being added to the nodepda */ + if (local_node_data->active_cpu_count & 0x1) { + pda.cpubte[0] = &(nodepda->node_bte_info[0]); + pda.cpubte[1] = &(nodepda->node_bte_info[1]); + } else { + pda.cpubte[0] = &(nodepda->node_bte_info[1]); + pda.cpubte[1] = &(nodepda->node_bte_info[0]); + } +} + + +/* + * bte_unaligned_copy(src, dest, len, mode) + * + * use the block transfer engine to move kernel + * memory from src to dest using the assigned mode. + * + * Paramaters: + * src - physical address of the transfer source. + * dest - physical address of the transfer destination. + * len - number of bytes to transfer from source to dest. + * mode - hardware defined. See reference information + * for IBCT0/1 in the SGI documentation. + * bteBlock - kernel virtual address of a temporary + * buffer used during unaligned transfers. + * + * NOTE: If the source, dest, and len are all cache line aligned, + * then it would be _FAR_ preferrable to use bte_copy instead. + */ +bte_result_t +bte_unaligned_copy(u64 src, u64 dest, u64 len, u64 mode, char *bteBlock) +{ + int destFirstCacheOffset; + u64 headBteSource; + u64 headBteLen; + u64 headBcopySrcOffset; + u64 headBcopyDest; + u64 headBcopyLen; + u64 footBteSource; + u64 footBteLen; + u64 footBcopyDest; + u64 footBcopyLen; + bte_result_t rv; + + if (len == 0) { + return (BTE_SUCCESS); + } + + headBcopySrcOffset = src & L1_CACHE_MASK; + destFirstCacheOffset = dest & L1_CACHE_MASK; + + /* + * At this point, the transfer is broken into + * (up to) three sections. The first section is + * from the start address to the first physical + * cache line, the second is from the first physical + * cache line to the last complete cache line, + * and the third is from the last cache line to the + * end of the buffer. The first and third sections + * are handled by bte copying into a temporary buffer + * and then bcopy'ing the necessary section into the + * final location. The middle section is handled with + * a standard bte copy. + * + * One nasty exception to the above rule is when the + * source and destination are not symetrically + * mis-aligned. If the source offset from the first + * cache line is different from the destination offset, + * we make the first section be the entire transfer + * and the bcopy the entire block into place. + */ + if (headBcopySrcOffset == destFirstCacheOffset) { + + /* + * Both the source and destination are the same + * distance from a cache line boundary so we can + * use the bte to transfer the bulk of the + * data. + */ + headBteSource = src & ~L1_CACHE_MASK; + headBcopyDest = dest; + if (headBcopySrcOffset) { + headBcopyLen = + (len > + (L1_CACHE_BYTES - + headBcopySrcOffset) ? 
L1_CACHE_BYTES + - headBcopySrcOffset : len); + headBteLen = L1_CACHE_BYTES; + } else { + headBcopyLen = 0; + headBteLen = 0; + } + + if (len > headBcopyLen) { + footBcopyLen = + (len - headBcopyLen) & L1_CACHE_MASK; + footBteLen = L1_CACHE_BYTES; + + footBteSource = src + len - footBcopyLen; + footBcopyDest = dest + len - footBcopyLen; + + if (footBcopyDest == + (headBcopyDest + headBcopyLen)) { + /* + * We have two contigous bcopy + * blocks. Merge them. + */ + headBcopyLen += footBcopyLen; + headBteLen += footBteLen; + } else if (footBcopyLen > 0) { + rv = bte_copy(footBteSource, + __pa(bteBlock), + footBteLen, mode, NULL); + if (rv != BTE_SUCCESS) { + return (rv); + } + + + memcpy(__va(footBcopyDest), + (char *)bteBlock, footBcopyLen); + } + } else { + footBcopyLen = 0; + footBteLen = 0; + } + + if (len > (headBcopyLen + footBcopyLen)) { + /* now transfer the middle. */ + rv = bte_copy((src + headBcopyLen), + (dest + + headBcopyLen), + (len - headBcopyLen - + footBcopyLen), mode, NULL); + if (rv != BTE_SUCCESS) { + return (rv); + } + + } + } else { + + + /* + * The transfer is not symetric, we will + * allocate a buffer large enough for all the + * data, bte_copy into that buffer and then + * bcopy to the destination. + */ + + /* Add the leader from source */ + headBteLen = len + (src & L1_CACHE_MASK); + /* Add the trailing bytes from footer. */ + headBteLen += + L1_CACHE_BYTES - (headBteLen & L1_CACHE_MASK); + headBteSource = src & ~L1_CACHE_MASK; + headBcopySrcOffset = src & L1_CACHE_MASK; + headBcopyDest = dest; + headBcopyLen = len; + } + + if (headBcopyLen > 0) { + rv = bte_copy(headBteSource, + __pa(bteBlock), headBteLen, mode, NULL); + if (rv != BTE_SUCCESS) { + return (rv); + } + + memcpy(__va(headBcopyDest), ((char *)bteBlock + + headBcopySrcOffset), + headBcopyLen); + } + return (BTE_SUCCESS); +} diff -Nru a/arch/ia64/sn/kernel/irq.c b/arch/ia64/sn/kernel/irq.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/irq.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,343 @@ +/* + * Platform dependent support for SGI SN1 + * + * Copyright (c) 2000-2002 Silicon Graphics, Inc. All Rights Reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. 
+ * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#ifdef ajmtestintr +#include +#include +#endif /* ajmtestintr */ +#include +#include +#include +#include +#include +#include + +int irq_to_bit_pos(int irq); + + + +static unsigned int +sn1_startup_irq(unsigned int irq) +{ + return(0); +} + +static void +sn1_shutdown_irq(unsigned int irq) +{ +} + +static void +sn1_disable_irq(unsigned int irq) +{ +} + +static void +sn1_enable_irq(unsigned int irq) +{ +} + +static void +sn1_ack_irq(unsigned int irq) +{ +#ifdef CONFIG_IA64_SGI_SN1 + int bit = -1; + unsigned long long intpend_val; + int subnode; +#endif +#ifdef CONFIG_IA64_SGI_SN2 + unsigned long event_occurred, mask = 0; +#endif + int nasid; + + irq = irq & 0xff; + nasid = smp_physical_node_id(); +#ifdef CONFIG_IA64_SGI_SN1 + subnode = cpuid_to_subnode(smp_processor_id()); + if (irq == SGI_UART_IRQ) { + intpend_val = REMOTE_HUB_PI_L(nasid, subnode, PI_INT_PEND0); + if (intpend_val & (1L<> 8; + + irq = irq & 0xff; + + return(_sn1_irq_desc[cpu] + irq); +} + +u8 +sn1_irq_to_vector(u8 irq) { + return(irq & 0xff); +} + +unsigned int +sn1_local_vector_to_irq(u8 vector) { + return ( (smp_processor_id() << 8) + vector); +} + +int +sn1_valid_irq(u8 irq) { + + return( ((irq & 0xff) < NR_IRQS) && ((irq >> 8) < NR_CPUS) ); +} + +void *kmalloc(size_t, int); + +void +sn1_irq_init (void) +{ + int i; + irq_desc_t *base_desc = _irq_desc; + + for (i=IA64_FIRST_DEVICE_VECTOR; i 118) bit = 118; + +#ifdef CONFIG_IA64_SGI_SN1 + if (bit >= GFX_INTR_A && bit <= CC_PEND_B) { + return SGI_UART_IRQ; + } +#endif + + return bit + BIT_TO_IRQ; +} + +int +irq_to_bit_pos(int irq) { +#define IRQ_TO_BIT 64 + int bit = irq - IRQ_TO_BIT; + + return bit; +} + +#ifdef ajmtestintr + +#include +struct timer_list intr_test_timer; +int intr_test_icount[NR_IRQS]; +struct intr_test_reg_struct { + pcibr_soft_t pcibr_soft; + int slot; +}; +struct intr_test_reg_struct intr_test_registered[NR_IRQS]; + +void +intr_test_handle_timer(unsigned long data) { + int i; + bridge_t *bridge; + + for (i=0;ibs_intr[intr_test_registered[i].slot].bsi_xtalk_intr; + /* send interrupt */ + bridge = pcibr_soft->bs_base; + bridge->b_force_always[intr_test_registered[i].slot].intr = 1; + } + } + del_timer(&intr_test_timer); + intr_test_timer.expires = jiffies + HZ/100; + add_timer(&intr_test_timer); +} + +void +intr_test_set_timer(void) { + intr_test_timer.expires = jiffies + HZ/100; + intr_test_timer.function = intr_test_handle_timer; + add_timer(&intr_test_timer); +} + +void +intr_test_register_irq(int irq, pcibr_soft_t pcibr_soft, int slot) { + irq = irq & 0xff; + intr_test_registered[irq].pcibr_soft = pcibr_soft; + intr_test_registered[irq].slot = slot; +} + +void +intr_test_handle_intr(int irq, void *junk, struct pt_regs *morejunk) { + intr_test_icount[irq]++; + printk("RECEIVED %d INTERRUPTS ON IRQ %d\n",intr_test_icount[irq], irq); +} +#endif /* ajmtestintr */ diff -Nru a/arch/ia64/sn/kernel/llsc4.c b/arch/ia64/sn/kernel/llsc4.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/llsc4.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,1037 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + 
* License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "llsc4.h" + + +#ifdef STANDALONE +#include "lock.h" +#endif + +#ifdef INTTEST +static int inttest=0; +#endif + +#ifdef IA64_SEMFIX_INSN +#undef IA64_SEMFIX_INSN +#endif +#ifdef IA64_SEMFIX +#undef IA64_SEMFIX +#endif +# define IA64_SEMFIX_INSN +# define IA64_SEMFIX "" + +#define NOLOCK 0xdead +#define BGUARD(linei) (0xbbbb0000 | (linei)); +#define EGUARD(linei) (0xeeee0000 | (linei)); +#define GUARDLINE(v) ((v)&0xffff) + +/* + * Test parameter table for AUTOTEST + */ +typedef struct { + int passes; + int linecount; + int linepad; +} autotest_table_t; + +autotest_table_t autotest_table[] = { + {50000000, 2, 0x2b4 }, + {50000000, 16, 0, }, + {50000000, 16, 4, }, + {50000000, 128, 0x44 }, + {50000000, 128, 0x84 }, + {50000000, 128, 0x200 }, + {50000000, 128, 0x204 }, + {50000000, 128, 0x2b4 }, + {50000000, 2, 8*MB+0x2b4 }, + {50000000, 16, 8*MB+0 }, + {50000000, 16, 8*MB+4 }, + {50000000, 128, 8*MB+0x44 }, + {50000000, 128, 8*MB+0x84 }, + {50000000, 128, 8*MB+0x200 }, + {50000000, 128, 8*MB+0x204 }, + {50000000, 128, 8*MB+0x2b4 }, + {0}}; + +/* + * Array of virtual addresses available for test purposes. + */ + +typedef struct { + long vstart; + long vend; + long nextaddr; + long nextinit; + int wrapcount; +} memmap_t; + +#define MAPCHUNKS 128 +memmap_t memmap[MAPCHUNKS]; +int memmapx=0; + +typedef struct { + void *addr; + long data[16]; + long data_fc[16]; +} capture_line_t; + +typedef struct { + int size; + void *blockaddr; + void *shadaddr; + long blockdata[48]; + long shaddata[48]; + long blockdata_fc[48]; + long shaddata_fc[48]; + long synerr; +} capture_t; + +/* + * PORTING NOTE: revisit this statement. On hardware we put mbase at 0 and + * the rest of the tables have to start at 1MB to skip PROM tables. 
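+ * THREADPRIVATE(t) below therefore places each thread's private block at
+ * mbase + 1MB plus t times the threadprivate size rounded up to a 512-byte
+ * multiple.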
+ */ +#define THREADPRIVATE(t) ((threadprivate_t*)(((long)mbase)+1024*1024+t*((sizeof(threadprivate_t)+511)/512*512))) + +#define k_capture mbase->sk_capture +#define k_go mbase->sk_go +#define k_linecount mbase->sk_linecount +#define k_passes mbase->sk_passes +#define k_napticks mbase->sk_napticks +#define k_stop_on_error mbase->sk_stop_on_error +#define k_verbose mbase->sk_verbose +#define k_threadprivate mbase->sk_threadprivate +#define k_blocks mbase->sk_blocks +#define k_iter_msg mbase->sk_iter_msg +#define k_vv mbase->sk_vv +#define k_linepad mbase->sk_linepad +#define k_options mbase->sk_options +#define k_testnumber mbase->sk_testnumber +#define k_currentpass mbase->sk_currentpass + +static long blocks[MAX_LINECOUNT]; /* addresses of data blocks */ +static control_t *mbase; +static vint initialized=0; + +static unsigned int ran_conf_llsc(int); +static int rerr(capture_t *, char *, void *, void *, int, int, int, int, int, int); +static void dumpline(void *, char *, char *, void *, void *, int); +static int checkstop(int, int, uint); +static void spin(int); +static void capturedata(capture_t *, uint, void *, void *, int); +static int randn(uint max, uint *seed); +static uint zrandom (uint *zranseed); +static int set_lock(uint *, uint); +static int clr_lock(uint *, uint); +static void Speedo(void); + +int autotest_enabled=0; +static int llsctest_number=-1; +static int errstop_enabled=0; +static int fail_enabled=0; +static int l4_opt=0; +static int selective_trigger=0; +static int dump_block_addrs_opt=0; +static lock_t errlock=NOLOCK; +static private_t init_private[LLSC_MAXCPUS]; + +static int __init autotest_enable(char *str) +{ + autotest_enabled = 1; + return 1; +} +static int __init set_llscblkadr(char *str) +{ + dump_block_addrs_opt = 1; + return 1; +} +static int __init set_llscselt(char *str) +{ + selective_trigger = 1; + return 1; +} +static int __init set_llsctest(char *str) +{ + llsctest_number = simple_strtol(str, &str, 10); + if (llsctest_number < 0 || llsctest_number > 15) + llsctest_number = -1; + return 1; +} +static int __init set_llscerrstop(char *str) +{ + errstop_enabled = 1; + return 1; +} +static int __init set_llscfail(char *str) +{ + fail_enabled = 8; + return 1; +} +static int __init set_llscl4(char *str) +{ + l4_opt = 1; + return 1; +} + +static void print_params(void) +{ + printk ("********* Enter AUTOTEST facility on master cpu *************\n"); + printk (" Test options:\n"); + printk (" llsctest=\t%d\tTest number to run (all = -1)\n", llsctest_number); + printk (" llscerrstop \t%s\tStop on error\n", errstop_enabled ? "on" : "off"); + printk (" llscfail \t%s\tForce a failure to test the trigger & error messages\n", fail_enabled ? "on" : "off"); + printk (" llscselt \t%s\tSelective triger on failures\n", selective_trigger ? "on" : "off"); + printk (" llscblkadr \t%s\tDump data block addresses\n", dump_block_addrs_opt ? "on" : "off"); + printk (" llscl4 \t%s\tRun only tests that evict from L4\n", l4_opt ? 
"on" : "off"); + printk (" SEMFIX: %s\n", IA64_SEMFIX); + printk ("\n"); +} +__setup("autotest", autotest_enable); +__setup("llsctest=", set_llsctest); +__setup("llscerrstop", set_llscerrstop); +__setup("llscfail", set_llscfail); +__setup("llscselt", set_llscselt); +__setup("llscblkadr", set_llscblkadr); +__setup("llscl4", set_llscl4); + + + +static inline int +set_lock(uint *lock, uint id) +{ + uint old; + old = cmpxchg_acq(lock, NOLOCK, id); + return (old == NOLOCK); +} + +static inline int +clr_lock(uint *lock, uint id) +{ + uint old; + old = cmpxchg_rel(lock, id, NOLOCK); + return (old == id); +} + +static inline void +init_lock(uint *lock) +{ + *lock = NOLOCK; +} + +/*------------------------------------------------------------------------+ +| Routine : ran_conf_llsc - ll/sc shared data test | +| Description: This test checks the coherency of shared data | ++------------------------------------------------------------------------*/ +static unsigned int +ran_conf_llsc(int thread) +{ + private_t pval; + share_t sval, sval2; + uint vv, linei, slinei, sharei, pass; + long t; + lock_t lockpat; + share_t *sharecopy; + long verbose, napticks, passes, linecount, lcount; + dataline_t *linep, *slinep; + int s, seed; + threadprivate_t *tp; + uint iter_msg, iter_msg_i=0; + int vv_mask; + int correct_errors; + int errs=0; + int stillbad; + capture_t capdata; + private_t *privp; + share_t *sharep; + + + linecount = k_linecount; + napticks = k_napticks; + verbose = k_verbose; + passes = k_passes; + iter_msg = k_iter_msg; + seed = (thread + 1) * 647; + tp = THREADPRIVATE(thread); + vv_mask = (k_vv>>((thread%16)*4)) & 0xf; + correct_errors = k_options&0xff; + + memset (&capdata, 0, sizeof(capdata)); + for (linei=0; lineiprivate[linei] = thread; + + for (pass = 1; passes == 0 || pass < passes; pass++) { + lockpat = (pass & 0x0fffffff) + (thread <<28); + if (lockpat == NOLOCK) + continue; + tp->threadpasses = pass; + if (checkstop(thread, pass, lockpat)) + return 0; + iter_msg_i++; + if (iter_msg && iter_msg_i > iter_msg) { + printk("Thread %d, Pass %d\n", thread, pass); + iter_msg_i = 0; + } + lcount = 0; + + /* + * Select line to perform operations on. + */ + linei = randn(linecount, &seed); + sharei = randn(2, &seed); + slinei = (linei + (linecount/2))%linecount; /* I dont like this - fix later */ + + linep = (dataline_t *)blocks[linei]; + slinep = (dataline_t *)blocks[slinei]; + if (sharei == 0) + sharecopy = &slinep->share0; + else + sharecopy = &slinep->share1; + + + vv = randn(4, &seed); + if ((vv_mask & (1<private[thread]; + sharep = &linep->share[sharei]; + + switch(vv) { + case 0: + /* Read and verify private count on line. */ + pval = *privp; + if (verbose) + printk("Line:%3d, Thread:%d:%d. Val: %x\n", linei, thread, vv, tp->private[linei]); + if (pval != tp->private[linei]) { + capturedata(&capdata, pass, privp, NULL, sizeof(*privp)); + stillbad = (*privp != tp->private[linei]); + if (rerr(&capdata, "Private count", linep, slinep, thread, pass, linei, tp->private[linei], pval, stillbad)) { + return 1; + } + if (correct_errors) { + tp->private[linei] = *privp; + } + errs++; + } + break; + + case 1: + /* Read, verify, and increment private count on line. */ + pval = *privp; + if (verbose) + printk("Line:%3d, Thread:%d:%d. 
Val: %x\n", linei, thread, vv, tp->private[linei]); + if (pval != tp->private[linei]) { + capturedata(&capdata, pass, privp, NULL, sizeof(*privp)); + stillbad = (*privp != tp->private[linei]); + if (rerr(&capdata, "Private count & inc", linep, slinep, thread, pass, linei, tp->private[linei], pval, stillbad)) { + return 1; + } + errs++; + } + pval = (pval==255) ? 0 : pval+1; + *privp = pval; + tp->private[linei] = pval; + break; + + case 2: + /* Lock line, read and verify shared data. */ + if (verbose) + printk("Line:%3d, Thread:%d:%d. Val: %x\n", linei, thread, vv, *sharecopy); + lcount = 0; + while (LOCK(sharei) != 1) { + if (checkstop(thread, pass, lockpat)) + return 0; + if (lcount++>1000000) { + capturedata(&capdata, pass, LOCKADDR(sharei), NULL, sizeof(lock_t)); + stillbad = (GETLOCK(sharei) != 0); + rerr(&capdata, "Shared data lock", linep, slinep, thread, pass, linei, 0, GETLOCK(sharei), stillbad); + return 1; + } + if ((lcount&0x3fff) == 0) + udelay(1000); + } + + sval = *sharep; + sval2 = *sharecopy; + if (pass > 12 && thread == 0 && fail_enabled == 1) + sval++; + if (sval != sval2) { + capturedata(&capdata, pass, sharep, sharecopy, sizeof(*sharecopy)); + stillbad = (*sharep != *sharecopy); + if (!stillbad && *sharep != sval && *sharecopy == sval2) + stillbad = 2; + if (rerr(&capdata, "Shared data", linep, slinep, thread, pass, linei, sval2, sval, stillbad)) { + return 1; + } + if (correct_errors) + *sharep = *sharecopy; + errs++; + } + + + if ( (s=UNLOCK(sharei)) != 1) { + capturedata(&capdata, pass, LOCKADDR(sharei), NULL, 4); + stillbad = (GETLOCK(sharei) != lockpat); + if (rerr(&capdata, "Shared data unlock", linep, slinep, thread, pass, linei, lockpat, GETLOCK(sharei), stillbad)) + return 1; + if (correct_errors) + ZEROLOCK(sharei); + errs++; + } + break; + + case 3: + /* Lock line, read and verify shared data, modify shared data. */ + if (verbose) + printk("Line:%3d, Thread:%d:%d. 
Val: %x\n", linei, thread, vv, *sharecopy); + lcount = 0; + while (LOCK(sharei) != 1) { + if (checkstop(thread, pass, lockpat)) + return 0; + if (lcount++>1000000) { + capturedata(&capdata, pass, LOCKADDR(sharei), NULL, sizeof(lock_t)); + stillbad = (GETLOCK(sharei) != 0); + rerr(&capdata, "Shared data lock & inc", linep, slinep, thread, pass, linei, 0, GETLOCK(sharei), stillbad); + return 1; + } + if ((lcount&0x3fff) == 0) + udelay(1000); + } + sval = *sharep; + sval2 = *sharecopy; + if (sval != sval2) { + capturedata(&capdata, pass, sharep, sharecopy, sizeof(*sharecopy)); + stillbad = (*sharep != *sharecopy); + if (!stillbad && *sharep != sval && *sharecopy == sval2) + stillbad = 2; + if (rerr(&capdata, "Shared data & inc", linep, slinep, thread, pass, linei, sval2, sval, stillbad)) { + return 1; + } + errs++; + } + + *sharep = lockpat; + *sharecopy = lockpat; + + + if ( (s=UNLOCK(sharei)) != 1) { + capturedata(&capdata, pass, LOCKADDR(sharei), NULL, 4); + stillbad = (GETLOCK(sharei) != lockpat); + if (rerr(&capdata, "Shared data & inc unlock", linep, slinep, thread, pass, linei, thread, GETLOCK(sharei), stillbad)) + return 1; + if (correct_errors) + ZEROLOCK(sharei); + errs++; + } + break; + } + } + + return (errs > 0); +} + +static void +trigger_la(long val) +{ + long *p; + + p = (long*)0xc0000a0001000020L; /* PI_CPU_NUM */ + *p = val; +} + +static long +getsynerr(void) +{ + long err, *errp; + + errp = (long*)0xc0000e0000000340L; /* SYN_ERR */ + err = *errp; + if (err) + *errp = -1L; + return (err & ~0x60); +} + +static int +rerr(capture_t *cap, char *msg, void *lp, void *slp, int thread, int pass, int badlinei, int exp, int found, int stillbad) +{ + int cpu, i, linei; + long synerr; + int selt; + + + selt = selective_trigger && stillbad > 1 && + memcmp(cap->blockdata, cap->blockdata_fc, 128) != 0 && + memcmp(cap->shaddata, cap->shaddata_fc, 128) == 0; + if (selt) { + trigger_la(pass); + } else if (selective_trigger) { + k_go = ST_STOP; + return k_stop_on_error;; + } + + spin(1); + i = 100; + while (i && set_lock(&errlock, 1) != 1) { + spin(1); + i--; + } + printk ("\nDataError!: %-20s, test %ld, thread %d, line:%d, pass %d (0x%x), time %ld expected:%x, found:%x\n", + msg, k_testnumber, thread, badlinei, pass, pass, jiffies, exp, found); + + dumpline (lp, "Corrupted data", "D ", cap->blockaddr, cap->blockdata, cap->size); +#ifdef ZZZ + if (memcmp(cap->blockdata, cap->blockdata_fc, 128)) + dumpline (lp, "Corrupted data", "DF", cap->blockaddr, cap->blockdata_fc, cap->size); +#endif + + if (cap->shadaddr) { + dumpline (slp, "Shadow data", "S ", cap->shadaddr, cap->shaddata, cap->size); +#ifdef ZZZ + if (memcmp(cap->shaddata, cap->shaddata_fc, 128)) + dumpline (slp, "Shadow data", "SF", cap->shadaddr, cap->shaddata_fc, cap->size); +#endif + } + + printk("Threadpasses: "); + for (cpu=0,i=0; cputhreadpasses) { + if (i && (i%8) == 0) + printk("\n : "); + printk(" %d:0x%x", cpu, k_threadprivate[cpu]->threadpasses); + i++; + } + printk("\n"); + + for (linei=0; lineiguard1); + g2linei = GUARDLINE(linep->guard2); + g1err = (g1linei != linei); + g2err = (g2linei != linei); + sh0err = (linep->share[0] != slinep->share0); + sh1err = (linep->share[1] != slinep->share1); + + if (g1err || g2err || sh0err || sh1err) { + printk("Line 0x%lx (%03d), %sG1 0x%lx (%03d), %sG2 0x%lx (%03d), %sSH0 %08x (%08x), %sSH1 %08x (%08x)\n", + blocks[linei], linei, + g1err ? "*" : " ", blocks[g1linei], g1linei, + g2err ? "*" : " ", blocks[g2linei], g2linei, + sh0err ? 
"*" : " ", linep->share[0], slinep->share0, + sh1err ? "*" : " ", linep->share[1], slinep->share1); + + + } + } + + printk("\nData was %sfixed by flushcache\n", (stillbad == 1 ? "**** NOT **** " : " ")); + synerr = getsynerr(); + if (synerr) + printk("SYNERR: Thread %d, Synerr: 0x%lx\n", thread, synerr); + spin(2); + printk("\n\n"); + clr_lock(&errlock, 1); + + if (errstop_enabled) { + local_irq_disable(); + while(1); + } + return k_stop_on_error; +} + + +static void +dumpline(void *lp, char *str1, char *str2, void *addr, void *data, int size) +{ + long *p; + int i, off; + + printk("%s at 0x%lx, size %d, block starts at 0x%lx\n", str1, (long)addr, size, (long)lp); + p = (long*) data; + for (i=0; i<48; i++, p++) { + if (i%8 == 0) printk("%2s", i==16 ? str2 : " "); + printk(" %016lx", *p); + if ((i&7)==7) printk("\n"); + } + printk(" "); + off = (((long)addr) ^ size) & 63L; + for (i=0; i=off) ? "--" : " "); + if ((i%8) == 7) + printk(" "); + } + + off = ((long)addr) & 127; + printk(" (line %d)\n", 2+off/64+1); +} + + +static int +randn(uint max, uint *seedp) +{ + if (max == 1) + return(0); + else + return((int)(zrandom(seedp)>>10) % max); +} + + +static int +checkstop(int thread, int pass, uint lockpat) +{ + long synerr; + + if (k_go == ST_RUN) + return 0; + if (k_go == ST_STOP) + return 1; + + if (errstop_enabled) { + local_irq_disable(); + while(1); + } + synerr = getsynerr(); + spin(2); + if (k_go == ST_STOP) + return 1; + if (synerr) + printk("SYNERR: Thread %d, Synerr: 0x%lx\n", thread, synerr); + return 1; +} + + +static void +spin(int j) +{ + udelay(j * 500000); +} + +static void +capturedata(capture_t *cap, uint pass, void *blockaddr, void *shadaddr, int size) +{ + + if (!selective_trigger) + trigger_la (pass); + + memcpy (cap->blockdata, CACHEALIGN(blockaddr)-128, 3*128); + if (shadaddr) + memcpy (cap->shaddata, CACHEALIGN(shadaddr)-128, 3*128); + + if (k_stop_on_error) { + k_go = ST_ERRSTOP; + } + + cap->size = size; + cap->blockaddr = blockaddr; + cap->shadaddr = shadaddr; + + asm volatile ("fc %0" :: "r"(blockaddr) : "memory"); + ia64_sync_i(); + ia64_srlz_d(); + memcpy (cap->blockdata_fc, CACHEALIGN(blockaddr)-128, 3*128); + + if (shadaddr) { + asm volatile ("fc %0" :: "r"(shadaddr) : "memory"); + ia64_sync_i(); + ia64_srlz_d(); + memcpy (cap->shaddata_fc, CACHEALIGN(shadaddr)-128, 3*128); + } +} + +int zranmult = 0x48c27395; + +static uint +zrandom (uint *seedp) +{ + *seedp = (*seedp * zranmult) & 0x7fffffff; + return (*seedp); +} + + +void +set_autotest_params(void) +{ + static int testnumber=-1; + + if (llsctest_number >= 0) { + testnumber = llsctest_number; + } else { + testnumber++; + if (autotest_table[testnumber].passes == 0) { + testnumber = 0; + dump_block_addrs_opt = 0; + } + } + if (testnumber == 0 && l4_opt) testnumber = 9; + + k_passes = autotest_table[testnumber].passes; + k_linepad = autotest_table[testnumber].linepad; + k_linecount = autotest_table[testnumber].linecount; + k_testnumber = testnumber; + + if (IS_RUNNING_ON_SIMULATOR()) { + printk ("llsc start test %ld\n", k_testnumber); + k_passes = 1000; + } +} + + +static void +set_leds(int errs) +{ + unsigned char leds=0; + + /* + * Leds are: + * ppppeee- + * where + * pppp = test number + * eee = error count but top bit is stick + */ + + leds = ((errs&7)<<1) | ((k_testnumber&15)<<4) | (errs ? 
0x08 : 0); + set_led_bits(leds, LED_MASK_AUTOTEST); +} + +static void +setup_block_addresses(void) +{ + int i, stride, memmapi; + dataline_t *dp; + long *ip, *ipe; + + + stride = k_linepad + sizeof(dataline_t); + memmapi = 0; + for (i=0; i= memmap[memmapi].vend) { + memmap[memmapi].wrapcount++; + memmap[memmapi].nextaddr = memmap[memmapi].vstart + + memmap[memmapi].wrapcount * sizeof(dataline_t); + } + + ip = (long*)((memmap[memmapi].nextinit+7)&~7); + ipe = (long*)(memmap[memmapi].nextaddr+2*sizeof(dataline_t)+8); + while(ip <= ipe && ip < ((long*)memmap[memmapi].vend-8)) + *ip++ = (long)ip; + memmap[memmapi].nextinit = (long) ipe; + dp->guard1 = BGUARD(i); + dp->guard2 = EGUARD(i); + dp->lock[0] = dp->lock[1] = NOLOCK; + dp->share[0] = dp->share0 = 0x1111; + dp->share[1] = dp->share1 = 0x2222; + memcpy(dp->private, init_private, LLSC_MAXCPUS*sizeof(private_t)); + + + if (stride > 16384) { + memmapi++; + if (memmapi == memmapx) + memmapi = 0; + } + } + +} + +static void +dump_block_addrs(void) +{ + int i; + + printk("LLSC TestNumber %ld\n", k_testnumber); + + for (i=0; ithreadstate == TS_KILLED) { + set_led_bits(LED_MASK_AUTOTEST, LED_MASK_AUTOTEST); + while(1); + } + k_threadprivate[cpuid]->threadstate = state; +} + +static int +build_mem_map(unsigned long start, unsigned long end, void *arg) +{ + long lstart; + long align = 8*MB; + + /* + * HACK - skip the kernel on the first node + */ + + printk ("LLSC memmap: start 0x%lx, end 0x%lx, (0x%lx - 0x%lx)\n", + start, end, (long) virt_to_page(start), (long) virt_to_page(end-PAGE_SIZE)); + + if (memmapx >= MAPCHUNKS) + return 0; + while (end > start && (PageReserved(virt_to_page(end-PAGE_SIZE)) || virt_to_page(end-PAGE_SIZE)->count.counter > 0)) + end -= PAGE_SIZE; + + lstart = end; + while (lstart > start && (!PageReserved(virt_to_page(lstart-PAGE_SIZE)) && virt_to_page(lstart-PAGE_SIZE)->count.counter == 0)) + lstart -= PAGE_SIZE; + + lstart = (lstart + align -1) /align * align; + end = end / align * align; + if (lstart >= end) + return 0; + printk (" memmap: start 0x%lx, end 0x%lx\n", lstart, end); + + memmap[memmapx].vstart = lstart; + memmap[memmapx].vend = end; + memmapx++; + return 0; +} + +void int_test(void); + +int +llsc_main (int cpuid, long mbasex) +{ + int i, cpu, is_master, repeatcnt=0; + unsigned int preverr=0, errs=0, pass=0; + int automode=0; + +#ifdef INTTEST + if (inttest) + int_test(); +#endif + + if (!autotest_enabled) + return 0; + +#ifdef CONFIG_SMP + is_master = !smp_processor_id(); +#else + is_master = 1; +#endif + + + if (is_master) { + print_params(); + if(!IS_RUNNING_ON_SIMULATOR()) + spin(10); + mbase = (control_t*)mbasex; + k_currentpass = 0; + k_go = ST_IDLE; + k_passes = DEF_PASSES; + k_napticks = DEF_NAPTICKS; + k_stop_on_error = DEF_STOP_ON_ERROR; + k_verbose = DEF_VERBOSE; + k_linecount = DEF_LINECOUNT; + k_iter_msg = DEF_ITER_MSG; + k_vv = DEF_VV; + k_linepad = DEF_LINEPAD; + k_blocks = (void*)blocks; + efi_memmap_walk(build_mem_map, 0); + +#ifdef CONFIG_IA64_SGI_AUTOTEST + automode = 1; +#endif + + for (i=0; i 5) { + set_autotest_params(); + repeatcnt = 0; + } + } else { + while (k_go == ST_IDLE); + } + + k_go = ST_INIT; + if (k_linecount > MAX_LINECOUNT) k_linecount = MAX_LINECOUNT; + k_linecount = k_linecount & ~1; + setup_block_addresses(); + if (!preverr && dump_block_addrs_opt) + dump_block_addrs(); + + k_currentpass = pass++; + k_go = ST_RUN; + if (fail_enabled) + fail_enabled--; + + } else { + while (k_go != ST_RUN || k_currentpass != pass); + pass++; + } + + + set_leds(errs); + 
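	/*
	 * At this point every cpu has agreed on the current pass: the
	 * master walks k_go through ST_IDLE -> ST_INIT -> ST_RUN and
	 * publishes k_currentpass, while the workers spin until both
	 * match.  A rough sketch of the handshake above (comment only,
	 * nothing here executes):
	 *
	 *	master:                        worker:
	 *	  k_go = ST_INIT;                while (k_go != ST_RUN ||
	 *	  setup_block_addresses();              k_currentpass != pass)
	 *	  k_currentpass = pass++;                ;
	 *	  k_go = ST_RUN;                 pass++;
	 */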
set_thread_state(cpuid, TS_RUNNING); + + errs += ran_conf_llsc(cpuid); + preverr = (k_go == ST_ERRSTOP); + + set_leds(errs); + set_thread_state(cpuid, TS_STOPPED); + + if (is_master) { + Speedo(); + for (i=0, cpu=0; cputhreadstate == TS_RUNNING) { + i++; + if (i == 10000) { + k_go = ST_STOP; + printk (" llsc master stopping test number %ld\n", k_testnumber); + } + if (i > 100000) { + k_threadprivate[cpu]->threadstate = TS_KILLED; + printk (" llsc: master killing cpuid %d, running test number %ld\n", + cpu, k_testnumber); + } + udelay(1000); + } + } + } + + goto loop; +} + + +static void +Speedo(void) +{ + static int i = 0; + + switch (++i%4) { + case 0: + printk("|\b"); + break; + case 1: + printk("\\\b"); + break; + case 2: + printk("-\b"); + break; + case 3: + printk("/\b"); + break; + } +} + +#ifdef INTTEST + +/* ======================================================================================================== + * + * Some test code to verify that interrupts work + * + * Add the following to the arch/ia64/kernel/smp.c after the comment "Reschedule callback" + * if (zzzprint_resched) printk(" cpu %d got interrupt\n", smp_processor_id()); + * + * Enable the code in arch/ia64/sn/sn1/smp.c to print sending IPIs. + * + */ + +static int __init set_inttest(char *str) +{ + inttest = 1; + autotest_enabled = 1; + + return 1; +} + +__setup("inttest=", set_inttest); + +int zzzprint_resched=0; + +void +int_test() { + int mycpu, cpu; + static volatile int control_cpu=0; + + mycpu = smp_processor_id(); + zzzprint_resched = 2; + + printk("Testing cross interrupts\n"); + + while (control_cpu != smp_num_cpus) { + if (mycpu == cpu_logical_map(control_cpu)) { + for (cpu=0; cpulock[(i)] +#define LOCK(i) set_lock(LOCKADDR(i), lockpat) +#define UNLOCK(i) clr_lock(LOCKADDR(i), lockpat) +#define GETLOCK(i) *LOCKADDR(i) +#define ZEROLOCK(i) init_lock(LOCKADDR(i)) + +#define CACHEALIGN(a) ((char*)((long)(a) & ~127L)) + +typedef uint guard_t; +typedef uint lock_t; +typedef uint share_t; +typedef uchar private_t; + +typedef struct { + guard_t guard1; + lock_t lock[2]; + share_t share[2]; + private_t private[LLSC_MAXCPUS]; + share_t share0; + share_t share1; + guard_t guard2; +} dataline_t ; + + +#define LINEPAD k_linepad +#define LINESTRIDE (((sizeof(dataline_t)+CACHELINE-1)/CACHELINE)*CACHELINE + LINEPAD) + + +typedef struct { + vint threadstate; + uint threadpasses; + private_t private[MAX_LINECOUNT]; +} threadprivate_t; + +typedef struct { + vlong sk_go; /* 0=idle, 1=init, 2=run */ + long sk_linecount; + long sk_passes; + long sk_napticks; + long sk_stop_on_error; + long sk_verbose; + long sk_iter_msg; + long sk_vv; + long sk_linepad; + long sk_options; + long sk_testnumber; + vlong sk_currentpass; + void *sk_blocks; + threadprivate_t *sk_threadprivate[LLSC_MAXCPUS]; +} control_t; + +/* Run state (k_go) constants */ +#define ST_IDLE 0 +#define ST_INIT 1 +#define ST_RUN 2 +#define ST_STOP 3 +#define ST_ERRSTOP 4 + + +/* Threadstate constants */ +#define TS_STOPPED 0 +#define TS_RUNNING 1 +#define TS_KILLED 2 + + + +int llsc_main (int cpuid, long mbasex); + diff -Nru a/arch/ia64/sn/kernel/machvec.c b/arch/ia64/sn/kernel/machvec.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/machvec.c Tue Mar 12 13:58:14 2002 @@ -0,0 +1,18 @@ +#define MACHVEC_PLATFORM_NAME sn1 +#include +#include +#include +void* +sn1_mk_io_addr_MACRO + +dma_addr_t +sn1_pci_map_single_MACRO + +int +sn1_pci_map_sg_MACRO + +unsigned long +sn1_virt_to_phys_MACRO + +void * +sn1_phys_to_virt_MACRO diff -Nru 
a/arch/ia64/sn/kernel/mca.c b/arch/ia64/sn/kernel/mca.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/mca.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,278 @@ +/* + * File: mca.c + * Purpose: SN specific MCA code. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All Rights Reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. + * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +static char *shub_mmr_names[] = { + "sh_event_occurred", + "sh_first_error", + "sh_event_overflow", + +/* PI */ + "sh_pi_first_error", + "sh_pi_error_summary", + "sh_pi_error_overflow", + +/* PI HW */ + "sh_pi_error_detail_1", + "sh_pi_error_detail_2", + "sh_pi_hw_time_stamp", + +/* PI UCE */ + "sh_pi_uncorrected_detail_1", + "sh_pi_uncorrected_detail_2", + "sh_pi_uncorrected_detail_3", + "sh_pi_uncorrected_detail_4", + "sh_pi_uncor_time_stamp", + +/* PI CE */ + "sh_pi_corrected_detail_1", + "sh_pi_corrected_detail_2", + "sh_pi_corrected_detail_3", + "sh_pi_corrected_detail_4", + "sh_pi_cor_time_stamp", + +/* MD */ + "sh_mem_error_summary", + "sh_mem_error_overflow", +/* MD HW */ + "sh_misc_err_hdr_upper", + "sh_misc_err_hdr_lower", + "sh_md_dqlp_mmr_xperr_val", + "sh_md_dqlp_mmr_yperr_val", + "sh_md_dqrp_mmr_xperr_val", + "sh_md_dqrp_mmr_yperr_val", + "sh_md_hw_time_stamp", + +/* MD UCE */ + "sh_dir_uc_err_hdr_lower", + "sh_dir_uc_err_hdr_upper", + "sh_md_dqlp_mmr_xuerr1", + "sh_md_dqlp_mmr_xuerr2", + "sh_md_dqlp_mmr_yuerr1", + "sh_md_dqlp_mmr_yuerr2", + "sh_md_dqrp_mmr_xuerr1", + "sh_md_dqrp_mmr_xuerr2", + "sh_md_dqrp_mmr_yuerr1", + "sh_md_dqrp_mmr_yuerr2", + "sh_md_uncor_time_stamp", + +/* MD CE */ + "sh_dir_cor_err_hdr_lower", + "sh_dir_cor_err_hdr_upper", + "sh_md_dqlp_mmr_xcerr1", + "sh_md_dqlp_mmr_xcerr2", + "sh_md_dqlp_mmr_ycerr1", + "sh_md_dqlp_mmr_ycerr2", + "sh_md_dqrp_mmr_xcerr1", + "sh_md_dqrp_mmr_xcerr2", + "sh_md_dqrp_mmr_ycerr1", + "sh_md_dqrp_mmr_ycerr2", + "sh_md_cor_time_stamp", + +/* MD CE, UCE */ + "sh_md_dqls_mmr_xamopw_err", + "sh_md_dqrs_mmr_yamopw_err", + +/* XN */ + "sh_xn_error_summary", + "sh_xn_first_error", + "sh_xn_error_overflow", + +/* XN HW */ + "sh_xniilb_error_summary", + 
"sh_xniilb_first_error", + "sh_xniilb_error_overflow", + "sh_xniilb_error_detail_1", + "sh_xniilb_error_detail_2", + "sh_xniilb_error_detail_3", + + "sh_ni0_error_summary_1", + "sh_ni0_first_error_1", + "sh_ni0_error_overflow_1", + + "sh_ni0_error_summary_2", + "sh_ni0_first_error_2", + "sh_ni0_error_overflow_2", + "sh_ni0_error_detail_1", + "sh_ni0_error_detail_2", + "sh_ni0_error_detail_3", + + "sh_ni1_error_summary_1", + "sh_ni1_first_error_1", + "sh_ni1_error_overflow_1", + + "sh_ni1_error_summary_2", + "sh_ni1_first_error_2", + "sh_ni1_error_overflow_2", + + "sh_ni1_error_detail_1", + "sh_ni1_error_detail_2", + "sh_ni1_error_detail_3", + + "sh_xn_hw_time_stamp", + +/* XN HW & UCE & SBE */ + "sh_xnpi_error_summary", + "sh_xnpi_first_error", + "sh_xnpi_error_overflow", + "sh_xnpi_error_detail_1", + + "sh_xnmd_error_summary", + "sh_xnmd_first_error", + "sh_xnmd_error_overflow", + "sh_xnmd_ecc_err_report", + "sh_xnmd_error_detail_1", + +/* XN UCE */ + "sh_xn_uncorrected_detail_1", + "sh_xn_uncorrected_detail_2", + "sh_xn_uncorrected_detail_3", + "sh_xn_uncorrected_detail_4", + "sh_xn_uncor_time_stamp", + +/* XN CE */ + "sh_xn_corrected_detail_1", + "sh_xn_corrected_detail_2", + "sh_xn_corrected_detail_3", + "sh_xn_corrected_detail_4", + "sh_xn_cor_time_stamp", + +/* LB HW */ + "sh_lb_error_summary", + "sh_lb_first_error", + "sh_lb_error_overflow", + "sh_lb_error_detail_1", + "sh_lb_error_detail_2", + "sh_lb_error_detail_3", + "sh_lb_error_detail_4", + "sh_lb_error_detail_5", + "sh_junk_error_status", +}; + +void +sal_log_plat_print(int header_len, int sect_len, u8 *p_data, prfunc_t prfunc) +{ + sal_log_plat_info_t *sh_info = (sal_log_plat_info_t *) p_data; + u64 *mmr_val = (u64 *)&(sh_info->shub_state); + char **mmr_name = shub_mmr_names; + int mmr_count = sizeof(sal_log_shub_state_t)>>3; + + while(mmr_count) { + if(*mmr_val) { + prfunc("%-40s: %#016lx\n",*mmr_name, *mmr_val); + } + mmr_name++; + mmr_val++; + mmr_count--; + } + +} + +void +sn_cpei_handler(int irq, void *devid, struct pt_regs *regs) { + + struct ia64_sal_retval isrv; +// this function's sole purpose is to call SAL when we receive +// a CE interrupt from SHUB or when the timer routine decides +// we need to call SAL to check for CEs. 
+ + // CALL SAL_LOG_CE + SAL_CALL(isrv, SN_SAL_LOG_CE, irq, 0, 0, 0, 0, 0, 0); +} + +#include + +#define CPEI_INTERVAL (HZ/100) +struct timer_list sn_cpei_timer; +void sn_init_cpei_timer(void); + +void +sn_cpei_timer_handler(unsigned long dummy) { + sn_cpei_handler(-1, NULL, NULL); + del_timer(&sn_cpei_timer); + sn_cpei_timer.expires = jiffies + CPEI_INTERVAL; + add_timer(&sn_cpei_timer); +} + +void +sn_init_cpei_timer() { + sn_cpei_timer.expires = jiffies + CPEI_INTERVAL; + sn_cpei_timer.function = sn_cpei_timer_handler; + add_timer(&sn_cpei_timer); +} + +#ifdef ajmtestceintr + +struct timer_list sn_ce_timer; + +void +sn_ce_timer_handler(long dummy) { + unsigned long *pi_ce_error_inject_reg = 0xc00000092fffff00; + + *pi_ce_error_inject_reg = 0x0000000000000100; + del_timer(&sn_ce_timer); + sn_ce_timer.expires = jiffies + CPEI_INTERVAL; + add_timer(&sn_ce_timer); +} + +sn_init_ce_timer() { + sn_ce_timer.expires = jiffies + CPEI_INTERVAL; + sn_ce_timer.function = sn_ce_timer_handler; + add_timer(&sn_ce_timer); +} +#endif /* ajmtestceintr */ diff -Nru a/arch/ia64/sn/kernel/misctest.c b/arch/ia64/sn/kernel/misctest.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/misctest.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,122 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include + + +extern int autotest_enabled; +int mcatest=0; + + + +/* + * mcatest + * 1 = expected MCA + * 2 = unexpected MCA + * 3 = expected MCA + unexpected MCA + * 4 = INIT + * 5 = speculative load to garbage memory address + * 6 = speculative load with ld8.s (needs poison hack in PROM) + * 7 = speculative load from mis-predicted branch (needs poison hack in PROM) + */ +static int __init set_mcatest(char *str) +{ + get_option(&str, &mcatest); + return 1; +} + +__setup("mcatest=", set_mcatest); + +void +sgi_mcatest(void) +{ + if (mcatest == 1 || mcatest == 3) { + long *p, result, adrs[] = {0xc0000a000f021004UL, 0xc0000a000f026004UL, 0x800000000, 0x500000, 0}; + long size[] = {1,2,4,8}; + int r, i, j; + p = (long*)0xc000000000000000UL; + ia64_fc(p); + *p = 0x0123456789abcdefL; + for (i=0; i<5; i++) { + for (j=0; j<4; j++) { + printk("Probing 0x%lx, size %ld\n", adrs[i], size[j]); + result = -1; + r = ia64_sn_probe_io_slot (adrs[i], size[j], &result); + printk(" status %d, val 0x%lx\n", r, result); + } + } + } + if (mcatest == 2 || mcatest == 3) { + void zzzmca(int, int, int); + printk("About to cause unexpected MCA\n"); + zzzmca(mcatest, 0x32dead, 0x33dead); + } + if (mcatest == 4) { + long *p; + int delivery_mode = 5; + printk("About to try to cause an INIT on cpu 0\n"); + p = (long*)((0xc0000a0000000000LL | ((long)get_nasid())<<33) | 0x1800080); + *p = (delivery_mode << 8); + udelay(10000); + printk("Returned from INIT\n"); + } + if (mcatest == 5) { + int zzzspec(long); + int i; + long flags, dcr, res, val, addr=0xff00000000UL; + + dcr = ia64_get_dcr(); + for (i=0; i<5; i++) { + printk("Default DCR: 0x%lx\n", ia64_get_dcr()); + printk("zzzspec: 0x%x\n", zzzspec(addr)); + ia64_set_dcr(0); + printk("New DCR: 0x%lx\n", ia64_get_dcr()); + printk("zzzspec: 0x%x\n", zzzspec(addr)); + ia64_set_dcr(dcr); + res = ia64_sn_probe_io_slot(0xff00000000UL, 8, &val); + printk("zzzspec: probe %ld, 0x%lx\n", res, val); + ia64_clear_ic(flags); + 
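			/*
			 * With interruption collection disabled, the ia64_itc()
			 * below drops a 256MB data translation for the garbage
			 * physical address 0xff00000000 (virtual
			 * 0xe00000ff00000000) into the TC, giving the zzzspec()
			 * speculative loads in this loop a mapping to hit while
			 * the DCR is toggled between 0 and its saved value.
			 */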
ia64_itc(0x2, 0xe00000ff00000000UL, + pte_val(mk_pte_phys(0xff00000000UL, + __pgprot(__DIRTY_BITS|_PAGE_PL_0|_PAGE_AR_RW))), _PAGE_SIZE_256M); + local_irq_restore(flags); + ia64_srlz_i (); + } + + } + if (mcatest == 6) { + int zzzspec(long); + int i; + long dcr, addr=0xe000000008000000UL; + + dcr = ia64_get_dcr(); + for (i=0; i<5; i++) { + printk("zzzspec: 0x%x\n", zzzspec(addr)); + ia64_set_dcr(0); + } + ia64_set_dcr(dcr); + } + if (mcatest == 7) { + int zzzspec2(long, long); + int i; + long addr=0xe000000008000000UL; + long addr2=0xe000000007000000UL; + + for (i=0; i<5; i++) { + printk("zzzspec2\n"); + zzzspec2(addr, addr2); + } + } +} diff -Nru a/arch/ia64/sn/kernel/probe.c b/arch/ia64/sn/kernel/probe.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/probe.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,81 @@ +/* + * Platform dependent support for IO probing. + * + * Copyright (c) 2000-2002 Silicon Graphics, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. + * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#include + +/** + * ia64_sn_probe_io_slot - test a memory location for readability + * @paddr: physical address to probe + * @size: number bytes to read (1,2,4,8) + * @data_ptr: address to store value read by probe (-1 returned if probe fails) + * + * This function will probe a physical address to determine if + * the address can be read. If reading the address causes a BUS + * error, an error is returned. If the probe succeeds, the contents + * of the memory location is returned. 
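 *
 * A minimal calling sketch (illustrative only; the address shown is one
 * of the probe targets used by sgi_mcatest() elsewhere in this patch):
 *
 *	long val = -1;
 *	int status;
 *
 *	status = ia64_sn_probe_io_slot(0xc0000a000f021004UL, 8, &val);
 *	printk("status %d, val 0x%lx\n", status, val);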
+ * + * Return values: + * 0 - probe successful + * 1 - probe failed (generated MCA) + * 2 - Bad arg + * <0 - PAL error + */ +u64 +ia64_sn_probe_io_slot(long paddr, long size, void *data_ptr) +{ + struct ia64_sal_retval isrv; + + SAL_CALL(isrv, SN_SAL_PROBE, paddr, size, 0, 0, 0, 0, 0); + + if (data_ptr) { + switch (size) { + case 1: + *((u8*)data_ptr) = (u8)isrv.v0; + break; + case 2: + *((u16*)data_ptr) = (u16)isrv.v0; + break; + case 4: + *((u32*)data_ptr) = (u32)isrv.v0; + break; + case 8: + *((u64*)data_ptr) = (u64)isrv.v0; + break; + default: + isrv.status = 2; + } + } + + return isrv.status; +} diff -Nru a/arch/ia64/sn/kernel/setup.c b/arch/ia64/sn/kernel/setup.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/setup.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,437 @@ +/* + * Copyright (C) 1999,2001-2002 Silicon Graphics, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. + * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#ifdef CONFIG_IA64_MCA +#include +#endif +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef CONFIG_IA64_SGI_SN2 +#include +#endif + +extern void bte_init_node (nodepda_t *, cnodeid_t); +extern void bte_init_cpu (void); + +long sn_rtc_cycles_per_second; + +/* + * This is the address of the RRegs in the HSpace of the global + * master. It is used by a hack in serial.c (serial_[in|out], + * printk.c (early_printk), and kdb_io.c to put console output on that + * node's Bedrock UART. It is initialized here to 0, so that + * early_printk won't try to access the UART before + * master_node_bedrock_address is properly calculated. + */ +u64 master_node_bedrock_address = 0UL; + +static void sn_init_pdas(void); + +extern struct irq_desc *_sn1_irq_desc[]; + +#if defined(CONFIG_IA64_SGI_SN1) +extern synergy_da_t *Synergy_da_indr[]; +#endif + +static nodepda_t *nodepdaindr[MAX_COMPACT_NODES]; + +#ifdef CONFIG_IA64_SGI_SN2 +irqpda_t *irqpdaindr[NR_CPUS]; +#endif /* CONFIG_IA64_SGI_SN2 */ + + +/* + * The format of "screen_info" is strange, and due to early i386-setup + * code. 
This is just enough to make the console code think we're on a + * VGA color display. + */ +struct screen_info sn1_screen_info = { + orig_x: 0, + orig_y: 0, + orig_video_mode: 3, + orig_video_cols: 80, + orig_video_ega_bx: 3, + orig_video_lines: 25, + orig_video_isVGA: 1, + orig_video_points: 16 +}; + +/* + * This is here so we can use the CMOS detection in ide-probe.c to + * determine what drives are present. In theory, we don't need this + * as the auto-detection could be done via ide-probe.c:do_probe() but + * in practice that would be much slower, which is painful when + * running in the simulator. Note that passing zeroes in DRIVE_INFO + * is sufficient (the IDE driver will autodetect the drive geometry). + */ +char drive_info[4*16]; + +/** + * sn1_map_nr - return the mem_map entry for a given kernel address + * @addr: kernel address to query + * + * Finds the mem_map entry for the kernel address given. Used by + * virt_to_page() (asm-ia64/page.h), among other things. + */ +unsigned long +sn1_map_nr (unsigned long addr) +{ + return MAP_NR_DISCONTIG(addr); +} + +/** + * early_sn1_setup - early setup routine for SN platforms + * + * Sets up an intial console to aid debugging. Intended primarily + * for bringup, it's only called if %BRINGUP and %CONFIG_IA64_EARLY_PRINTK + * are turned on. See start_kernel() in init/main.c. + */ +#if defined(CONFIG_IA64_EARLY_PRINTK) +void __init +early_sn1_setup(void) +{ +#if defined(CONFIG_SERIAL_SGI_L1_PROTOCOL) + if ( IS_RUNNING_ON_SIMULATOR() ) +#endif + { +#ifdef CONFIG_IA64_SGI_SN2 + master_node_bedrock_address = (u64)REMOTE_HUB(get_nasid(), SH_JUNK_BUS_UART0); +#else + master_node_bedrock_address = (u64)REMOTE_HSPEC_ADDR(get_nasid(), 0); +#endif + printk(KERN_DEBUG "early_sn1_setup: setting master_node_bedrock_address to 0x%lx\n", master_node_bedrock_address); + } +} +#endif /* CONFIG_IA64_EARLY_PRINTK */ + +#ifdef NOT_YET_CONFIG_IA64_MCA +extern void ia64_mca_cpe_int_handler (int cpe_irq, void *arg, struct pt_regs *ptregs); +static struct irqaction mca_cpe_irqaction = { + handler: ia64_mca_cpe_int_handler, + flags: SA_INTERRUPT, + name: "cpe_hndlr" +}; +#endif +#ifdef CONFIG_IA64_MCA +extern int platform_irq_list[]; +#endif + +extern nasid_t master_nasid; + +/** + * sn1_setup - SN platform setup routine + * @cmdline_p: kernel command line + * + * Handles platform setup for SN machines. This includes determining + * the RTC frequency (via a SAL call), initializing secondary CPUs, and + * setting up per-node data areas. The console is also initialized here. + */ +void __init +sn1_setup(char **cmdline_p) +{ + long status, ticks_per_sec, drift; + int i; + +#if defined(CONFIG_SERIAL) && !defined(CONFIG_SERIAL_SGI_L1_PROTOCOL) + struct serial_struct req; +#endif + + master_nasid = get_nasid(); + (void)get_console_nasid(); + + status = ia64_sal_freq_base(SAL_FREQ_BASE_REALTIME_CLOCK, &ticks_per_sec, &drift); + if (status != 0 || ticks_per_sec < 100000) + printk(KERN_WARNING "unable to determine platform RTC clock frequency\n"); + else + sn_rtc_cycles_per_second = ticks_per_sec; + + for (i=0;ithread.flags |= IA64_THREAD_FPEMU_NOPRINT; +} + +/** + * sn_init_pdas - setup node data areas + * + * One time setup for Node Data Area. Called by sn1_setup(). + */ +void +sn_init_pdas(void) +{ + cnodeid_t cnode; + + /* + * Make sure that the PDA fits entirely in the same page as the + * cpu_data area. + */ + if ((PDAADDR&~PAGE_MASK)+sizeof(pda_t) > PAGE_SIZE) + panic("overflow of cpu_data page"); + + /* + * Allocate & initalize the nodepda for each node. 
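	 * Each nodepda (and, on SN1, the pair of synergy_da_t structures)
	 * is carved out of that node's own bootmem with
	 * alloc_bootmem_node(), so the per-node data lands in node-local
	 * memory.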
+ */ + for (cnode=0; cnode < numnodes; cnode++) { + nodepdaindr[cnode] = alloc_bootmem_node(NODE_DATA(cnode), sizeof(nodepda_t)); + memset(nodepdaindr[cnode], 0, sizeof(nodepda_t)); + +#if defined(CONFIG_IA64_SGI_SN1) + Synergy_da_indr[cnode * 2] = (synergy_da_t *) alloc_bootmem_node(NODE_DATA(cnode), sizeof(synergy_da_t)); + Synergy_da_indr[cnode * 2 + 1] = (synergy_da_t *) alloc_bootmem_node(NODE_DATA(cnode), sizeof(synergy_da_t)); + memset(Synergy_da_indr[cnode * 2], 0, sizeof(synergy_da_t)); + memset(Synergy_da_indr[cnode * 2 + 1], 0, sizeof(synergy_da_t)); +#endif + } + + /* + * Now copy the array of nodepda pointers to each nodepda. + */ + for (cnode=0; cnode < numnodes; cnode++) + memcpy(nodepdaindr[cnode]->pernode_pdaindr, nodepdaindr, sizeof(nodepdaindr)); + + + /* + * Set up IO related platform-dependent nodepda fields. + * The following routine actually sets up the hubinfo struct + * in nodepda. + */ + for (cnode = 0; cnode < numnodes; cnode++) { + init_platform_nodepda(nodepdaindr[cnode], cnode); + bte_init_node (nodepdaindr[cnode], cnode); + } +} + +/** + * sn_cpu_init - initialize per-cpu data areas + * @cpuid: cpuid of the caller + * + * Called during cpu initialization on each cpu as it starts. + * Currently, initializes the per-cpu data area for SNIA. + * Also sets up a few fields in the nodepda. Also known as + * platform_cpu_init() by the ia64 machvec code. + */ +void __init +sn_cpu_init(void) +{ + int cpuid; + int cpuphyid; + int nasid; + int slice; + int cnode; + + /* + * The boot cpu makes this call again after platform initialization is + * complete. + */ + if (nodepdaindr[0] == NULL) + return; + + cpuid = smp_processor_id(); + cpuphyid = ((ia64_get_lid() >> 16) & 0xffff); + nasid = cpu_physical_id_to_nasid(cpuphyid); + cnode = nasid_to_cnodeid(nasid); + slice = cpu_physical_id_to_slice(cpuphyid); + + pda.p_nodepda = nodepdaindr[cnode]; + pda.led_address = (long*) (LED0 + (slice<active_cpu_count == 1) + nodepda->node_first_cpu = cpuid; + +#ifdef CONFIG_IA64_SGI_SN1 + { + int synergy; + synergy = cpu_physical_id_to_synergy(cpuphyid); + pda.p_subnodepda = &nodepdaindr[cnode]->snpda[synergy]; + } +#endif + +#ifdef CONFIG_IA64_SGI_SN2 + + /* + * We must use different memory allocators for first cpu (bootmem + * allocator) than for the other cpus (regular allocator). + */ + if (cpuid == 0) + irqpdaindr[cpuid] = alloc_bootmem_node(NODE_DATA(cpuid_to_cnodeid(cpuid)),sizeof(irqpda_t)); + else + irqpdaindr[cpuid] = page_address(alloc_pages_node(local_cnodeid(), GFP_KERNEL, get_order(sizeof(irqpda_t)))); + memset(irqpdaindr[cpuid], 0, sizeof(irqpda_t)); + pda.p_irqpda = irqpdaindr[cpuid]; + pda.pio_write_status_addr = (volatile unsigned long *)LOCAL_MMR_ADDR((slice < 2 ? SH_PIO_WRITE_STATUS_0 : SH_PIO_WRITE_STATUS_1 ) ); +#endif + +#ifdef CONFIG_IA64_SGI_SN1 + pda.bedrock_rev_id = (volatile unsigned long *) LOCAL_HUB(LB_REV_ID); + if (cpuid_to_synergy(cpuid)) + /* CPU B */ + pda.pio_write_status_addr = (volatile unsigned long *) GBL_PERF_B_ADDR; + else + /* CPU A */ + pda.pio_write_status_addr = (volatile unsigned long *) GBL_PERF_A_ADDR; +#endif + + + bte_init_cpu(); +} + + +/** + * cnodeid_to_cpuid - convert a cnode to a cpuid of a cpu on the node. + * @cnode: node to get a cpuid from + * + * Returns -1 if no cpus exist on the node. + * NOTE:BRINGUP ZZZ This is NOT a good way to find cpus on the node. + * Need a better way!! 
+ */ +int +cnodeid_to_cpuid(int cnode) { + int cpu; + + for (cpu = 0; cpu < smp_num_cpus; cpu++) + if (cpuid_to_cnodeid(cpu) == cnode) + break; + + if (cpu == smp_num_cpus) + cpu = -1; + + return cpu; +} diff -Nru a/arch/ia64/sn/kernel/sn1/Makefile b/arch/ia64/sn/kernel/sn1/Makefile --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/sn1/Makefile Tue Mar 12 13:58:16 2002 @@ -0,0 +1,51 @@ +# +# ia64/platform/sn/sn1/Makefile +# +# Copyright (C) 1999,2001-2002 Silicon Graphics, Inc. All rights reserved. +# +# This program is free software; you can redistribute it and/or modify it +# under the terms of version 2 of the GNU General Public License +# as published by the Free Software Foundation. +# +# This program is distributed in the hope that it would be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +# +# Further, this software is distributed without any warranty that it is +# free of the rightful claim of any third person regarding infringement +# or the like. Any license provided herein, whether implied or +# otherwise, applies only to this software file. Patent licenses, if +# any, provided herein do not apply to combinations of this program with +# other software, or any other product whatsoever. +# +# You should have received a copy of the GNU General Public +# License along with this program; if not, write the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. +# +# Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, +# Mountain View, CA 94043, or: +# +# http://www.sgi.com +# +# For further information regarding this notice, see: +# +# http://oss.sgi.com/projects/GenInfo/NoticeExplan +# + + +EXTRA_CFLAGS := -DLITTLE_ENDIAN + +.S.s: + $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -o $*.s $< +.S.o: + $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $*.o $< + +all: sn1.a + +O_TARGET = sn1.a + +obj-y = cache.o error.o iomv.o synergy.o sn1_smp.o + +clean:: + +include $(TOPDIR)/Rules.make diff -Nru a/arch/ia64/sn/kernel/sn1/cache.c b/arch/ia64/sn/kernel/sn1/cache.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/sn1/cache.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,81 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + * + */ + +#include +#include +#include +#include +#include +#include + +#ifndef MB +#define MB (1024*1024) +#endif + +/* + * Lock for protecting SYN_TAG_DISABLE_WAY. + * Consider making this a per-FSB lock. + */ +static spinlock_t flush_lock = SPIN_LOCK_UNLOCKED; + +/** + * sn_flush_all_caches - flush a range of addresses from all caches (incl. L4) + * @flush_addr: identity mapped region 7 address to start flushing + * @bytes: number of bytes to flush + * + * Flush a range of addresses from all caches including L4. All addresses + * fully or partially contained within @flush_addr to @flush_addr + @bytes + * are flushed from the all caches. + */ +void +sn_flush_all_caches(long flush_addr, long bytes) +{ + ulong addr, baddr, eaddr, bitbucket; + int way, alias; + + /* + * Because of the way synergy implements "fc", this flushes the + * data from all caches on all cpus & L4's on OTHER FSBs. It also + * flushes both cpus on the local FSB. It does NOT flush it from + * the local FSB. 
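	 * The local FSB therefore still has to be flushed explicitly: the
	 * code below takes flush_lock (which protects SYN_TAG_DISABLE_WAY)
	 * and works through the local synergy L4 one way at a time, over
	 * the baddr/eaddr window derived from the DIMM layout described in
	 * the next comment.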
+ */ + flush_icache_range(flush_addr, flush_addr+bytes); + + /* + * Memory DIMMs are a minimum of 256MB and start on 256MB + * boundaries. Convert the start address to an address + * that is between +0MB & +128 of the same DIMM. + * Then add 8MB to skip the uncached MinState areas if the address + * is on the master node. + */ + if (bytes > SYNERGY_L4_BYTES_PER_WAY) + bytes = SYNERGY_L4_BYTES_PER_WAY; + baddr = TO_NODE(smp_physical_node_id(), PAGE_OFFSET + (flush_addr & (128*MB-1)) + 8*MB); + eaddr = (baddr+bytes+SYNERGY_BLOCK_SIZE-1) & ~(SYNERGY_BLOCK_SIZE-1); + baddr = baddr & ~(SYNERGY_BLOCK_SIZE-1); + + /* + * Now flush the local synergy. + */ + spin_lock(&flush_lock); + for(way=0; way +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +/** + * snia_error_intr_handler - handle SN specific error interrupts + * @irq: error interrupt received + * @devid: device causing the interrupt + * @pt_regs: saved register state + * + * This routine is called when certain interrupts occur on SN systems. + * It will either recover from the situations that caused the interrupt + * or panic. + */ +void +snia_error_intr_handler(int irq, void *devid, struct pt_regs *pt_regs) +{ + unsigned long long intpend_val; + unsigned long long bit; + + switch (irq) { + case SGI_UART_IRQ: + /* + * This isn't really an error interrupt. We're just + * here because we have to do something with them. + * This is probably wrong, and this code will be + * removed. + */ + intpend_val = LOCAL_HUB_L(PI_INT_PEND0); + if ( (bit = ~(1L< +#include +#include +#include +#include + +static inline void * +sn1_io_addr(unsigned long port) +{ + if (!IS_RUNNING_ON_SIMULATOR()) { + return( (void *) (port | __IA64_UNCACHED_OFFSET)); + } else { + unsigned long io_base; + unsigned long addr; + + /* + * word align port, but need more than 10 bits + * for accessing registers in bedrock local block + * (so we don't do port&0xfff) + */ + if ((port >= 0x1f0 && port <= 0x1f7) || + port == 0x3f6 || port == 0x3f7) { + io_base = __IA64_UNCACHED_OFFSET | 0x00000FFFFC000000; + addr = io_base | ((port >> 2) << 12) | (port & 0xfff); + } else { + addr = __ia64_get_io_port_base() | ((port >> 2) << 2); + } + return(void *) addr; + } +} + +/** + * sn1_inb - read a byte from a port + * @port: port to read from + * + * Reads a byte from @port and returns it to the caller. + */ +unsigned int +sn1_inb (unsigned long port) +{ +return __ia64_inb ( port ); +} + +/** + * sn1_inw - read a word from a port + * @port: port to read from + * + * Reads a word from @port and returns it to the caller. + */ +unsigned int +sn1_inw (unsigned long port) +{ +return __ia64_inw ( port ); +} + +/** + * sn1_inl - read a word from a port + * @port: port to read from + * + * Reads a word from @port and returns it to the caller. + */ +unsigned int +sn1_inl (unsigned long port) +{ +return __ia64_inl ( port ); +} + +/** + * sn1_outb - write a byte to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outb (unsigned char val, unsigned long port) +{ +return __ia64_outb ( val, port ); +} + +/** + * sn1_outw - write a word to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outw (unsigned short val, unsigned long port) +{ +return __ia64_outw ( val, port ); +} + +/** + * sn1_outl - write a word to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. 
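 *
 * Two flavours of these accessors live in this file: the ones here
 * simply defer to the generic __ia64_in*()/__ia64_out*() routines,
 * while the SN1_IOPORTS-guarded variants further down (see the
 * #endif marker) translate the port number through sn1_io_addr() and
 * perform a volatile MMIO access followed by __ia64_mf_a() to order it.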
+ */ +void +sn1_outl (unsigned int val, unsigned long port) +{ +return __ia64_outl ( val, port ); +} + +/** + * sn1_inb - read a byte from a port + * @port: port to read from + * + * Reads a byte from @port and returns it to the caller. + */ +unsigned int +sn1_inb (unsigned long port) +{ + volatile unsigned char *addr = sn1_io_addr(port); + unsigned char ret; + + ret = *addr; + __ia64_mf_a(); + return ret; +} + +/** + * sn1_inw - read a word from a port + * 2port: port to read from + * + * Reads a word from @port and returns it to the caller. + */ +unsigned int +sn1_inw (unsigned long port) +{ + volatile unsigned short *addr = sn1_io_addr(port); + unsigned short ret; + + ret = *addr; + __ia64_mf_a(); + return ret; +} + +/** + * sn1_inl - read a word from a port + * @port: port to read from + * + * Reads a word from @port and returns it to the caller. + */ +unsigned int +sn1_inl (unsigned long port) +{ + volatile unsigned int *addr = sn1_io_addr(port); + unsigned int ret; + + ret = *addr; + __ia64_mf_a(); + return ret; +} + +/** + * sn1_outb - write a byte to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outb (unsigned char val, unsigned long port) +{ + volatile unsigned char *addr = sn1_io_addr(port); + + *addr = val; + __ia64_mf_a(); +} + +/** + * sn1_outw - write a word to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outw (unsigned short val, unsigned long port) +{ + volatile unsigned short *addr = sn1_io_addr(port); + + *addr = val; + __ia64_mf_a(); +} + +/** + * sn1_outl - write a word to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outl (unsigned int val, unsigned long port) +{ + volatile unsigned int *addr = sn1_io_addr(port); + + *addr = val; + __ia64_mf_a(); +} +#endif /* SN1_IOPORTS */ + +void +sn_mmiob () +{ + PIO_FLUSH(); +} diff -Nru a/arch/ia64/sn/kernel/sn1/sn1_smp.c b/arch/ia64/sn/kernel/sn1/sn1_smp.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/sn1/sn1_smp.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,449 @@ +/* + * SN1 Platform specific SMP Support + * + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. 
+ * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * The following structure is used to pass params thru smp_call_function + * to other cpus for flushing TLB ranges. + */ +typedef struct { + unsigned long start; + unsigned long end; + unsigned long nbits; + unsigned int rid; + atomic_t unfinished_count; +} ptc_params_t; + +#define NUMPTC 512 + +static ptc_params_t ptcParamArray[NUMPTC] __attribute__((__aligned__(128))); + +/* use separate cache lines on ptcParamsNextByCpu to avoid false sharing */ +static ptc_params_t *ptcParamsNextByCpu[NR_CPUS*16] __attribute__((__aligned__(128))); +static volatile ptc_params_t *ptcParamsEmpty __cacheline_aligned; + +/*REFERENCED*/ +static spinlock_t ptcParamsLock __cacheline_aligned = SPIN_LOCK_UNLOCKED; + +static int ptcInit = 0; +#ifdef PTCDEBUG +static int ptcParamsAllBusy = 0; /* debugging/statistics */ +static int ptcCountBacklog = 0; +static int ptcBacklog[NUMPTC+1]; +static char ptcParamsCounts[NR_CPUS][NUMPTC] __attribute__((__aligned__(128))); +static char ptcParamsResults[NR_CPUS][NUMPTC] __attribute__((__aligned__(128))); +#endif + +/* + * Make smp_send_flush_tlbsmp_send_flush_tlb() a weak reference, + * so that we get a clean compile with the ia64 patch without the + * actual SN1 specific code in arch/ia64/kernel/smp.c. + */ +extern void smp_send_flush_tlb (void) __attribute((weak)); + +/* + * The following table/struct is for remembering PTC coherency domains. It + * is also used to translate sapicid into cpuids. We dont want to start + * cpus unless we know their cache domain. + */ +#ifdef PTC_NOTYET +sn_sapicid_info_t sn_sapicid_info[NR_CPUS]; +#endif + +/** + * sn1_ptc_l_range - purge local translation cache + * @start: start of virtual address range + * @end: end of virtual address range + * @nbits: specifies number of bytes to purge per instruction (num = 1<<(nbits & 0xfc)) + * + * Purges the range specified from the local processor's translation cache + * (as opposed to the translation registers). Note that more than the specified + * range *may* be cleared from the cache by some processors. + * + * This is probably not good enough, but I don't want to try to make it better + * until I get some statistics on a running system. At a minimum, we should only + * send IPIs to 1 processor in each TLB domain & have it issue a ptc.g on it's + * own FSB. Also, we only have to serialize per FSB, not globally. + * + * More likely, we will have to do some work to reduce the frequency of calls to + * this routine. + */ +static inline void +sn1_ptc_l_range(unsigned long start, unsigned long end, unsigned long nbits) +{ + do { + __asm__ __volatile__ ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory"); + start += (1UL << nbits); + } while (start < end); + ia64_srlz_d(); +} + +/** + * sn1_received_flush_tlb - cpu tlb flush routine + * + * Flushes the TLB of a given processor. 
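 *
 * This is the consumer side of the ptcParamArray ring: starting at this
 * cpu's ptcParamsNextByCpu slot it replays each queued
 * {start, end, nbits, rid} purge with sn1_ptc_l_range(), temporarily
 * switching the region register when the queued rid differs from the
 * current one, and atomically decrements unfinished_count so the
 * producer in sn1_global_tlb_purge() knows when the slot can be reused.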
+ */ +void +sn1_received_flush_tlb(void) +{ + unsigned long start, end, nbits; + unsigned int rid, saved_rid; + int cpu = smp_processor_id(); + int result; + ptc_params_t *ptcParams; + + ptcParams = ptcParamsNextByCpu[cpu*16]; + if (ptcParams == ptcParamsEmpty) + return; + + do { + start = ptcParams->start; + saved_rid = (unsigned int) ia64_get_rr(start); + end = ptcParams->end; + nbits = ptcParams->nbits; + rid = ptcParams->rid; + + if (saved_rid != rid) { + ia64_set_rr(start, (unsigned long)rid); + ia64_srlz_d(); + } + + sn1_ptc_l_range(start, end, nbits); + + if (saved_rid != rid) + ia64_set_rr(start, (unsigned long)saved_rid); + + ia64_srlz_i(); + + result = atomic_dec(&ptcParams->unfinished_count); +#ifdef PTCDEBUG + { + int i = ptcParams-&ptcParamArray[0]; + ptcParamsResults[cpu][i] = (char) result; + ptcParamsCounts[cpu][i]++; + } +#endif /* PTCDEBUG */ + + if (++ptcParams == &ptcParamArray[NUMPTC]) + ptcParams = &ptcParamArray[0]; + + } while (ptcParams != ptcParamsEmpty); + + ptcParamsNextByCpu[cpu*16] = ptcParams; +} + +/** + * sn1_global_tlb_purge - flush a translation cache range on all processors + * @start: start of virtual address range to flush + * @end: end of virtual address range + * @nbits: specifies number of bytes to purge per instruction (num = 1<<(nbits & 0xfc)) + * + * Flushes the translation cache of all processors from @start to @end. + */ +void +sn1_global_tlb_purge (unsigned long start, unsigned long end, unsigned long nbits) +{ + ptc_params_t *params; + ptc_params_t *next; + unsigned long irqflags; +#ifdef PTCDEBUG + ptc_params_t *nextnext; + int backlog = 0; +#endif + + if (smp_num_cpus == 1) { + sn1_ptc_l_range(start, end, nbits); + return; + } + + if (in_interrupt()) { + /* + * If at interrupt level and cannot get spinlock, + * then do something useful by flushing own tlbflush queue + * so as to avoid a possible deadlock. + */ + while (!spin_trylock(&ptcParamsLock)) { + local_irq_save(irqflags); + sn1_received_flush_tlb(); + local_irq_restore(irqflags); + udelay(10); /* take it easier on the bus */ + } + } else { + spin_lock(&ptcParamsLock); + } + + if (!ptcInit) { + int cpu; + ptcInit = 1; + memset(ptcParamArray, 0, sizeof(ptcParamArray)); + ptcParamsEmpty = &ptcParamArray[0]; + for (cpu=0; cpu= &ptcParamArray[0]) { + if (atomic_read(&ptr->unfinished_count) == 0) + break; + ++backlog; + } + + if (backlog) { + /* check the end of the array */ + ptr = &ptcParamArray[NUMPTC]; + while (--ptr > params) { + if (atomic_read(&ptr->unfinished_count) == 0) + break; + ++backlog; + } + } + ptcBacklog[backlog]++; + } +#endif /* PTCDEBUG */ + + /* wait for the next entry to clear...should be rare */ + if (atomic_read(&next->unfinished_count) > 0) { +#ifdef PTCDEBUG + ptcParamsAllBusy++; + + if (atomic_read(&nextnext->unfinished_count) == 0) { + if (atomic_read(&next->unfinished_count) > 0) { + panic("\nnonzero next zero nextnext %lx %lx\n", + (long)next, (long)nextnext); + } + } +#endif + + /* it could be this cpu that is behind */ + local_irq_save(irqflags); + sn1_received_flush_tlb(); + local_irq_restore(irqflags); + + /* now we know it's not this cpu, so just wait */ + while (atomic_read(&next->unfinished_count) > 0) { + barrier(); + } + } + + params->start = start; + params->end = end; + params->nbits = nbits; + params->rid = (unsigned int) ia64_get_rr(start); + atomic_set(¶ms->unfinished_count, smp_num_cpus); + + /* The atomic_set above can hit memory *after* the update + * to ptcParamsEmpty below, which opens a timing window + * that other cpus can squeeze into! 
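	 *
	 * Hence the mb() below: the unfinished_count (and the other fields
	 * of this entry) must be globally visible before ptcParamsEmpty is
	 * advanced, otherwise a cpu draining its queue could see the new
	 * entry with stale contents or a stale count.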
+ */ + mb(); + + /* everything is ready to process: + * -- global lock is held + * -- new entry + 1 is free + * -- new entry is set up + * so now: + * -- update the global next pointer + * -- unlock the global lock + * -- send IPI to notify other cpus + * -- process the data ourselves + */ + ptcParamsEmpty = next; + spin_unlock(&ptcParamsLock); + smp_send_flush_tlb(); + + local_irq_save(irqflags); + sn1_received_flush_tlb(); + local_irq_restore(irqflags); + + /* Currently we don't think global TLB purges need to be atomic. + * All CPUs get sent IPIs, so if they haven't done the purge, + * they're busy with interrupts that are at the IPI level, which is + * priority 15. We're asserting that any code at that level + * shouldn't be using user TLB entries. To change this to wait + * for all the flushes to complete, enable the following code. + */ +#ifdef SN1_SYNCHRONOUS_GLOBAL_TLB_PURGE + /* this code is not tested */ + /* wait for the flush to complete */ + while (atomic_read(¶ms.unfinished_count) > 1) + barrier(); + + atomic_set(¶ms->unfinished_count, 0); +#endif +} + +/** + * sn1_send_IPI - send an IPI to a processor + * @cpuid: target of the IPI + * @vector: command to send + * @delivery_mode: delivery mechanism + * @redirect: redirect the IPI? + * + * Sends an IPI (interprocessor interrupt) to the processor specified by + * @cpuid. @delivery_mode can be one of the following + * + * %IA64_IPI_DM_INT - pend an interrupt + * %IA64_IPI_DM_PMI - pend a PMI + * %IA64_IPI_DM_NMI - pend an NMI + * %IA64_IPI_DM_INIT - pend an INIT interrupt + */ +void +sn1_send_IPI(int cpuid, int vector, int delivery_mode, int redirect) +{ + long *p, nasid, slice; + static int off[4] = {0x1800080, 0x1800088, 0x1a00080, 0x1a00088}; + + /* + * ZZZ - Replace with standard macros when available. + */ + nasid = cpuid_to_nasid(cpuid); + slice = cpuid_to_slice(cpuid); + p = (long*)(0xc0000a0000000000LL | (nasid<<33) | off[slice]); + +#if defined(ZZZBRINGUP) + { + static int count=0; + if (count++ < 10) printk("ZZ sendIPI 0x%x->0x%x, vec %d, nasid 0x%lx, slice %ld, adr 0x%lx\n", + smp_processor_id(), cpuid, vector, nasid, slice, (long)p); + } +#endif + mb(); + *p = (delivery_mode << 8) | (vector & 0xff); +} + + +#ifdef CONFIG_SMP + +#ifdef PTC_NOTYET +static void __init +process_sal_ptc_domain_info(ia64_sal_ptc_domain_info_t *di, int domain) +{ + ia64_sal_ptc_domain_proc_entry_t *pe; + int i, sapicid, cpuid; + + pe = __va(di->proc_list); + for (i=0; iproc_count; i++, pe++) { + sapicid = id_eid_to_sapicid(pe->id, pe->eid); + cpuid = cpu_logical_id(sapicid); + sn_sapicid_info[cpuid].domain = domain; + sn_sapicid_info[cpuid].sapicid = sapicid; + } +} + + +static void __init +process_sal_desc_ptc(ia64_sal_desc_ptc_t *ptc) +{ + ia64_sal_ptc_domain_info_t *di; + int i; + + di = __va(ptc->domain_info); + for (i=0; inum_domains; i++, di++) { + process_sal_ptc_domain_info(di, i); + } +} +#endif /* PTC_NOTYET */ + +/** + * init_sn1_smp_config - setup PTC domains per processor + */ +void __init +init_sn1_smp_config(void) +{ + if (!ia64_ptc_domain_info) { + printk("SMP: Can't find PTC domain info. 
Forcing UP mode\n"); + smp_num_cpus = 1; + return; + } + +#ifdef PTC_NOTYET + memset (sn_sapicid_info, -1, sizeof(sn_sapicid_info)); + process_sal_desc_ptc(ia64_ptc_domain_info); +#endif +} + +#else /* CONFIG_SMP */ + +void __init +init_sn1_smp_config(void) +{ + +#ifdef PTC_NOTYET + sn_sapicid_info[0].sapicid = hard_smp_processor_id(); +#endif +} + +#endif /* CONFIG_SMP */ diff -Nru a/arch/ia64/sn/kernel/sn1/synergy.c b/arch/ia64/sn/kernel/sn1/synergy.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/sn1/synergy.c Tue Mar 12 13:58:15 2002 @@ -0,0 +1,532 @@ +/* + * SN1 Platform specific synergy Support + * + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. + * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +int bit_pos_to_irq(int bit); +void setclear_mask_b(int irq, int cpuid, int set); +void setclear_mask_a(int irq, int cpuid, int set); +void * kmalloc(size_t size, int flags); + +static int synergy_perf_initialized = 0; + +void +synergy_intr_alloc(int bit, int cpuid) { + return; +} + +int +synergy_intr_connect(int bit, + int cpuid) +{ + int irq; + unsigned is_b; + + irq = bit_pos_to_irq(bit); + + is_b = (cpuid_to_slice(cpuid)) & 1; + if (is_b) { + setclear_mask_b(irq,cpuid,1); + setclear_mask_a(irq,cpuid, 0); + } else { + setclear_mask_a(irq, cpuid, 1); + setclear_mask_b(irq, cpuid, 0); + } + return 0; +} +void +setclear_mask_a(int irq, int cpuid, int set) +{ + int synergy; + int nasid; + int reg_num; + unsigned long mask; + unsigned long addr; + unsigned long reg; + unsigned long val; + int my_cnode, my_synergy; + int target_cnode, target_synergy; + + /* + * Perform some idiot checks .. 
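	 * (range-check irq and cpuid), then set or clear this irq's bit in
	 * the appropriate VEC_MASKnA register: directly via
	 * READ/WRITE_LOCAL_SYNERGY_REG() when the target cpu is on this
	 * node and synergy, or via REMOTE_SYNERGY_LOAD/STORE() otherwise.
	 * setclear_mask_b() below is the same dance for the B-side
	 * registers.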
+ */ + if ( (irq < 0) || (irq > 255) || + (cpuid < 0) || (cpuid > 512) ) { + printk("clear_mask_a: Invalid parameter irq %d cpuid %d\n", irq, cpuid); + return; + } + + target_cnode = cpuid_to_cnodeid(cpuid); + target_synergy = cpuid_to_synergy(cpuid); + my_cnode = cpuid_to_cnodeid(smp_processor_id()); + my_synergy = cpuid_to_synergy(smp_processor_id()); + + reg_num = irq / 64; + mask = 1; + mask <<= (irq % 64); + switch (reg_num) { + case 0: + reg = VEC_MASK0A; + addr = VEC_MASK0A_ADDR; + break; + case 1: + reg = VEC_MASK1A; + addr = VEC_MASK1A_ADDR; + break; + case 2: + reg = VEC_MASK2A; + addr = VEC_MASK2A_ADDR; + break; + case 3: + reg = VEC_MASK3A; + addr = VEC_MASK3A_ADDR; + break; + default: + reg = addr = 0; + break; + } + if (my_cnode == target_cnode && my_synergy == target_synergy) { + // local synergy + val = READ_LOCAL_SYNERGY_REG(addr); + if (set) { + val |= mask; + } else { + val &= ~mask; + } + WRITE_LOCAL_SYNERGY_REG(addr, val); + val = READ_LOCAL_SYNERGY_REG(addr); + } else { /* remote synergy */ + synergy = cpuid_to_synergy(cpuid); + nasid = cpuid_to_nasid(cpuid); + val = REMOTE_SYNERGY_LOAD(nasid, synergy, reg); + if (set) { + val |= mask; + } else { + val &= ~mask; + } + REMOTE_SYNERGY_STORE(nasid, synergy, reg, val); + } +} + +void +setclear_mask_b(int irq, int cpuid, int set) +{ + int synergy; + int nasid; + int reg_num; + unsigned long mask; + unsigned long addr; + unsigned long reg; + unsigned long val; + int my_cnode, my_synergy; + int target_cnode, target_synergy; + + /* + * Perform some idiot checks .. + */ + if ( (irq < 0) || (irq > 255) || + (cpuid < 0) || (cpuid > 512) ) { + printk("clear_mask_b: Invalid parameter irq %d cpuid %d\n", irq, cpuid); + return; + } + + target_cnode = cpuid_to_cnodeid(cpuid); + target_synergy = cpuid_to_synergy(cpuid); + my_cnode = cpuid_to_cnodeid(smp_processor_id()); + my_synergy = cpuid_to_synergy(smp_processor_id()); + + reg_num = irq / 64; + mask = 1; + mask <<= (irq % 64); + switch (reg_num) { + case 0: + reg = VEC_MASK0B; + addr = VEC_MASK0B_ADDR; + break; + case 1: + reg = VEC_MASK1B; + addr = VEC_MASK1B_ADDR; + break; + case 2: + reg = VEC_MASK2B; + addr = VEC_MASK2B_ADDR; + break; + case 3: + reg = VEC_MASK3B; + addr = VEC_MASK3B_ADDR; + break; + default: + reg = addr = 0; + break; + } + if (my_cnode == target_cnode && my_synergy == target_synergy) { + // local synergy + val = READ_LOCAL_SYNERGY_REG(addr); + if (set) { + val |= mask; + } else { + val &= ~mask; + } + WRITE_LOCAL_SYNERGY_REG(addr, val); + val = READ_LOCAL_SYNERGY_REG(addr); + } else { /* remote synergy */ + synergy = cpuid_to_synergy(cpuid); + nasid = cpuid_to_nasid(cpuid); + val = REMOTE_SYNERGY_LOAD(nasid, synergy, reg); + if (set) { + val |= mask; + } else { + val &= ~mask; + } + REMOTE_SYNERGY_STORE(nasid, synergy, reg, val); + } +} + +/* + * Synergy perf stats. Multiplexed via timer_interrupt. 
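 *
 * Each node keeps a circular list of synergy_perf_t entries, one per
 * registered event selector (modesel); the entries are multiplexed onto
 * the counters, so a sampled count is scaled back up by
 * total_intervals/intervals when it is reported through the
 * SNDRV_GET_SYNERGYINFO ioctl below.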
+ */ + +static int +synergy_perf_append(uint64_t modesel) +{ + int cnode; + nodepda_t *npdap; + synergy_perf_t *p; + int checked = 0; + int err = 0; + + /* bit 45 is enable */ + modesel |= (1UL << 45); + + for (cnode=0; cnode < numnodes; cnode++) { + /* for each node, insert a new synergy_perf entry */ + if ((npdap = NODEPDA(cnode)) == NULL) { + printk("synergy_perf_append: cnode=%d NODEPDA(cnode)==NULL, nodepda=%p\n", cnode, (void *)nodepda); + continue; + } + + if (npdap->synergy_perf_enabled) { + /* user must disable counting to append new events */ + err = -EBUSY; + break; + } + + if (!checked && npdap->synergy_perf_data != NULL) { + checked = 1; + for (p = npdap->synergy_perf_first; ;) { + if (p->modesel == modesel) + return 0; /* event already registered */ + if ((p = p->next) == npdap->synergy_perf_first) + break; + } + } + + /* XX use kmem_alloc_node() when it is implemented */ + p = (synergy_perf_t *)kmalloc(sizeof(synergy_perf_t), GFP_KERNEL); + if ((((uint64_t)p) & 7UL) != 0) + BUG(); /* bad alignment */ + if (p == NULL) { + err = -ENOMEM; + break; + } + else { + memset(p, 0, sizeof(synergy_perf_t)); + p->modesel = modesel; + + spin_lock_irq(&npdap->synergy_perf_lock); + if (npdap->synergy_perf_data == NULL) { + /* circular list */ + p->next = p; + npdap->synergy_perf_first = p; + npdap->synergy_perf_data = p; + } + else { + p->next = npdap->synergy_perf_data->next; + npdap->synergy_perf_data->next = p; + } + spin_unlock_irq(&npdap->synergy_perf_lock); + } + } + + return err; +} + +static void +synergy_perf_set_freq(int freq) +{ + int cnode; + nodepda_t *npdap; + + for (cnode=0; cnode < numnodes; cnode++) { + if ((npdap = NODEPDA(cnode)) != NULL) + npdap->synergy_perf_freq = freq; + } +} + +static void +synergy_perf_set_enable(int enable) +{ + int cnode; + nodepda_t *npdap; + + for (cnode=0; cnode < numnodes; cnode++) { + if ((npdap = NODEPDA(cnode)) != NULL) + npdap->synergy_perf_enabled = enable; + } + printk("NOTICE: synergy perf counting %sabled on all nodes\n", enable ? 
"en" : "dis"); +} + +static int +synergy_perf_size(nodepda_t *npdap) +{ + synergy_perf_t *p; + int n; + + if (npdap->synergy_perf_enabled == 0) { + /* no stats to return */ + return 0; + } + + spin_lock_irq(&npdap->synergy_perf_lock); + for (n=0, p = npdap->synergy_perf_first; p;) { + n++; + p = p->next; + if (p == npdap->synergy_perf_first) + break; + } + spin_unlock_irq(&npdap->synergy_perf_lock); + + /* bytes == n pairs of {event,counter} */ + return n * 2 * sizeof(uint64_t); +} + +static int +synergy_perf_ioctl(struct inode *inode, struct file *file, + unsigned int cmd, unsigned long arg) +{ + int cnode; + nodepda_t *npdap; + synergy_perf_t *p; + int intarg; + int fsb; + uint64_t longarg; + uint64_t *stats; + int n; + devfs_handle_t d; + arbitrary_info_t info; + + if ((d = devfs_get_handle_from_inode(inode)) == NULL) + return -ENODEV; + info = hwgraph_fastinfo_get(d); + + cnode = SYNERGY_PERF_INFO_CNODE(info); + fsb = SYNERGY_PERF_INFO_FSB(info); + npdap = NODEPDA(cnode); + + switch (cmd) { + case SNDRV_GET_SYNERGY_VERSION: + /* return int, version of data structure for SNDRV_GET_SYNERGYINFO */ + intarg = 1; /* version 1 */ + if (copy_to_user((void *)arg, &intarg, sizeof(intarg))) + return -EFAULT; + break; + + case SNDRV_GET_INFOSIZE: + /* return int, sizeof buf needed for SYNERGY_PERF_GET_STATS */ + intarg = synergy_perf_size(npdap); + if (copy_to_user((void *)arg, &intarg, sizeof(intarg))) + return -EFAULT; + break; + + case SNDRV_GET_SYNERGYINFO: + /* return array of event/value pairs, this node only */ + if ((intarg = synergy_perf_size(npdap)) <= 0) + return -ENODATA; + if ((stats = (uint64_t *)kmalloc(intarg, GFP_KERNEL)) == NULL) + return -ENOMEM; + spin_lock_irq(&npdap->synergy_perf_lock); + for (n=0, p = npdap->synergy_perf_first; p;) { + stats[n++] = p->modesel; + if (p->intervals > 0) + stats[n++] = p->counts[fsb] * p->total_intervals / p->intervals; + else + stats[n++] = 0; + p = p->next; + if (p == npdap->synergy_perf_first) + break; + } + spin_unlock_irq(&npdap->synergy_perf_lock); + + if (copy_to_user((void *)arg, stats, intarg)) { + kfree(stats); + return -EFAULT; + } + + kfree(stats); + break; + + case SNDRV_SYNERGY_APPEND: + /* reads 64bit event, append synergy perf event to all nodes */ + if (copy_from_user(&longarg, (void *)arg, sizeof(longarg))) + return -EFAULT; + return synergy_perf_append(longarg); + break; + + case SNDRV_GET_SYNERGY_STATUS: + /* return int, 1 if enabled else 0 */ + intarg = npdap->synergy_perf_enabled; + if (copy_to_user((void *)arg, &intarg, sizeof(intarg))) + return -EFAULT; + break; + + case SNDRV_SYNERGY_ENABLE: + /* read int, if true enable counting else disable */ + if (copy_from_user(&intarg, (void *)arg, sizeof(intarg))) + return -EFAULT; + synergy_perf_set_enable(intarg); + break; + + case SNDRV_SYNERGY_FREQ: + /* read int, set jiffies per update */ + if (copy_from_user(&intarg, (void *)arg, sizeof(intarg))) + return -EFAULT; + if (intarg < 0 || intarg >= HZ) + return -EINVAL; + synergy_perf_set_freq(intarg); + break; + + default: + printk("Warning: invalid ioctl %d on synergy mon for cnode=%d fsb=%d\n", cmd, cnode, fsb); + return -EINVAL; + } + return(0); +} + +struct file_operations synergy_mon_fops = { + ioctl: synergy_perf_ioctl, +}; + +void +synergy_perf_update(int cpu) +{ + nasid_t nasid; + cnodeid_t cnode; + struct nodepda_s *npdap; + + /* + * synergy_perf_initialized is set by synergy_perf_init() + * which is called last thing by sn_mp_setup(), i.e. well + * after nodepda has been initialized. 
+ */ + if (!synergy_perf_initialized) + return; + + cnode = cpuid_to_cnodeid(cpu); + npdap = NODEPDA(cnode); + + if (npdap == NULL || cnode < 0 || cnode >= numnodes) + /* this should not happen: still in early io init */ + return; + +#if 0 + /* use this to check nodepda initialization */ + if (((uint64_t)npdap) & 0x7) { + printk("\nERROR on cpu %d : cnode=%d, npdap == %p, not aligned\n", cpu, cnode, npdap); + BUG(); + } +#endif + + if (npdap->synergy_perf_enabled == 0 || npdap->synergy_perf_data == NULL) { + /* Not enabled, or no events to monitor */ + return; + } + + if (npdap->synergy_inactive_intervals++ % npdap->synergy_perf_freq != 0) { + /* don't multiplex on every timer interrupt */ + return; + } + + /* + * Read registers for last interval and increment counters. + * Hold the per-node synergy_perf_lock so concurrent readers get + * consistent values. + */ + spin_lock_irq(&npdap->synergy_perf_lock); + + nasid = cpuid_to_nasid(cpu); + npdap->synergy_active_intervals++; + npdap->synergy_perf_data->intervals++; + npdap->synergy_perf_data->total_intervals = npdap->synergy_active_intervals; + + npdap->synergy_perf_data->counts[0] += 0xffffffffffUL & + REMOTE_SYNERGY_LOAD(nasid, 0, PERF_CNTR0_A); + + npdap->synergy_perf_data->counts[1] += 0xffffffffffUL & + REMOTE_SYNERGY_LOAD(nasid, 1, PERF_CNTR0_B); + + /* skip to next in circular list */ + npdap->synergy_perf_data = npdap->synergy_perf_data->next; + + spin_unlock_irq(&npdap->synergy_perf_lock); + + /* set the counter 0 selection modes for both A and B */ + REMOTE_SYNERGY_STORE(nasid, 0, PERF_CNTL0_A, npdap->synergy_perf_data->modesel); + REMOTE_SYNERGY_STORE(nasid, 1, PERF_CNTL0_B, npdap->synergy_perf_data->modesel); + + /* and reset the counter registers to zero */ + REMOTE_SYNERGY_STORE(nasid, 0, PERF_CNTR0_A, 0UL); + REMOTE_SYNERGY_STORE(nasid, 1, PERF_CNTR0_B, 0UL); +} + +void +synergy_perf_init(void) +{ + printk("synergy_perf_init(), counting is initially disabled\n"); + synergy_perf_initialized++; +} diff -Nru a/arch/ia64/sn/kernel/sn2/Makefile b/arch/ia64/sn/kernel/sn2/Makefile --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/sn2/Makefile Tue Mar 12 13:58:16 2002 @@ -0,0 +1,51 @@ +# +# ia64/platform/sn/sn1/Makefile +# +# Copyright (C) 1999,2001-2002 Silicon Graphics, Inc. All rights reserved. +# +# This program is free software; you can redistribute it and/or modify it +# under the terms of version 2 of the GNU General Public License +# as published by the Free Software Foundation. +# +# This program is distributed in the hope that it would be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +# +# Further, this software is distributed without any warranty that it is +# free of the rightful claim of any third person regarding infringement +# or the like. Any license provided herein, whether implied or +# otherwise, applies only to this software file. Patent licenses, if +# any, provided herein do not apply to combinations of this program with +# other software, or any other product whatsoever. +# +# You should have received a copy of the GNU General Public +# License along with this program; if not, write the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. 
+#
+# Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+# Mountain View, CA 94043, or:
+#
+# http://www.sgi.com
+#
+# For further information regarding this notice, see:
+#
+# http://oss.sgi.com/projects/GenInfo/NoticeExplan
+#
+
+
+EXTRA_CFLAGS := -DLITTLE_ENDIAN
+
+.S.s:
+	$(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -o $*.s $<
+.S.o:
+	$(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $*.o $<
+
+all: sn2.a
+
+O_TARGET = sn2.a
+
+obj-y = cache.o iomv.o sn2_smp.o
+
+clean::
+
+include $(TOPDIR)/Rules.make
diff -Nru a/arch/ia64/sn/kernel/sn2/cache.c b/arch/ia64/sn/kernel/sn2/cache.c
--- /dev/null	Wed Dec 31 16:00:00 1969
+++ b/arch/ia64/sn/kernel/sn2/cache.c	Tue Mar 12 13:58:16 2002
@@ -0,0 +1,29 @@
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved.
+ *
+ */
+
+#include
+
+/**
+ * sn_flush_all_caches - flush a range of addresses from all caches (incl. L4)
+ * @flush_addr: identity mapped region 7 address to start flushing
+ * @bytes: number of bytes to flush
+ *
+ * Flush a range of addresses from all caches including L4.
+ * All addresses fully or partially contained within
+ * @flush_addr to @flush_addr + @bytes are flushed
+ * from all caches.
+ */
+void
+sn_flush_all_caches(long flush_addr, long bytes)
+{
+	flush_icache_range(flush_addr, flush_addr+bytes);
+}
+
+
diff -Nru a/arch/ia64/sn/kernel/sn2/iomv.c b/arch/ia64/sn/kernel/sn2/iomv.c
--- /dev/null	Wed Dec 31 16:00:00 1969
+++ b/arch/ia64/sn/kernel/sn2/iomv.c	Tue Mar 12 13:58:16 2002
@@ -0,0 +1,222 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include
+#include
+#include
+#include
+
+#ifdef Colin /* Use the same calls as Generic IA64 defined in io.h */
+/**
+ * sn1_io_addr - convert an in/out port to an i/o address
+ * @port: port to convert
+ *
+ * Legacy in/out instructions are converted to ld/st instructions
+ * on IA64. This routine will convert a port number into a valid
+ * SN i/o address. Used by sn1_in*() and sn1_out*().
+ */
+static inline void *
+sn1_io_addr(unsigned long port)
+{
+	if (!IS_RUNNING_ON_SIMULATOR()) {
+		return( (void *) (port | __IA64_UNCACHED_OFFSET));
+	} else {
+		unsigned long io_base;
+		unsigned long addr;
+
+		/*
+		 * word align port, but need more than 10 bits
+		 * for accessing registers in bedrock local block
+		 * (so we don't do port&0xfff)
+		 */
+		if ((port >= 0x1f0 && port <= 0x1f7) ||
+			port == 0x3f6 || port == 0x3f7) {
+			io_base = (0xc000000fcc000000 | ((unsigned long)get_nasid() << 38));
+			addr = io_base | ((port >> 2) << 12) | (port & 0xfff);
+		} else {
+			addr = __ia64_get_io_port_base() | ((port >> 2) << 2);
+		}
+		return(void *) addr;
+	}
+}
+
+/**
+ * sn1_inb - read a byte from a port
+ * @port: port to read from
+ *
+ * Reads a byte from @port and returns it to the caller.
+ */
+unsigned int
+sn1_inb (unsigned long port)
+{
+return __ia64_inb ( port );
+}
+
+/**
+ * sn1_inw - read a word from a port
+ * @port: port to read from
+ *
+ * Reads a word from @port and returns it to the caller.
+ */ +unsigned int +sn1_inw (unsigned long port) +{ +return __ia64_inw ( port ); +} + +/** + * sn1_inl - read a word from a port + * @port: port to read from + * + * Reads a word from @port and returns it to the caller. + */ +unsigned int +sn1_inl (unsigned long port) +{ +return __ia64_inl ( port ); +} + +/** + * sn1_outb - write a byte to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outb (unsigned char val, unsigned long port) +{ +return __ia64_outb ( val, port ); +} + +/** + * sn1_outw - write a word to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outw (unsigned short val, unsigned long port) +{ +return __ia64_outw ( val, port ); +} + +/** + * sn1_outl - write a word to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outl (unsigned int val, unsigned long port) +{ +return __ia64_outl ( val, port ); +} + +/** + * sn1_inb - read a byte from a port + * @port: port to read from + * + * Reads a byte from @port and returns it to the caller. + */ +unsigned int +sn1_inb (unsigned long port) +{ + volatile unsigned char *addr = sn1_io_addr(port); + unsigned char ret; + + ret = *addr; + __ia64_mf_a(); + return ret; +} + +/** + * sn1_inw - read a word from a port + * 2port: port to read from + * + * Reads a word from @port and returns it to the caller. + */ +unsigned int +sn1_inw (unsigned long port) +{ + volatile unsigned short *addr = sn1_io_addr(port); + unsigned short ret; + + ret = *addr; + __ia64_mf_a(); + return ret; +} + +/** + * sn1_inl - read a word from a port + * @port: port to read from + * + * Reads a word from @port and returns it to the caller. + */ +unsigned int +sn1_inl (unsigned long port) +{ + volatile unsigned int *addr = sn1_io_addr(port); + unsigned int ret; + + ret = *addr; + __ia64_mf_a(); + return ret; +} + +/** + * sn1_outb - write a byte to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outb (unsigned char val, unsigned long port) +{ + volatile unsigned char *addr = sn1_io_addr(port); + + *addr = val; + __ia64_mf_a(); +} + +/** + * sn1_outw - write a word to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outw (unsigned short val, unsigned long port) +{ + volatile unsigned short *addr = sn1_io_addr(port); + + *addr = val; + __ia64_mf_a(); +} + +/** + * sn1_outl - write a word to a port + * @port: port to write to + * @val: value to write + * + * Writes @val to @port. + */ +void +sn1_outl (unsigned int val, unsigned long port) +{ + volatile unsigned int *addr = sn1_io_addr(port); + + *addr = val; + __ia64_mf_a(); +} + +#endif diff -Nru a/arch/ia64/sn/kernel/sn2/sn2_smp.c b/arch/ia64/sn/kernel/sn2/sn2_smp.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/sn2/sn2_smp.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,159 @@ +/* + * SN2 Platform specific SMP Support + * + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
+ * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. + * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/** + * sn2_global_tlb_purge - globally purge translation cache of virtual address range + * @start: start of virtual address range + * @end: end of virtual address range + * @nbits: specifies number of bytes to purge per instruction (num = 1<<(nbits & 0xfc)) + * + * Purges the translation caches of all processors of the given virtual address + * range. + */ +void +sn2_global_tlb_purge (unsigned long start, unsigned long end, unsigned long nbits) +{ + int cnode, nasid; + volatile long *ptc0, *ptc1, *piows; + unsigned long ws, next, data0, data1; + + piows = (long*)LOCAL_MMR_ADDR(get_slice() ? SH_PIO_WRITE_STATUS_1 : SH_PIO_WRITE_STATUS_0); + data0 = (1UL<>8)<0x%x, vec %d, nasid 0x%lx, slice %ld, adr 0x%lx, val 0x%lx\n", + smp_processor_id(), cpuid, vector, nasid, slice, (long)p, val); + } +#endif + mb(); + *p = val; + +} + +/** + * init_sn2_smp_config - initialize SN2 smp configuration + * + * currently a NOP. + */ +void __init +init_sn2_smp_config(void) +{ + +} diff -Nru a/arch/ia64/sn/kernel/sn_asm.S b/arch/ia64/sn/kernel/sn_asm.S --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/sn_asm.S Tue Mar 12 13:58:16 2002 @@ -0,0 +1,148 @@ + +/* + * Copyright (c) 2000-2001 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#ifdef CONFIG_IA64_SGI_AUTOTEST + +// Testing only. +// Routine will cause MCAs +// zzzmsa(n) +// n=0 MCA via duplicate TLB dropin +// n=0 MCA via read of garbage address +// + +#define ITIR(key, ps) ((key<<8) | (ps<<2)) +#define TLB_PAGESIZE 28 // Use 256MB pages for now. 
+ + .global zzzmca + .proc zzzmca +zzzmca: + alloc loc4 = ar.pfs,2,8,1,0;; + cmp.ne p6,p0=r32,r0;; + movl r2=0x2dead + movl r3=0x3dead + movl r15=0x15dead + movl r16=0x16dead + movl r31=0x31dead + movl loc0=0x34beef + movl loc1=0x35beef + movl loc2=0x36beef + movl loc3=0x37beef + movl out0=0x42beef + + movl r20=0x32feed;; + mov ar32=r20 + movl r20=0x36feed;; + mov ar36=r20 + movl r20=0x65feed;; + mov ar65=r20 + movl r20=0x66feed;; + mov ar66=r20 + +(p6) br.cond.sptk 1f + + rsm 0x2000;; + srlz.d; + mov r11 = 1 + mov r3 = ITIR(0,TLB_PAGESIZE);; + mov cr.itir = r3 + mov r10 = 0;; + itr.d dtr[r11] = r10;; + mov r11 = 2 + + itr.d dtr[r11] = r10;; + br 9f + +1: movl r8=0xfe00000048;; + ld8 r9=[r8];; + mf + mf.a + srlz.d + +9: mov ar.pfs=loc4 + br.ret.sptk rp + + .endp zzzmca + + .global zzzspec + .proc zzzspec +zzzspec: + mov r8=r32 + movl r9=0xe000000000000000 + movl r10=0x4000;; + ld8.s r16=[r8];; + ld8.s r17=[r9];; + add r8=r8,r10;; + ld8.s r18=[r8];; + add r8=r8,r10;; + ld8.s r19=[r8];; + add r8=r8,r10;; + ld8.s r20=[r8];; + mov r8=r0 + tnat.nz p6,p0=r16 + tnat.nz p7,p0=r17 + tnat.nz p8,p0=r18 + tnat.nz p9,p0=r19 + tnat.nz p10,p0=r20;; + (p6) dep r8=-1,r8,0,1;; + (p7) dep r8=-1,r8,1,1;; + (p8) dep r8=-1,r8,2,1;; + (p9) dep r8=-1,r8,3,1;; + (p10) dep r8=-1,r8,4,1;; + br.ret.sptk rp + .endp zzzspec + + .global zzzspec2 + .proc zzzspec2 +zzzspec2: + cmp.eq p6,p7=r2,r2 + movl r16=0xc0000a0001000020 + ;; + mf + ;; + ld8 r9=[r16] + (p6) br.spnt 1f + ld8 r10=[r32] + ;; + 1: mf.a + mf + + ld8 r9=[r16];; + cmp.ne p6,p7=r9,r16 + (p6) br.spnt 1f + ld8 r10=[r32] + ;; + 1: mf.a + mf + + ld8 r9=[r33];; + cmp.ne p6,p7=r9,r33 + (p6) br.spnt 1f + ld8 r10=[r32] + ;; + 1: mf.a + mf + + tpa r23=r32 + add r20=512,r33 + add r21=1024,r33;; + ld8 r9=[r20] + ld8 r10=[r21];; + nop.i 0 + { .mib + nop.m 0 + cmp.ne p6,p7=r10,r33 + (p6) br.spnt 1f + } + ld8 r10=[r32] + ;; + 1: mf.a + mf + br.ret.sptk rp + + .endp zzzspec + +#endif + diff -Nru a/arch/ia64/sn/kernel/sn_ksyms.c b/arch/ia64/sn/kernel/sn_ksyms.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/arch/ia64/sn/kernel/sn_ksyms.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,64 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. 
+ */
+
+
+/*
+ * Architecture-specific kernel symbols
+ */
+
+#include
+#include
+
+#include
+
+/*
+ * other stuff (more to be added later, cleanup then)
+ */
+EXPORT_SYMBOL(sn1_pci_map_sg);
+EXPORT_SYMBOL(sn1_pci_unmap_sg);
+EXPORT_SYMBOL(sn1_pci_alloc_consistent);
+EXPORT_SYMBOL(sn1_pci_free_consistent);
+EXPORT_SYMBOL(sn1_dma_address);
+
+#include
+#include
+extern devfs_handle_t base_io_scsi_ctlr_vhdl[];
+#include
+extern cnodeid_t master_node_get(devfs_handle_t vhdl);
+#include
+EXPORT_SYMBOL(base_io_scsi_ctlr_vhdl);
+EXPORT_SYMBOL(master_node_get);
+
+
+/*
+ * symbols referenced by the PCIBA module
+ */
+#include
+#include
+#include
+#include
+
+devfs_handle_t
+devfn_to_vertex(unsigned char busnum, unsigned int devfn);
+EXPORT_SYMBOL(devfn_to_vertex);
+EXPORT_SYMBOL(hwgraph_vertex_unref);
+EXPORT_SYMBOL(pciio_config_get);
+EXPORT_SYMBOL(pciio_info_slot_get);
+EXPORT_SYMBOL(hwgraph_edge_add);
+EXPORT_SYMBOL(pciio_info_master_get);
+EXPORT_SYMBOL(pciio_info_get);
+#ifdef CONFIG_IA64_SGI_SN_DEBUG
+EXPORT_SYMBOL(__pa_debug);
+EXPORT_SYMBOL(__va_debug);
+#endif
+
+/* added by tduffy 04.08.01 to fix depmod issues */
+#include
+EXPORT_SYMBOL(sn1_pci_unmap_single);
+EXPORT_SYMBOL(sn1_pci_map_single);
+EXPORT_SYMBOL(sn1_pci_dma_sync_single);
diff -Nru a/arch/ia64/sn/kernel/sv.c b/arch/ia64/sn/kernel/sv.c
--- /dev/null	Wed Dec 31 16:00:00 1969
+++ b/arch/ia64/sn/kernel/sv.c	Tue Mar 12 13:58:15 2002
@@ -0,0 +1,552 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved
+ *
+ * This implementation of synchronization variables is heavily based on
+ * one done by Steve Lord
+ *
+ * Paul Cassella
+ */
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include
+
+/* Define this to have sv_test() run some simple tests. kernel_thread() must behave as expected when this is called. */
+#undef RUN_SV_TEST
+
+#define DEBUG
+
+/* Set up some macros so sv_wait(), sv_signal(), and sv_broadcast()
+   can sanity check interrupt state on architectures where we know
+   how. */
+#ifdef DEBUG
+  #define SV_DEBUG_INTERRUPT_STATE
+  #ifdef __mips64
+    #define SV_TEST_INTERRUPTS_ENABLED(flags) ((flags & 0x1) != 0)
+    #define SV_TEST_INTERRUPTS_DISABLED(flags) ((flags & 0x1) == 0)
+    #define SV_INTERRUPT_TEST_WORKERS 31
+  #elif defined(__ia64)
+    #define SV_TEST_INTERRUPTS_ENABLED(flags) ((flags & 0x4000) != 0)
+    #define SV_TEST_INTERRUPTS_DISABLED(flags) ((flags & 0x4000) == 0)
+    #define SV_INTERRUPT_TEST_WORKERS 4 /* simulator's slow */
+  #else
+    #undef SV_DEBUG_INTERRUPT_STATE
+    #define SV_INTERRUPT_TEST_WORKERS 4 /* reasonable? default. */
+  #endif /* __mips64 */
+#endif /* DEBUG */
+
+
+/* XXX FIXME hack hack hack. Our mips64 tree is from before the
+   switch to WQ_FLAG_EXCLUSIVE, and our ia64 tree is from after it. */
+#ifdef TASK_EXCLUSIVE
+  #undef EXCLUSIVE_IN_QUEUE
+#else
+  #define EXCLUSIVE_IN_QUEUE
+  #define TASK_EXCLUSIVE 0 /* for the set_current_state() in sv_wait() */
+#endif
+
+
+static inline void sv_lock(sv_t *sv) {
+	spin_lock(&sv->sv_lock);
+}
+
+static inline void sv_unlock(sv_t *sv) {
+	spin_unlock(&sv->sv_lock);
+}
+
+/* up() is "extern inline", so we can't pass its address to sv_wait. Use this function's address instead. */
+static void up_wrapper(struct semaphore *sem) {
+	up(sem);
+}
+
+/* spin_unlock() is sometimes a macro.
*/ +static void spin_unlock_wrapper(spinlock_t *s) { + spin_unlock(s); +} + +/* XXX Perhaps sv_wait() should do the switch() each time and avoid + the extra indirection and the need for the _wrapper functions? */ + +static inline void sv_set_mon_type(sv_t *sv, int type) { + switch (type) { + case SV_MON_SPIN: + sv->sv_mon_unlock_func = + (sv_mon_unlock_func_t)spin_unlock_wrapper; + break; + case SV_MON_SEMA: + sv->sv_mon_unlock_func = + (sv_mon_unlock_func_t)up_wrapper; + if(sv->sv_flags & SV_INTS) { + printk(KERN_ERR "sv_set_mon_type: The monitor lock " + "cannot be shared with interrupts if it is a " + "semaphore!\n"); + BUG(); + } + if(sv->sv_flags & SV_BHS) { + printk(KERN_ERR "sv_set_mon_type: The monitor lock " + "cannot be shared with bottom-halves if it is " + "a semaphore!\n"); + BUG(); + } + break; +#if 0 + /* + * If needed, and will need to think about interrupts. This + * may be needed, for example, if someone wants to use sv's + * with something like dev_base; writers need to hold two + * locks. + */ + case SV_MON_CUSTOM: + { + struct sv_mon_custom *c = lock; + sv->sv_mon_unlock_func = c->sv_mon_unlock_func; + sv->sv_mon_lock = c->sv_mon_lock; + break; + } +#endif + + default: + printk(KERN_ERR "sv_set_mon_type: unknown type %d (0x%x)! " + "(flags 0x%x)\n", type, type, sv->sv_flags); + BUG(); + break; + } + sv->sv_flags |= type; +} + +static inline void sv_set_ord(sv_t *sv, int ord) { + if (!ord) + ord = SV_ORDER_DEFAULT; + + if (ord != SV_ORDER_FIFO && ord != SV_ORDER_LIFO) { + printk(KERN_EMERG "sv_set_ord: unknown order %d (0x%x)! ", + ord, ord); + BUG(); + } + + sv->sv_flags |= ord; +} + +void sv_init(sv_t *sv, sv_mon_lock_t *lock, int flags) +{ + int ord = flags & SV_ORDER_MASK; + int type = flags & SV_MON_MASK; + + /* Copy all non-order, non-type flags */ + sv->sv_flags = (flags & ~(SV_ORDER_MASK | SV_MON_MASK)); + + if((sv->sv_flags & (SV_INTS | SV_BHS)) == (SV_INTS | SV_BHS)) { + printk(KERN_ERR "sv_init: do not set both SV_INTS and SV_BHS, only SV_INTS.\n"); + BUG(); + } + + sv_set_ord(sv, ord); + sv_set_mon_type(sv, type); + + /* If lock is NULL, we'll get it from sv_wait_compat() (and + ignore it in sv_signal() and sv_broadcast()). */ + sv->sv_mon_lock = lock; + + spin_lock_init(&sv->sv_lock); + init_waitqueue_head(&sv->sv_waiters); +} + +/* + * The associated lock must be locked on entry. It is unlocked on return. + * + * Return values: + * + * n < 0 : interrupted, -n jiffies remaining on timeout, or -1 if timeout == 0 + * n = 0 : timeout expired + * n > 0 : sv_signal()'d, n jiffies remaining on timeout, or 1 if timeout == 0 + */ +signed long sv_wait(sv_t *sv, int sv_wait_flags, unsigned long timeout) +{ + DECLARE_WAITQUEUE( wait, current ); + unsigned long flags; + signed long ret = 0; + +#ifdef SV_DEBUG_INTERRUPT_STATE + { + unsigned long flags; + __save_flags(flags); + + if(sv->sv_flags & SV_INTS) { + if(SV_TEST_INTERRUPTS_ENABLED(flags)) { + printk(KERN_ERR "sv_wait: SV_INTS and interrupts " + "enabled (flags: 0x%lx)\n", flags); + BUG(); + } + } else { + if (SV_TEST_INTERRUPTS_DISABLED(flags)) { + printk(KERN_WARNING "sv_wait: !SV_INTS and interrupts " + "disabled! (flags: 0x%lx)\n", flags); + } + } + } +#endif /* SV_DEBUG_INTERRUPT_STATE */ + + sv_lock(sv); + + sv->sv_mon_unlock_func(sv->sv_mon_lock); + + /* Add ourselves to the wait queue and set the state before + * releasing the sv_lock so as to avoid racing with the + * wake_up() in sv_signal() and sv_broadcast(). 
+ */ + + /* don't need the _irqsave part, but there is no wq_write_lock() */ + wq_write_lock_irqsave(&sv->sv_waiters.lock, flags); + +#ifdef EXCLUSIVE_IN_QUEUE + wait.flags |= WQ_FLAG_EXCLUSIVE; +#endif + + switch(sv->sv_flags & SV_ORDER_MASK) { + case SV_ORDER_FIFO: + __add_wait_queue_tail(&sv->sv_waiters, &wait); + break; + case SV_ORDER_FILO: + __add_wait_queue(&sv->sv_waiters, &wait); + break; + default: + printk(KERN_ERR "sv_wait: unknown order! (sv: 0x%p, flags: 0x%x)\n", + (void *)sv, sv->sv_flags); + BUG(); + } + wq_write_unlock_irqrestore(&sv->sv_waiters.lock, flags); + + if(sv_wait_flags & SV_WAIT_SIG) + set_current_state(TASK_EXCLUSIVE | TASK_INTERRUPTIBLE ); + else + set_current_state(TASK_EXCLUSIVE | TASK_UNINTERRUPTIBLE); + + spin_unlock(&sv->sv_lock); + + if(sv->sv_flags & SV_INTS) + local_irq_enable(); + else if(sv->sv_flags & SV_BHS) + local_bh_enable(); + + if (timeout) + ret = schedule_timeout(timeout); + else + schedule(); + + if(current->state != TASK_RUNNING) /* XXX Is this possible? */ { + printk(KERN_ERR "sv_wait: state not TASK_RUNNING after " + "schedule().\n"); + set_current_state(TASK_RUNNING); + } + + remove_wait_queue(&sv->sv_waiters, &wait); + + /* Return cases: + - woken by a sv_signal/sv_broadcast + - woken by a signal + - woken by timeout expiring + */ + + /* XXX This isn't really accurate; we may have been woken + before the signal anyway.... */ + if(signal_pending(current)) + return timeout ? -ret : -1; + return timeout ? ret : 1; +} + + +void sv_signal(sv_t *sv) +{ + /* If interrupts can acquire this lock, they can also acquire the + sv_mon_lock, which we must already have to have called this, so + interrupts must be disabled already. If interrupts cannot + contend for this lock, we don't have to worry about it. */ + +#ifdef SV_DEBUG_INTERRUPT_STATE + if(sv->sv_flags & SV_INTS) { + unsigned long flags; + __save_flags(flags); + if(SV_TEST_INTERRUPTS_ENABLED(flags)) + printk(KERN_ERR "sv_signal: SV_INTS and " + "interrupts enabled! (flags: 0x%lx)\n", flags); + } +#endif /* SV_DEBUG_INTERRUPT_STATE */ + + sv_lock(sv); + wake_up(&sv->sv_waiters); + sv_unlock(sv); +} + +void sv_broadcast(sv_t *sv) +{ +#ifdef SV_DEBUG_INTERRUPT_STATE + if(sv->sv_flags & SV_INTS) { + unsigned long flags; + __save_flags(flags); + if(SV_TEST_INTERRUPTS_ENABLED(flags)) + printk(KERN_ERR "sv_broadcast: SV_INTS and " + "interrupts enabled! (flags: 0x%lx)\n", flags); + } +#endif /* SV_DEBUG_INTERRUPT_STATE */ + + sv_lock(sv); + wake_up_all(&sv->sv_waiters); + sv_unlock(sv); +} + +void sv_destroy(sv_t *sv) +{ + if(!spin_trylock(&sv->sv_lock)) { + printk(KERN_ERR "sv_destroy: someone else has sv 0x%p locked!\n", (void *)sv); + BUG(); + } + + /* XXX Check that the waitqueue is empty? + Mark the sv destroyed? + */ +} + + +#ifdef RUN_SV_TEST + +static DECLARE_MUTEX_LOCKED(talkback); +static DECLARE_MUTEX_LOCKED(sem); +sv_t sv; +sv_t sv_filo; + +static int sv_test_1_w(void *arg) +{ + printk("sv_test_1_w: acquiring spinlock 0x%p...\n", arg); + + spin_lock((spinlock_t*)arg); + printk("sv_test_1_w: spinlock acquired, waking sv_test_1_s.\n"); + + up(&sem); + + printk("sv_test_1_w: sv_spin_wait()'ing.\n"); + + sv_spin_wait(&sv, arg); + + printk("sv_test_1_w: talkback.\n"); + up(&talkback); + + printk("sv_test_1_w: exiting.\n"); + return 0; +} + +static int sv_test_1_s(void *arg) +{ + printk("sv_test_1_s: waiting for semaphore.\n"); + down(&sem); + printk("sv_test_1_s: semaphore acquired. Acquiring spinlock.\n"); + spin_lock((spinlock_t*)arg); + printk("sv_test_1_s: spinlock acquired. 
sv_signaling.\n"); + sv_signal(&sv); + printk("sv_test_1_s: talkback.\n"); + up(&talkback); + printk("sv_test_1_s: exiting.\n"); + return 0; + +} + +static int count; +static DECLARE_MUTEX(monitor); + +static int sv_test_2_w(void *arg) +{ + int dummy = count++; + sv_t *sv = (sv_t *)arg; + + down(&monitor); + up(&talkback); + printk("sv_test_2_w: thread %d started, sv_waiting.\n", dummy); + sv_sema_wait(sv, &monitor); + printk("sv_test_2_w: thread %d woken, exiting.\n", dummy); + up(&sem); + return 0; +} + +static int sv_test_2_s_1(void *arg) +{ + int i; + sv_t *sv = (sv_t *)arg; + + down(&monitor); + for(i = 0; i < 3; i++) { + printk("sv_test_2_s_1: waking one thread.\n"); + sv_signal(sv); + down(&sem); + } + + printk("sv_test_2_s_1: signaling and broadcasting again. Nothing should happen.\n"); + sv_signal(sv); + sv_broadcast(sv); + sv_signal(sv); + sv_broadcast(sv); + + printk("sv_test_2_s_1: talkbacking.\n"); + up(&talkback); + up(&monitor); + return 0; +} + +static int sv_test_2_s(void *arg) +{ + int i; + sv_t *sv = (sv_t *)arg; + + down(&monitor); + for(i = 0; i < 3; i++) { + printk("sv_test_2_s: waking one thread (should be %d.)\n", i); + sv_signal(sv); + down(&sem); + } + + printk("sv_test_3_s: waking remaining threads with broadcast.\n"); + sv_broadcast(sv); + for(; i < 10; i++) + down(&sem); + + printk("sv_test_3_s: sending talkback.\n"); + up(&talkback); + + printk("sv_test_3_s: exiting.\n"); + up(&monitor); + return 0; +} + + +static void big_test(sv_t *sv) +{ + int i; + + count = 0; + + for(i = 0; i < 3; i++) { + printk("big_test: spawning thread %d.\n", i); + kernel_thread(sv_test_2_w, sv, 0); + down(&talkback); + } + + printk("big_test: spawning first wake-up thread.\n"); + kernel_thread(sv_test_2_s_1, sv, 0); + + down(&talkback); + printk("big_test: talkback happened.\n"); + + + for(i = 3; i < 13; i++) { + printk("big_test: spawning thread %d.\n", i); + kernel_thread(sv_test_2_w, sv, 0); + down(&talkback); + } + + printk("big_test: spawning wake-up thread.\n"); + kernel_thread(sv_test_2_s, sv, 0); + + down(&talkback); +} + +sv_t int_test_sv; +spinlock_t int_test_spin = SPIN_LOCK_UNLOCKED; +int int_test_ready; +static int irqtestcount; + +static int interrupt_test_worker(void *unused) +{ + int id = ++irqtestcount; + int it = 0; + unsigned long flags, flags2; + + printk("ITW: thread %d started.\n", id); + + while(1) { + __save_flags(flags2); + if(jiffies % 3) { + printk("ITW %2d %5d: irqsaving (%lx)\n", id, it, flags2); + spin_lock_irqsave(&int_test_spin, flags); + } else { + printk("ITW %2d %5d: spin_lock_irqing (%lx)\n", id, it, flags2); + spin_lock_irq(&int_test_spin); + } + + __save_flags(flags2); + printk("ITW %2d %5d: locked, sv_waiting (%lx).\n", id, it, flags2); + sv_wait(&int_test_sv, 0, 0); + + __save_flags(flags2); + printk("ITW %2d %5d: wait finished (%lx), pausing\n", id, it, flags2); + set_current_state(TASK_INTERRUPTIBLE); + schedule_timeout(jiffies & 0xf); + if(current->state != TASK_RUNNING) + printk("ITW: current->state isn't RUNNING after schedule!\n"); + it++; + } +} + +static void interrupt_test(void) +{ + int i; + + printk("interrupt_test: initing sv.\n"); + sv_init(&int_test_sv, &int_test_spin, SV_MON_SPIN | SV_INTS); + + for(i = 0; i < SV_INTERRUPT_TEST_WORKERS; i++) { + printk("interrupt_test: starting test thread %d.\n", i); + kernel_thread(interrupt_test_worker, 0, 0); + } + printk("interrupt_test: done with init part.\n"); + int_test_ready = 1; +} + +int sv_test(void) +{ + spinlock_t s = SPIN_LOCK_UNLOCKED; + + sv_init(&sv, &s, SV_MON_SPIN); + 
printk("sv_test: starting sv_test_1_w.\n"); + kernel_thread(sv_test_1_w, &s, 0); + printk("sv_test: starting sv_test_1_s.\n"); + kernel_thread(sv_test_1_s, &s, 0); + + printk("sv_test: waiting for talkback.\n"); + down(&talkback); down(&talkback); + printk("sv_test: talkback happened, sv_destroying.\n"); + sv_destroy(&sv); + + count = 0; + + printk("sv_test: beginning big_test on sv.\n"); + + sv_init(&sv, &monitor, SV_MON_SEMA); + big_test(&sv); + sv_destroy(&sv); + + printk("sv_test: beginning big_test on sv_filo.\n"); + sv_init(&sv_filo, &monitor, SV_MON_SEMA | SV_ORDER_FILO); + big_test(&sv_filo); + sv_destroy(&sv_filo); + + interrupt_test(); + + printk("sv_test: done.\n"); + return 0; +} + +__initcall(sv_test); + +#endif /* RUN_SV_TEST */ diff -Nru a/arch/ia64/sn/sn1/Makefile b/arch/ia64/sn/sn1/Makefile --- a/arch/ia64/sn/sn1/Makefile Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,31 +0,0 @@ -# -# ia64/platform/sn/sn1/Makefile -# -# Copyright (C) 1999 Silicon Graphics, Inc. -# Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com) -# - -EXTRA_CFLAGS := -DSN -DLANGUAGE_C=1 -D_LANGUAGE_C=1 -I. -DBRINGUP \ - -DDIRECT_L1_CONSOLE -DNUMA_BASE -DSIMULATED_KLGRAPH \ - -DNUMA_MIGR_CONTROL -DLITTLE_ENDIAN -DREAL_HARDWARE \ - -DNEW_INTERRUPTS - -.S.s: - $(CPP) $(AFLAGS) $(AFLAGS_KERNEL) -o $*.s $< -.S.o: - $(CC) $(AFLAGS) $(AFLAGS_KERNEL) -c -o $*.o $< - -all: sn1.a - -O_TARGET = sn1.a - -obj-y = irq.o setup.o iomv.o mm.o smp.o synergy.o sn1_asm.o \ - discontig.o probe.o error.o sv.o - -obj-$(CONFIG_IA64_SGI_AUTOTEST) += llsc4.o -obj-$(CONFIG_IA64_GENERIC) += machvec.o -obj-$(CONFIG_MODULES) += sn1_ksyms.o - -clean:: - -include $(TOPDIR)/Rules.make diff -Nru a/arch/ia64/sn/sn1/discontig.c b/arch/ia64/sn/sn1/discontig.c --- a/arch/ia64/sn/sn1/discontig.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,159 +0,0 @@ -/* - * Copyright 2000, Silicon Graphics, sprasad@engr.sgi.com - * Copyright 2000, Kanoj Sarcar, kanoj@sgi.com - */ - -/* - * Contains common definitions and globals for NUMA platform - * support. For now, SN-IA64 and SN-MIPS are the NUMA platforms. - */ - -#include -#include -#include -#include -#include -#include - -extern int numnodes ; - -plat_pg_data_t plat_node_data[MAXNODES]; -bootmem_data_t bdata[MAXNODES]; -int chunktonid[MAXCHUNKS]; -int nasid_map[MAXNASIDS]; - -void __init -init_chunktonid(void) -{ - memset(chunktonid, -1, sizeof(chunktonid)) ; -} - -void __init -init_nodeidmap(void) -{ - memset(nasid_map, -1, sizeof(nasid_map)) ; -} - -int cnodeid_map[MAXNODES] ; -void __init -init_cnodeidmap(void) -{ - memset(cnodeid_map, -1, sizeof(cnodeid_map)) ; -} - -int -numa_debug(void) -{ - panic("NUMA debug\n"); - return(0); -} - -int __init -build_cnodeid_map(void) -{ - int i,j ; - - for (i=0,j=0;i= 0) - cnodeid_map[j++] = i ; - } - return j ; -} - -/* - * Since efi_memmap_walk merges contiguous banks, this code will need - * to find all the nasids covered by the input memory descriptor. 
- */ -static int __init -build_nasid_map(unsigned long start, unsigned long end, void *arg) -{ - unsigned long vaddr = start; - int nasid = GetNasId(__pa(vaddr)); - - while (vaddr < end) { - if (nasid < MAXNASIDS) - nasid_map[nasid] = 0; - else - panic("build_nasid_map"); - vaddr = (unsigned long)__va((unsigned long)(++nasid) << - SN1_NODE_ADDR_SHIFT); - } - return 0; -} - -void __init -fix_nasid_map(void) -{ - int i ; - int j ; - - /* For every nasid */ - for (j=0;jbdata ; - printk("%d 0x%016lx 0x%016lx 0x%016lx\n", i, - bdata->node_boot_start, bdata->node_low_pfn, - (unsigned long)bdata->node_bootmem_map) ; - } -} - -void __init -discontig_mem_init(void) -{ - extern void setup_sn1_bootmem(int); - int maxnodes ; - - init_chunktonid() ; - init_nodeidmap() ; - init_cnodeidmap() ; - efi_memmap_walk(build_nasid_map, 0) ; - maxnodes = build_cnodeid_map() ; - fix_nasid_map() ; -#ifdef CONFIG_DISCONTIGMEM - setup_sn1_bootmem(maxnodes) ; -#endif - numnodes = maxnodes; - dump_bootmem_info() ; -} - -void -dump_node_data(void) -{ - int i; - - printk("NODE DATA ....\n") ; - printk("Node, Start, Size, MemMap, BitMap, StartP, Mapnr, Size, Id\n") ; - for (i=0;ivalid_addr_bitmap, - NODE_DATA(i)->node_start_paddr, - NODE_DATA(i)->node_start_mapnr, - NODE_DATA(i)->node_size, - NODE_DATA(i)->node_id) ; - } -} - diff -Nru a/arch/ia64/sn/sn1/error.c b/arch/ia64/sn/sn1/error.c --- a/arch/ia64/sn/sn1/error.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,149 +0,0 @@ - - -/* - * SN1 Platform specific error Support - * - * Copyright (C) 2001 Silicon Graphics, Inc. - * Copyright (C) 2001 Alan Mayer (ajm@sgi.com) - */ - - - -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include - -void -snia_error_intr_handler(int irq, void *devid, struct pt_regs *pt_regs) { - unsigned long long intpend_val; - unsigned long long bit; - - switch (irq) { - case SGI_UART_IRQ: - // This isn't really an error interrupt. We're just - // here because we have to do something with them. - // This is probably wrong, and this code will be - // removed. 
- intpend_val = LOCAL_HUB_L(PI_INT_PEND0); - if ( (bit = ~(1L< -#include - -static inline void * -sn1_io_addr(unsigned long port) -{ - if (!IS_RUNNING_ON_SIMULATOR()) { - return( (void *) (port | __IA64_UNCACHED_OFFSET)); - } else { - unsigned long io_base; - unsigned long addr; - - /* - * word align port, but need more than 10 bits - * for accessing registers in bedrock local block - * (so we don't do port&0xfff) - */ - if (port >= 0x1f0 && port <= 0x1f7 || - port == 0x3f6 || port == 0x3f7) { - io_base = __IA64_UNCACHED_OFFSET | 0x00000FFFFC000000; - addr = io_base | ((port >> 2) << 12) | (port & 0xfff); - } else { - addr = __ia64_get_io_port_base() | ((port >> 2) << 2); - } - return(void *) addr; - } -} - -unsigned int -sn1_inb (unsigned long port) -{ - volatile unsigned char *addr = sn1_io_addr(port); - unsigned char ret; - - ret = *addr; - __ia64_mf_a(); - return ret; -} - -unsigned int -sn1_inw (unsigned long port) -{ - volatile unsigned short *addr = sn1_io_addr(port); - unsigned short ret; - - ret = *addr; - __ia64_mf_a(); - return ret; -} - -unsigned int -sn1_inl (unsigned long port) -{ - volatile unsigned int *addr = sn1_io_addr(port); - unsigned int ret; - - ret = *addr; - __ia64_mf_a(); - return ret; -} - -void -sn1_outb (unsigned char val, unsigned long port) -{ - volatile unsigned char *addr = sn1_io_addr(port); - - *addr = val; - __ia64_mf_a(); -} - -void -sn1_outw (unsigned short val, unsigned long port) -{ - volatile unsigned short *addr = sn1_io_addr(port); - - *addr = val; - __ia64_mf_a(); -} - -void -sn1_outl (unsigned int val, unsigned long port) -{ - volatile unsigned int *addr = sn1_io_addr(port); - - *addr = val; - __ia64_mf_a(); -} diff -Nru a/arch/ia64/sn/sn1/irq.c b/arch/ia64/sn/sn1/irq.c --- a/arch/ia64/sn/sn1/irq.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,183 +0,0 @@ -/* - * Platform dependent support for SGI SN1 - * - * Copyright (C) 2000 Silicon Graphics - * Copyright (C) 2000 Jack Steiner (steiner@sgi.com) - * Copyright (C) 2000 Alan Mayer (ajm@sgi.com) - * Copyright (C) 2000 Kanoj Sarcar (kanoj@sgi.com) - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#define IRQ_BIT_OFFSET 64 - -int bit_pos_to_irq(int bit) -{ - if (bit > 118) - bit = 118; - return (bit + IRQ_BIT_OFFSET); -} - -static inline int irq_to_bit_pos(int irq) -{ - int bit = irq - IRQ_BIT_OFFSET; - - if (bit > 63) - bit -= 64; - return bit; -} - -static unsigned int -sn1_startup_irq(unsigned int irq) -{ - return(0); -} - -static void -sn1_shutdown_irq(unsigned int irq) -{ -} - -static void -sn1_disable_irq(unsigned int irq) -{ -} - -static void -sn1_enable_irq(unsigned int irq) -{ -} - -static void -sn1_ack_irq(unsigned int irq) -{ -} - -static void -sn1_end_irq(unsigned int irq) -{ - int bit; - - bit = irq_to_bit_pos(irq); - LOCAL_HUB_CLR_INTR(bit); -} - -static void -sn1_set_affinity_irq(unsigned int irq, unsigned long mask) -{ -} - -struct hw_interrupt_type irq_type_sn1 = { - "sn1_irq", - sn1_startup_irq, - sn1_shutdown_irq, - sn1_enable_irq, - sn1_disable_irq, - sn1_ack_irq, - sn1_end_irq, - sn1_set_affinity_irq -}; - - -void -sn1_irq_init (void) -{ - int i; - - for (i = 0; i <= NR_IRQS; ++i) { - if (idesc_from_vector(i)->handler == &no_irq_type) { - idesc_from_vector(i)->handler = &irq_type_sn1; - } - } -} - - - -#if !defined(CONFIG_IA64_SGI_SN1) -void 
-sn1_pci_fixup(int arg) -{ -} -#endif - -#ifdef CONFIG_PERCPU_IRQ - -extern irq_desc_t irq_descX[NR_IRQS]; -irq_desc_t *irq_desc_ptr[NR_CPUS] = { irq_descX }; - -/* - * Each slave AP allocates its own irq table. - */ -int __init cpu_irq_init(void) -{ - irq_desc_ptr[smp_processor_id()] = (irq_desc_t *)kmalloc(sizeof(irq_descX), GFP_KERNEL); - if (irq_desc_ptr[smp_processor_id()] == 0) - return(-1); - memcpy(irq_desc_ptr[smp_processor_id()], irq_desc_ptr[0], - sizeof(irq_descX)); - return(0); -} - -/* - * This can also allocate the irq tables for the other cpus, specifically - * on their nodes. - */ -int __init master_irq_init(void) -{ - return(0); -} - -/* - * The input is an ivt level. - */ -irq_desc_t *idesc_from_vector(unsigned int ivnum) -{ - return(irq_desc_ptr[smp_processor_id()] + ivnum); -} - -/* - * The input is a "soft" level, that we encoded in. - */ -irq_desc_t *idesc_from_irq(unsigned int irq) -{ - return(irq_desc_ptr[irq >> 8] + (irq & 0xff)); -} - -unsigned int ivector_from_irq(unsigned int irq) -{ - return(irq & 0xff); -} - -/* - * This should return the Linux irq # for the i/p vector on the - * i/p cpu. We currently do not track this. - */ -unsigned int irq_from_cpuvector(int cpunum, unsigned int vector) -{ - return (vector); -} - -#endif /* CONFIG_PERCPU_IRQ */ diff -Nru a/arch/ia64/sn/sn1/llsc4.c b/arch/ia64/sn/sn1/llsc4.c --- a/arch/ia64/sn/sn1/llsc4.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,952 +0,0 @@ -/* - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -extern void bringup_set_led_bits(u8 bits, u8 mask); - -#include "llsc4.h" - - -#ifdef STANDALONE -#include "lock.h" -#endif - -#ifdef INTTEST -static int inttest=0; -#endif - -/* - * Test parameter table for AUTOTEST - */ -typedef struct { - int passes; - int linecount; - int linepad; -} autotest_table_t; - -autotest_table_t autotest_table[] = { - {5000000, 2, 0x2b4 }, - {5000000, 16, 0, }, - {5000000, 16, 4, }, - {5000000, 128, 0x44 }, - {5000000, 128, 0x84 }, - {5000000, 128, 0x200 }, - {5000000, 128, 0x204 }, - {5000000, 128, 0x2b4 }, - {5000000, 2, 8*MB+0x2b4 }, - {5000000, 16, 8*MB+0 }, - {5000000, 16, 8*MB+4 }, - {5000000, 128, 8*MB+0x44 }, - {5000000, 128, 8*MB+0x84 }, - {5000000, 128, 8*MB+0x200 }, - {5000000, 128, 8*MB+0x204 }, - {5000000, 128, 8*MB+0x2b4 }, - {0}}; - -/* - * Array of virtual addresses available for test purposes. - */ - -typedef struct { - long vstart; - long vend; - long nextaddr; - int wrapcount; -} memmap_t; - -memmap_t memmap[MAXCHUNKS]; -int memmapx=0; - -typedef struct { - void *addr; - long data[16]; - long data_fc[16]; -} capture_line_t; - -typedef struct { - int size; - void *blockaddr; - void *shadaddr; - long blockdata[16]; - long shaddata[16]; - long blockdata_fc[16]; - long shaddata_fc[16]; - long synerr; -} capture_t; - -/* - * PORTING NOTE: revisit this statement. On hardware we put mbase at 0 and - * the rest of the tables have to start at 1MB to skip PROM tables. 
- */ -#define THREADPRIVATE(t) ((threadprivate_t*)(((long)mbase)+1024*1024+t*((sizeof(threadprivate_t)+511)/512*512))) - -#define k_capture mbase->sk_capture -#define k_go mbase->sk_go -#define k_linecount mbase->sk_linecount -#define k_passes mbase->sk_passes -#define k_napticks mbase->sk_napticks -#define k_stop_on_error mbase->sk_stop_on_error -#define k_verbose mbase->sk_verbose -#define k_threadprivate mbase->sk_threadprivate -#define k_blocks mbase->sk_blocks -#define k_iter_msg mbase->sk_iter_msg -#define k_vv mbase->sk_vv -#define k_linepad mbase->sk_linepad -#define k_options mbase->sk_options -#define k_testnumber mbase->sk_testnumber -#define k_currentpass mbase->sk_currentpass - -static long blocks[MAX_LINECOUNT]; /* addresses of data blocks */ -static control_t *mbase; -static vint initialized=0; - -static unsigned int ran_conf_llsc(int); -static int rerr(capture_t *, char *, void *, void *, int, int, int, int, int, int); -static void dumpline(void *, char *, char *, void *, void *, int); -static int checkstop(int, int, uint); -static void spin(int); -static void capturedata(capture_t *, uint, void *, void *, int); -static int randn(uint max, uint *seed); -static uint zrandom (uint *zranseed); -static int set_lock(uint *, uint); -static int clr_lock(uint *, uint); -static void Speedo(void); - -int autotest_enabled=0; -static int llsctest_number=-1; -static int errstop_enabled=0; -static int fail_enabled=0; -static int selective_trigger=0; -static int dump_block_addrs_opt=0; -static uint errlock=0; - -static int __init autotest_enable(char *str) -{ - autotest_enabled = 1; - return 1; -} -static int __init set_llscblkadr(char *str) -{ - dump_block_addrs_opt = 1; - return 1; -} -static int __init set_llscselt(char *str) -{ - selective_trigger = 1; - return 1; -} -static int __init set_llsctest(char *str) -{ - llsctest_number = simple_strtol(str, &str, 10); - if (llsctest_number < 0 || llsctest_number > 15) - llsctest_number = -1; - return 1; -} -static int __init set_llscerrstop(char *str) -{ - errstop_enabled = 1; - return 1; -} -static int __init set_llscfail(char *str) -{ - fail_enabled = 8; - return 1; -} - -static void print_params(void) -{ - printk ("********* Enter AUTOTEST facility on master cpu *************\n"); - printk (" Test options:\n"); - printk (" llsctest=\t%d\tTest number to run (all = -1)\n", llsctest_number); - printk (" llscerrstop \t%s\tStop on error\n", errstop_enabled ? "on" : "off"); - printk (" llscfail \t%s\tForce a failure to test the trigger & error messages\n", fail_enabled ? "on" : "off"); - printk (" llscselt \t%s\tSelective triger on failures\n", selective_trigger ? "on" : "off"); - printk (" llscblkadr \t%s\tDump data block addresses\n", dump_block_addrs_opt ? 
"on" : "off"); - printk ("\n"); -} -__setup("autotest", autotest_enable); -__setup("llsctest=", set_llsctest); -__setup("llscerrstop", set_llscerrstop); -__setup("llscfail", set_llscfail); -__setup("llscselt", set_llscselt); -__setup("llscblkadr", set_llscblkadr); - - -extern inline int -set_lock(uint *lock, uint id) -{ - uint old; - old = cmpxchg_acq(lock, 0, id); - return (old == 0); -} - -extern inline int -clr_lock(uint *lock, uint id) -{ - uint old; - old = cmpxchg_rel(lock, id, 0); - return (old == id); -} - -extern inline void -zero_lock(uint *lock) -{ - *lock = 0; -} - -/*------------------------------------------------------------------------+ -| Routine : ran_conf_llsc - ll/sc shared data test | -| Description: This test checks the coherency of shared data | -+------------------------------------------------------------------------*/ -static unsigned int -ran_conf_llsc(int thread) -{ - private_t pval; - share_t sval, sval2; - uint vv, linei, slinei, sharei, pass; - long t; - lock_t lockpat; - share_t *sharecopy; - long verbose, napticks, passes, linecount, lcount; - dataline_t *linep, *slinep; - int s, seed; - threadprivate_t *tp; - uint iter_msg, iter_msg_i=0; - int vv_mask; - int correct_errors; - int errs=0; - int stillbad; - capture_t capdata; - private_t *privp; - share_t *sharep; - - - linecount = k_linecount; - napticks = k_napticks; - verbose = k_verbose; - passes = k_passes; - iter_msg = k_iter_msg; - seed = (thread + 1) * 647; - tp = THREADPRIVATE(thread); - vv_mask = (k_vv>>((thread%16)*4)) & 0xf; - correct_errors = k_options&0xff; - - memset (&tp->private, 0, sizeof(tp->private)); - memset (&capdata, 0, sizeof(capdata)); - - for (pass = 1; passes == 0 || pass < passes; pass++) { - lockpat = (pass & 0x0fffffff) + (thread <<28); - tp->threadpasses = pass; - if (checkstop(thread, pass, lockpat)) - return 0; - iter_msg_i++; - if (iter_msg && iter_msg_i > iter_msg) { - printk("Thread %d, Pass %d\n", thread, pass); - iter_msg_i = 0; - } - lcount = 0; - - /* - * Select line to perform operations on. - */ - linei = randn(linecount, &seed); - sharei = randn(2, &seed); - slinei = (linei + (linecount/2))%linecount; /* I dont like this - fix later */ - - linep = (dataline_t *)blocks[linei]; - slinep = (dataline_t *)blocks[slinei]; - if (sharei == 0) - sharecopy = &slinep->share0; - else - sharecopy = &slinep->share1; - - - vv = randn(4, &seed); - if ((vv_mask & (1<private[thread]; - sharep = &linep->share[sharei]; - - switch(vv) { - case 0: - /* Read and verify private count on line. */ - pval = *privp; - if (verbose) - printk("Line:%3d, Thread:%d:%d. Val: %x\n", linei, thread, vv, tp->private[linei]); - if (pval != tp->private[linei]) { - capturedata(&capdata, pass, privp, NULL, sizeof(*privp)); - stillbad = (*privp != tp->private[linei]); - if (rerr(&capdata, "Private count", linep, slinep, thread, pass, linei, tp->private[linei], pval, stillbad)) { - return 1; - } - if (correct_errors) { - tp->private[linei] = *privp; - } - errs++; - } - break; - - case 1: - /* Read, verify, and increment private count on line. */ - pval = *privp; - if (verbose) - printk("Line:%3d, Thread:%d:%d. 
Val: %x\n", linei, thread, vv, tp->private[linei]); - if (pval != tp->private[linei]) { - capturedata(&capdata, pass, privp, NULL, sizeof(*privp)); - stillbad = (*privp != tp->private[linei]); - if (rerr(&capdata, "Private count & inc", linep, slinep, thread, pass, linei, tp->private[linei], pval, stillbad)) { - return 1; - } - errs++; - } - pval++; - *privp = pval; - tp->private[linei] = pval; - break; - - case 2: - /* Lock line, read and verify shared data. */ - if (verbose) - printk("Line:%3d, Thread:%d:%d. Val: %x\n", linei, thread, vv, *sharecopy); - lcount = 0; - while (LOCK(sharei) != 1) { - if (checkstop(thread, pass, lockpat)) - return 0; - if (lcount++>1000000) { - capturedata(&capdata, pass, LOCKADDR(sharei), NULL, sizeof(lock_t)); - stillbad = (GETLOCK(sharei) != 0); - rerr(&capdata, "Shared data lock", linep, slinep, thread, pass, linei, 0, GETLOCK(sharei), stillbad); - return 1; - } - if ((lcount&0x3fff) == 0) - udelay(1000); - } - - sval = *sharep; - sval2 = *sharecopy; - if (pass > 12 && thread == 0 && fail_enabled == 1) - sval++; - if (sval != sval2) { - capturedata(&capdata, pass, sharep, sharecopy, sizeof(*sharecopy)); - stillbad = (*sharep != *sharecopy); - if (!stillbad && *sharep != sval && *sharecopy == sval2) - stillbad = 2; - if (rerr(&capdata, "Shared data", linep, slinep, thread, pass, linei, sval2, sval, stillbad)) { - return 1; - } - if (correct_errors) - *sharep = *sharecopy; - errs++; - } - - - if ( (s=UNLOCK(sharei)) != 1) { - capturedata(&capdata, pass, LOCKADDR(sharei), NULL, 4); - stillbad = (GETLOCK(sharei) != lockpat); - if (rerr(&capdata, "Shared data unlock", linep, slinep, thread, pass, linei, lockpat, GETLOCK(sharei), stillbad)) - return 1; - if (correct_errors) - ZEROLOCK(sharei); - errs++; - } - break; - - case 3: - /* Lock line, read and verify shared data, modify shared data. */ - if (verbose) - printk("Line:%3d, Thread:%d:%d. 
Val: %x\n", linei, thread, vv, *sharecopy); - lcount = 0; - while (LOCK(sharei) != 1) { - if (checkstop(thread, pass, lockpat)) - return 0; - if (lcount++>1000000) { - capturedata(&capdata, pass, LOCKADDR(sharei), NULL, sizeof(lock_t)); - stillbad = (GETLOCK(sharei) != 0); - rerr(&capdata, "Shared data lock & inc", linep, slinep, thread, pass, linei, 0, GETLOCK(sharei), stillbad); - return 1; - } - if ((lcount&0x3fff) == 0) - udelay(1000); - } - sval = *sharep; - sval2 = *sharecopy; - if (sval != sval2) { - capturedata(&capdata, pass, sharep, sharecopy, sizeof(*sharecopy)); - stillbad = (*sharep != *sharecopy); - if (!stillbad && *sharep != sval && *sharecopy == sval2) - stillbad = 2; - if (rerr(&capdata, "Shared data & inc", linep, slinep, thread, pass, linei, sval2, sval, stillbad)) { - return 1; - } - errs++; - } - - *sharep = lockpat; - *sharecopy = lockpat; - - - if ( (s=UNLOCK(sharei)) != 1) { - capturedata(&capdata, pass, LOCKADDR(sharei), NULL, 4); - stillbad = (GETLOCK(sharei) != lockpat); - if (rerr(&capdata, "Shared data & inc unlock", linep, slinep, thread, pass, linei, thread, GETLOCK(sharei), stillbad)) - return 1; - if (correct_errors) - ZEROLOCK(sharei); - errs++; - } - break; - } - } - - return (errs > 0); -} - -static void -trigger_la(long val) -{ - long *p; - - p = (long*)0xc0000a0001000020L; /* PI_CPU_NUM */ - *p = val; -} - -static long -getsynerr(void) -{ - long err, *errp; - - errp = (long*)0xc0000e0000000340L; /* SYN_ERR */ - err = *errp; - if (err) - *errp = -1L; - return (err & ~0x60); -} - -static int -rerr(capture_t *cap, char *msg, void *lp, void *slp, int thread, int pass, int linei, int exp, int found, int stillbad) -{ - int cpu, i; - long synerr; - int selt; - - - selt = selective_trigger && stillbad > 1 && - memcmp(cap->blockdata, cap->blockdata_fc, 128) != 0 && - memcmp(cap->shaddata, cap->shaddata_fc, 128) == 0; - if (selt) { - trigger_la(pass); - } else if (selective_trigger) { - k_go = ST_STOP; - return k_stop_on_error;; - } - - spin(1); - i = 100; - while (i && set_lock(&errlock, 1) != 1) { - spin(1); - i--; - } - printk ("\nDataError!: %-20s, test %ld, thread %d, line:%d, pass %d (0x%x), time %ld expected:%x, found:%x\n", - msg, k_testnumber, thread, linei, pass, pass, jiffies, exp, found); - - dumpline (lp, "Corrupted data", "D ", cap->blockaddr, cap->blockdata, cap->size); - if (memcmp(cap->blockdata, cap->blockdata_fc, 128)) - dumpline (lp, "Corrupted data", "DF", cap->blockaddr, cap->blockdata_fc, cap->size); - - if (cap->shadaddr) { - dumpline (slp, "Shadow data", "S ", cap->shadaddr, cap->shaddata, cap->size); - if (memcmp(cap->shaddata, cap->shaddata_fc, 128)) - dumpline (slp, "Shadow data", "SF", cap->shadaddr, cap->shaddata_fc, cap->size); - } - - printk("Threadpasses: "); - for (cpu=0; cputhreadpasses) - printk(" %d:0x%x", cpu, k_threadprivate[cpu]->threadpasses); - - - printk("\nData was %sfixed by flushcache\n", (stillbad == 1 ? 
"**** NOT **** " : " ")); - synerr = getsynerr(); - if (synerr) - printk("SYNERR: Thread %d, Synerr: 0x%lx\n", thread, synerr); - spin(2); - printk("\n\n"); - clr_lock(&errlock, 1); - - if (errstop_enabled) { - local_irq_disable(); - while(1); - } - return k_stop_on_error; -} - - -static void -dumpline(void *lp, char *str1, char *str2, void *addr, void *data, int size) -{ - long *p; - int i, off; - - printk("%s at 0x%lx, size %d, block starts at 0x%lx\n", str1, (long)addr, size, (long)lp); - p = (long*) data; - for (i=0; i<16; i++, p++) { - if (i==0) printk("%2s", str2); - if (i==8) printk(" "); - printk(" %016lx", *p); - if ((i&7)==7) printk("\n"); - } - printk(" "); - off = (((long)addr) ^ size) & 63L; - for (i=0; i=off) ? "--" : " "); - if ((i%8) == 7) - printk(" "); - } - - off = ((long)addr) & 127; - printk(" (line %d)\n", off/64+1); -} - - -static int -randn(uint max, uint *seedp) -{ - if (max == 1) - return(0); - else - return((int)(zrandom(seedp)>>10) % max); -} - - -static int -checkstop(int thread, int pass, uint lockpat) -{ - long synerr; - - if (k_go == ST_RUN) - return 0; - if (k_go == ST_STOP) - return 1; - - if (errstop_enabled) { - local_irq_disable(); - while(1); - } - synerr = getsynerr(); - spin(2); - if (k_go == ST_STOP) - return 1; - if (synerr) - printk("SYNERR: Thread %d, Synerr: 0x%lx\n", thread, synerr); - return 1; -} - - -static void -spin(int j) -{ - udelay(j * 500000); -} - -static void -capturedata(capture_t *cap, uint pass, void *blockaddr, void *shadaddr, int size) -{ - - if (!selective_trigger) - trigger_la (pass); - - memcpy (cap->blockdata, CACHEALIGN(blockaddr), 128); - if (shadaddr) - memcpy (cap->shaddata, CACHEALIGN(shadaddr), 128); - - if (k_stop_on_error) { - k_go = ST_ERRSTOP; - } - - cap->size = size; - cap->blockaddr = blockaddr; - cap->shadaddr = shadaddr; - - asm volatile ("fc %0" :: "r"(blockaddr) : "memory"); - ia64_sync_i(); - ia64_srlz_d(); - memcpy (cap->blockdata_fc, CACHEALIGN(blockaddr), 128); - - if (shadaddr) { - asm volatile ("fc %0" :: "r"(shadaddr) : "memory"); - ia64_sync_i(); - ia64_srlz_d(); - memcpy (cap->shaddata_fc, CACHEALIGN(shadaddr), 128); - } -} - -int zranmult = 0x48c27395; - -static uint -zrandom (uint *seedp) -{ - *seedp = (*seedp * zranmult) & 0x7fffffff; - return (*seedp); -} - - -void -set_autotest_params(void) -{ - static int testnumber=-1; - - if (llsctest_number >= 0) { - testnumber = llsctest_number; - } else { - testnumber++; - if (autotest_table[testnumber].passes == 0) { - testnumber = 0; - dump_block_addrs_opt = 0; - } - } - k_passes = autotest_table[testnumber].passes; - k_linepad = autotest_table[testnumber].linepad; - k_linecount = autotest_table[testnumber].linecount; - k_testnumber = testnumber; - - if (IS_RUNNING_ON_SIMULATOR()) { - printk ("llsc start test %ld\n", k_testnumber); - k_passes = 1000; - } -} - - -static void -set_leds(int errs) -{ - unsigned char leds=0; - - /* - * Leds are: - * ppppeee- - * where - * pppp = test number - * eee = error count but top bit is stick - */ - - leds = ((errs&7)<<1) | ((k_testnumber&15)<<4) | (errs ? 
0x08 : 0); - bringup_set_led_bits(leds, 0xfe); -} - -static void -setup_block_addresses(void) -{ - int i, stride, memmapi; - - stride = LINESTRIDE; - memmapi = 0; - for (i=0; i= memmap[memmapi].vend) { - memmap[memmapi].wrapcount++; - memmap[memmapi].nextaddr = memmap[memmapi].vstart + - memmap[memmapi].wrapcount * sizeof(dataline_t); - } - - memset((void*)blocks[i], 0, sizeof(dataline_t)); - - if (stride > 16384) { - memmapi++; - if (memmapi == memmapx) - memmapi = 0; - } - } - -} - -static void -dump_block_addrs(void) -{ - int i; - - printk("LLSC TestNumber %ld\n", k_testnumber); - - for (i=0; ithreadstate == TS_KILLED) { - bringup_set_led_bits(0xfe, 0xfe); - while(1); - } - k_threadprivate[cpuid]->threadstate = state; -} - -static int -build_mem_map(unsigned long start, unsigned long end, void *arg) -{ - long lstart; - long align = 8*MB; - /* - * HACK - skip the kernel on the first node - */ - - printk ("LLSC memmap: start 0x%lx, end 0x%lx, (0x%lx - 0x%lx)\n", - start, end, (long) virt_to_page(start), (long) virt_to_page(end-PAGE_SIZE)); - - while (end > start && (PageReserved(virt_to_page(end-PAGE_SIZE)) || virt_to_page(end-PAGE_SIZE)->count.counter > 0)) - end -= PAGE_SIZE; - - lstart = end; - while (lstart > start && (!PageReserved(virt_to_page(lstart-PAGE_SIZE)) && virt_to_page(lstart-PAGE_SIZE)->count.counter == 0)) - lstart -= PAGE_SIZE; - - lstart = (lstart + align -1) /align * align; - end = end / align * align; - if (lstart >= end) - return 0; - printk (" memmap: start 0x%lx, end 0x%lx\n", lstart, end); - - memmap[memmapx].vstart = lstart; - memmap[memmapx].vend = end; - memmapx++; - return 0; -} - -void int_test(void); - -int -llsc_main (int cpuid, long mbasex) -{ - int i, cpu, is_master, repeatcnt=0; - unsigned int preverr=0, errs=0, pass=0; - int automode=0; - -#ifdef INTTEST - if (inttest) - int_test(); -#endif - - if (!autotest_enabled) - return 0; - -#ifdef CONFIG_SMP - is_master = !smp_processor_id(); -#else - is_master = 1; -#endif - - - if (is_master) { - print_params(); - if(!IS_RUNNING_ON_SIMULATOR()) - spin(10); - mbase = (control_t*)mbasex; - k_currentpass = 0; - k_go = ST_IDLE; - k_passes = DEF_PASSES; - k_napticks = DEF_NAPTICKS; - k_stop_on_error = DEF_STOP_ON_ERROR; - k_verbose = DEF_VERBOSE; - k_linecount = DEF_LINECOUNT; - k_iter_msg = DEF_ITER_MSG; - k_vv = DEF_VV; - k_linepad = DEF_LINEPAD; - k_blocks = (void*)blocks; - efi_memmap_walk(build_mem_map, 0); - -#ifdef CONFIG_IA64_SGI_AUTOTEST - automode = 1; -#endif - - for (i=0; i 5) { - set_autotest_params(); - repeatcnt = 0; - } - } else { - while (k_go == ST_IDLE); - } - - k_go = ST_INIT; - if (k_linecount > MAX_LINECOUNT) k_linecount = MAX_LINECOUNT; - k_linecount = k_linecount & ~1; - setup_block_addresses(); - if (dump_block_addrs_opt) - dump_block_addrs(); - - k_currentpass = pass++; - k_go = ST_RUN; - if (fail_enabled) - fail_enabled--; - - } else { - while (k_go != ST_RUN || k_currentpass != pass); - pass++; - } - - - set_leds(errs); - set_thread_state(cpuid, TS_RUNNING); - - errs += ran_conf_llsc(cpuid); - preverr = (k_go == ST_ERRSTOP); - - set_leds(errs); - set_thread_state(cpuid, TS_STOPPED); - - if (is_master) { - Speedo(); - for (i=0, cpu=0; cputhreadstate == TS_RUNNING) { - i++; - if (i == 10000) { - k_go = ST_STOP; - printk (" llsc master stopping test number %ld\n", k_testnumber); - } - if (i > 100000) { - k_threadprivate[cpu]->threadstate = TS_KILLED; - printk (" llsc: master killing cpuid %d, running test number %ld\n", - cpu, k_testnumber); - } - udelay(1000); - } - } - } - - goto loop; -} 
- - -static void -Speedo(void) -{ - static int i = 0; - - switch (++i%4) { - case 0: - printk("|\b"); - break; - case 1: - printk("\\\b"); - break; - case 2: - printk("-\b"); - break; - case 3: - printk("/\b"); - break; - } -} - -#ifdef INTTEST - -/* ======================================================================================================== - * - * Some test code to verify that interrupts work - * - * Add the following to the arch/ia64/kernel/smp.c after the comment "Reschedule callback" - * if (zzzprint_resched) printk(" cpu %d got interrupt\n", smp_processor_id()); - * - * Enable the code in arch/ia64/sn/sn1/smp.c to print sending IPIs. - * - */ - -static int __init set_inttest(char *str) -{ - inttest = 1; - autotest_enabled = 1; - - return 1; -} - -__setup("inttest=", set_inttest); - -int zzzprint_resched=0; - -void -int_test() { - int mycpu, cpu; - static volatile int control_cpu=0; - - mycpu = smp_processor_id(); - zzzprint_resched = 2; - - printk("Testing cross interrupts\n"); - - while (control_cpu != smp_num_cpus) { - if (mycpu == cpu_logical_map(control_cpu)) { - for (cpu=0; cpulock[(i)] -#define LOCK(i) set_lock(LOCKADDR(i), lockpat) -#define UNLOCK(i) clr_lock(LOCKADDR(i), lockpat) -#define GETLOCK(i) *LOCKADDR(i) -#define ZEROLOCK(i) zero_lock(LOCKADDR(i)) - -#define CACHEALIGN(a) ((void*)((long)(a) & ~127L)) - -typedef uint lock_t; -typedef uint share_t; -typedef uint private_t; - -typedef struct { - lock_t lock[2]; - share_t share[2]; - private_t private[MAXCPUS]; - share_t share0; - share_t share1; -} dataline_t ; - - -#define LINEPAD k_linepad -#define LINESTRIDE (((sizeof(dataline_t)+CACHELINE-1)/CACHELINE)*CACHELINE + LINEPAD) - - -typedef struct { - vint threadstate; - uint threadpasses; - private_t private[MAX_LINECOUNT]; -} threadprivate_t; - -typedef struct { - vlong sk_go; /* 0=idle, 1=init, 2=run */ - long sk_linecount; - long sk_passes; - long sk_napticks; - long sk_stop_on_error; - long sk_verbose; - long sk_iter_msg; - long sk_vv; - long sk_linepad; - long sk_options; - long sk_testnumber; - vlong sk_currentpass; - void *sk_blocks; - threadprivate_t *sk_threadprivate[MAXCPUS]; -} control_t; - -/* Run state (k_go) constants */ -#define ST_IDLE 0 -#define ST_INIT 1 -#define ST_RUN 2 -#define ST_STOP 3 -#define ST_ERRSTOP 4 - - -/* Threadstate constants */ -#define TS_STOPPED 0 -#define TS_RUNNING 1 -#define TS_KILLED 2 - - - -int llsc_main (int cpuid, long mbasex); - diff -Nru a/arch/ia64/sn/sn1/machvec.c b/arch/ia64/sn/sn1/machvec.c --- a/arch/ia64/sn/sn1/machvec.c Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,18 +0,0 @@ -#define MACHVEC_PLATFORM_NAME sn1 -#include -#include -#include -void* -sn1_mk_io_addr_MACRO - -dma_addr_t -sn1_pci_map_single_MACRO - -int -sn1_pci_map_sg_MACRO - -unsigned long -sn1_virt_to_phys_MACRO - -void * -sn1_phys_to_virt_MACRO diff -Nru a/arch/ia64/sn/sn1/mm.c b/arch/ia64/sn/sn1/mm.c --- a/arch/ia64/sn/sn1/mm.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,394 +0,0 @@ -/* - * Copyright, 2000-2001, Silicon Graphics. - * Copyright Srinivasa Thirumalachar (sprasad@engr.sgi.com) - * Copyright 2000-2001 Kanoj Sarcar (kanoj@sgi.com) - */ - -#include -#include -#include -#include -#include -#include - -#define MIN(a,b) ((a) < (b) ? (a) : (b)) -#define MAX(a,b) ((a) > (b) ? 
(a) : (b)) - -#define DONE_NOTHING 0 -#define DONE_FINDING 1 -#define DONE_BUILDING 2 - -struct nodemem_s { - u64 start; /* start of kernel usable memory */ - u64 end; /* end of kernel usable memory */ - u64 mtot; /* total kernel usable memory */ - u64 done; /* state of bootmem initialization */ - u64 bstart; /* where should the bootmem area be */ - u64 bsize; /* bootmap size */ - u64 hole[SN1_MAX_BANK_PER_NODE]; -} nodemem[MAXNODES]; - -static int nodemem_valid = 0; - -static int __init -free_unused_memmap_hole(int nid, unsigned long start, unsigned long end) -{ - struct page * page, *pageend; - unsigned long count = 0; - - if (start >= end) - return 0; - - /* - * Get the memmap ptrs to the start and end of the holes. - * virt_to_page(start) will panic, if start is in hole. - * Can we do virt_to_page(end), if end is on the next node? - */ - - page = virt_to_page(start - 1); - page++; - pageend = virt_to_page(end); - - printk("hpage=0x%lx, hpageend=0x%lx\n", (u64)page, (u64)pageend) ; - free_bootmem_node(NODE_DATA(nid), __pa(page), (u64)pageend - (u64)page); - - return count; -} - -static void __init -free_unused_memmap_node(int nid) -{ - u64 i = 0; - u64 holestart = -1; - u64 start = nodemem[nid].start; - - start = ((start >> SN1_NODE_ADDR_SHIFT) << SN1_NODE_ADDR_SHIFT); - do { - holestart = nodemem[nid].hole[i]; - i++; - while ((i < SN1_MAX_BANK_PER_NODE) && - (nodemem[nid].hole[i] == (u64)-1)) - i++; - if (i < SN1_MAX_BANK_PER_NODE) - free_unused_memmap_hole(nid, holestart, - start + (i<> SN1_NODE_ADDR_SHIFT) << SN1_NODE_ADDR_SHIFT); - - nodesize = nodemem[nid].end - start ; - numpfn = nodesize >> PAGE_SHIFT; - - bank0size = nodemem[nid].hole[0] - start ; - /* If nid == master node && no kernel text replication */ - bank0size -= 0xA00000 ; /* Kernel text + stuff */ - bank0size -= ((numpfn + 7) >> 3); - - if ((numpfn * sizeof(mem_map_t)) > bank0size) { - printk("nid = %d, ns=0x%lx, npfn=0x%lx, bank0size=0x%lx\n", - nid, nodesize, numpfn, bank0size) ; - return 0 ; - } - - return 1 ; -} - -static void __init -check_pgtbl_size(int nid) -{ - int bank = SN1_MAX_BANK_PER_NODE - 1 ; - - /* Find highest bank with valid memory */ - while ((nodemem[nid].hole[bank] == -1) && (bank)) - bank-- ; - - while (!pgtbl_size_ok(nid)) { - /* Remove that bank of memory */ - /* Collect some numbers later */ - printk("Ignoring node %d bank %d\n", nid, bank) ; - nodemem[nid].hole[bank--] = -1 ; - /* Get to the next populated bank */ - while ((nodemem[nid].hole[bank] == -1) && (bank)) - bank-- ; - printk("Using only upto bank %d on node %d\n", bank,nid) ; - nodemem[nid].end = nodemem[nid].hole[bank] ; - if (!bank) break ; - } -} - -void dump_nodemem_map(int) ; - -#ifdef CONFIG_DISCONTIGMEM - -extern bootmem_data_t bdata[]; - -/* - * This assumes there will be a hole in kernel-usable memory between nodes - * (due to prom). The memory descriptors invoked via efi_memmap_walk are - * in increasing order. It tries to identify first suitable free area to - * put the bootmem for the node in. When presented with the md holding - * the kernel, it only searches at the end of the kernel area. - */ -static int __init -find_node_bootmem(unsigned long start, unsigned long end, void *arg) -{ - int nasid = GetNasId(__pa(start)); - int cnodeid = NASID_TO_CNODEID(nasid); - unsigned long nodesize; - extern char _end; - unsigned long kaddr = (unsigned long)&_end; - - /* - * Track memory available to kernel. 
- */ - nodemem[cnodeid].mtot += ((end - start) >> PAGE_SHIFT); - if (nodemem[cnodeid].done != DONE_NOTHING) - return(0); - nodesize = nodemem[cnodeid].end - ((nodemem[cnodeid].start >> - SN1_NODE_ADDR_SHIFT) << SN1_NODE_ADDR_SHIFT); - nodesize >>= PAGE_SHIFT; - - /* - * Adjust limits for the md holding the kernel. - */ - if ((start < kaddr) && (end > kaddr)) - start = PAGE_ALIGN(kaddr); - - /* - * We need space for mem_map, bootmem map plus a few more pages - * to satisfy alloc_bootmems out of node 0. - */ - if ((end - start) > ((nodesize * sizeof(struct page)) + (nodesize/8) - + (10 * PAGE_SIZE))) { - nodemem[cnodeid].bstart = start; - nodemem[cnodeid].done = DONE_FINDING; - } - return(0); -} - -/* - * This assumes there will be a hole in kernel-usable memory between nodes - * (due to prom). The memory descriptors invoked via efi_memmap_walk are - * in increasing order. - */ -static int __init -build_node_bootmem(unsigned long start, unsigned long end, void *arg) -{ - int nasid = GetNasId(__pa(start)); - int curnodeid = NASID_TO_CNODEID(nasid); - int i; - unsigned long pstart, pend; - extern char _end, _stext; - unsigned long kaddr = (unsigned long)&_end; - - if (nodemem[curnodeid].done == DONE_FINDING) { - /* - * This is where we come to know the node is present. - * Do node wide tasks. - */ - nodemem[curnodeid].done = DONE_BUILDING; - NODE_DATA(curnodeid)->bdata = &(bdata[curnodeid]); - - /* - * Update the chunktonid array as a node wide task. There - * are too many smalls mds on first node to do this per md. - */ - pstart = __pa(nodemem[curnodeid].start); - pend = __pa(nodemem[curnodeid].end); - pstart &= CHUNKMASK; - pend = (pend + CHUNKSZ - 1) & CHUNKMASK; - /* Possible check point to enforce minimum node size */ - if (nodemem[curnodeid].bstart == -1) { - printk("No valid bootmem area on node %d\n", curnodeid); - while(1); - } - for (i = PCHUNKNUM(pstart); i <= PCHUNKNUM(pend - 1); i++) - chunktonid[i] = curnodeid; - if ((CHUNKTONID(PCHUNKNUM(pend)) > MAXCHUNKS) || - (PCHUNKNUM(pstart) >= PCHUNKNUM(pend))) { - printk("Ign 0x%lx-0x%lx, ", __pa(start), __pa(end)); - return(0); - } - - /* - * NODE_START and NODE_SIZE determine the physical range - * on the node that mem_map array needs to be set up for. - */ - NODE_START(curnodeid) = ((nodemem[curnodeid].start >> - SN1_NODE_ADDR_SHIFT) << SN1_NODE_ADDR_SHIFT); - NODE_SIZE(curnodeid) = (nodemem[curnodeid].end - - NODE_START(curnodeid)); - - nodemem[curnodeid].bsize = - init_bootmem_node(NODE_DATA(curnodeid), - (__pa(nodemem[curnodeid].bstart) >> PAGE_SHIFT), - (__pa((nodemem[curnodeid].start >> SN1_NODE_ADDR_SHIFT) - << SN1_NODE_ADDR_SHIFT) >> PAGE_SHIFT), - (__pa(nodemem[curnodeid].end) >> PAGE_SHIFT)); - - } else if (nodemem[curnodeid].done == DONE_NOTHING) { - printk("build_node_bootmem: node %d weirdness\n", curnodeid); - while(1); /* Paranoia */ - } - - /* - * Free the entire md. - */ - free_bootmem_node(NODE_DATA(curnodeid), __pa(start), (end - start)); - - /* - * Reclaim back the bootmap and kernel areas. 
- */ - if ((start <= nodemem[curnodeid].bstart) && (end > - nodemem[curnodeid].bstart)) - reserve_bootmem_node(NODE_DATA(curnodeid), - __pa(nodemem[curnodeid].bstart), nodemem[curnodeid].bsize); - if ((start <= kaddr) && (end > kaddr)) - reserve_bootmem_node(NODE_DATA(curnodeid), - __pa(&_stext), (&_end - &_stext)); - - return(0); -} - -void __init -setup_sn1_bootmem(int maxnodes) -{ - int i; - - for (i = 0; i < MAXNODES; i++) { - nodemem[i].start = nodemem[i].bstart = -1; - nodemem[i].end = nodemem[i].bsize = nodemem[i].mtot = 0; - nodemem[i].done = DONE_NOTHING; - memset(&nodemem[i].hole, -1, sizeof(nodemem[i].hole)); - } - efi_memmap_walk(build_nodemem_map, 0); - - nodemem_valid = 1; - - /* - * After building the nodemem map, check if the node memmap - * will fit in the first bank of each node. If not change - * the node end addr till it fits. - */ - - for (i = 0; i < maxnodes; i++) - check_pgtbl_size(i); - - dump_nodemem_map(maxnodes); - - efi_memmap_walk(find_node_bootmem, 0); - efi_memmap_walk(build_node_bootmem, 0); -} -#endif - -void __init -discontig_paging_init(void) -{ - int i; - unsigned long max_dma, zones_size[MAX_NR_ZONES], holes_size[MAX_NR_ZONES]; - extern void dump_node_data(void); - - max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT; - for (i = 0; i < numnodes; i++) { - unsigned long startpfn = __pa((void *)NODE_START(i)) >> PAGE_SHIFT; - unsigned long numpfn = NODE_SIZE(i) >> PAGE_SHIFT; - memset(zones_size, 0, sizeof(zones_size)); - memset(holes_size, 0, sizeof(holes_size)); - holes_size[ZONE_DMA] = numpfn - nodemem[i].mtot; - - if ((startpfn + numpfn) < max_dma) { - zones_size[ZONE_DMA] = numpfn; - } else if (startpfn > max_dma) { - zones_size[ZONE_NORMAL] = numpfn; - panic("discontig_paging_init: %d\n", i); - } else { - zones_size[ZONE_DMA] = (max_dma - startpfn); - zones_size[ZONE_NORMAL] = numpfn - zones_size[ZONE_DMA]; - panic("discontig_paging_init: %d\n", i); - } - free_area_init_node(i, NODE_DATA(i), NULL, zones_size, startpfn< ") ; - for (j=0;j - -/* - * ia64_sn_probe_io_slot - * This function will probe a physical address to determine if - * the address can be read. If reading the address causes a BUS - * error, an error is returned. If the probe succeeds, the contents - * of the memory location is returned. - * - * Calling sequence: - * ia64_probe_io_slot(paddr, size, data_ptr) - * - * Input: - * paddr Physical address to probe - * size Number bytes to read (1,2,4,8) - * data_ptr Address to store value read by probe - * (-1 returned if probe fails) - * - * Output: - * Status - * 0 - probe successful - * 1 - probe failed (generated MCA) - * 2 - Bad arg - * <0 - PAL error - */ - - -u64 -ia64_sn_probe_io_slot(long paddr, long size, void *data_ptr) -{ - struct ia64_sal_retval isrv; - - SAL_CALL(isrv, SN_SAL_PROBE, paddr, size, 0, 0, 0, 0, 0); - - if (data_ptr) { - switch (size) { - case 1: - *((u8*)data_ptr) = (u8)isrv.v0; - break; - case 2: - *((u16*)data_ptr) = (u16)isrv.v0; - break; - case 4: - *((u32*)data_ptr) = (u32)isrv.v0; - break; - case 8: - *((u64*)data_ptr) = (u64)isrv.v0; - break; - default: - isrv.status = 2; - } - } - - return isrv.status; -} diff -Nru a/arch/ia64/sn/sn1/setup.c b/arch/ia64/sn/sn1/setup.c --- a/arch/ia64/sn/sn1/setup.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,254 +0,0 @@ -/* - * - * Copyright (C) 1999 Silicon Graphics, Inc. 
- * Copyright (C) Vijay Chander(vijay@engr.sgi.com) - */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include - - -/* - * This is the address of the RRegs in the HSpace of the global - * master. It is used by a hack in serial.c (serial_[in|out], - * printk.c (early_printk), and kdb_io.c to put console output on that - * node's Bedrock UART. It is initialized here to 0, so that - * early_printk won't try to access the UART before - * master_node_bedrock_address is properly calculated. - */ -u64 master_node_bedrock_address = 0UL; - -static void sn_fix_ivt_for_partitioned_system(void); - - -/* - * The format of "screen_info" is strange, and due to early i386-setup - * code. This is just enough to make the console code think we're on a - * VGA color display. - */ -struct screen_info sn1_screen_info = { - orig_x: 0, - orig_y: 0, - orig_video_mode: 3, - orig_video_cols: 80, - orig_video_ega_bx: 3, - orig_video_lines: 25, - orig_video_isVGA: 1, - orig_video_points: 16 -}; - -/* - * This is here so we can use the CMOS detection in ide-probe.c to - * determine what drives are present. In theory, we don't need this - * as the auto-detection could be done via ide-probe.c:do_probe() but - * in practice that would be much slower, which is painful when - * running in the simulator. Note that passing zeroes in DRIVE_INFO - * is sufficient (the IDE driver will autodetect the drive geometry). - */ -char drive_info[4*16]; - -unsigned long -sn1_map_nr (unsigned long addr) -{ -#ifdef CONFIG_DISCONTIGMEM - return MAP_NR_SN1(addr); -#else - return MAP_NR_DENSE(addr); -#endif -} - -#if defined(BRINGUP) && defined(CONFIG_IA64_EARLY_PRINTK) -void __init -early_sn1_setup(void) -{ - master_node_bedrock_address = - (u64)REMOTE_HSPEC_ADDR(get_nasid(), 0); - printk("early_sn1_setup: setting master_node_bedrock_address to 0x%lx\n", master_node_bedrock_address); -} -#endif /* BRINGUP && CONFIG_IA64_EARLY_PRINTK */ - -void __init -sn1_setup(char **cmdline_p) -{ -#if defined(CONFIG_SERIAL) && !defined(CONFIG_SERIAL_SGI_L1_PROTOCOL) - struct serial_struct req; -#endif - - MAX_DMA_ADDRESS = PAGE_OFFSET + 0x10000000000UL; - master_node_bedrock_address = - (u64)REMOTE_HSPEC_ADDR(get_nasid(), 0); - printk("sn1_setup: setting master_node_bedrock_address to 0x%lx\n", - master_node_bedrock_address); - -#if defined(CONFIG_SERIAL) && !defined(CONFIG_SERIAL_SGI_L1_PROTOCOL) - /* - * We do early_serial_setup() to clean out the rs-table[] from the - * statically compiled in version. - */ - memset(&req, 0, sizeof(struct serial_struct)); - req.line = 0; - req.baud_base = 124800; - req.port = 0; - req.port_high = 0; - req.irq = 0; - req.flags = (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST); - req.io_type = SERIAL_IO_MEM; - req.hub6 = 0; - req.iomem_base = (u8 *)(master_node_bedrock_address + 0x80); - req.iomem_reg_shift = 3; - req.type = 0; - req.xmit_fifo_size = 0; - req.custom_divisor = 0; - req.closing_wait = 0; - early_serial_setup(&req); -#endif /* CONFIG_SERIAL && !CONFIG_SERIAL_SGI_L1_PROTOCOL */ - - ROOT_DEV = to_kdev_t(0x0301); /* default to first IDE drive */ - sn_fix_ivt_for_partitioned_system(); - -#ifdef CONFIG_SMP - init_smp_config(); -#endif - screen_info = sn1_screen_info; -} - - -/* - * sn_fix_ivt_for_partitioned_system - * - * This is an ugly hack that is needed for partitioned systems. - * - * On a partitioned system, most partitions do NOT have a physical address 0. 
- * Unfortunately, the exception handling code in ivt.S has a couple of physical - * addresses of kernel structures hardcoded into "movl" instructions. - * These addresses are correct on partition 0 only. On all other partitions, - * the addresses must be changed to reference the correct address. - * - * This routine scans the ivt code and replaces the hardcoded addresses with - * the correct address. - * - * Note that we could have made the ivt.S code dynamically determine the correct - * address but this would add code to performance critical pathes. This option - * was rejected. - */ - -#define TEMP_mlx 4 /* template type that contains movl instruction */ -#define TEMP_mlX 5 /* template type that contains movl instruction */ - -typedef union { /* Instruction encoding for movl instruction */ - struct { - unsigned long qp:6; - unsigned long r1:7; - unsigned long imm7b:7; - unsigned long vc:1; - unsigned long ic:1; - unsigned long imm5c:5; - unsigned long imm9d:9; - unsigned long i:1; - unsigned long op:4; - unsigned long fill:23; - } b; - unsigned long l; -} movl_instruction_t; - -#define MOVL_OPCODE 6 -#define MOVL_ARG(a,b) (((long)a.i<<63) | ((long)b<<22) | ((long)a.ic<<21) | \ - ((long)a.imm5c<<16) | ((long)a.imm9d<<7) | ((long)a.imm7b)) - -typedef struct { /* Instruction bundle */ - unsigned long template:5; - unsigned long ins2:41; - unsigned long ins1l:18; - unsigned long ins1u:23; - unsigned long ins0:41; -} instruction_bundle_t; - - -static void __init -sn_fix_ivt_for_partitioned_system(void) -{ - extern int ia64_ivt; - instruction_bundle_t *p, *pend; - movl_instruction_t ins0, ins1, ins2; - long new_ins1, phys_offset; - unsigned long val; - - /* - * Setup to scan the ivt code. - */ - p = (instruction_bundle_t*)&ia64_ivt; - pend = p + 0x8000/sizeof(instruction_bundle_t); - phys_offset = __pa(p) & ~0x1ffffffffUL; - - /* - * Hunt for movl instructions that contain the node 0 physical address - * of "SWAPPER_PGD_ADDR". These addresses must be relocated to reference the - * actual node that the kernel is loaded on. - */ - for (; p < pend; p++) { - if (p->template != TEMP_mlx && p->template != TEMP_mlX) - continue; - ins0.l = p->ins0; - if (ins0.b.op != MOVL_OPCODE) - continue; - ins1.l = ((long)p->ins1u<<18) | p->ins1l; - ins2.l = p->ins2; - val = MOVL_ARG(ins0.b, ins1.l); - - /* - * Test for correct address. SWAPPER_PGD_ADDR will - * always be a node 0 virtual address. Note that we cant - * use the __pa or __va macros here since they may contain - * debug code that gets fooled here. - */ - if ((PAGE_OFFSET | val) != SWAPPER_PGD_ADDR) - continue; - - /* - * We found an instruction that needs to be fixed. The following - * inserts the NASID of the ivt into the movl instruction. - */ - new_ins1 = ins1.l | (phys_offset>>22); - p->ins1l = new_ins1 & 0x3ffff; - p->ins1u = (new_ins1>>18) & 0x7fffff; - ia64_fc(p); - } - - /* - * Do necessary serialization. - */ - ia64_sync_i(); - ia64_srlz_i(); - -} - -int -IS_RUNNING_ON_SIMULATOR(void) -{ -#ifdef CONFIG_IA64_SGI_SN1_SIM - long sn; - asm("mov %0=cpuid[%1]" : "=r"(sn) : "r"(2)); - return(sn == SNMAGIC); -#else - return(0); -#endif -} diff -Nru a/arch/ia64/sn/sn1/smp.c b/arch/ia64/sn/sn1/smp.c --- a/arch/ia64/sn/sn1/smp.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,186 +0,0 @@ -/* - * SN1 Platform specific SMP Support - * - * Copyright (C) 2000 Silicon Graphics, Inc. 
- * Copyright (C) 2000 Jack Steiner
- */
-
-
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-
-
-
-/*
- * The following structure is used to pass params thru smp_call_function
- * to other cpus for flushing TLB ranges.
- */
-typedef struct {
-	unsigned long start;
-	unsigned long end;
-	unsigned long nbits;
-} ptc_params_t;
-
-
-/*
- * The following table/struct is for remembering PTC coherency domains. It
- * is also used to translate sapicid into cpuids. We dont want to start
- * cpus unless we know their cache domain.
- */
-#ifdef PTC_NOTYET
-sn_sapicid_info_t sn_sapicid_info[NR_CPUS];
-#endif
-
-
-
-#ifdef PTC_NOTYET
-/*
- * NOTE: This is probably not good enough, but I dont want to try to make
- * it better until I get some statistics on a running system.
- * At a minimum, we should only send IPIs to 1 processor in each TLB domain
- * & have it issue a ptc.g on it's own FSB. Also, serialize per FSB, not
- * globally.
- *
- * More likely, we will have to do some work to reduce the frequency of calls to
- * this routine.
- */
-
-static void
-sn1_ptc_local(void *arg)
-{
-	ptc_params_t *params = arg;
-	unsigned long start, end, nbits;
-
-	start = params->start;
-	end = params->end;
-	nbits = params->nbits;
-
-	do {
-		__asm__ __volatile__ ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
-		start += (1UL << nbits);
-	} while (start < end);
-}
-
-
-void
-sn1_ptc_global (unsigned long start, unsigned long end, unsigned long nbits)
-{
-	ptc_params_t params;
-
-	params.start = start;
-	params.end = end;
-	params.nbits = nbits;
-
-	if (smp_call_function(sn1_ptc_local, &params, 1, 0) != 0)
-		panic("Unable to do ptc_global - timed out");
-
-	sn1_ptc_local(&params);
-}
-#endif
-
-
-
-
-void
-sn1_send_IPI(int cpuid, int vector, int delivery_mode, int redirect)
-{
-	long *p, nasid, slice;
-	static int off[4] = {0x1800080, 0x1800088, 0x1a00080, 0x1a00088};
-
-	/*
-	 * ZZZ - Replace with standard macros when available.
-	 */
-	nasid = cpuid_to_nasid(cpuid);
-	slice = cpuid_to_slice(cpuid);
-	p = (long*)(0xc0000a0000000000LL | (nasid<<33) | off[slice]);
-
-#if defined(ZZZBRINGUP)
-	{
-	static int count=0;
-	if (count++ < 10) printk("ZZ sendIPI 0x%x->0x%x, vec %d, nasid 0x%lx, slice %ld, adr 0x%lx\n",
-		smp_processor_id(), cpuid, vector, nasid, slice, (long)p);
-	}
-#endif
-	mb();
-	*p = (delivery_mode << 8) | (vector & 0xff);
-
-}
-
-
-#ifdef CONFIG_SMP
-
-#ifdef PTC_NOTYET
-static void __init
-process_sal_ptc_domain_info(ia64_sal_ptc_domain_info_t *di, int domain)
-{
-	ia64_sal_ptc_domain_proc_entry_t *pe;
-	int i, sapicid, cpuid;
-
-	pe = __va(di->proc_list);
-	for (i=0; i<di->proc_count; i++, pe++) {
-		sapicid = id_eid_to_sapicid(pe->id, pe->eid);
-		cpuid = cpu_logical_id(sapicid);
-		sn_sapicid_info[cpuid].domain = domain;
-		sn_sapicid_info[cpuid].sapicid = sapicid;
-	}
-}
-
-
-static void __init
-process_sal_desc_ptc(ia64_sal_desc_ptc_t *ptc)
-{
-	ia64_sal_ptc_domain_info_t *di;
-	int i;
-
-	di = __va(ptc->domain_info);
-	for (i=0; i<ptc->num_domains; i++, di++) {
-		process_sal_ptc_domain_info(di, i);
-	}
-}
-#endif
-
-
-void __init
-init_sn1_smp_config(void)
-{
-
-	if (!ia64_ptc_domain_info) {
-		printk("SMP: Can't find PTC domain info. 
Forcing UP mode\n"); - smp_num_cpus = 1; - return; - } - -#ifdef PTC_NOTYET - memset (sn_sapicid_info, -1, sizeof(sn_sapicid_info)); - process_sal_desc_ptc(ia64_ptc_domain_info); -#endif - -} - -#else /* CONFIG_SMP */ - -void __init -init_sn1_smp_config(void) -{ - -#ifdef PTC_NOTYET - sn_sapicid_info[0].sapicid = hard_smp_processor_id(); -#endif -} - -#endif /* CONFIG_SMP */ diff -Nru a/arch/ia64/sn/sn1/sn1_asm.S b/arch/ia64/sn/sn1/sn1_asm.S --- a/arch/ia64/sn/sn1/sn1_asm.S Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,149 +0,0 @@ - -/* - * Copyright (C) 2000 Silicon Graphics - * Copyright (C) 2000 Jack Steiner (steiner@sgi.com) - */ - -#include -#ifdef CONFIG_IA64_SGI_AUTOTEST - -// Testing only. -// Routine will cause MCAs -// zzzmsa(n) -// n=0 MCA via duplicate TLB dropin -// n=0 MCA via read of garbage address -// - -#define ITIR(key, ps) ((key<<8) | (ps<<2)) -#define TLB_PAGESIZE 28 // Use 256MB pages for now. - - .global zzzmca - .proc zzzmca -zzzmca: - alloc loc4 = ar.pfs,2,8,1,0;; - cmp.ne p6,p0=r32,r0;; - movl r2=0x2dead - movl r3=0x3dead - movl r15=0x15dead - movl r16=0x16dead - movl r31=0x31dead - movl loc0=0x34beef - movl loc1=0x35beef - movl loc2=0x36beef - movl loc3=0x37beef - movl out0=0x42beef - - movl r20=0x32feed;; - mov ar32=r20 - movl r20=0x36feed;; - mov ar36=r20 - movl r20=0x65feed;; - mov ar65=r20 - movl r20=0x66feed;; - mov ar66=r20 - -(p6) br.cond.sptk 1f - - rsm 0x2000;; - srlz.d; - mov r11 = 1 - mov r3 = ITIR(0,TLB_PAGESIZE);; - mov cr.itir = r3 - mov r10 = 0;; - itr.d dtr[r11] = r10;; - mov r11 = 2 - - itr.d dtr[r11] = r10;; - br 9f - -1: movl r8=0xfe00000048;; - ld8 r9=[r8];; - mf - mf.a - srlz.d - -9: mov ar.pfs=loc4 - br.ret.sptk rp - - .endp zzzmca - - .global zzzspec - .proc zzzspec -zzzspec: - mov r8=r32 - movl r9=0xe000000000000000 - movl r10=0x4000;; - ld8.s r16=[r8];; - ld8.s r17=[r9];; - add r8=r8,r10;; - ld8.s r18=[r8];; - add r8=r8,r10;; - ld8.s r19=[r8];; - add r8=r8,r10;; - ld8.s r20=[r8];; - mov r8=r0 - tnat.nz p6,p0=r16 - tnat.nz p7,p0=r17 - tnat.nz p8,p0=r18 - tnat.nz p9,p0=r19 - tnat.nz p10,p0=r20;; - (p6) dep r8=-1,r8,0,1;; - (p7) dep r8=-1,r8,1,1;; - (p8) dep r8=-1,r8,2,1;; - (p9) dep r8=-1,r8,3,1;; - (p10) dep r8=-1,r8,4,1;; - br.ret.sptk rp - .endp zzzspec - - .global zzzspec2 - .proc zzzspec2 -zzzspec2: - cmp.eq p6,p7=r2,r2 - movl r16=0xc0000a0001000020 - ;; - mf - ;; - ld8 r9=[r16] - (p6) br.spnt 1f - ld8 r10=[r32] - ;; - 1: mf.a - mf - - ld8 r9=[r16];; - cmp.ne p6,p7=r9,r16 - (p6) br.spnt 1f - ld8 r10=[r32] - ;; - 1: mf.a - mf - - ld8 r9=[r33];; - cmp.ne p6,p7=r9,r33 - (p6) br.spnt 1f - ld8 r10=[r32] - ;; - 1: mf.a - mf - - tpa r23=r32 - add r20=512,r33 - add r21=1024,r33;; - ld8 r9=[r20] - ld8 r10=[r21];; - nop.i 0 - { .mib - nop.m 0 - cmp.ne p6,p7=r10,r33 - (p6) br.spnt 1f - } - ld8 r10=[r32] - ;; - 1: mf.a - mf - br.ret.sptk rp - - .endp zzzspec - -#endif - diff -Nru a/arch/ia64/sn/sn1/sn1_ksyms.c b/arch/ia64/sn/sn1/sn1_ksyms.c --- a/arch/ia64/sn/sn1/sn1_ksyms.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,39 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 2000 Silicon Graphics, Inc. 
- * Copyright (C) 2000 Jesse Barnes (jbarnes@sgi.com)
- */
-
-
-/*
- * Architecture-specific kernel symbols
- */
-
-#include
-
-#include
-
-/*
- * I/O routines
- */
-EXPORT_SYMBOL(sn1_outb);
-EXPORT_SYMBOL(sn1_outl);
-EXPORT_SYMBOL(sn1_outw);
-EXPORT_SYMBOL(sn1_inw);
-EXPORT_SYMBOL(sn1_inb);
-EXPORT_SYMBOL(sn1_inl);
-
-/*
- * other stuff (more to be added later, cleanup then)
- */
-EXPORT_SYMBOL(sn1_pci_map_sg);
-EXPORT_SYMBOL(sn1_pci_unmap_sg);
-EXPORT_SYMBOL(sn1_pci_alloc_consistent);
-EXPORT_SYMBOL(sn1_pci_free_consistent);
-EXPORT_SYMBOL(sn1_dma_address);
-
-#include
-EXPORT_SYMBOL(alloc_pages);
diff -Nru a/arch/ia64/sn/sn1/sv.c b/arch/ia64/sn/sn1/sv.c
--- a/arch/ia64/sn/sn1/sv.c	Tue Mar 12 13:58:15 2002
+++ /dev/null	Wed Dec 31 16:00:00 1969
@@ -1,551 +0,0 @@
-/*
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- *
- * Copyright (C) 2000 Silicon Graphics, Inc. All rights reserved
- *
- * This implemenation of synchronization variables is heavily based on
- * one done by Steve Lord
- *
- * Paul Cassella
- */
-
-#include
-#include
-#include
-
-#include
-#include
-#include
-#include
-
-#include
-
-/* Define this to have sv_test() run some simple tests. kernel_thread() must behave as expected when this is called. */
-#undef RUN_SV_TEST
-
-#define DEBUG
-
-/* Set up some macros so sv_wait(), sv_signal(), and sv_broadcast()
-   can sanity check interrupt state on architectures where we know
-   how. */
-#ifdef DEBUG
- #define SV_DEBUG_INTERRUPT_STATE
- #ifdef __mips64
- #define SV_TEST_INTERRUPTS_ENABLED(flags) ((flags & 0x1) != 0)
- #define SV_TEST_INTERRUPTS_DISABLED(flags) ((flags & 0x1) == 0)
- #define SV_INTERRUPT_TEST_WORKERS 31
- #elif defined(__ia64)
- #define SV_TEST_INTERRUPTS_ENABLED(flags) ((flags & 0x4000) != 0)
- #define SV_TEST_INTERRUPTS_DISABLED(flags) ((flags & 0x4000) == 0)
- #define SV_INTERRUPT_TEST_WORKERS 4 /* simulator's slow */
- #else
- #undef SV_DEBUG_INTERRUPT_STATE
- #define SV_INTERRUPT_TEST_WORKERS 4 /* reasonable? default. */
- #endif /* __mips64 */
-#endif /* DEBUG */
-
-
-/* XXX FIXME hack hack hack. Our mips64 tree is from before the
-   switch to WQ_FLAG_EXCLUSIVE, and our ia64 tree is from after it. */
-#ifdef TASK_EXCLUSIVE
- #undef EXCLUSIVE_IN_QUEUE
-#else
- #define EXCLUSIVE_IN_QUEUE
- #define TASK_EXCLUSIVE 0 /* for the set_current_state() in sv_wait() */
-#endif
-
-
-static inline void sv_lock(sv_t *sv) {
-	spin_lock(&sv->sv_lock);
-}
-
-static inline void sv_unlock(sv_t *sv) {
-	spin_unlock(&sv->sv_lock);
-}
-
-/* up() is "extern inline", so we can't pass its address to sv_wait.
-   Use this function's address instead. */
-static void up_wrapper(struct semaphore *sem) {
-	up(sem);
-}
-
-/* spin_unlock() is sometimes a macro. */
-static void spin_unlock_wrapper(spinlock_t *s) {
-	spin_unlock(s);
-}
-
-/* XXX Perhaps sv_wait() should do the switch() each time and avoid
-   the extra indirection and the need for the _wrapper functions?
*/ - -static inline void sv_set_mon_type(sv_t *sv, int type) { - switch (type) { - case SV_MON_SPIN: - sv->sv_mon_unlock_func = - (sv_mon_unlock_func_t)spin_unlock_wrapper; - break; - case SV_MON_SEMA: - sv->sv_mon_unlock_func = - (sv_mon_unlock_func_t)up_wrapper; - if(sv->sv_flags & SV_INTS) { - printk(KERN_ERR "sv_set_mon_type: The monitor lock " - "cannot be shared with interrupts if it is a " - "semaphore!\n"); - BUG(); - } - if(sv->sv_flags & SV_BHS) { - printk(KERN_ERR "sv_set_mon_type: The monitor lock " - "cannot be shared with bottom-halves if it is " - "a semaphore!\n"); - BUG(); - } - break; -#if 0 - /* - * If needed, and will need to think about interrupts. This - * may be needed, for example, if someone wants to use sv's - * with something like dev_base; writers need to hold two - * locks. - */ - case SV_MON_CUSTOM: - { - struct sv_mon_custom *c = lock; - sv->sv_mon_unlock_func = c->sv_mon_unlock_func; - sv->sv_mon_lock = c->sv_mon_lock; - break; - } -#endif - - default: - printk(KERN_ERR "sv_set_mon_type: unknown type %d (0x%x)! " - "(flags 0x%x)\n", type, type, sv->sv_flags); - BUG(); - break; - } - sv->sv_flags |= type; -} - -static inline void sv_set_ord(sv_t *sv, int ord) { - if (!ord) - ord = SV_ORDER_DEFAULT; - - if (ord != SV_ORDER_FIFO && ord != SV_ORDER_LIFO) { - printk(KERN_EMERG "sv_set_ord: unknown order %d (0x%x)! ", - ord, ord); - BUG(); - } - - sv->sv_flags |= ord; -} - -void sv_init(sv_t *sv, sv_mon_lock_t *lock, int flags) -{ - int ord = flags & SV_ORDER_MASK; - int type = flags & SV_MON_MASK; - - /* Copy all non-order, non-type flags */ - sv->sv_flags = (flags & ~(SV_ORDER_MASK | SV_MON_MASK)); - - if((sv->sv_flags & (SV_INTS | SV_BHS)) == (SV_INTS | SV_BHS)) { - printk(KERN_ERR "sv_init: do not set both SV_INTS and SV_BHS, only SV_INTS.\n"); - BUG(); - } - - sv_set_ord(sv, ord); - sv_set_mon_type(sv, type); - - /* If lock is NULL, we'll get it from sv_wait_compat() (and - ignore it in sv_signal() and sv_broadcast()). */ - sv->sv_mon_lock = lock; - - spin_lock_init(&sv->sv_lock); - init_waitqueue_head(&sv->sv_waiters); -} - -/* - * The associated lock must be locked on entry. It is unlocked on return. - * - * Return values: - * - * n < 0 : interrupted, -n jiffies remaining on timeout, or -1 if timeout == 0 - * n = 0 : timeout expired - * n > 0 : sv_signal()'d, n jiffies remaining on timeout, or 1 if timeout == 0 - */ -signed long sv_wait(sv_t *sv, int sv_wait_flags, unsigned long timeout) -{ - DECLARE_WAITQUEUE( wait, current ); - unsigned long flags; - signed long ret = 0; - -#ifdef SV_DEBUG_INTERRUPT_STATE - { - unsigned long flags; - __save_flags(flags); - - if(sv->sv_flags & SV_INTS) { - if(SV_TEST_INTERRUPTS_ENABLED(flags)) { - printk(KERN_ERR "sv_wait: SV_INTS and interrupts " - "enabled (flags: 0x%lx)\n", flags); - BUG(); - } - } else { - if (SV_TEST_INTERRUPTS_DISABLED(flags)) { - printk(KERN_WARNING "sv_wait: !SV_INTS and interrupts " - "disabled! (flags: 0x%lx)\n", flags); - } - } - } -#endif /* SV_DEBUG_INTERRUPT_STATE */ - - sv_lock(sv); - - sv->sv_mon_unlock_func(sv->sv_mon_lock); - - /* Add ourselves to the wait queue and set the state before - * releasing the sv_lock so as to avoid racing with the - * wake_up() in sv_signal() and sv_broadcast(). 
- */ - - /* don't need the _irqsave part, but there is no wq_write_lock() */ - wq_write_lock_irqsave(&sv->sv_waiters.lock, flags); - -#ifdef EXCLUSIVE_IN_QUEUE - wait.flags |= WQ_FLAG_EXCLUSIVE; -#endif - - switch(sv->sv_flags & SV_ORDER_MASK) { - case SV_ORDER_FIFO: - __add_wait_queue_tail(&sv->sv_waiters, &wait); - break; - case SV_ORDER_FILO: - __add_wait_queue(&sv->sv_waiters, &wait); - break; - default: - printk(KERN_ERR "sv_wait: unknown order! (sv: 0x%p, flags: 0x%x)\n", - sv, sv->sv_flags); - BUG(); - } - wq_write_unlock_irqrestore(&sv->sv_waiters.lock, flags); - - if(sv_wait_flags & SV_WAIT_SIG) - set_current_state(TASK_EXCLUSIVE | TASK_INTERRUPTIBLE ); - else - set_current_state(TASK_EXCLUSIVE | TASK_UNINTERRUPTIBLE); - - spin_unlock(&sv->sv_lock); - - if(sv->sv_flags & SV_INTS) - local_irq_enable(); - else if(sv->sv_flags & SV_BHS) - local_bh_enable(); - - if (timeout) - ret = schedule_timeout(timeout); - else - schedule(); - - if(current->state != TASK_RUNNING) /* XXX Is this possible? */ { - printk(KERN_ERR "sv_wait: state not TASK_RUNNING after " - "schedule().\n"); - set_current_state(TASK_RUNNING); - } - - remove_wait_queue(&sv->sv_waiters, &wait); - - /* Return cases: - - woken by a sv_signal/sv_broadcast - - woken by a signal - - woken by timeout expiring - */ - - /* XXX This isn't really accurate; we may have been woken - before the signal anyway.... */ - if(signal_pending(current)) - return timeout ? -ret : -1; - return timeout ? ret : 1; -} - - -void sv_signal(sv_t *sv) -{ - /* If interrupts can acquire this lock, they can also acquire the - sv_mon_lock, which we must already have to have called this, so - interrupts must be disabled already. If interrupts cannot - contend for this lock, we don't have to worry about it. */ - -#ifdef SV_DEBUG_INTERRUPT_STATE - if(sv->sv_flags & SV_INTS) { - unsigned long flags; - __save_flags(flags); - if(SV_TEST_INTERRUPTS_ENABLED(flags)) - printk(KERN_ERR "sv_signal: SV_INTS and " - "interrupts enabled! (flags: 0x%lx)\n", flags); - } -#endif /* SV_DEBUG_INTERRUPT_STATE */ - - sv_lock(sv); - wake_up(&sv->sv_waiters); - sv_unlock(sv); -} - -void sv_broadcast(sv_t *sv) -{ -#ifdef SV_DEBUG_INTERRUPT_STATE - if(sv->sv_flags & SV_INTS) { - unsigned long flags; - __save_flags(flags); - if(SV_TEST_INTERRUPTS_ENABLED(flags)) - printk(KERN_ERR "sv_broadcast: SV_INTS and " - "interrupts enabled! (flags: 0x%lx)\n", flags); - } -#endif /* SV_DEBUG_INTERRUPT_STATE */ - - sv_lock(sv); - wake_up_all(&sv->sv_waiters); - sv_unlock(sv); -} - -void sv_destroy(sv_t *sv) -{ - if(!spin_trylock(&sv->sv_lock)) { - printk(KERN_ERR "sv_destroy: someone else has sv 0x%p locked!\n", sv); - BUG(); - } - - /* XXX Check that the waitqueue is empty? - Mark the sv destroyed? - */ -} - - -#ifdef RUN_SV_TEST - -static DECLARE_MUTEX_LOCKED(talkback); -static DECLARE_MUTEX_LOCKED(sem); -sv_t sv; -sv_t sv_filo; - -static int sv_test_1_w(void *arg) -{ - printk("sv_test_1_w: acquiring spinlock 0x%p...\n", arg); - - spin_lock((spinlock_t*)arg); - printk("sv_test_1_w: spinlock acquired, waking sv_test_1_s.\n"); - - up(&sem); - - printk("sv_test_1_w: sv_spin_wait()'ing.\n"); - - sv_spin_wait(&sv, arg); - - printk("sv_test_1_w: talkback.\n"); - up(&talkback); - - printk("sv_test_1_w: exiting.\n"); - return 0; -} - -static int sv_test_1_s(void *arg) -{ - printk("sv_test_1_s: waiting for semaphore.\n"); - down(&sem); - printk("sv_test_1_s: semaphore acquired. Acquiring spinlock.\n"); - spin_lock((spinlock_t*)arg); - printk("sv_test_1_s: spinlock acquired. 
sv_signaling.\n"); - sv_signal(&sv); - printk("sv_test_1_s: talkback.\n"); - up(&talkback); - printk("sv_test_1_s: exiting.\n"); - return 0; - -} - -static int count; -static DECLARE_MUTEX(monitor); - -static int sv_test_2_w(void *arg) -{ - int dummy = count++; - sv_t *sv = (sv_t *)arg; - - down(&monitor); - up(&talkback); - printk("sv_test_2_w: thread %d started, sv_waiting.\n", dummy); - sv_sema_wait(sv, &monitor); - printk("sv_test_2_w: thread %d woken, exiting.\n", dummy); - up(&sem); - return 0; -} - -static int sv_test_2_s_1(void *arg) -{ - int i; - sv_t *sv = (sv_t *)arg; - - down(&monitor); - for(i = 0; i < 3; i++) { - printk("sv_test_2_s_1: waking one thread.\n"); - sv_signal(sv); - down(&sem); - } - - printk("sv_test_2_s_1: signaling and broadcasting again. Nothing should happen.\n"); - sv_signal(sv); - sv_broadcast(sv); - sv_signal(sv); - sv_broadcast(sv); - - printk("sv_test_2_s_1: talkbacking.\n"); - up(&talkback); - up(&monitor); - return 0; -} - -static int sv_test_2_s(void *arg) -{ - int i; - sv_t *sv = (sv_t *)arg; - - down(&monitor); - for(i = 0; i < 3; i++) { - printk("sv_test_2_s: waking one thread (should be %d.)\n", i); - sv_signal(sv); - down(&sem); - } - - printk("sv_test_3_s: waking remaining threads with broadcast.\n"); - sv_broadcast(sv); - for(; i < 10; i++) - down(&sem); - - printk("sv_test_3_s: sending talkback.\n"); - up(&talkback); - - printk("sv_test_3_s: exiting.\n"); - up(&monitor); - return 0; -} - - -static void big_test(sv_t *sv) -{ - int i; - - count = 0; - - for(i = 0; i < 3; i++) { - printk("big_test: spawning thread %d.\n", i); - kernel_thread(sv_test_2_w, sv, 0); - down(&talkback); - } - - printk("big_test: spawning first wake-up thread.\n"); - kernel_thread(sv_test_2_s_1, sv, 0); - - down(&talkback); - printk("big_test: talkback happened.\n"); - - - for(i = 3; i < 13; i++) { - printk("big_test: spawning thread %d.\n", i); - kernel_thread(sv_test_2_w, sv, 0); - down(&talkback); - } - - printk("big_test: spawning wake-up thread.\n"); - kernel_thread(sv_test_2_s, sv, 0); - - down(&talkback); -} - -sv_t int_test_sv; -spinlock_t int_test_spin = SPIN_LOCK_UNLOCKED; -int int_test_ready; -static int irqtestcount; - -static int interrupt_test_worker(void *unused) -{ - int id = ++irqtestcount; - int it = 0; - unsigned long flags, flags2; - - printk("ITW: thread %d started.\n", id); - - while(1) { - __save_flags(flags2); - if(jiffies % 3) { - printk("ITW %2d %5d: irqsaving (%lx)\n", id, it, flags2); - spin_lock_irqsave(&int_test_spin, flags); - } else { - printk("ITW %2d %5d: spin_lock_irqing (%lx)\n", id, it, flags2); - spin_lock_irq(&int_test_spin); - } - - __save_flags(flags2); - printk("ITW %2d %5d: locked, sv_waiting (%lx).\n", id, it, flags2); - sv_wait(&int_test_sv, 0, 0); - - __save_flags(flags2); - printk("ITW %2d %5d: wait finished (%lx), pausing\n", id, it, flags2); - set_current_state(TASK_INTERRUPTIBLE); - schedule_timeout(jiffies & 0xf); - if(current->state != TASK_RUNNING) - printk("ITW: current->state isn't RUNNING after schedule!\n"); - it++; - } -} - -static void interrupt_test(void) -{ - int i; - - printk("interrupt_test: initing sv.\n"); - sv_init(&int_test_sv, &int_test_spin, SV_MON_SPIN | SV_INTS); - - for(i = 0; i < SV_INTERRUPT_TEST_WORKERS; i++) { - printk("interrupt_test: starting test thread %d.\n", i); - kernel_thread(interrupt_test_worker, 0, 0); - } - printk("interrupt_test: done with init part.\n"); - int_test_ready = 1; -} - -int sv_test(void) -{ - spinlock_t s = SPIN_LOCK_UNLOCKED; - - sv_init(&sv, &s, SV_MON_SPIN); - 
printk("sv_test: starting sv_test_1_w.\n"); - kernel_thread(sv_test_1_w, &s, 0); - printk("sv_test: starting sv_test_1_s.\n"); - kernel_thread(sv_test_1_s, &s, 0); - - printk("sv_test: waiting for talkback.\n"); - down(&talkback); down(&talkback); - printk("sv_test: talkback happened, sv_destroying.\n"); - sv_destroy(&sv); - - count = 0; - - printk("sv_test: beginning big_test on sv.\n"); - - sv_init(&sv, &monitor, SV_MON_SEMA); - big_test(&sv); - sv_destroy(&sv); - - printk("sv_test: beginning big_test on sv_filo.\n"); - sv_init(&sv_filo, &monitor, SV_MON_SEMA | SV_ORDER_FILO); - big_test(&sv_filo); - sv_destroy(&sv_filo); - - interrupt_test(); - - printk("sv_test: done.\n"); - return 0; -} - -__initcall(sv_test); - -#endif /* RUN_SV_TEST */ diff -Nru a/arch/ia64/sn/sn1/synergy.c b/arch/ia64/sn/sn1/synergy.c --- a/arch/ia64/sn/sn1/synergy.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,429 +0,0 @@ - -/* - * SN1 Platform specific synergy Support - * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 Alan Mayer (ajm@sgi.com) - */ - - - -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include - -int bit_pos_to_irq(int bit); -void setclear_mask_b(int irq, int cpuid, int set); -void setclear_mask_a(int irq, int cpuid, int set); -void * kmalloc(size_t size, int flags); - - -void -synergy_intr_alloc(int bit, int cpuid) { - return; -} - -int -synergy_intr_connect(int bit, - int cpuid) -{ - int irq; - unsigned is_b; - - irq = bit_pos_to_irq(bit); - - is_b = (cpuid_to_slice(cpuid)) & 1; - if (is_b) { - setclear_mask_b(irq,cpuid,1); - setclear_mask_a(irq,cpuid, 0); - } else { - setclear_mask_a(irq, cpuid, 1); - setclear_mask_b(irq, cpuid, 0); - } - return 0; -} -void -setclear_mask_a(int irq, int cpuid, int set) -{ - int synergy; - int nasid; - int reg_num; - unsigned long mask; - unsigned long addr; - unsigned long reg; - unsigned long val; - int my_cnode, my_synergy; - int target_cnode, target_synergy; - - /* - * Perform some idiot checks .. 
- */ - if ( (irq < 0) || (irq > 255) || - (cpuid < 0) || (cpuid > 512) ) { - printk("clear_mask_a: Invalid parameter irq %d cpuid %d\n", irq, cpuid); - return; - } - - target_cnode = cpuid_to_cnodeid(cpuid); - target_synergy = cpuid_to_synergy(cpuid); - my_cnode = cpuid_to_cnodeid(smp_processor_id()); - my_synergy = cpuid_to_synergy(smp_processor_id()); - - reg_num = irq / 64; - mask = 1; - mask <<= (irq % 64); - switch (reg_num) { - case 0: - reg = VEC_MASK0A; - addr = VEC_MASK0A_ADDR; - break; - case 1: - reg = VEC_MASK1A; - addr = VEC_MASK1A_ADDR; - break; - case 2: - reg = VEC_MASK2A; - addr = VEC_MASK2A_ADDR; - break; - case 3: - reg = VEC_MASK3A; - addr = VEC_MASK3A_ADDR; - break; - default: - reg = addr = 0; - break; - } - if (my_cnode == target_cnode && my_synergy == target_synergy) { - // local synergy - val = READ_LOCAL_SYNERGY_REG(addr); - if (set) { - val |= mask; - } else { - val &= ~mask; - } - WRITE_LOCAL_SYNERGY_REG(addr, val); - val = READ_LOCAL_SYNERGY_REG(addr); - } else { /* remote synergy */ - synergy = cpuid_to_synergy(cpuid); - nasid = cpuid_to_nasid(cpuid); - val = REMOTE_SYNERGY_LOAD(nasid, synergy, reg); - if (set) { - val |= mask; - } else { - val &= ~mask; - } - REMOTE_SYNERGY_STORE(nasid, synergy, reg, val); - } -} - -void -setclear_mask_b(int irq, int cpuid, int set) -{ - int synergy; - int nasid; - int reg_num; - unsigned long mask; - unsigned long addr; - unsigned long reg; - unsigned long val; - int my_cnode, my_synergy; - int target_cnode, target_synergy; - - /* - * Perform some idiot checks .. - */ - if ( (irq < 0) || (irq > 255) || - (cpuid < 0) || (cpuid > 512) ) { - printk("clear_mask_b: Invalid parameter irq %d cpuid %d\n", irq, cpuid); - return; - } - - target_cnode = cpuid_to_cnodeid(cpuid); - target_synergy = cpuid_to_synergy(cpuid); - my_cnode = cpuid_to_cnodeid(smp_processor_id()); - my_synergy = cpuid_to_synergy(smp_processor_id()); - - reg_num = irq / 64; - mask = 1; - mask <<= (irq % 64); - switch (reg_num) { - case 0: - reg = VEC_MASK0B; - addr = VEC_MASK0B_ADDR; - break; - case 1: - reg = VEC_MASK1B; - addr = VEC_MASK1B_ADDR; - break; - case 2: - reg = VEC_MASK2B; - addr = VEC_MASK2B_ADDR; - break; - case 3: - reg = VEC_MASK3B; - addr = VEC_MASK3B_ADDR; - break; - default: - reg = addr = 0; - break; - } - if (my_cnode == target_cnode && my_synergy == target_synergy) { - // local synergy - val = READ_LOCAL_SYNERGY_REG(addr); - if (set) { - val |= mask; - } else { - val &= ~mask; - } - WRITE_LOCAL_SYNERGY_REG(addr, val); - val = READ_LOCAL_SYNERGY_REG(addr); - } else { /* remote synergy */ - synergy = cpuid_to_synergy(cpuid); - nasid = cpuid_to_nasid(cpuid); - val = REMOTE_SYNERGY_LOAD(nasid, synergy, reg); - if (set) { - val |= mask; - } else { - val &= ~mask; - } - REMOTE_SYNERGY_STORE(nasid, synergy, reg, val); - } -} - -#if defined(CONFIG_IA64_SGI_SYNERGY_PERF) - -/* - * Synergy perf registers. 
Multiplexed via timer_interrupt - */ -static struct proc_dir_entry *synergy_perf_proc = NULL; - -/* - * read handler for /proc/synergy - */ -static int -synergy_perf_read_proc (char *page, char **start, off_t off, - int count, int *eof, void *data) -{ - cnodeid_t cnode; - nodepda_t *npdap; - synergy_perf_t *p; - int len = 0; - - len += sprintf(page+len, "# cnode module slot event synergy-A synergy-B\n"); - - /* walk the event list for each node */ - for (cnode=0; cnode < numnodes; cnode++) { - npdap = NODEPDA(cnode); - if (npdap->synergy_perf_enabled == 0) { - len += sprintf(page+len, "# DISABLED\n"); - break; - } - - spin_lock_irq(&npdap->synergy_perf_lock); - for (p = npdap->synergy_perf_first; p;) { - uint64_t cnt_a=0, cnt_b=0; - - if (p->intervals > 0) { - cnt_a = p->counts[0] * npdap->synergy_active_intervals / p->intervals; - cnt_b = p->counts[1] * npdap->synergy_active_intervals / p->intervals; - } - - len += sprintf(page+len, "%d %d %d %12lx %lu %lu\n", - (int)cnode, (int)npdap->module_id, (int)npdap->slotdesc, - p->modesel, cnt_a, cnt_b); - - p = p->next; - if (p == npdap->synergy_perf_first) - break; - } - spin_unlock_irq(&npdap->synergy_perf_lock); - } - - if (len <= off+count) *eof = 1; - *start = page + off; - len -= off; - if (len>count) len = count; - if (len<0) len = 0; - - return len; -} - -static int -synergy_perf_append(uint64_t modesel) -{ - int cnode; - nodepda_t *npdap; - synergy_perf_t *p; - int err = 0; - - /* bit 45 is enable */ - modesel |= (1UL << 45); - - for (cnode=0; cnode < numnodes; cnode++) { - /* for each node, insert a new synergy_perf entry */ - if ((npdap = NODEPDA(cnode)) == NULL) { - printk("synergy_perf_append: cnode=%d NODEPDA(cnode)==NULL, nodepda=%p\n", cnode, nodepda); - continue; - } - - /* XX use kmem_alloc_node() when it is implemented */ - p = (synergy_perf_t *)kmalloc(sizeof(synergy_perf_t), GFP_KERNEL); - if (p == NULL) - err = -ENOMEM; - else { - memset(p, 0, sizeof(synergy_perf_t)); - p->modesel = modesel; - if (npdap->synergy_perf_data == NULL) { - /* circular list */ - p->next = p; - npdap->synergy_perf_data = p; - npdap->synergy_perf_first = p; - } - else { - /* - * Jumble up the insertion order so we get better sampling. - * Once the list is complete, "first" stays the same so the - * reporting order is consistent. - */ - p->next = npdap->synergy_perf_first->next; - npdap->synergy_perf_first->next = p; - npdap->synergy_perf_first = p->next; - } - } - } - - return err; -} - -static int -synergy_perf_write_proc (struct file *file, const char *buffer, - unsigned long count, void *data) -{ - int cnode; - nodepda_t *npdap; - uint64_t modesel; - char cmd[64]; - extern long atoi(char *); - - if (count == sizeof(uint64_t)) { - if (copy_from_user(&modesel, buffer, sizeof(uint64_t))) - return -EFAULT; - synergy_perf_append(modesel); - } - else { - if (copy_from_user(cmd, buffer, count < sizeof(cmd) ? 
count : sizeof(cmd))) - return -EFAULT; - if (strncmp(cmd, "enable", 6) == 0) { - /* enable counting */ - for (cnode=0; cnode < numnodes; cnode++) { - npdap = NODEPDA(cnode); - npdap->synergy_perf_enabled = 1; - } - printk("NOTICE: synergy perf counting enabled\n"); - } - else - if (strncmp(cmd, "disable", 7) == 0) { - /* disable counting */ - for (cnode=0; cnode < numnodes; cnode++) { - npdap = NODEPDA(cnode); - npdap->synergy_perf_enabled = 0; - } - printk("NOTICE: synergy perf counting disabled\n"); - } - else - if (strncmp(cmd, "frequency", 9) == 0) { - /* set the update frequency (timer-interrupts per update) */ - int freq; - - if (count < 12) - return -EINVAL; - freq = atoi(cmd + 10); - if (freq <= 0 || freq > 100) - return -EINVAL; - for (cnode=0; cnode < numnodes; cnode++) { - npdap = NODEPDA(cnode); - npdap->synergy_perf_freq = (uint64_t)freq; - } - printk("NOTICE: synergy perf freq set to %d\n", freq); - } - else - return -EINVAL; - } - - return count; -} - -void -synergy_perf_update(int cpu) -{ - nasid_t nasid; - cnodeid_t cnode = cpuid_to_cnodeid(cpu); - struct nodepda_s *npdap; - extern struct nodepda_s *nodepda; - - if (nodepda == NULL || (npdap=NODEPDA(cnode)) == NULL || npdap->synergy_perf_enabled == 0 || - npdap->synergy_perf_data == NULL) { - /* I/O not initialized, or not enabled, or no events to monitor */ - return; - } - - if (npdap->synergy_inactive_intervals++ % npdap->synergy_perf_freq != 0) { - /* don't multiplex on every timer interrupt */ - return; - } - - /* - * Read registers for last interval and increment counters. - * Hold the per-node synergy_perf_lock so concurrent readers get - * consistent values. - */ - spin_lock_irq(&npdap->synergy_perf_lock); - - nasid = cpuid_to_nasid(cpu); - npdap->synergy_active_intervals++; - npdap->synergy_perf_data->intervals++; - - npdap->synergy_perf_data->counts[0] += 0xffffffffffUL & - REMOTE_SYNERGY_LOAD(nasid, 0, PERF_CNTR0_A); - - npdap->synergy_perf_data->counts[1] += 0xffffffffffUL & - REMOTE_SYNERGY_LOAD(nasid, 1, PERF_CNTR0_B); - - /* skip to next in circular list */ - npdap->synergy_perf_data = npdap->synergy_perf_data->next; - - spin_unlock_irq(&npdap->synergy_perf_lock); - - /* set the counter 0 selection modes for both A and B */ - REMOTE_SYNERGY_STORE(nasid, 0, PERF_CNTL0_A, npdap->synergy_perf_data->modesel); - REMOTE_SYNERGY_STORE(nasid, 1, PERF_CNTL0_B, npdap->synergy_perf_data->modesel); - - /* and reset the counter registers to zero */ - REMOTE_SYNERGY_STORE(nasid, 0, PERF_CNTR0_A, 0UL); - REMOTE_SYNERGY_STORE(nasid, 1, PERF_CNTR0_B, 0UL); -} - -void -synergy_perf_init(void) -{ - if ((synergy_perf_proc = create_proc_entry("synergy", 0644, NULL)) != NULL) { - synergy_perf_proc->read_proc = synergy_perf_read_proc; - synergy_perf_proc->write_proc = synergy_perf_write_proc; - printk("markgw: synergy_perf_init()\n"); - } -} - -#endif /* CONFIG_IA64_SGI_SYNERGY_PERF */ - diff -Nru a/arch/ia64/sn/tools/make_textsym b/arch/ia64/sn/tools/make_textsym --- a/arch/ia64/sn/tools/make_textsym Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/sn/tools/make_textsym Tue Mar 12 13:58:14 2002 @@ -1,5 +1,14 @@ #!/bin/sh +# # Build a textsym file for use in the Arium ITP probe. +# +# +# This file is subject to the terms and conditions of the GNU General Public +# License. See the file "COPYING" in the main directory of this archive +# for more details. +# +# Copyright (c) 2001-2002 Silicon Graphics, Inc. All rights reserved. 
+# help() { cat < $TMPSYM +SN1=`egrep "dig_setup|Synergy_da_indr" $TMPSYM|wc -l` + +# Dataprefix and textprefix correspond to the VGLOBAL_BASE and VPERNODE_BASE. +# Eventually, these values should be: +# dataprefix ffffffff +# textprefix fffffffe +# but right now they're still changing, so make them dynamic. +dataprefix=`awk ' / \.data / { print substr($1, 0, 8) ; exit ; }' $TMPSYM` +textprefix=`awk ' / \.text / { print substr($1, 0, 8) ; exit ; }' $TMPSYM` # pipe everything thru sort echo "TEXTSYM V1.0" (cat < 0) { + n = n*16 + substr(s,1,1) + s = substr(s,2) + } + printf "GLOBAL | %s | DATA | %s | %d\n", $1, $NF, n + } } if($NF == "_end") exit } -' ) | egrep -v " __device| __vendor" | awk ' +' $TMPSYM ) | egrep -v " __device| __vendor" | awk -v sn1="$SN1" ' /GLOBAL/ { print $0 - print substr($0,1,9) substr($0,18,18) "Phy_" substr($0,36) + if (sn1 != 0) { + /* 32 bits of sn1 physical addrs, */ + print substr($0,1,9) substr($0,18,18) "Phy_" substr($0,36) + } else { + /* 38 bits of sn2 physical addrs, need addr space bits */ + print substr($0,1,9) "30" substr($0,18,18) "Phy_" substr($0,36) + } } ' | sort -k3 - - N=`wc -l $TEXTSYM|awk '{print $1}'` echo "Generated TEXTSYM file" >&2 diff -Nru a/arch/ia64/tools/Makefile b/arch/ia64/tools/Makefile --- a/arch/ia64/tools/Makefile Tue Mar 12 13:58:14 2002 +++ b/arch/ia64/tools/Makefile Tue Mar 12 13:58:14 2002 @@ -2,7 +2,7 @@ TARGET = $(TOPDIR)/include/asm-ia64/offsets.h -all: +all: mrproper: @@ -34,7 +34,8 @@ comma := , print_offsets: print_offsets.c FORCE_RECOMPILE - $(CC) $(CFLAGS) -DKBUILD_BASENAME=$(subst $(comma),_,$(subst -,_,$(*F))) print_offsets.c -o $@ + $(CC) $(CFLAGS) -DKBUILD_BASENAME=$(subst $(comma),_,$(subst -,_,$(*F))) \ + print_offsets.c -o $@ FORCE_RECOMPILE: @@ -44,7 +45,8 @@ $(AWK) -f print_offsets.awk $^ > $@ print_offsets.s: print_offsets.c - $(CC) $(CFLAGS) -DKBUILD_BASENAME=$(subst $(comma),_,$(subst -,_,$(*F))) -S print_offsets.c -o $@ + $(CC) $(CFLAGS) -DKBUILD_BASENAME=$(subst $(comma),_,$(subst -,_,$(*F))) -S \ + print_offsets.c -o $@ endif diff -Nru a/arch/ia64/tools/print_offsets.awk b/arch/ia64/tools/print_offsets.awk --- a/arch/ia64/tools/print_offsets.awk Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/tools/print_offsets.awk Tue Mar 12 13:58:15 2002 @@ -12,7 +12,7 @@ # PT_PTRACED == 1< + * Copyright (C) 1999-2002 Hewlett-Packard Co + * David Mosberger-Tang * * Note that this file has dual use: when building the kernel * natively, the file is translated into a binary and executed. 
When @@ -44,6 +44,7 @@ tab[] = { { "IA64_TASK_SIZE", sizeof (struct task_struct) }, + { "IA64_THREAD_INFO_SIZE", sizeof (struct thread_info) }, { "IA64_PT_REGS_SIZE", sizeof (struct pt_regs) }, { "IA64_SWITCH_STACK_SIZE", sizeof (struct switch_stack) }, { "IA64_SIGINFO_SIZE", sizeof (struct siginfo) }, @@ -51,14 +52,11 @@ { "SIGFRAME_SIZE", sizeof (struct sigframe) }, { "UNW_FRAME_INFO_SIZE", sizeof (struct unw_frame_info) }, { "", 0 }, /* spacer */ -#error { "IA64_TASK_PTRACE_OFFSET", offsetof (struct task_struct, ptrace) }, -#error { "IA64_TASK_SIGPENDING_OFFSET", offsetof (struct task_struct, sigpending) }, -#error { "IA64_TASK_NEED_RESCHED_OFFSET", offsetof (struct task_struct, need_resched) }, - { "IA64_TASK_PROCESSOR_OFFSET", offsetof (struct task_struct, processor) }, + { "IA64_TASK_PTRACE_OFFSET", offsetof (struct task_struct, ptrace) }, { "IA64_TASK_THREAD_OFFSET", offsetof (struct task_struct, thread) }, { "IA64_TASK_THREAD_KSP_OFFSET", offsetof (struct task_struct, thread.ksp) }, #ifdef CONFIG_PERFMON - { "IA64_TASK_PFM_MUST_BLOCK_OFFSET",offsetof(struct task_struct, thread.pfm_must_block) }, + { "IA64_TASK_PFM_OVFL_BLOCK_RESET_OFFSET",offsetof(struct task_struct, thread.pfm_ovfl_block_reset) }, #endif { "IA64_TASK_PID_OFFSET", offsetof (struct task_struct, pid) }, { "IA64_TASK_MM_OFFSET", offsetof (struct task_struct, mm) }, @@ -197,7 +195,9 @@ subtle ways should PT_PTRACED ever change. Ditto for PT_TRACESYS_BIT. */ printf ("#define PT_PTRACED_BIT\t\t\t%u\n", ffs (PT_PTRACED) - 1); - printf ("#define PT_TRACESYS_BIT\t\t\t%u\n\n", ffs (PT_TRACESYS) - 1); +#if 0 + printf ("#define PT_SYSCALLTRACE_BIT\t\t\t%u\n\n", ffs (PT_SYSCALLTRACE) - 1); +#endif for (i = 0; i < sizeof (tab) / sizeof (tab[0]); ++i) { diff -Nru a/arch/ia64/vmlinux.lds.S b/arch/ia64/vmlinux.lds.S --- a/arch/ia64/vmlinux.lds.S Tue Mar 12 13:58:15 2002 +++ b/arch/ia64/vmlinux.lds.S Tue Mar 12 13:58:15 2002 @@ -1,6 +1,6 @@ #include -#include +#include #include OUTPUT_FORMAT("elf64-ia64-little") @@ -13,6 +13,8 @@ *(.text.exit) *(.data.exit) *(.exitcall.exit) + *(.IA_64.unwind.text.exit) + *(.IA_64.unwind_info.text.exit) } v = PAGE_OFFSET; /* this symbol is here to make debugging easier... */ @@ -104,23 +106,23 @@ { *(.setup.init) } __setup_end = .; __initcall_start = .; - .initcall.init : AT(ADDR(.initcall1.init) - PAGE_OFFSET) + .initcall.init : AT(ADDR(.initcall.init) - PAGE_OFFSET) { - *(.initcall1.init) - *(.initcall2.init) - *(.initcall3.init) - *(.initcall4.init) - *(.initcall5.init) - *(.initcall6.init) - *(.initcall7.init) + *(.initcall1.init) + *(.initcall2.init) + *(.initcall3.init) + *(.initcall4.init) + *(.initcall5.init) + *(.initcall6.init) + *(.initcall7.init) } __initcall_end = .; . 
= ALIGN(PAGE_SIZE); __init_end = .; /* The initial task and kernel stack */ - init_task : AT(ADDR(init_task) - PAGE_OFFSET) - { *(init_task) } + .data.init_task : AT(ADDR(.data.init_task) - PAGE_OFFSET) + { *(.data.init_task) } .data.page_aligned : AT(ADDR(.data.page_aligned) - PAGE_OFFSET) { *(.data.idt) } diff -Nru a/arch/mips/defconfig-ddb5476 b/arch/mips/defconfig-ddb5476 --- a/arch/mips/defconfig-ddb5476 Tue Mar 12 13:58:15 2002 +++ b/arch/mips/defconfig-ddb5476 Tue Mar 12 13:58:15 2002 @@ -224,7 +224,6 @@ CONFIG_BLK_DEV_IDEPCI=y # CONFIG_IDEPCI_SHARE_IRQ is not set # CONFIG_BLK_DEV_IDEDMA_PCI is not set -# CONFIG_BLK_DEV_ADMA is not set # CONFIG_BLK_DEV_OFFBOARD is not set # CONFIG_IDEDMA_PCI_AUTO is not set # CONFIG_BLK_DEV_IDEDMA is not set diff -Nru a/arch/mips/defconfig-it8172 b/arch/mips/defconfig-it8172 --- a/arch/mips/defconfig-it8172 Tue Mar 12 13:58:15 2002 +++ b/arch/mips/defconfig-it8172 Tue Mar 12 13:58:15 2002 @@ -289,7 +289,6 @@ CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set CONFIG_IDEDMA_PCI_AUTO=y CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/mips64/kernel/ioctl32.c b/arch/mips64/kernel/ioctl32.c --- a/arch/mips64/kernel/ioctl32.c Tue Mar 12 13:58:15 2002 +++ b/arch/mips64/kernel/ioctl32.c Tue Mar 12 13:58:15 2002 @@ -750,9 +750,7 @@ IOCTL32_DEFAULT(HDIO_SET_NOWERR), IOCTL32_DEFAULT(HDIO_SET_DMA), IOCTL32_DEFAULT(HDIO_SET_PIO_MODE), - IOCTL32_DEFAULT(HDIO_SCAN_HWIF), IOCTL32_DEFAULT(HDIO_SET_NICE), - //HDIO_UNREGISTER_HWIF IOCTL32_DEFAULT(BLKROSET), /* fs.h ioctls */ IOCTL32_DEFAULT(BLKROGET), diff -Nru a/arch/ppc/configs/common_defconfig b/arch/ppc/configs/common_defconfig --- a/arch/ppc/configs/common_defconfig Tue Mar 12 13:58:15 2002 +++ b/arch/ppc/configs/common_defconfig Tue Mar 12 13:58:15 2002 @@ -252,7 +252,6 @@ CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set CONFIG_IDEDMA_PCI_AUTO=y CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/ppc/configs/k2_defconfig b/arch/ppc/configs/k2_defconfig --- a/arch/ppc/configs/k2_defconfig Tue Mar 12 13:58:14 2002 +++ b/arch/ppc/configs/k2_defconfig Tue Mar 12 13:58:14 2002 @@ -235,7 +235,6 @@ CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set # CONFIG_IDEDMA_PCI_AUTO is not set CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/ppc/configs/menf1_defconfig b/arch/ppc/configs/menf1_defconfig --- a/arch/ppc/configs/menf1_defconfig Tue Mar 12 13:58:14 2002 +++ b/arch/ppc/configs/menf1_defconfig Tue Mar 12 13:58:14 2002 @@ -239,7 +239,6 @@ CONFIG_BLK_DEV_IDEPCI=y # CONFIG_IDEPCI_SHARE_IRQ is not set # CONFIG_BLK_DEV_IDEDMA_PCI is not set -# CONFIG_BLK_DEV_ADMA is not set # CONFIG_BLK_DEV_OFFBOARD is not set # CONFIG_IDEDMA_PCI_AUTO is not set # CONFIG_BLK_DEV_IDEDMA is not set diff -Nru a/arch/ppc/configs/pmac_defconfig b/arch/ppc/configs/pmac_defconfig --- a/arch/ppc/configs/pmac_defconfig Tue Mar 12 13:58:16 2002 +++ b/arch/ppc/configs/pmac_defconfig Tue Mar 12 13:58:16 2002 @@ -242,7 +242,6 @@ CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set CONFIG_IDEDMA_PCI_AUTO=y CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/ppc/configs/pplus_defconfig b/arch/ppc/configs/pplus_defconfig --- a/arch/ppc/configs/pplus_defconfig Tue Mar 12 13:58:15 2002 +++ b/arch/ppc/configs/pplus_defconfig Tue Mar 12 13:58:15 2002 @@ 
-246,7 +246,6 @@ CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set # CONFIG_IDEDMA_PCI_AUTO is not set CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/ppc/configs/sandpoint_defconfig b/arch/ppc/configs/sandpoint_defconfig --- a/arch/ppc/configs/sandpoint_defconfig Tue Mar 12 13:58:15 2002 +++ b/arch/ppc/configs/sandpoint_defconfig Tue Mar 12 13:58:15 2002 @@ -209,7 +209,6 @@ CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set # CONFIG_IDEDMA_PCI_AUTO is not set CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/ppc/defconfig b/arch/ppc/defconfig --- a/arch/ppc/defconfig Tue Mar 12 13:58:15 2002 +++ b/arch/ppc/defconfig Tue Mar 12 13:58:15 2002 @@ -252,7 +252,6 @@ CONFIG_BLK_DEV_IDEPCI=y CONFIG_IDEPCI_SHARE_IRQ=y CONFIG_BLK_DEV_IDEDMA_PCI=y -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_OFFBOARD is not set CONFIG_IDEDMA_PCI_AUTO=y CONFIG_BLK_DEV_IDEDMA=y diff -Nru a/arch/ppc/kernel/misc.S b/arch/ppc/kernel/misc.S --- a/arch/ppc/kernel/misc.S Tue Mar 12 13:58:15 2002 +++ b/arch/ppc/kernel/misc.S Tue Mar 12 13:58:15 2002 @@ -1289,6 +1289,7 @@ .long sys_removexattr .long sys_lremovexattr .long sys_fremovexattr /* 220 */ + .long sys_futex .rept NR_syscalls-(.-sys_call_table)/4 .long sys_ni_syscall .endr diff -Nru a/arch/ppc64/kernel/ioctl32.c b/arch/ppc64/kernel/ioctl32.c --- a/arch/ppc64/kernel/ioctl32.c Tue Mar 12 13:58:16 2002 +++ b/arch/ppc64/kernel/ioctl32.c Tue Mar 12 13:58:16 2002 @@ -3713,7 +3713,6 @@ COMPATIBLE_IOCTL(HDIO_SET_MULTCOUNT), COMPATIBLE_IOCTL(HDIO_DRIVE_CMD), COMPATIBLE_IOCTL(HDIO_SET_PIO_MODE), -COMPATIBLE_IOCTL(HDIO_SCAN_HWIF), COMPATIBLE_IOCTL(HDIO_SET_NICE), /* 0x02 -- Floppy ioctls */ COMPATIBLE_IOCTL(FDMSGON), diff -Nru a/arch/sparc/kernel/systbls.S b/arch/sparc/kernel/systbls.S --- a/arch/sparc/kernel/systbls.S Tue Mar 12 13:58:15 2002 +++ b/arch/sparc/kernel/systbls.S Tue Mar 12 13:58:15 2002 @@ -46,7 +46,7 @@ /*125*/ .long sys_nis_syscall, sys_setreuid16, sys_setregid16, sys_rename, sys_truncate /*130*/ .long sys_ftruncate, sys_flock, sys_lstat64, sys_nis_syscall, sys_nis_syscall /*135*/ .long sys_nis_syscall, sys_mkdir, sys_rmdir, sys_utimes, sys_stat64 -/*140*/ .long sys_nis_syscall, sys_nis_syscall, sys_nis_syscall, sys_gettid, sys_getrlimit +/*140*/ .long sys_sendfile64, sys_nis_syscall, sys_nis_syscall, sys_gettid, sys_getrlimit /*145*/ .long sys_setrlimit, sys_pivot_root, sys_prctl, sys_pciconfig_read, sys_pciconfig_write /*150*/ .long sys_nis_syscall, sys_nis_syscall, sys_nis_syscall, sys_poll, sys_getdents64 /*155*/ .long sys_fcntl64, sys_nis_syscall, sys_statfs, sys_fstatfs, sys_oldumount diff -Nru a/arch/sparc/vmlinux.lds b/arch/sparc/vmlinux.lds --- a/arch/sparc/vmlinux.lds Tue Mar 12 13:58:14 2002 +++ b/arch/sparc/vmlinux.lds Tue Mar 12 13:58:14 2002 @@ -55,6 +55,10 @@ *(.initcall7.init) } __initcall_end = .; + . = ALIGN(32); + __per_cpu_start = .; + .data.percpu : { *(.data.percpu) } + __per_cpu_end = .; . = ALIGN(4096); __init_end = .; . 
= ALIGN(32); diff -Nru a/arch/sparc64/defconfig b/arch/sparc64/defconfig --- a/arch/sparc64/defconfig Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/defconfig Tue Mar 12 13:58:15 2002 @@ -29,6 +29,7 @@ CONFIG_VT=y CONFIG_VT_CONSOLE=y CONFIG_SMP=y +# CONFIG_PREEMPT is not set CONFIG_SPARC64=y CONFIG_HOTPLUG=y CONFIG_HAVE_DEC_LOCK=y @@ -156,7 +157,7 @@ # # Block devices # -CONFIG_BLK_DEV_FD=y +# CONFIG_BLK_DEV_FD is not set CONFIG_BLK_DEV_LOOP=m CONFIG_BLK_DEV_NBD=m @@ -275,7 +276,7 @@ # CONFIG_BLK_DEV_IDESCSI is not set # -# IDE chipset support/bugfixes +# IDE chipset support # # CONFIG_BLK_DEV_CMD640 is not set # CONFIG_BLK_DEV_CMD640_ENHANCED is not set @@ -291,7 +292,6 @@ # CONFIG_IDEDMA_PCI_WIP is not set # CONFIG_BLK_DEV_IDEDMA_TIMEOUT is not set # CONFIG_IDEDMA_NEW_DRIVE_LISTINGS is not set -CONFIG_BLK_DEV_ADMA=y # CONFIG_BLK_DEV_AEC62XX is not set # CONFIG_AEC62XX_TUNING is not set CONFIG_BLK_DEV_ALI15X3=y @@ -473,14 +473,9 @@ CONFIG_ADAPTEC_STARFIRE=m # CONFIG_APRICOT is not set # CONFIG_CS89x0 is not set -CONFIG_DE2104X=m -CONFIG_TULIP=m -# CONFIG_TULIP_MWI is not set -# CONFIG_TULIP_MMIO is not set -CONFIG_DE4X5=m CONFIG_DGRS=m -# CONFIG_DM9102 is not set CONFIG_EEPRO100=m +CONFIG_E100=m # CONFIG_LNE390 is not set CONFIG_FEALNX=m CONFIG_NATSEMI=m @@ -500,7 +495,6 @@ # CONFIG_TLAN is not set CONFIG_VIA_RHINE=m # CONFIG_VIA_RHINE_MMIO is not set -CONFIG_WINBOND_840=m # CONFIG_NET_POCKET is not set # @@ -509,12 +503,13 @@ CONFIG_ACENIC=m # CONFIG_ACENIC_OMIT_TIGON_I is not set CONFIG_DL2K=m +CONFIG_E1000=m CONFIG_MYRI_SBUS=m CONFIG_NS83820=m CONFIG_HAMACHI=m CONFIG_YELLOWFIN=m CONFIG_SK98LIN=m -# CONFIG_TIGON3 is not set +CONFIG_TIGON3=m CONFIG_FDDI=y # CONFIG_DEFXX is not set CONFIG_SKFP=m @@ -554,6 +549,18 @@ # CONFIG_WAN is not set # +# "Tulip" family network device support +# +CONFIG_NET_TULIP=y +CONFIG_DE2104X=m +CONFIG_TULIP=m +# CONFIG_TULIP_MWI is not set +# CONFIG_TULIP_MMIO is not set +CONFIG_DE4X5=m +CONFIG_WINBOND_840=m +# CONFIG_DM9102 is not set + +# # Unix 98 PTY support # CONFIG_UNIX98_PTYS=y @@ -644,6 +651,9 @@ CONFIG_ISO9660_FS=m CONFIG_JOLIET=y # CONFIG_ZISOFS is not set +CONFIG_JFS_FS=m +# CONFIG_JFS_DEBUG is not set +# CONFIG_JFS_STATISTICS is not set CONFIG_MINIX_FS=m # CONFIG_VXFS_FS is not set # CONFIG_NTFS_FS is not set @@ -674,6 +684,7 @@ # CONFIG_ROOT_NFS is not set CONFIG_NFSD=m CONFIG_NFSD_V3=y +CONFIG_NFSD_TCP=y CONFIG_SUNRPC=y CONFIG_LOCKD=y CONFIG_LOCKD_V4=y @@ -743,27 +754,16 @@ # Sound # CONFIG_SOUND=m -CONFIG_SOUND_BT878=m -# CONFIG_SOUND_CMPCI is not set -# CONFIG_SOUND_EMU10K1 is not set -# CONFIG_MIDI_EMU10K1 is not set -# CONFIG_SOUND_FUSION is not set -# CONFIG_SOUND_CS4281 is not set -# CONFIG_SOUND_ES1370 is not set -CONFIG_SOUND_ES1371=m -# CONFIG_SOUND_ESSSOLO1 is not set -# CONFIG_SOUND_MAESTRO is not set -# CONFIG_SOUND_MAESTRO3 is not set -# CONFIG_SOUND_ICH is not set -# CONFIG_SOUND_RME96XX is not set -# CONFIG_SOUND_SONICVIBES is not set -CONFIG_SOUND_TRIDENT=m -# CONFIG_SOUND_MSNDCLAS is not set -# CONFIG_SOUND_MSNDPIN is not set -# CONFIG_SOUND_VIA82CXXX is not set -# CONFIG_MIDI_VIA82CXXX is not set -# CONFIG_SOUND_OSS is not set -# CONFIG_SOUND_TVMIXER is not set + +# +# Open Sound System +# +# CONFIG_SOUND_PRIME is not set + +# +# Advanced Linux Sound Architecture +# +# CONFIG_SND is not set # # USB support @@ -831,6 +831,7 @@ CONFIG_USB_VICAM=m CONFIG_USB_DSBR=m CONFIG_USB_DABUSB=m +CONFIG_USB_KONICAWC=m # # USB Network adaptors diff -Nru a/arch/sparc64/kernel/central.c b/arch/sparc64/kernel/central.c --- 
a/arch/sparc64/kernel/central.c Tue Mar 12 13:58:16 2002 +++ b/arch/sparc64/kernel/central.c Tue Mar 12 13:58:16 2002 @@ -247,6 +247,55 @@ (central->clkver ? upa_readb(central->clkver) : 0x00)); } +static void init_all_fhc_hw(void) +{ + struct linux_fhc *fhc; + + for(fhc = fhc_list; fhc != NULL; fhc = fhc->next) { + u32 tmp; + + /* Clear all of the interrupt mapping registers + * just in case OBP left them in a foul state. + */ +#define ZAP(ICLR, IMAP) \ +do { u32 imap_tmp; \ + upa_writel(0, (ICLR)); \ + upa_readl(ICLR); \ + imap_tmp = upa_readl(IMAP); \ + imap_tmp &= ~(0x80000000); \ + upa_writel(imap_tmp, (IMAP)); \ + upa_readl(IMAP); \ +} while (0) + + ZAP(fhc->fhc_regs.ffregs + FHC_FFREGS_ICLR, + fhc->fhc_regs.ffregs + FHC_FFREGS_IMAP); + ZAP(fhc->fhc_regs.sregs + FHC_SREGS_ICLR, + fhc->fhc_regs.sregs + FHC_SREGS_IMAP); + ZAP(fhc->fhc_regs.uregs + FHC_UREGS_ICLR, + fhc->fhc_regs.uregs + FHC_UREGS_IMAP); + ZAP(fhc->fhc_regs.tregs + FHC_TREGS_ICLR, + fhc->fhc_regs.tregs + FHC_TREGS_IMAP); + +#undef ZAP + + /* Setup FHC control register. */ + tmp = upa_readl(fhc->fhc_regs.pregs + FHC_PREGS_CTRL); + + /* All non-central boards have this bit set. */ + if(! IS_CENTRAL_FHC(fhc)) + tmp |= FHC_CONTROL_IXIST; + + /* For all FHCs, clear the firmware synchronization + * line and both low power mode enables. + */ + tmp &= ~(FHC_CONTROL_AOFF | FHC_CONTROL_BOFF | FHC_CONTROL_SLINE); + + upa_writel(tmp, fhc->fhc_regs.pregs + FHC_PREGS_CTRL); + upa_readl(fhc->fhc_regs.pregs + FHC_PREGS_CTRL); + } + +} + void central_probe(void) { struct linux_prom_registers fpregs[6]; @@ -341,6 +390,8 @@ ((err & FHC_ID_MANUF) >> 1)); probe_other_fhcs(); + + init_all_fhc_hw(); } static __inline__ void fhc_ledblink(struct linux_fhc *fhc, int on) @@ -398,55 +449,11 @@ void firetruck_init(void) { struct linux_central *central = central_bus; - struct linux_fhc *fhc; u8 ctrl; /* No central bus, nothing to do. */ if (central == NULL) return; - - for(fhc = fhc_list; fhc != NULL; fhc = fhc->next) { - u32 tmp; - - /* Clear all of the interrupt mapping registers - * just in case OBP left them in a foul state. - */ -#define ZAP(ICLR, IMAP) \ -do { u32 imap_tmp; \ - upa_writel(0, (ICLR)); \ - upa_readl(ICLR); \ - imap_tmp = upa_readl(IMAP); \ - imap_tmp &= ~(0x80000000); \ - upa_writel(imap_tmp, (IMAP)); \ - upa_readl(IMAP); \ -} while (0) - - ZAP(fhc->fhc_regs.ffregs + FHC_FFREGS_ICLR, - fhc->fhc_regs.ffregs + FHC_FFREGS_IMAP); - ZAP(fhc->fhc_regs.sregs + FHC_SREGS_ICLR, - fhc->fhc_regs.sregs + FHC_SREGS_IMAP); - ZAP(fhc->fhc_regs.uregs + FHC_UREGS_ICLR, - fhc->fhc_regs.uregs + FHC_UREGS_IMAP); - ZAP(fhc->fhc_regs.tregs + FHC_TREGS_ICLR, - fhc->fhc_regs.tregs + FHC_TREGS_IMAP); - -#undef ZAP - - /* Setup FHC control register. */ - tmp = upa_readl(fhc->fhc_regs.pregs + FHC_PREGS_CTRL); - - /* All non-central boards have this bit set. */ - if(! IS_CENTRAL_FHC(fhc)) - tmp |= FHC_CONTROL_IXIST; - - /* For all FHCs, clear the firmware synchronization - * line and both low power mode enables. - */ - tmp &= ~(FHC_CONTROL_AOFF | FHC_CONTROL_BOFF | FHC_CONTROL_SLINE); - - upa_writel(tmp, fhc->fhc_regs.pregs + FHC_PREGS_CTRL); - upa_readl(fhc->fhc_regs.pregs + FHC_PREGS_CTRL); - } /* OBP leaves it on, turn it off so clock board timer LED * is in sync with FHC ones. diff -Nru a/arch/sparc64/kernel/entry.S b/arch/sparc64/kernel/entry.S --- a/arch/sparc64/kernel/entry.S Tue Mar 12 13:58:14 2002 +++ b/arch/sparc64/kernel/entry.S Tue Mar 12 13:58:14 2002 @@ -1436,7 +1436,6 @@ * %o7 for us. Check performance counter stuff too. 
*/ andn %o7, _TIF_NEWCHILD, %l0 - mov %g5, %o0 /* 'prev' */ call schedule_tail stx %l0, [%g6 + TI_FLAGS] andcc %l0, _TIF_PERFCTR, %g0 diff -Nru a/arch/sparc64/kernel/ioctl32.c b/arch/sparc64/kernel/ioctl32.c --- a/arch/sparc64/kernel/ioctl32.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/ioctl32.c Tue Mar 12 13:58:15 2002 @@ -96,6 +96,7 @@ #include #include #include +#include /* Use this to get at 32-bit user passed pointers. See sys_sparc32.c for description about these. */ @@ -3973,7 +3974,6 @@ COMPATIBLE_IOCTL(HDIO_SET_MULTCOUNT) COMPATIBLE_IOCTL(HDIO_DRIVE_CMD) COMPATIBLE_IOCTL(HDIO_SET_PIO_MODE) -COMPATIBLE_IOCTL(HDIO_SCAN_HWIF) COMPATIBLE_IOCTL(HDIO_SET_NICE) /* 0x02 -- Floppy ioctls */ COMPATIBLE_IOCTL(FDMSGON) @@ -4527,6 +4527,13 @@ COMPATIBLE_IOCTL(WIOCSTART) COMPATIBLE_IOCTL(WIOCSTOP) COMPATIBLE_IOCTL(WIOCGSTAT) +/* Big R */ +COMPATIBLE_IOCTL(RNDGETENTCNT) +COMPATIBLE_IOCTL(RNDADDTOENTCNT) +COMPATIBLE_IOCTL(RNDGETPOOL) +COMPATIBLE_IOCTL(RNDADDENTROPY) +COMPATIBLE_IOCTL(RNDZAPENTCNT) +COMPATIBLE_IOCTL(RNDCLEARPOOL) /* Bluetooth ioctls */ COMPATIBLE_IOCTL(HCIDEVUP) COMPATIBLE_IOCTL(HCIDEVDOWN) diff -Nru a/arch/sparc64/kernel/pci.c b/arch/sparc64/kernel/pci.c --- a/arch/sparc64/kernel/pci.c Tue Mar 12 13:58:14 2002 +++ b/arch/sparc64/kernel/pci.c Tue Mar 12 13:58:14 2002 @@ -418,7 +418,7 @@ enum pci_mmap_state mmap_state) { unsigned long user_offset = vma->vm_pgoff << PAGE_SHIFT; - unsigned long user32 = user_offset & 0xffffffffUL; + unsigned long user32 = user_offset & pci_memspace_mask; unsigned long largest_base, this_base, addr32; int i; @@ -448,7 +448,7 @@ this_base = rp->start; - addr32 = (this_base & PAGE_MASK) & 0xffffffffUL; + addr32 = (this_base & PAGE_MASK) & pci_memspace_mask; if (mmap_state == pci_mmap_io) addr32 &= 0xffffff; @@ -464,7 +464,7 @@ if (mmap_state == pci_mmap_io) vma->vm_pgoff = (((largest_base & ~0xffffffUL) | user32) >> PAGE_SHIFT); else - vma->vm_pgoff = (((largest_base & ~0xffffffffUL) | user32) >> PAGE_SHIFT); + vma->vm_pgoff = (((largest_base & ~(pci_memspace_mask)) | user32) >> PAGE_SHIFT); return 0; } diff -Nru a/arch/sparc64/kernel/pci_common.c b/arch/sparc64/kernel/pci_common.c --- a/arch/sparc64/kernel/pci_common.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/pci_common.c Tue Mar 12 13:58:15 2002 @@ -583,21 +583,45 @@ /* If we are underneath a PCI bridge, use PROM register * property of the parent bridge which is closest to * the PBM. + * + * However if that parent bridge has interrupt map/mask + * properties of it's own we use the PROM register property + * of the next child device on the path to PDEV. + * + * In detail the two cases are (note that the 'X' below is the + * 'next child on the path to PDEV' mentioned above): + * + * 1) PBM --> PCI bus lacking int{map,mask} --> X ... PDEV + * + * Here we use regs of 'PCI bus' device. + * + * 2) PBM --> PCI bus with int{map,mask} --> X ... PDEV + * + * Here we use regs of 'X'. Note that X can be PDEV. 
*/ if (pdev->bus->number != pbm->pci_first_busno) { - struct pcidev_cookie *bus_pcp; - struct pci_dev *pwalk; - int offset, plen; - - pwalk = pdev->bus->self; - while (pwalk->bus && - pwalk->bus->number != pbm->pci_first_busno) - pwalk = pwalk->bus->self; + struct pcidev_cookie *bus_pcp, *regs_pcp; + struct pci_dev *bus_dev, *regs_dev; + int plen; + + bus_dev = pdev->bus->self; + regs_dev = pdev; + + while (bus_dev->bus && + bus_dev->bus->number != pbm->pci_first_busno) { + regs_dev = bus_dev; + bus_dev = bus_dev->bus->self; + } + + regs_pcp = regs_dev->sysdata; + pregs = regs_pcp->prom_regs; - bus_pcp = pwalk->sysdata; + bus_pcp = bus_dev->sysdata; /* But if the PCI bridge has it's own interrupt map - * and mask properties, use that and the device regs. + * and mask properties, use that and the regs of the + * PCI entity at the next level down on the path to the + * device. */ plen = prom_getproperty(bus_pcp->prom_node, "interrupt-map", (char *) &bridge_local_intmap[0], @@ -605,38 +629,21 @@ if (plen != -1) { intmap = &bridge_local_intmap[0]; num_intmap = plen / sizeof(struct linux_prom_pci_intmap); - plen = prom_getproperty(bus_pcp->prom_node, "interrupt-map-mask", + plen = prom_getproperty(bus_pcp->prom_node, + "interrupt-map-mask", (char *) &bridge_local_intmask, sizeof(bridge_local_intmask)); if (plen == -1) { - prom_printf("pbm_intmap_match: Bridge has intmap but " - "no intmask.\n"); - prom_halt(); + printk("pci_intmap_match: Warning! Bridge has intmap " + "but no intmask.\n"); + printk("pci_intmap_match: Trying to recover.\n"); + return 0; } - goto check_intmap; - } - - pregs = bus_pcp->prom_regs; - - offset = prom_getint(dev_pcp->prom_node, - "fcode-rom-offset"); - - /* Did PROM know better and assign an interrupt other - * than #INTA to the device? - We test here for presence of - * FCODE on the card, in this case we assume PROM has set - * correct 'interrupts' property, unless it is quadhme. - */ - if (offset == -1 || - !strcmp(dev_pcp->prom_name, "SUNW,qfe") || - !strcmp(dev_pcp->prom_name, "qfe")) { - /* - * No, use low slot number bits of child as IRQ line. - */ - *interrupt = ((*interrupt - 1 + PCI_SLOT(pdev->devfn)) & 3) + 1; + } else { + pregs = bus_pcp->prom_regs; } } -check_intmap: hi = pregs->phys_hi & intmask->phys_hi; mid = pregs->phys_mid & intmask->phys_mid; lo = pregs->phys_lo & intmask->phys_lo; @@ -652,12 +659,22 @@ } } - prom_printf("pbm_intmap_match: bus %02x, devfn %02x: ", + /* Print it both to OBP console and kernel one so that if bootup + * hangs here the user has the information to report. + */ + prom_printf("pci_intmap_match: bus %02x, devfn %02x: ", pdev->bus->number, pdev->devfn); prom_printf("IRQ [%08x.%08x.%08x.%08x] not found in interrupt-map\n", pregs->phys_hi, pregs->phys_mid, pregs->phys_lo, *interrupt); prom_printf("Please email this information to davem@redhat.com\n"); - prom_halt(); + + printk("pci_intmap_match: bus %02x, devfn %02x: ", + pdev->bus->number, pdev->devfn); + printk("IRQ [%08x.%08x.%08x.%08x] not found in interrupt-map\n", + pregs->phys_hi, pregs->phys_mid, pregs->phys_lo, *interrupt); + printk("Please email this information to davem@redhat.com\n"); + + return 0; } static void __init pdev_fixup_irq(struct pci_dev *pdev) @@ -703,6 +720,20 @@ goto have_irq; } + /* Firmware gets quad-hme interrupts property totally + * wrong. It is 4 EBUS+HME devices behind a Digital bridge. + * For each of the 4 instances the EBUS has interrupt property + * '1' and the HME has interrupt property '2'. So we have to + * fix this up. 
+ */ + if (!strcmp(pcp->prom_name, "SUNW,qfe") || + !strcmp(pcp->prom_name, "qfe")) { + if (PCI_SLOT(pdev->devfn) & ~3) + BUG(); + + prom_irq = PCI_SLOT(pdev->devfn) + 1; + } + /* Can we find a matching entry in the interrupt-map? */ if (pci_intmap_match(pdev, &prom_irq)) { pdev->irq = p->irq_build(pbm, pdev, (portid << 6) | prom_irq); @@ -738,12 +769,19 @@ * ranges. -DaveM */ if (pdev->bus->number == pbm->pci_first_busno) { - slot = (pdev->devfn >> 3) - pbm->pci_first_slot; + slot = PCI_SLOT(pdev->devfn) - pbm->pci_first_slot; } else { + struct pci_dev *bus_dev; + /* Underneath a bridge, use slot number of parent - * bridge. + * bridge which is closest to the PBM. */ - slot = (pdev->bus->self->devfn >> 3) - pbm->pci_first_slot; + bus_dev = pdev->bus->self; + while (bus_dev->bus && + bus_dev->bus->number != pbm->pci_first_busno) + bus_dev = bus_dev->bus->self; + + slot = PCI_SLOT(bus_dev->devfn) - pbm->pci_first_slot; } slot = slot << 2; diff -Nru a/arch/sparc64/kernel/pci_psycho.c b/arch/sparc64/kernel/pci_psycho.c --- a/arch/sparc64/kernel/pci_psycho.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/pci_psycho.c Tue Mar 12 13:58:15 2002 @@ -326,15 +326,15 @@ /*0x14*/0, 0, 0, 0, /* PCI B slot 1 Int A, B, C, D */ /*0x18*/0, 0, 0, 0, /* PCI B slot 2 Int A, B, C, D */ /*0x1c*/0, 0, 0, 0, /* PCI B slot 3 Int A, B, C, D */ -/*0x20*/3, /* SCSI */ +/*0x20*/4, /* SCSI */ /*0x21*/5, /* Ethernet */ /*0x22*/8, /* Parallel Port */ /*0x23*/13, /* Audio Record */ /*0x24*/14, /* Audio Playback */ /*0x25*/15, /* PowerFail */ -/*0x26*/3, /* second SCSI */ +/*0x26*/4, /* second SCSI */ /*0x27*/11, /* Floppy */ -/*0x28*/2, /* Spare Hardware */ +/*0x28*/4, /* Spare Hardware */ /*0x29*/9, /* Keyboard */ /*0x2a*/4, /* Mouse */ /*0x2b*/12, /* Serial */ @@ -353,7 +353,7 @@ ret = psycho_pil_table[ino]; if (ret == 0 && pdev == NULL) { - ret = 2; + ret = 4; } else if (ret == 0) { switch ((pdev->class >> 16) & 0xff) { case PCI_BASE_CLASS_STORAGE: @@ -376,7 +376,7 @@ break; default: - ret = 2; + ret = 4; break; }; } diff -Nru a/arch/sparc64/kernel/pci_sabre.c b/arch/sparc64/kernel/pci_sabre.c --- a/arch/sparc64/kernel/pci_sabre.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/pci_sabre.c Tue Mar 12 13:58:15 2002 @@ -564,15 +564,15 @@ /*0x14*/0, 0, 0, 0, /* PCI B slot 1 Int A, B, C, D */ /*0x18*/0, 0, 0, 0, /* PCI B slot 2 Int A, B, C, D */ /*0x1c*/0, 0, 0, 0, /* PCI B slot 3 Int A, B, C, D */ -/*0x20*/3, /* SCSI */ +/*0x20*/4, /* SCSI */ /*0x21*/5, /* Ethernet */ /*0x22*/8, /* Parallel Port */ /*0x23*/13, /* Audio Record */ /*0x24*/14, /* Audio Playback */ /*0x25*/15, /* PowerFail */ -/*0x26*/3, /* second SCSI */ +/*0x26*/4, /* second SCSI */ /*0x27*/11, /* Floppy */ -/*0x28*/2, /* Spare Hardware */ +/*0x28*/4, /* Spare Hardware */ /*0x29*/9, /* Keyboard */ /*0x2a*/4, /* Mouse */ /*0x2b*/12, /* Serial */ @@ -596,7 +596,7 @@ ret = sabre_pil_table[ino]; if (ret == 0 && pdev == NULL) { - ret = 2; + ret = 4; } else if (ret == 0) { switch ((pdev->class >> 16) & 0xff) { case PCI_BASE_CLASS_STORAGE: @@ -619,7 +619,7 @@ break; default: - ret = 2; + ret = 4; break; }; } diff -Nru a/arch/sparc64/kernel/pci_schizo.c b/arch/sparc64/kernel/pci_schizo.c --- a/arch/sparc64/kernel/pci_schizo.c Tue Mar 12 13:58:14 2002 +++ b/arch/sparc64/kernel/pci_schizo.c Tue Mar 12 13:58:15 2002 @@ -291,8 +291,8 @@ /*0x0c*/0, 0, 0, 0, /* PCI slot 3 Int A, B, C, D */ /*0x10*/0, 0, 0, 0, /* PCI slot 4 Int A, B, C, D */ /*0x14*/0, 0, 0, 0, /* PCI slot 5 Int A, B, C, D */ -/*0x18*/3, /* SCSI */ -/*0x19*/3, /* second SCSI */ +/*0x18*/4, /* SCSI 
*/ +/*0x19*/4, /* second SCSI */ /*0x1a*/0, /* UNKNOWN */ /*0x1b*/0, /* UNKNOWN */ /*0x1c*/8, /* Parallel */ @@ -302,7 +302,7 @@ /*0x20*/13, /* Audio Record */ /*0x21*/14, /* Audio Playback */ /*0x22*/12, /* Serial */ -/*0x23*/2, /* EBUS I2C */ +/*0x23*/4, /* EBUS I2C */ /*0x24*/10, /* RTC Clock */ /*0x25*/11, /* Floppy */ /*0x26*/0, /* UNKNOWN */ @@ -344,7 +344,7 @@ ret = schizo_pil_table[ino]; if (ret == 0 && pdev == NULL) { - ret = 2; + ret = 4; } else if (ret == 0) { switch ((pdev->class >> 16) & 0xff) { case PCI_BASE_CLASS_STORAGE: @@ -367,7 +367,7 @@ break; default: - ret = 2; + ret = 4; break; }; } @@ -1082,15 +1082,22 @@ #define SCHIZO_PCIA_CTRL (SCHIZO_PBM_A_REGS_OFF + 0x2000UL) #define SCHIZO_PCIB_CTRL (SCHIZO_PBM_B_REGS_OFF + 0x2000UL) -#define SCHIZO_PCICTRL_BUNUS (1UL << 63UL) +#define SCHIZO_PCICTRL_BUS_UNUS (1UL << 63UL) #define SCHIZO_PCICTRL_ESLCK (1UL << 51UL) +#define SCHIZO_PCICTRL_ERRSLOT (7UL << 48UL) #define SCHIZO_PCICTRL_TTO_ERR (1UL << 38UL) #define SCHIZO_PCICTRL_RTRY_ERR (1UL << 37UL) #define SCHIZO_PCICTRL_DTO_ERR (1UL << 36UL) #define SCHIZO_PCICTRL_SBH_ERR (1UL << 35UL) #define SCHIZO_PCICTRL_SERR (1UL << 34UL) +#define SCHIZO_PCICTRL_PCISPD (1UL << 33UL) +#define SCHIZO_PCICTRL_PTO (3UL << 24UL) +#define SCHIZO_PCICTRL_DTO_INT (1UL << 19UL) #define SCHIZO_PCICTRL_SBH_INT (1UL << 18UL) #define SCHIZO_PCICTRL_EEN (1UL << 17UL) +#define SCHIZO_PCICTRL_PARK (1UL << 16UL) +#define SCHIZO_PCICTRL_PCIRST (1UL << 8UL) +#define SCHIZO_PCICTRL_ARB (0x3fUL << 0UL) static void __init schizo_register_error_handlers(struct pci_controller_info *p) { @@ -1167,7 +1174,7 @@ * bits for each PBM. */ tmp = schizo_read(base + SCHIZO_PCIA_CTRL); - tmp |= (SCHIZO_PCICTRL_BUNUS | + tmp |= (SCHIZO_PCICTRL_BUS_UNUS | SCHIZO_PCICTRL_ESLCK | SCHIZO_PCICTRL_TTO_ERR | SCHIZO_PCICTRL_RTRY_ERR | @@ -1179,7 +1186,7 @@ schizo_write(base + SCHIZO_PCIA_CTRL, tmp); tmp = schizo_read(base + SCHIZO_PCIB_CTRL); - tmp |= (SCHIZO_PCICTRL_BUNUS | + tmp |= (SCHIZO_PCICTRL_BUS_UNUS | SCHIZO_PCICTRL_ESLCK | SCHIZO_PCICTRL_TTO_ERR | SCHIZO_PCICTRL_RTRY_ERR | @@ -1742,6 +1749,22 @@ schizo_pbm_strbuf_init(p, pbm, is_pbm_a); } +#define SCHIZO_PCIA_IRQ_RETRY (SCHIZO_PBM_A_REGS_OFF + 0x1a00UL) +#define SCHIZO_PCIB_IRQ_RETRY (SCHIZO_PBM_B_REGS_OFF + 0x1a00UL) +#define SCHIZO_IRQ_RETRY_INF 0xffUL + +#define SCHIZO_PCIA_DIAG (SCHIZO_PBM_A_REGS_OFF + 0x2020UL) +#define SCHIZO_PCIB_DIAG (SCHIZO_PBM_B_REGS_OFF + 0x2020UL) +#define SCHIZO_PCIDIAG_D_BADECC (1UL << 10UL) /* Disable BAD ECC errors */ +#define SCHIZO_PCIDIAG_D_BYPASS (1UL << 9UL) /* Disable MMU bypass mode */ +#define SCHIZO_PCIDIAG_D_TTO (1UL << 8UL) /* Disable TTO errors */ +#define SCHIZO_PCIDIAG_D_RTRYARB (1UL << 7UL) /* Disable retry arbitration */ +#define SCHIZO_PCIDIAG_D_RETRY (1UL << 6UL) /* Disable retry limit */ +#define SCHIZO_PCIDIAG_D_INTSYNC (1UL << 5UL) /* Disable interrupt/DMA synch */ +#define SCHIZO_PCIDIAG_I_DMA_PARITY (1UL << 3UL) /* Invert DMA parity */ +#define SCHIZO_PCIDIAG_I_PIOD_PARITY (1UL << 2UL) /* Invert PIO data parity */ +#define SCHIZO_PCIDIAG_I_PIOA_PARITY (1UL << 1UL) /* Invert PIO address parity */ + static void schizo_controller_hwinit(struct pci_controller_info *p) { unsigned long pbm_a_base, pbm_b_base; @@ -1751,17 +1774,37 @@ pbm_b_base = p->controller_regs + SCHIZO_PBM_B_REGS_OFF; /* Set IRQ retry to infinity.
*/ - schizo_write(pbm_a_base + 0x1a00UL, 0xff); - schizo_write(pbm_b_base + 0x1a00UL, 0xff); + schizo_write(p->controller_regs + SCHIZO_PCIA_IRQ_RETRY, + SCHIZO_IRQ_RETRY_INF); + schizo_write(p->controller_regs + SCHIZO_PCIB_IRQ_RETRY, + SCHIZO_IRQ_RETRY_INF); + + /* Enable arbiter for all PCI slots. Also, disable PCI interval + * timer so that DTO (Discard TimeOuts) are not reported because + * some Schizo revisions report them erroneously. + */ - /* Enable arbiter for all PCI slots. */ - tmp = schizo_read(pbm_a_base + 0x2000UL); - tmp |= 0x3fUL; - schizo_write(pbm_a_base + 0x2000UL, tmp); - - tmp = schizo_read(pbm_b_base + 0x2000UL); - tmp |= 0x3fUL; - schizo_write(pbm_b_base + 0x2000UL, tmp); + tmp = schizo_read(p->controller_regs + SCHIZO_PCIA_CTRL); + tmp |= SCHIZO_PCICTRL_ARB; + tmp &= ~SCHIZO_PCICTRL_PTO; + schizo_write(p->controller_regs + SCHIZO_PCIA_CTRL, tmp); + + tmp = schizo_read(p->controller_regs + SCHIZO_PCIB_CTRL); + tmp |= SCHIZO_PCICTRL_ARB; + tmp &= ~SCHIZO_PCICTRL_PTO; + schizo_write(p->controller_regs + SCHIZO_PCIB_CTRL, tmp); + + /* Disable TTO error reporting (won't happen anyway since we + * disabled the PCI interval timer above) and retry arbitration + * (can cause hangs in some Schizo revisions). + */ + tmp = schizo_read(p->controller_regs + SCHIZO_PCIA_DIAG); + tmp |= (SCHIZO_PCIDIAG_D_TTO | SCHIZO_PCIDIAG_D_RTRYARB); + schizo_write(p->controller_regs + SCHIZO_PCIA_DIAG, tmp); + + tmp = schizo_read(p->controller_regs + SCHIZO_PCIB_DIAG); + tmp |= (SCHIZO_PCIDIAG_D_TTO | SCHIZO_PCIDIAG_D_RTRYARB); + schizo_write(p->controller_regs + SCHIZO_PCIB_DIAG, tmp); } void __init schizo_init(int node, char *model_name) diff -Nru a/arch/sparc64/kernel/process.c b/arch/sparc64/kernel/process.c --- a/arch/sparc64/kernel/process.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/process.c Tue Mar 12 13:58:15 2002 @@ -106,9 +106,11 @@ int cpu = smp_processor_id(); if (local_irq_count(cpu) == 0 && - local_bh_count(cpu) == 0) - preempt_schedule(); - current_thread_info()->preempt_count--; + local_bh_count(cpu) == 0 && + test_thread_flag(TIF_NEED_RESCHED)) { + current->state = TASK_RUNNING; + schedule(); + } } #endif diff -Nru a/arch/sparc64/kernel/ptrace.c b/arch/sparc64/kernel/ptrace.c --- a/arch/sparc64/kernel/ptrace.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/ptrace.c Tue Mar 12 13:58:15 2002 @@ -627,9 +627,11 @@ if (!(current->ptrace & PT_PTRACED)) return; current->exit_code = SIGTRAP; + preempt_disable(); current->state = TASK_STOPPED; notify_parent(current, SIGCHLD); schedule(); + preempt_enable(); /* * this isn't the same as continuing with a signal, but it will do * for normal use. strace only continues with a signal if the diff -Nru a/arch/sparc64/kernel/rtrap.S b/arch/sparc64/kernel/rtrap.S --- a/arch/sparc64/kernel/rtrap.S Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/rtrap.S Tue Mar 12 13:58:15 2002 @@ -15,7 +15,6 @@ #include #include -#define PTREGS_OFF (STACK_BIAS + REGWIN_SZ) #define RTRAP_PSTATE (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV|PSTATE_IE) #define RTRAP_PSTATE_IRQOFF (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV) #define RTRAP_PSTATE_AG_IRQOFF (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV|PSTATE_AG) @@ -150,7 +149,7 @@ andn %l1, %l4, %l1 .align 64 - .globl rtrap_irq, rtrap_clr_l6, rtrap, irqsz_patchme + .globl rtrap_irq, rtrap_clr_l6, rtrap, irqsz_patchme, rtrap_xcall rtrap_irq: #ifdef CONFIG_PREEMPT ldsw [%g6 + TI_PRE_COUNT], %l0 @@ -165,9 +164,11 @@ lduw [%l2 + %l0], %l1 ! softirq_pending cmp %l1, 0 + /* mm/ultra.S:xcall_report_regs KNOWS about this load. 
*/ bne,pn %icc, __handle_softirq ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 __handle_softirq_continue: +rtrap_xcall: sethi %hi(0xf << 20), %l4 andcc %l1, TSTATE_PRIV, %l3 and %l1, %l4, %l4 @@ -276,9 +277,9 @@ add %l5, 1, %l6 stw %l6, [%g6 + TI_PRE_COUNT] call kpreempt_maybe - wrpr %g0, RTRAP_PSTATE, %pstate - wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate - stw %l5, [%g6 + TI_PRE_COUNT] + nop + ba,pt %xcc, rtrap + stw %l5, [%g6 + TI_PRE_COUNT] #endif kern_fpucheck: ldub [%g6 + TI_FPDEPTH], %l5 brz,pt %l5, rt_continue @@ -331,5 +332,3 @@ wr %g0, FPRS_DU, %fprs ba,pt %xcc, rt_continue stb %l5, [%g6 + TI_FPDEPTH] - -#undef PTREGS_OFF diff -Nru a/arch/sparc64/kernel/sbus.c b/arch/sparc64/kernel/sbus.c --- a/arch/sparc64/kernel/sbus.c Tue Mar 12 13:58:16 2002 +++ b/arch/sparc64/kernel/sbus.c Tue Mar 12 13:58:16 2002 @@ -628,11 +628,11 @@ /* SBUS SYSIO INO number to Sparc PIL level. */ static unsigned char sysio_ino_to_pil[] = { - 0, 2, 2, 7, 5, 7, 8, 9, /* SBUS slot 0 */ - 0, 2, 2, 7, 5, 7, 8, 9, /* SBUS slot 1 */ - 0, 2, 2, 7, 5, 7, 8, 9, /* SBUS slot 2 */ - 0, 2, 2, 7, 5, 7, 8, 9, /* SBUS slot 3 */ - 3, /* Onboard SCSI */ + 0, 4, 4, 7, 5, 7, 8, 9, /* SBUS slot 0 */ + 0, 4, 4, 7, 5, 7, 8, 9, /* SBUS slot 1 */ + 0, 4, 4, 7, 5, 7, 8, 9, /* SBUS slot 2 */ + 0, 4, 4, 7, 5, 7, 8, 9, /* SBUS slot 3 */ + 4, /* Onboard SCSI */ 5, /* Onboard Ethernet */ /*XXX*/ 8, /* Onboard BPP */ 0, /* Bogon */ diff -Nru a/arch/sparc64/kernel/setup.c b/arch/sparc64/kernel/setup.c --- a/arch/sparc64/kernel/setup.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/setup.c Tue Mar 12 13:58:15 2002 @@ -160,11 +160,16 @@ pmdp = pmd_offset(pgdp, va); if (pmd_none(*pmdp)) goto done; - ptep = pte_offset(pmdp, va); - if (!pte_present(*ptep)) - goto done; - tte = pte_val(*ptep); - res = PROM_TRUE; + + /* Preemption implicitly disabled by virtue of + * being called from inside OBP. + */ + ptep = pte_offset_map(pmdp, va); + if (pte_present(*ptep)) { + tte = pte_val(*ptep); + res = PROM_TRUE; + } + pte_unmap(ptep); goto done; } @@ -210,11 +215,15 @@ pmdp = pmd_offset(pgdp, va); if (pmd_none(*pmdp)) goto done; - ptep = pte_offset(pmdp, va); - if (!pte_present(*ptep)) - goto done; - tte = pte_val(*ptep); - res = PROM_TRUE; + + /* Preemption implicitly disabled by virtue of + * being called from inside OBP. 
+ */ + ptep = pte_offset_kernel(pmdp, va); + if (pte_present(*ptep)) { + tte = pte_val(*ptep); + res = PROM_TRUE; + } goto done; } @@ -530,7 +539,7 @@ if (!root_flags) root_mountflags &= ~MS_RDONLY; ROOT_DEV = to_kdev_t(root_dev); -#ifdef CONFIG_BLK_DEV_RAM +#ifdef CONFIG_BLK_DEV_INITRD rd_image_start = ram_flags & RAMDISK_IMAGE_START_MASK; rd_prompt = ((ram_flags & RAMDISK_PROMPT_FLAG) != 0); rd_doload = ((ram_flags & RAMDISK_LOAD_FLAG) != 0); diff -Nru a/arch/sparc64/kernel/signal.c b/arch/sparc64/kernel/signal.c --- a/arch/sparc64/kernel/signal.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/signal.c Tue Mar 12 13:58:15 2002 @@ -713,9 +713,11 @@ if ((current->ptrace & PT_PTRACED) && signr != SIGKILL) { current->exit_code = signr; + preempt_disable(); current->state = TASK_STOPPED; notify_parent(current, SIGCHLD); schedule(); + preempt_enable(); if (!(signr = current->exit_code)) continue; current->exit_code = 0; @@ -766,16 +768,20 @@ if (is_orphaned_pgrp(current->pgrp)) continue; - case SIGSTOP: - if (current->ptrace & PT_PTRACED) - continue; - current->state = TASK_STOPPED; + case SIGSTOP: { + struct signal_struct *sig; + current->exit_code = signr; - if (!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags & + sig = current->p_pptr->sig; + preempt_disable(); + current->state = TASK_STOPPED; + if (sig && !(sig->action[SIGCHLD-1].sa.sa_flags & SA_NOCLDSTOP)) notify_parent(current, SIGCHLD); schedule(); + preempt_enable(); continue; + } case SIGQUIT: case SIGILL: case SIGTRAP: case SIGABRT: case SIGFPE: case SIGSEGV: diff -Nru a/arch/sparc64/kernel/signal32.c b/arch/sparc64/kernel/signal32.c --- a/arch/sparc64/kernel/signal32.c Tue Mar 12 13:58:14 2002 +++ b/arch/sparc64/kernel/signal32.c Tue Mar 12 13:58:14 2002 @@ -776,7 +776,7 @@ unsigned long address = ((unsigned long)&(sf->insns[0])); pgd_t *pgdp = pgd_offset(current->mm, address); pmd_t *pmdp = pmd_offset(pgdp, address); - pte_t *ptep = pte_offset(pmdp, address); + pte_t *ptep; regs->u_regs[UREG_I7] = (unsigned long) (&(sf->insns[0]) - 2); @@ -785,6 +785,8 @@ if (err) goto sigsegv; + preempt_disable(); + ptep = pte_offset_map(pmdp, address); if (pte_present(*ptep)) { unsigned long page = (unsigned long) page_address(pte_page(*ptep)); @@ -794,6 +796,8 @@ : : "r" (page), "r" (address & (PAGE_SIZE - 1)) : "memory"); } + pte_unmap(ptep); + preempt_enable(); } return; @@ -1225,7 +1229,7 @@ unsigned long address = ((unsigned long)&(sf->insns[0])); pgd_t *pgdp = pgd_offset(current->mm, address); pmd_t *pmdp = pmd_offset(pgdp, address); - pte_t *ptep = pte_offset(pmdp, address); + pte_t *ptep; regs->u_regs[UREG_I7] = (unsigned long) (&(sf->insns[0]) - 2); @@ -1237,6 +1241,8 @@ if (err) goto sigsegv; + preempt_disable(); + ptep = pte_offset_map(pmdp, address); if (pte_present(*ptep)) { unsigned long page = (unsigned long) page_address(pte_page(*ptep)); @@ -1246,6 +1252,8 @@ : : "r" (page), "r" (address & (PAGE_SIZE - 1)) : "memory"); } + pte_unmap(ptep); + preempt_enable(); } return; @@ -1379,9 +1387,11 @@ if ((current->ptrace & PT_PTRACED) && signr != SIGKILL) { current->exit_code = signr; + preempt_disable(); current->state = TASK_STOPPED; notify_parent(current, SIGCHLD); schedule(); + preempt_enable(); if (!(signr = current->exit_code)) continue; current->exit_code = 0; @@ -1432,17 +1442,20 @@ if (is_orphaned_pgrp(current->pgrp)) continue; - case SIGSTOP: - if (current->ptrace & PT_PTRACED) - continue; - current->state = TASK_STOPPED; + case SIGSTOP: { + struct signal_struct *sig; + current->exit_code = signr; - if 
(!(current->p_pptr->sig->action[SIGCHLD-1].sa.sa_flags & + sig = current->p_pptr->sig; + preempt_disable(); + current->state = TASK_STOPPED; + if (sig && !(sig->action[SIGCHLD-1].sa.sa_flags & SA_NOCLDSTOP)) notify_parent(current, SIGCHLD); schedule(); + preempt_enable(); continue; - + } case SIGQUIT: case SIGILL: case SIGTRAP: case SIGABRT: case SIGFPE: case SIGSEGV: case SIGBUS: case SIGSYS: case SIGXCPU: case SIGXFSZ: diff -Nru a/arch/sparc64/kernel/smp.c b/arch/sparc64/kernel/smp.c --- a/arch/sparc64/kernel/smp.c Tue Mar 12 13:58:14 2002 +++ b/arch/sparc64/kernel/smp.c Tue Mar 12 13:58:14 2002 @@ -594,11 +594,12 @@ return 0; } -void smp_call_function_client(void) +void smp_call_function_client(int irq, struct pt_regs *regs) { void (*func) (void *info) = call_data->func; void *info = call_data->info; + clear_softint(1 << irq); if (call_data->wait) { /* let initiator proceed only after completion */ func(info); @@ -722,6 +723,12 @@ } } +void smp_receive_signal_client(int irq, struct pt_regs *regs) +{ + /* Just return, rtrap takes care of the rest. */ + clear_softint(1 << irq); +} + void smp_report_regs(void) { smp_cross_call(&xcall_report_regs, 0, 0, 0); @@ -885,48 +892,6 @@ } } -/* Process migration IPIs. */ - -extern unsigned long xcall_migrate_task; - -static spinlock_t migration_lock = SPIN_LOCK_UNLOCKED; -static task_t *new_task; - -void smp_migrate_task(int cpu, task_t *p) -{ - unsigned long mask = 1UL << cpu; - - if (cpu == smp_processor_id()) - return; - - if (smp_processors_ready && (cpu_present_map & mask) != 0) { - u64 data0 = (((u64)&xcall_migrate_task) & 0xffffffff); - - _raw_spin_lock(&migration_lock); - new_task = p; - - if (tlb_type == spitfire) - spitfire_xcall_deliver(data0, 0, 0, mask); - else - cheetah_xcall_deliver(data0, 0, 0, mask); - } -} - -/* Called at PIL level 1. */ -asmlinkage void smp_task_migration_interrupt(int irq, struct pt_regs *regs) -{ - task_t *p; - - if (irq != PIL_MIGRATE) - BUG(); - - clear_softint(1 << irq); - - p = new_task; - _raw_spin_unlock(&migration_lock); - sched_task_migrated(p); -} - /* CPU capture. 
*/ /* #define CAPTURE_DEBUG */ extern unsigned long xcall_capture; @@ -982,10 +947,14 @@ extern void prom_world(int); extern void save_alternate_globals(unsigned long *); extern void restore_alternate_globals(unsigned long *); -void smp_penguin_jailcell(void) +void smp_penguin_jailcell(int irq, struct pt_regs *regs) { unsigned long global_save[24]; + clear_softint(1 << irq); + + preempt_disable(); + __asm__ __volatile__("flushw"); save_alternate_globals(global_save); prom_world(1); @@ -996,6 +965,8 @@ restore_alternate_globals(global_save); atomic_dec(&smp_capture_registry); prom_world(0); + + preempt_enable(); } extern unsigned long xcall_promstop; diff -Nru a/arch/sparc64/kernel/sparc64_ksyms.c b/arch/sparc64/kernel/sparc64_ksyms.c --- a/arch/sparc64/kernel/sparc64_ksyms.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/sparc64_ksyms.c Tue Mar 12 13:58:15 2002 @@ -249,7 +249,7 @@ /* Should really be in linux/kernel/ksyms.c */ EXPORT_SYMBOL(dump_thread); EXPORT_SYMBOL(dump_fpu); -EXPORT_SYMBOL(pte_alloc_one); +EXPORT_SYMBOL(pte_alloc_one_kernel); #ifndef CONFIG_SMP EXPORT_SYMBOL(pgt_quicklists); #endif diff -Nru a/arch/sparc64/kernel/sys_sparc32.c b/arch/sparc64/kernel/sys_sparc32.c --- a/arch/sparc64/kernel/sys_sparc32.c Tue Mar 12 13:58:14 2002 +++ b/arch/sparc64/kernel/sys_sparc32.c Tue Mar 12 13:58:14 2002 @@ -50,6 +50,7 @@ #include #include #include +#include #include #include @@ -1067,16 +1068,20 @@ /* First get the "struct iovec" from user memory and * verify all the pointers */ + retval = 0; if (!count) - return 0; + goto out_nofree; + retval = -EFAULT; if (verify_area(VERIFY_READ, vector, sizeof(struct iovec32)*count)) - return -EFAULT; + goto out_nofree; + retval = -EINVAL; if (count > UIO_MAXIOV) - return -EINVAL; + goto out_nofree; if (count > UIO_FASTIOV) { + retval = -ENOMEM; iov = kmalloc(count*sizeof(struct iovec), GFP_KERNEL); if (!iov) - return -ENOMEM; + goto out_nofree; } tot_len = 0; @@ -1136,6 +1141,11 @@ out: if (iov != iovstack) kfree(iov); +out_nofree: + /* VERIFY_WRITE actually means a read, as we write to user space */ + if ((retval + (type == VERIFY_WRITE)) > 0) + dnotify_parent(file->f_dentry, + (type == VERIFY_WRITE) ? DN_MODIFY : DN_ACCESS); return retval; } @@ -3950,6 +3960,27 @@ set_fs(old_fs); if (offset && put_user(of, offset)) + return -EFAULT; + + return ret; +} + +extern asmlinkage ssize_t sys_sendfile64(int out_fd, int in_fd, loff_t *offset, size_t count); + +asmlinkage int sys32_sendfile64(int out_fd, int in_fd, __kernel_loff_t32 *offset, s32 count) +{ + mm_segment_t old_fs = get_fs(); + int ret; + loff_t lof; + + if (offset && get_user(lof, offset)) + return -EFAULT; + + set_fs(KERNEL_DS); + ret = sys_sendfile64(out_fd, in_fd, offset ? 
&lof : NULL, count); + set_fs(old_fs); + + if (offset && put_user(lof, offset)) return -EFAULT; return ret; diff -Nru a/arch/sparc64/kernel/systbls.S b/arch/sparc64/kernel/systbls.S --- a/arch/sparc64/kernel/systbls.S Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/systbls.S Tue Mar 12 13:58:15 2002 @@ -47,7 +47,7 @@ .word sys_nis_syscall, sys32_setreuid16, sys32_setregid16, sys_rename, sys_truncate /*130*/ .word sys_ftruncate, sys_flock, sys_lstat64, sys_nis_syscall, sys_nis_syscall .word sys_nis_syscall, sys_mkdir, sys_rmdir, sys32_utimes, sys_stat64 -/*140*/ .word sys_nis_syscall, sys_nis_syscall, sys_nis_syscall, sys_gettid, sys32_getrlimit +/*140*/ .word sys32_sendfile64, sys_nis_syscall, sys_nis_syscall, sys_gettid, sys32_getrlimit .word sys32_setrlimit, sys_pivot_root, sys32_prctl, sys32_pciconfig_read, sys32_pciconfig_write /*150*/ .word sys_nis_syscall, sys_nis_syscall, sys_nis_syscall, sys_poll, sys_getdents64 .word sys32_fcntl64, sys_nis_syscall, sys32_statfs, sys32_fstatfs, sys_oldumount @@ -106,7 +106,7 @@ .word sys_recvfrom, sys_setreuid, sys_setregid, sys_rename, sys_truncate /*130*/ .word sys_ftruncate, sys_flock, sys_nis_syscall, sys_sendto, sys_shutdown .word sys_socketpair, sys_mkdir, sys_rmdir, sys_utimes, sys_nis_syscall -/*140*/ .word sys_nis_syscall, sys_getpeername, sys_nis_syscall, sys_gettid, sys_getrlimit +/*140*/ .word sys_sendfile64, sys_getpeername, sys_nis_syscall, sys_gettid, sys_getrlimit .word sys_setrlimit, sys_pivot_root, sys_prctl, sys_pciconfig_read, sys_pciconfig_write /*150*/ .word sys_getsockname, sys_nis_syscall, sys_nis_syscall, sys_poll, sys_getdents64 .word sys_nis_syscall, sys_nis_syscall, sys_statfs, sys_fstatfs, sys_oldumount diff -Nru a/arch/sparc64/kernel/time.c b/arch/sparc64/kernel/time.c --- a/arch/sparc64/kernel/time.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/kernel/time.c Tue Mar 12 13:58:15 2002 @@ -405,6 +405,7 @@ char model[128]; int node, busnd = -1, err; unsigned long flags; + struct linux_central *cbus; #ifdef CONFIG_PCI struct linux_ebus *ebus = NULL; struct isa_bridge *isa_br = NULL; @@ -431,21 +432,30 @@ __save_and_cli(flags); - if(central_bus != NULL) { + cbus = central_bus; + if (cbus != NULL) busnd = central_bus->child->prom_node; - } + + /* Check FHC Central then EBUSs then ISA bridges then SBUSs. + * That way we handle the presence of multiple properly. + * + * As a special case, machines with Central must provide the + * timer chip there. 
+ */ #ifdef CONFIG_PCI - else if (ebus_chain != NULL) { + if (ebus_chain != NULL) { ebus = ebus_chain; - busnd = ebus->prom_node; - } else if (isa_chain != NULL) { + if (busnd == -1) + busnd = ebus->prom_node; + } + if (isa_chain != NULL) { isa_br = isa_chain; - busnd = isa_br->prom_node; + if (busnd == -1) + busnd = isa_br->prom_node; } #endif - else if (sbus_root != NULL) { + if (sbus_root != NULL && busnd == -1) busnd = sbus_root->prom_node; - } if (busnd == -1) { prom_printf("clock_probe: problem, cannot find bus to search.\n"); @@ -464,7 +474,12 @@ strcmp(model, "mk48t59") && strcmp(model, "m5819") && strcmp(model, "ds1287")) { - if (node) + if (cbus != NULL) { + prom_printf("clock_probe: Central bus lacks timer chip.\n"); + prom_halt(); + } + + if (node != 0) node = prom_getsibling(node); #ifdef CONFIG_PCI while ((node == 0) && ebus != NULL) { @@ -496,12 +511,12 @@ prom_halt(); } - if(central_bus) { + if (cbus != NULL) { apply_fhc_ranges(central_bus->child, clk_reg, 1); apply_central_ranges(central_bus, clk_reg, 1); } #ifdef CONFIG_PCI - else if (ebus_chain != NULL) { + else if (ebus != NULL) { struct linux_ebus_device *edev; for_each_ebusdev(edev, ebus) @@ -523,7 +538,8 @@ mstk48t02_regs = mstk48t59_regs + MOSTEK_48T59_48T02; } break; - } else if (isa_chain != NULL) { + } + else if (isa_br != NULL) { struct isa_device *isadev; try_isa_clock: diff -Nru a/arch/sparc64/kernel/ttable.S b/arch/sparc64/kernel/ttable.S --- a/arch/sparc64/kernel/ttable.S Tue Mar 12 13:58:14 2002 +++ b/arch/sparc64/kernel/ttable.S Tue Mar 12 13:58:14 2002 @@ -45,12 +45,15 @@ tl0_resv038: BTRAP(0x38) BTRAP(0x39) BTRAP(0x3a) BTRAP(0x3b) BTRAP(0x3c) BTRAP(0x3d) tl0_resv03e: BTRAP(0x3e) BTRAP(0x3f) BTRAP(0x40) #ifdef CONFIG_SMP -tl0_irq1: TRAP_IRQ(smp_task_migration_interrupt, 1) +tl0_irq1: TRAP_IRQ(smp_call_function_client, 1) +tl0_irq2: TRAP_IRQ(smp_receive_signal_client, 2) +tl0_irq3: TRAP_IRQ(smp_penguin_jailcell, 3) #else tl0_irq1: BTRAP(0x41) +tl0_irq2: BTRAP(0x42) +tl0_irq3: BTRAP(0x43) #endif -tl0_irq2: TRAP_IRQ(handler_irq, 2) -tl0_irq3: TRAP_IRQ(handler_irq, 3) TRAP_IRQ(handler_irq, 4) +tl0_irq4: TRAP_IRQ(handler_irq, 4) tl0_irq5: TRAP_IRQ(handler_irq, 5) TRAP_IRQ(handler_irq, 6) tl0_irq7: TRAP_IRQ(handler_irq, 7) TRAP_IRQ(handler_irq, 8) tl0_irq9: TRAP_IRQ(handler_irq, 9) TRAP_IRQ(handler_irq, 10) diff -Nru a/arch/sparc64/mm/fault.c b/arch/sparc64/mm/fault.c --- a/arch/sparc64/mm/fault.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/mm/fault.c Tue Mar 12 13:58:15 2002 @@ -160,10 +160,12 @@ pmdp = pmd_offset(pgdp, tpc); if (pmd_none(*pmdp)) goto outret; - ptep = pte_offset(pmdp, tpc); + + /* This disables preemption for us as well. */ __asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate)); __asm__ __volatile__("wrpr %0, %1, %%pstate" : : "r" (pstate), "i" (PSTATE_IE)); + ptep = pte_offset_map(pmdp, tpc); pte = *ptep; if (!pte_present(pte)) goto out; @@ -177,6 +179,7 @@ : "r" (pa), "i" (ASI_PHYS_USE_EC)); out: + pte_unmap(ptep); __asm__ __volatile__("wrpr %0, 0x0, %%pstate" : : "r" (pstate)); outret: return insn; @@ -340,6 +343,20 @@ goto good_area; if (!(vma->vm_flags & VM_GROWSDOWN)) goto bad_area; + if (!(fault_code & FAULT_CODE_WRITE)) { + /* Non-faulting loads shouldn't expand stack. 
*/ + insn = get_fault_insn(regs, insn); + if ((insn & 0xc0800000) == 0xc0800000) { + unsigned char asi; + + if (insn & 0x2000) + asi = (regs->tstate >> 24); + else + asi = (insn >> 5); + if ((asi & 0xf2) == 0x82) + goto bad_area; + } + } if (expand_stack(vma, address)) goto bad_area; /* diff -Nru a/arch/sparc64/mm/generic.c b/arch/sparc64/mm/generic.c --- a/arch/sparc64/mm/generic.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/mm/generic.c Tue Mar 12 13:58:15 2002 @@ -101,10 +101,11 @@ end = PGDIR_SIZE; offset -= address; do { - pte_t * pte = pte_alloc(current->mm, pmd, address); + pte_t * pte = pte_alloc_map(current->mm, pmd, address); if (!pte) return -ENOMEM; io_remap_pte_range(pte, address, end - address, address + offset, prot, space); + pte_unmap(pte); address = (address + PMD_SIZE) & PMD_MASK; pmd++; } while (address < end); diff -Nru a/arch/sparc64/mm/init.c b/arch/sparc64/mm/init.c --- a/arch/sparc64/mm/init.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/mm/init.c Tue Mar 12 13:58:15 2002 @@ -65,24 +65,27 @@ int bigkernel = 0; -int do_check_pgt_cache(int low, int high) -{ - int freed = 0; +/* XXX Tune this... */ +#define PGT_CACHE_LOW 25 +#define PGT_CACHE_HIGH 50 - if (pgtable_cache_size > high) { +void check_pgt_cache(void) +{ + preempt_disable(); + if (pgtable_cache_size > PGT_CACHE_HIGH) { do { #ifdef CONFIG_SMP if (pgd_quicklist) - free_pgd_slow(get_pgd_fast()), freed++; + free_pgd_slow(get_pgd_fast()); #endif if (pte_quicklist[0]) - free_pte_slow(pte_alloc_one_fast(NULL, 0)), freed++; + free_pte_slow(pte_alloc_one_fast(NULL, 0)); if (pte_quicklist[1]) - free_pte_slow(pte_alloc_one_fast(NULL, 1 << (PAGE_SHIFT + 10))), freed++; - } while (pgtable_cache_size > low); + free_pte_slow(pte_alloc_one_fast(NULL, 1 << (PAGE_SHIFT + 10))); + } while (pgtable_cache_size > PGT_CACHE_LOW); } -#ifndef CONFIG_SMP - if (pgd_cache_size > high / 4) { +#ifndef CONFIG_SMP + if (pgd_cache_size > PGT_CACHE_HIGH / 4) { struct page *page, *page2; for (page2 = NULL, page = (struct page *)pgd_quicklist; page;) { if ((unsigned long)page->pprev_hash == 3) { @@ -94,12 +97,11 @@ page->pprev_hash = NULL; pgd_cache_size -= 2; __free_page(page); - freed++; if (page2) page = page2->next_hash; else page = (struct page *)pgd_quicklist; - if (pgd_cache_size <= low / 4) + if (pgd_cache_size <= PGT_CACHE_LOW / 4) break; continue; } @@ -108,7 +110,7 @@ } } #endif - return freed; + preempt_enable(); } #ifdef CONFIG_DEBUG_DCFLUSH @@ -143,7 +145,7 @@ static __inline__ void set_dcache_dirty(struct page *page) { unsigned long mask = smp_processor_id(); - unsigned long non_cpu_bits = (1UL << 24UL) - 1UL; + unsigned long non_cpu_bits = ~((NR_CPUS - 1UL) << 24UL); mask = (mask << 24) | (1UL << PG_dcache_dirty); __asm__ __volatile__("1:\n\t" "ldx [%2], %%g7\n\t" @@ -166,6 +168,7 @@ "1:\n\t" "ldx [%2], %%g7\n\t" "srlx %%g7, 24, %%g5\n\t" + "and %%g5, %3, %%g5\n\t" "cmp %%g5, %0\n\t" "bne,pn %%icc, 2f\n\t" " andn %%g7, %1, %%g5\n\t" @@ -175,7 +178,8 @@ " membar #StoreLoad | #StoreStore\n" "2:" : /* no outputs */ - : "r" (cpu), "r" (mask), "r" (&page->flags) + : "r" (cpu), "r" (mask), "r" (&page->flags), + "i" (NR_CPUS - 1UL) : "g5", "g7"); } @@ -189,7 +193,7 @@ if (VALID_PAGE(page) && page->mapping && ((pg_flags = page->flags) & (1UL << PG_dcache_dirty))) { - int cpu = (pg_flags >> 24); + int cpu = ((pg_flags >> 24) & (NR_CPUS - 1UL)); /* This is just to optimize away some function calls * in the SMP case. 
@@ -212,8 +216,8 @@ int dirty_cpu = dcache_dirty_cpu(page); if (page->mapping && - page->mapping->i_mmap == NULL && - page->mapping->i_mmap_shared == NULL) { + list_empty(&page->mapping->i_mmap) && + list_empty(&page->mapping->i_mmap_shared)) { if (dirty) { if (dirty_cpu == smp_processor_id()) return; @@ -244,7 +248,7 @@ if (pmd_none(*pmd)) return; - ptep = pte_offset(pmd, address); + ptep = pte_offset_map(pmd, address); offset = address & ~PMD_MASK; if (offset + size > PMD_SIZE) size = PMD_SIZE - offset; @@ -267,6 +271,7 @@ flush_dcache_page_all(mm, page); } } + pte_unmap(ptep - 1); } static inline void flush_cache_pmd_range(struct mm_struct *mm, pgd_t *dir, unsigned long address, unsigned long size) @@ -389,7 +394,7 @@ *error = 1; return(0); } - ptep = (pte_t *)pmd_page(*pmdp) + ((promva >> 13) & 0x3ff); + ptep = (pte_t *)__pmd_page(*pmdp) + ((promva >> 13) & 0x3ff); if (!pte_present(*ptep)) { if (error) *error = 1; @@ -466,7 +471,7 @@ memset(ptep, 0, BASE_PAGE_SIZE); pmd_set(pmdp, ptep); } - ptep = (pte_t *)pmd_page(*pmdp) + + ptep = (pte_t *)__pmd_page(*pmdp) + ((vaddr >> 13) & 0x3ff); val = trans[i].data; @@ -1133,11 +1138,20 @@ #else #define DC_ALIAS_SHIFT 0 #endif -pte_t *pte_alloc_one(struct mm_struct *mm, unsigned long address) +pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address) { - struct page *page = alloc_pages(GFP_KERNEL, DC_ALIAS_SHIFT); - unsigned long color = VPTE_COLOR(address); + struct page *page; + unsigned long color; + + { + pte_t *ptep = pte_alloc_one_fast(mm, address); + + if (ptep) + return ptep; + } + color = VPTE_COLOR(address); + page = alloc_pages(GFP_KERNEL, DC_ALIAS_SHIFT); if (page) { unsigned long *to_free; unsigned long paddr; @@ -1159,9 +1173,11 @@ #if (L1DCACHE_SIZE > PAGE_SIZE) /* is there D$ aliasing problem */ /* Now free the other one up, adjust cache size. */ + preempt_disable(); *to_free = (unsigned long) pte_quicklist[color ^ 0x1]; pte_quicklist[color ^ 0x1] = to_free; pgtable_cache_size++; + preempt_enable(); #endif return pte; diff -Nru a/arch/sparc64/mm/ultra.S b/arch/sparc64/mm/ultra.S --- a/arch/sparc64/mm/ultra.S Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/mm/ultra.S Tue Mar 12 13:58:15 2002 @@ -11,6 +11,7 @@ #include #include #include +#include /* Basically, all this madness has to do with the * fact that Cheetah does not support IMMU flushes @@ -482,6 +483,15 @@ nop nop + /* NOTE: This is SPECIAL!! We do etrap/rtrap however + * we choose to deal with the "BH's run with + * %pil==15" problem (described in asm/pil.h) + * by just invoking rtrap directly past where + * BH's are checked for. + * + * We do it like this because we do not want %pil==15 + * lockups to prevent regs being reported. + */ .globl xcall_report_regs xcall_report_regs: rdpr %pstate, %g2 @@ -489,12 +499,14 @@ rdpr %pil, %g2 wrpr %g0, 15, %pil sethi %hi(109f), %g7 - b,pt %xcc, etrap_irq + b,pt %xcc, etrap 109: or %g7, %lo(109b), %g7 call __show_regs add %sp, STACK_BIAS + REGWIN_SZ, %o0 - b,pt %xcc, rtrap_irq - nop + clr %l6 + /* Has to be a non-v9 branch due to the large distance. 
*/ + b rtrap_xcall + ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 .align 32 .globl xcall_flush_dcache_page_cheetah @@ -543,20 +555,6 @@ nop nop - .globl xcall_capture -xcall_capture: - rdpr %pstate, %g2 - wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate - rdpr %pil, %g2 - wrpr %g0, 15, %pil - sethi %hi(109f), %g7 - b,pt %xcc, etrap_irq -109: or %g7, %lo(109b), %g7 - call smp_penguin_jailcell - nop - b,pt %xcc, rtrap_irq - nop - .globl xcall_promstop xcall_promstop: rdpr %pstate, %g2 @@ -564,7 +562,7 @@ rdpr %pil, %g2 wrpr %g0, 15, %pil sethi %hi(109f), %g7 - b,pt %xcc, etrap_irq + b,pt %xcc, etrap 109: or %g7, %lo(109b), %g7 flushw call prom_stopself @@ -573,21 +571,6 @@ 1: b,a,pt %xcc, 1b nop - .globl xcall_receive_signal -xcall_receive_signal: - rdpr %pstate, %g2 - wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate - rdpr %tstate, %g1 - andcc %g1, TSTATE_PRIV, %g0 - /* If we did not trap from user space, just ignore. */ - bne,pn %xcc, 99f - sethi %hi(109f), %g7 - b,pt %xcc, etrap -109: or %g7, %lo(109b), %g7 - b,pt %xcc, rtrap - clr %l6 -99: retry - .data errata32_hwbug: @@ -670,25 +653,20 @@ __cheetah_xcall_flush_cache_all: retry + /* These just get rescheduled to PIL vectors. */ .globl xcall_call_function xcall_call_function: - rdpr %pstate, %g2 - wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate - rdpr %pil, %g2 - wrpr %g0, 15, %pil - sethi %hi(109f), %g7 - b,pt %xcc, etrap_irq -109: or %g7, %lo(109b), %g7 - call smp_call_function_client - nop - b,pt %xcc, rtrap_irq - nop + wr %g0, (1 << PIL_SMP_CALL_FUNC), %set_softint + retry - .globl xcall_migrate_task -xcall_migrate_task: - mov 1, %g2 - sllx %g2, (PIL_MIGRATE), %g2 - wr %g2, 0x0, %set_softint + .globl xcall_receive_signal +xcall_receive_signal: + wr %g0, (1 << PIL_SMP_RECEIVE_SIGNAL), %set_softint + retry + + .globl xcall_capture +xcall_capture: + wr %g0, (1 << PIL_SMP_CAPTURE), %set_softint retry #endif /* CONFIG_SMP */ diff -Nru a/arch/sparc64/solaris/ioctl.c b/arch/sparc64/solaris/ioctl.c --- a/arch/sparc64/solaris/ioctl.c Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/solaris/ioctl.c Tue Mar 12 13:58:15 2002 @@ -289,11 +289,15 @@ { struct inode *ino; /* I wonder which of these tests are superfluous... --patrik */ + read_lock(¤t->files->file_lock); if (! current->files->fd[fd] || ! current->files->fd[fd]->f_dentry || ! (ino = current->files->fd[fd]->f_dentry->d_inode) || - ! ino->i_sock) + ! ino->i_sock) { + read_unlock(¤t->files->file_lock); return TBADF; + } + read_unlock(¤t->files->file_lock); switch (cmd & 0xff) { case 109: /* SI_SOCKPARAMS */ diff -Nru a/arch/sparc64/vmlinux.lds b/arch/sparc64/vmlinux.lds --- a/arch/sparc64/vmlinux.lds Tue Mar 12 13:58:15 2002 +++ b/arch/sparc64/vmlinux.lds Tue Mar 12 13:58:15 2002 @@ -56,6 +56,10 @@ *(.initcall7.init) } __initcall_end = .; + . = ALIGN(32); + __per_cpu_start = .; + .data.percpu : { *(.data.percpu) } + __per_cpu_end = .; . = ALIGN(8192); __init_end = .; . 
= ALIGN(64); diff -Nru a/arch/x86_64/ia32/ia32_ioctl.c b/arch/x86_64/ia32/ia32_ioctl.c --- a/arch/x86_64/ia32/ia32_ioctl.c Tue Mar 12 13:58:14 2002 +++ b/arch/x86_64/ia32/ia32_ioctl.c Tue Mar 12 13:58:14 2002 @@ -3059,7 +3059,6 @@ COMPATIBLE_IOCTL(HDIO_SET_MULTCOUNT) COMPATIBLE_IOCTL(HDIO_DRIVE_CMD) COMPATIBLE_IOCTL(HDIO_SET_PIO_MODE) -COMPATIBLE_IOCTL(HDIO_SCAN_HWIF) COMPATIBLE_IOCTL(HDIO_SET_NICE) /* 0x02 -- Floppy ioctls */ COMPATIBLE_IOCTL(FDMSGON) diff -Nru a/drivers/acorn/char/i2c.c b/drivers/acorn/char/i2c.c --- a/drivers/acorn/char/i2c.c Tue Mar 12 13:58:15 2002 +++ b/drivers/acorn/char/i2c.c Tue Mar 12 13:58:15 2002 @@ -14,6 +14,9 @@ */ #include #include +#include +#include +#include #include #include @@ -21,15 +24,19 @@ #include #include #include +#include #include "pcf8583.h" -extern unsigned long -mktime(unsigned int year, unsigned int mon, unsigned int day, - unsigned int hour, unsigned int min, unsigned int sec); extern int (*set_rtc)(void); static struct i2c_client *rtc_client; +static const unsigned char days_in_mon[] = + { 0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 }; +static unsigned int rtc_epoch = 1900; + +#define CMOS_CHECKSUM (63) +#define CMOS_YEAR (64 + 128) static inline int rtc_command(int cmd, void *data) { @@ -44,12 +51,10 @@ /* * Read the current RTC time and date, and update xtime. */ -static void get_rtc_time(void) +static void get_rtc_time(struct rtc_tm *rtctm, unsigned int *year) { - unsigned char ctrl; - unsigned char year; - struct rtc_tm rtctm; - struct mem rtcmem = { 0xc0, 1, &year }; + unsigned char ctrl, yr[2]; + struct mem rtcmem = { CMOS_YEAR, sizeof(yr), yr }; /* * Ensure that the RTC is running. @@ -73,22 +78,53 @@ if (rtc_command(MEM_READ, &rtcmem)) return; - if (rtc_command(RTC_GETDATETIME, &rtctm)) + if (rtc_command(RTC_GETDATETIME, rtctm)) return; - if (year < 70) - year += 100; + *year = yr[1] * 100 + yr[0]; +} + +static int set_rtc_time(struct rtc_tm *rtctm, unsigned int year) +{ + unsigned char yr[2], leap, chk; + struct mem cmos_year = { CMOS_YEAR, sizeof(yr), yr }; + struct mem cmos_check = { CMOS_CHECKSUM, 1, &chk }; + int ret; + + leap = (!(year % 4) && (year % 100)) || !(year % 400); + + if (rtctm->mon > 12 || rtctm->mday == 0) + return -EINVAL; + + if (rtctm->mday > (days_in_mon[rtctm->mon] + (rtctm->mon == 2 && leap))) + return -EINVAL; + + if (rtctm->hours >= 24 || rtctm->mins >= 60 || rtctm->secs >= 60) + return -EINVAL; - xtime.tv_usec = rtctm.cs * 10000; - xtime.tv_sec = mktime(1900 + year, rtctm.mon, rtctm.mday, - rtctm.hours, rtctm.mins, rtctm.secs); + ret = rtc_command(RTC_SETDATETIME, rtctm); + if (ret == 0) { + rtc_command(MEM_READ, &cmos_check); + rtc_command(MEM_READ, &cmos_year); + + chk -= yr[1] + yr[0]; + + yr[1] = year / 100; + yr[0] = year % 100; + + chk += yr[1] + yr[0]; + + rtc_command(MEM_WRITE, &cmos_year); + rtc_command(MEM_WRITE, &cmos_check); + } + return ret; } /* * Set the RTC time only. Note that * we do not touch the date. 
*/ -static int set_rtc_time(void) +static int k_set_rtc_time(void) { struct rtc_tm new_rtctm, old_rtctm; unsigned long nowtime = xtime.tv_sec; @@ -110,13 +146,70 @@ * [ rtc: 1/1/2000 23:58:00, real 2/1/2000 00:01:00, * rtc gets set to 1/1/2000 00:01:00 ] */ - if ((old_rtctm.hours == 23 && old_rtctm.mins == 59) || - (new_rtctm.hours == 23 && new_rtctm.mins == 59)) + if ((old_rtctm.hours == 23 && old_rtctm.mins == 59) || + (new_rtctm.hours == 23 && new_rtctm.mins == 59)) return 1; return rtc_command(RTC_SETTIME, &new_rtctm); } +static int rtc_ioctl(struct inode *inode, struct file *file, + unsigned int cmd, unsigned long arg) +{ + unsigned int year; + struct rtc_time rtctm; + struct rtc_tm rtc_raw; + + switch (cmd) { + case RTC_ALM_READ: + case RTC_ALM_SET: + break; + + case RTC_RD_TIME: + get_rtc_time(&rtc_raw, &year); + rtctm.tm_sec = rtc_raw.secs; + rtctm.tm_min = rtc_raw.mins; + rtctm.tm_hour = rtc_raw.hours; + rtctm.tm_mday = rtc_raw.mday; + rtctm.tm_mon = rtc_raw.mon - 1; /* month starts at 0 */ + rtctm.tm_year = year - 1900; /* starts at 1900 */ + return copy_to_user((void *)arg, &rtctm, sizeof(rtctm)) + ? -EFAULT : 0; + + case RTC_SET_TIME: + if (!capable(CAP_SYS_TIME)) + return -EACCES; + + if (copy_from_user(&rtctm, (void *)arg, sizeof(rtctm))) + return -EFAULT; + rtc_raw.secs = rtctm.tm_sec; + rtc_raw.mins = rtctm.tm_min; + rtc_raw.hours = rtctm.tm_hour; + rtc_raw.mday = rtctm.tm_mday; + rtc_raw.mon = rtctm.tm_mon + 1; + rtc_raw.year_off = 2; + year = rtctm.tm_year + 1900; + return set_rtc_time(&rtc_raw, year); + break; + + case RTC_EPOCH_READ: + return put_user(rtc_epoch, (unsigned long *)arg); + + } + return -EINVAL; +} + +static struct file_operations rtc_fops = { + ioctl: rtc_ioctl, +}; + +static struct miscdevice rtc_dev = { + minor: RTC_MINOR, + name: "rtc", + fops: &rtc_fops, +}; + +/* IOC / IOMD i2c driver */ #define FORCE_ONES 0xdc #define SCL 0x02 @@ -184,9 +277,16 @@ { if (client->id == I2C_DRIVERID_PCF8583 && client->addr == 0x50) { + struct rtc_tm rtctm; + unsigned int year; + rtc_client = client; - get_rtc_time(); - set_rtc = set_rtc_time; + get_rtc_time(&rtctm, &year); + + xtime.tv_usec = rtctm.cs * 10000; + xtime.tv_sec = mktime(year, rtctm.mon, rtctm.mday, + rtctm.hours, rtctm.mins, rtctm.secs); + set_rtc = k_set_rtc_time; } return 0; @@ -212,9 +312,16 @@ static int __init i2c_ioc_init(void) { + int ret; + force_ones = FORCE_ONES | SCL | SDA; - return i2c_bit_add_bus(&ioc_ops); + ret = i2c_bit_add_bus(&ioc_ops); + + if (ret >= 0) + misc_register(&rtc_dev); + + return ret; } __initcall(i2c_ioc_init); diff -Nru a/drivers/acorn/char/serial-atomwide.c b/drivers/acorn/char/serial-atomwide.c --- a/drivers/acorn/char/serial-atomwide.c Tue Mar 12 13:58:15 2002 +++ b/drivers/acorn/char/serial-atomwide.c Tue Mar 12 13:58:15 2002 @@ -20,7 +20,4 @@ #define MY_PORT_ADDRESS(port,cardaddr) \ ((cardaddr) + 0x200 - (port) * 0x100) -#define INIT serial_card_atomwide_init -#define EXIT serial_card_atomwide_exit - #include "serial-card.c" diff -Nru a/drivers/acorn/char/serial-card.c b/drivers/acorn/char/serial-card.c --- a/drivers/acorn/char/serial-card.c Tue Mar 12 13:58:15 2002 +++ b/drivers/acorn/char/serial-card.c Tue Mar 12 13:58:15 2002 @@ -29,6 +29,7 @@ #include #include #include +#include #include #include @@ -38,95 +39,84 @@ #define NUM_SERIALS MY_NUMPORTS * MAX_ECARDS #endif -#ifdef MODULE -static int __serial_ports[NUM_SERIALS]; -static int __serial_pcount; -static int __serial_addr[NUM_SERIALS]; +static int serial_ports[NUM_SERIALS]; +static int serial_pcount; +static int 
serial_addr[NUM_SERIALS]; static struct expansion_card *expcard[MAX_ECARDS]; -#define ADD_ECARD(ec,card) expcard[(card)] = (ec) -#define ADD_PORT(port,addr) \ - do { \ - __serial_ports[__serial_pcount] = (port); \ - __serial_addr[__serial_pcount] = (addr); \ - __serial_pcount += 1; \ - } while (0) -#else -#define ADD_ECARD(ec,card) -#define ADD_PORT(port,addr) -#endif static const card_ids serial_cids[] = { MY_CARD_LIST, { 0xffff, 0xffff } }; static inline int serial_register_onedev (unsigned long port, int irq) { - struct serial_struct req; + struct serial_struct req; - memset(&req, 0, sizeof(req)); - req.baud_base = MY_BAUD_BASE; - req.irq = irq; - req.port = port; - req.flags = 0; + memset(&req, 0, sizeof(req)); + req.baud_base = MY_BAUD_BASE; + req.irq = irq; + req.port = port; + req.flags = 0; - return register_serial(&req); + return register_serial(&req); } -static int __init INIT (void) +static int __init serial_card_init(void) { - int card = 0; - - ecard_startfind (); - - do { - struct expansion_card *ec; - unsigned long cardaddr; - int port; - - ec = ecard_find (0, serial_cids); - if (!ec) - break; - - cardaddr = MY_BASE_ADDRESS(ec); - - for (port = 0; port < MY_NUMPORTS; port ++) { - unsigned long address; - int line; + int card = 0; - address = MY_PORT_ADDRESS(port, cardaddr); + ecard_startfind (); - line = serial_register_onedev (address, ec->irq); - if (line < 0) - break; - ADD_PORT(line, address); - } - - if (port) { - ecard_claim (ec); - ADD_ECARD(ec, card); - } else - break; - } while (++card < MAX_ECARDS); - return card ? 0 : -ENODEV; + do { + struct expansion_card *ec; + unsigned long cardaddr; + int port; + + ec = ecard_find (0, serial_cids); + if (!ec) + break; + + cardaddr = MY_BASE_ADDRESS(ec); + + for (port = 0; port < MY_NUMPORTS; port ++) { + unsigned long address; + int line; + + address = MY_PORT_ADDRESS(port, cardaddr); + + line = serial_register_onedev (address, ec->irq); + if (line < 0) + break; + serial_ports[serial_pcount] = line; + serial_addr[serial_pcount] = address; + serial_pcount += 1; + } + + if (port) { + ecard_claim (ec); + expcard[card] = ec; + } else + break; + } while (++card < MAX_ECARDS); + return card ? 
0 : -ENODEV; } -static void __exit EXIT (void) +static void __exit serial_card_exit(void) { -#ifdef MODULE - int i; + int i; - for (i = 0; i < __serial_pcount; i++) { - unregister_serial(__serial_ports[i]); - release_region(__serial_addr[i], 8); - } - - for (i = 0; i < MAX_ECARDS; i++) - if (expcard[i]) - ecard_release (expcard[i]); -#endif + for (i = 0; i < serial_pcount; i++) { + unregister_serial(serial_ports[i]); + release_region(serial_addr[i], 8); + } + + for (i = 0; i < MAX_ECARDS; i++) + if (expcard[i]) + ecard_release (expcard[i]); } EXPORT_NO_SYMBOLS; +MODULE_AUTHOR("Russell King"); MODULE_LICENSE("GPL"); -module_init(INIT); -module_exit(EXIT); +module_init(serial_card_init); +module_exit(serial_card_exit); diff -Nru a/drivers/acorn/char/serial-dualsp.c b/drivers/acorn/char/serial-dualsp.c --- a/drivers/acorn/char/serial-dualsp.c Tue Mar 12 13:58:15 2002 +++ b/drivers/acorn/char/serial-dualsp.c Tue Mar 12 13:58:15 2002 @@ -18,7 +18,4 @@ #define MY_PORT_ADDRESS(port,cardaddress) \ ((cardaddress) + (port) * 8) -#define INIT serial_card_dualsp_init -#define EXIT serial_card_dualsp_exit - #include "serial-card.c" diff -Nru a/drivers/acorn/net/ether3.c b/drivers/acorn/net/ether3.c --- a/drivers/acorn/net/ether3.c Tue Mar 12 13:58:15 2002 +++ b/drivers/acorn/net/ether3.c Tue Mar 12 13:58:15 2002 @@ -718,7 +718,7 @@ /* * Don't print this message too many times... */ - if (jiffies - last_warned > 30 * HZ) { + if (time_after(jiffies, last_warned + 10 * HZ)) { last_warned = jiffies; printk("%s: memory squeeze, dropping packet.\n", dev->name); } diff -Nru a/drivers/acorn/net/etherh.c b/drivers/acorn/net/etherh.c --- a/drivers/acorn/net/etherh.c Tue Mar 12 13:58:16 2002 +++ b/drivers/acorn/net/etherh.c Tue Mar 12 13:58:16 2002 @@ -1,7 +1,7 @@ /* * linux/drivers/acorn/net/etherh.c * - * Copyright (C) 2000 Russell King + * Copyright (C) 2000-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -23,6 +23,7 @@ * 12-10-1999 CK/TEW EtherM driver first release * 21-12-2000 TTC EtherH/EtherM integration * 25-12-2000 RMK 1.08 Clean integration of EtherM into this driver. + * 03-01-2002 RMK 1.09 Always enable IRQs if we're in the nic slot. 
*/ #include @@ -64,13 +65,18 @@ { 0xffff, 0xffff } }; +struct etherh_priv { + unsigned int id; + unsigned int ctrl_port; + unsigned int ctrl; +}; MODULE_AUTHOR("Russell King"); MODULE_DESCRIPTION("EtherH/EtherM driver"); MODULE_LICENSE("GPL"); static char version[] __initdata = - "EtherH/EtherM Driver (c) 2000 Russell King v1.08\n"; + "EtherH/EtherM Driver (c) 2002 Russell King v1.09\n"; #define ETHERH500_DATAPORT 0x200 /* MEMC */ #define ETHERH500_NS8390 0x000 /* MEMC */ @@ -97,18 +103,61 @@ #define ETHERM_TX_START_PAGE 64 #define ETHERM_STOP_PAGE 127 -/* --------------------------------------------------------------------------- */ +/* ------------------------------------------------------------------------ */ + +static inline void etherh_set_ctrl(struct etherh_priv *eh, unsigned int mask) +{ + eh->ctrl |= mask; + outb(eh->ctrl, eh->ctrl_port); +} + +static inline void etherh_clr_ctrl(struct etherh_priv *eh, unsigned int mask) +{ + eh->ctrl &= ~mask; + outb(eh->ctrl, eh->ctrl_port); +} + +static inline unsigned int etherh_get_stat(struct etherh_priv *eh) +{ + return inb(eh->ctrl_port); +} + + + + +static void etherh_irq_enable(ecard_t *ec, int irqnr) +{ + struct etherh_priv *eh = ec->irq_data; + + etherh_set_ctrl(eh, ETHERH_CP_IE); +} + +static void etherh_irq_disable(ecard_t *ec, int irqnr) +{ + struct etherh_priv *eh = ec->irq_data; + + etherh_clr_ctrl(eh, ETHERH_CP_IE); +} + +static expansioncard_ops_t etherh_ops = { + irqenable: etherh_irq_enable, + irqdisable: etherh_irq_disable, +}; + + + static void etherh_setif(struct net_device *dev) { struct ei_device *ei_local = (struct ei_device *) dev->priv; + struct etherh_priv *eh = (struct etherh_priv *)dev->rmem_start; unsigned long addr, flags; - save_flags_cli(flags); + local_irq_save(flags); /* set the interface type */ - switch (dev->mem_end) { + switch (eh->id) { case PROD_I3_ETHERLAN600: case PROD_I3_ETHERLAN600A: addr = dev->base_addr + EN0_RCNTHI; @@ -124,14 +173,13 @@ break; case PROD_I3_ETHERLAN500: - addr = dev->rmem_start; - switch (dev->if_port) { case IF_PORT_10BASE2: - outb(inb(addr) & ~ETHERH_CP_IF, addr); + etherh_clr_ctrl(eh, ETHERH_CP_IF); break; + case IF_PORT_10BASET: - outb(inb(addr) | ETHERH_CP_IF, addr); + etherh_set_ctrl(eh, ETHERH_CP_IF); break; } break; @@ -140,16 +188,17 @@ break; } - restore_flags(flags); + local_irq_restore(flags); } static int etherh_getifstat(struct net_device *dev) { struct ei_device *ei_local = (struct ei_device *) dev->priv; + struct etherh_priv *eh = (struct etherh_priv *)dev->rmem_start; int stat = 0; - switch (dev->mem_end) { + switch (eh->id) { case PROD_I3_ETHERLAN600: case PROD_I3_ETHERLAN600A: switch (dev->if_port) { @@ -168,7 +217,7 @@ stat = 1; break; case IF_PORT_10BASET: - stat = inb(dev->rmem_start) & ETHERH_CP_HEARTBEAT; + stat = etherh_get_stat(eh) & ETHERH_CP_HEARTBEAT; break; } break; @@ -251,7 +300,13 @@ return; } - ei_local->dmaing |= 1; + /* + * Make sure we have a round number of bytes if we're in word mode. 
+ */ + if (count & 1 && ei_local->word16) + count++; + + ei_local->dmaing = 1; addr = dev->base_addr; dma_addr = dev->mem_start; @@ -291,7 +346,7 @@ } outb (ENISR_RDC, addr + EN0_ISR); - ei_local->dmaing &= ~1; + ei_local->dmaing = 0; } /* @@ -311,7 +366,7 @@ return; } - ei_local->dmaing |= 1; + ei_local->dmaing = 1; addr = dev->base_addr; dma_addr = dev->mem_start; @@ -332,7 +387,7 @@ insb (dma_addr, buf, count); outb (ENISR_RDC, addr + EN0_ISR); - ei_local->dmaing &= ~1; + ei_local->dmaing = 0; } /* @@ -351,7 +406,7 @@ return; } - ei_local->dmaing |= 1; + ei_local->dmaing = 1; addr = dev->base_addr; dma_addr = dev->mem_start; @@ -369,7 +424,7 @@ insb (dma_addr, hdr, sizeof (*hdr)); outb (ENISR_RDC, addr + EN0_ISR); - ei_local->dmaing &= ~1; + ei_local->dmaing = 0; } /* @@ -427,23 +482,6 @@ return 0; } -static void etherh_irq_enable(ecard_t *ec, int irqnr) -{ - unsigned int ctrl_addr = (unsigned int)ec->irq_data; - outb(inb(ctrl_addr) | ETHERH_CP_IE, ctrl_addr); -} - -static void etherh_irq_disable(ecard_t *ec, int irqnr) -{ - unsigned int ctrl_addr = (unsigned int)ec->irq_data; - outb(inb(ctrl_addr) & ~ETHERH_CP_IE, ctrl_addr); -} - -static expansioncard_ops_t etherh_ops = { - irqenable: etherh_irq_enable, - irqdisable: etherh_irq_disable, -}; - /* * Initialisation */ @@ -506,6 +544,7 @@ { struct ei_device *ei_local; struct net_device *dev; + struct etherh_priv *eh; const char *dev_type; int i, size; @@ -517,6 +556,10 @@ if (!dev) goto out; + eh = kmalloc(sizeof(struct etherh_priv), GFP_KERNEL); + if (!eh) + goto out_nopriv; + SET_MODULE_OWNER(dev); dev->open = etherh_open; @@ -524,8 +567,15 @@ dev->set_config = etherh_set_config; dev->irq = ec->irq; dev->base_addr = ecard_address(ec, ECARD_MEMC, 0); - dev->mem_end = ec->cid.product; + dev->rmem_start = (unsigned long)eh; + + /* + * IRQ and control port handling + */ ec->ops = ðerh_ops; + ec->irq_data = eh; + eh->ctrl = 0; + eh->id = ec->cid.product; switch (ec->cid.product) { case PROD_ANT_ETHERM: @@ -533,7 +583,7 @@ goto free; dev->base_addr += ETHERM_NS8390; dev->mem_start = dev->base_addr + ETHERM_DATAPORT; - ec->irq_data = (void *)(dev->base_addr + ETHERM_CTRLPORT); + eh->ctrl_port = dev->base_addr + ETHERM_CTRLPORT; break; case PROD_I3_ETHERLAN500: @@ -541,8 +591,7 @@ goto free; dev->base_addr += ETHERH500_NS8390; dev->mem_start = dev->base_addr + ETHERH500_DATAPORT; - dev->rmem_start = (unsigned long) - ec->irq_data = (void *)ecard_address (ec, ECARD_IOC, ECARD_FAST) + eh->ctrl_port = ecard_address (ec, ECARD_IOC, ECARD_FAST) + ETHERH500_CTRLPORT; break; @@ -551,8 +600,8 @@ if (etherh_addr(dev->dev_addr, ec)) goto free; dev->base_addr += ETHERH600_NS8390; - dev->mem_start = dev->base_addr + ETHERH600_DATAPORT; - ec->irq_data = (void *)(dev->base_addr + ETHERH600_CTRLPORT); + dev->mem_start = dev->base_addr + ETHERH600_DATAPORT; + eh->ctrl_port = dev->base_addr + ETHERH600_CTRLPORT; break; default: @@ -572,6 +621,12 @@ goto release; /* + * If we're in the NIC slot, make sure the IRQ is enabled + */ + if (dev->irq == 11) + etherh_set_ctrl(eh, ETHERH_CP_IE); + + /* * Unfortunately, ethdev_init eventually calls * ether_setup, which re-writes dev->flags. 
*/ @@ -636,6 +691,8 @@ release: release_region(dev->base_addr, 16); free: + kfree(eh); +out_nopriv: unregister_netdev(dev); kfree(dev); out: @@ -696,6 +753,7 @@ } if (e_card[i]) { e_card[i]->ops = NULL; + kfree(e_card[i]->irq_data); ecard_release(e_card[i]); e_card[i] = NULL; } diff -Nru a/drivers/acorn/scsi/acornscsi.c b/drivers/acorn/scsi/acornscsi.c --- a/drivers/acorn/scsi/acornscsi.c Tue Mar 12 13:58:14 2002 +++ b/drivers/acorn/scsi/acornscsi.c Tue Mar 12 13:58:14 2002 @@ -2583,10 +2583,10 @@ done(SCpnt); return 0; } - save_flags_cli(flags); + local_irq_save(flags); if (host->scsi.phase == PHASE_IDLE) acornscsi_kick(host); - restore_flags(flags); + local_irq_restore(flags); } return 0; } diff -Nru a/drivers/acorn/scsi/fas216.c b/drivers/acorn/scsi/fas216.c --- a/drivers/acorn/scsi/fas216.c Tue Mar 12 13:58:15 2002 +++ b/drivers/acorn/scsi/fas216.c Tue Mar 12 13:58:15 2002 @@ -2137,7 +2137,7 @@ * However, we must re-enable interrupts, or else we'll be * waiting forever. */ - spin_unlock_irq(&io_request_lock); + spin_unlock_irq(info->host->host_lock); while (!info->internal_done) { /* @@ -2149,13 +2149,13 @@ * to be some time (eg, disconnected). */ if (inb(REG_STAT(info)) & STAT_INT) { - spin_lock_irq(&io_request_lock); + spin_lock_irq(info->host->host_lock); fas216_intr(info->host); - spin_unlock_irq(&io_request_lock); + spin_unlock_irq(info->host->host_lock); } } - spin_lock_irq(&io_request_lock); + spin_lock_irq(info->host->host_lock); return SCpnt->result; } @@ -2459,13 +2459,13 @@ /* * Ugly ugly ugly! - * We need to release the io_request_lock and enable + * We need to release the host_lock and enable * IRQs if we sleep, but we must relock and disable * IRQs after the sleep. */ - spin_unlock_irq(&io_request_lock); + spin_unlock_irq(info->host->host_lock); scsi_sleep(25*HZ/100); - spin_lock_irq(&io_request_lock); + spin_lock_irq(info->host->host_lock); /* * Release the SCSI reset. @@ -2628,9 +2628,9 @@ /* * scsi standard says wait 250ms */ - spin_unlock_irq(&io_request_lock); + spin_unlock_irq(info->host->host_lock); scsi_sleep(25*HZ/100); - spin_lock_irq(&io_request_lock); + spin_lock_irq(info->host->host_lock); outb(info->scsi.cfg[0], REG_CNTL1(info)); inb(REG_INST(info)); diff -Nru a/drivers/char/efirtc.c b/drivers/char/efirtc.c --- a/drivers/char/efirtc.c Tue Mar 12 13:58:15 2002 +++ b/drivers/char/efirtc.c Tue Mar 12 13:58:15 2002 @@ -40,7 +40,7 @@ #include #include -#define EFI_RTC_VERSION "0.2" +#define EFI_RTC_VERSION "0.3" #define EFI_ISDST (EFI_TIME_ADJUST_DAYLIGHT|EFI_TIME_IN_DAYLIGHT) /* @@ -315,56 +315,45 @@ spin_unlock_irqrestore(&efi_rtc_lock,flags); p += sprintf(p, - "Time :\n" - "Year : %u\n" - "Month : %u\n" - "Day : %u\n" - "Hour : %u\n" - "Minute : %u\n" - "Second : %u\n" - "Nanosecond: %u\n" - "Daylight : %u\n", - eft.year, eft.month, eft.day, eft.hour, eft.minute, - eft.second, eft.nanosecond, eft.daylight); + "Time : %u:%u:%u.%09u\n" + "Date : %u-%u-%u\n" + "Daylight : %u\n", + eft.hour, eft.minute, eft.second, eft.nanosecond, + eft.year, eft.month, eft.day, + eft.daylight); if ( eft.timezone == EFI_UNSPECIFIED_TIMEZONE) - p += sprintf(p, "Timezone : unspecified\n"); + p += sprintf(p, "Timezone : unspecified\n"); else /* XXX fixme: convert to string? 
*/ - p += sprintf(p, "Timezone : %u\n", eft.timezone); + p += sprintf(p, "Timezone : %u\n", eft.timezone); p += sprintf(p, - "\nWakeup Alm:\n" - "Enabled : %s\n" - "Pending : %s\n" - "Year : %u\n" - "Month : %u\n" - "Day : %u\n" - "Hour : %u\n" - "Minute : %u\n" - "Second : %u\n" - "Nanosecond: %u\n" - "Daylight : %u\n", - enabled == 1 ? "Yes" : "No", - pending == 1 ? "Yes" : "No", - alm.year, alm.month, alm.day, alm.hour, alm.minute, - alm.second, alm.nanosecond, alm.daylight); + "Alarm Time : %u:%u:%u.%09u\n" + "Alarm Date : %u-%u-%u\n" + "Alarm Daylight : %u\n" + "Enabled : %s\n" + "Pending : %s\n", + alm.hour, alm.minute, alm.second, alm.nanosecond, + alm.year, alm.month, alm.day, + alm.daylight, + enabled == 1 ? "yes" : "no", + pending == 1 ? "yes" : "no"); if ( eft.timezone == EFI_UNSPECIFIED_TIMEZONE) - p += sprintf(p, "Timezone : unspecified\n"); + p += sprintf(p, "Timezone : unspecified\n"); else /* XXX fixme: convert to string? */ - p += sprintf(p, "Timezone : %u\n", eft.timezone); + p += sprintf(p, "Timezone : %u\n", alm.timezone); /* * now prints the capabilities */ p += sprintf(p, - "\nClock Cap :\n" - "Resolution: %u\n" - "Accuracy : %u\n" - "SetstoZero: %u\n", + "Resolution : %u\n" + "Accuracy : %u\n" + "SetstoZero : %u\n", cap.resolution, cap.accuracy, cap.sets_to_zero); return p - buf; @@ -390,7 +379,7 @@ misc_register(&efi_rtc_dev); - create_proc_read_entry ("efirtc", 0, NULL, efi_rtc_read_proc, NULL); + create_proc_read_entry ("driver/efirtc", 0, NULL, efi_rtc_read_proc, NULL); return 0; } diff -Nru a/drivers/hotplug/pci_hotplug_core.c b/drivers/hotplug/pci_hotplug_core.c --- a/drivers/hotplug/pci_hotplug_core.c Tue Mar 12 13:58:15 2002 +++ b/drivers/hotplug/pci_hotplug_core.c Tue Mar 12 13:58:15 2002 @@ -350,7 +350,7 @@ owner: THIS_MODULE, name: "pcihpfs", get_sb: pcihpfs_get_sb, - fs_flags: FS_LITTER, + kill_sb: kill_litter_super, }; static int get_mount (void) diff -Nru a/drivers/ide/Config.help b/drivers/ide/Config.help --- a/drivers/ide/Config.help Tue Mar 12 13:58:16 2002 +++ b/drivers/ide/Config.help Tue Mar 12 13:58:16 2002 @@ -251,10 +251,6 @@ be (U)DMA capable but aren't. This is a blanket on/off test with no speed limit options. - Straight GNU GCC 2.7.3/2.8.X compilers are known to be safe; - whereas, many versions of EGCS have a problem and miscompile if you - say Y here. - If in doubt, say N. CONFIG_BLK_DEV_IDEDMA_TIMEOUT @@ -319,10 +315,6 @@ It is SAFEST to say N to this question. -CONFIG_BLK_DEV_ADMA - Please read the comments at the top of - . - CONFIG_BLK_DEV_PDC_ADMA Please read the comments at the top of . @@ -515,10 +507,16 @@ For FastTrak enable overriding BIOS. CONFIG_BLK_DEV_SIS5513 - This driver ensures (U)DMA support for SIS5513 chipset based - mainboards. SiS620/530 UDMA mode 4, SiS5600/5597 UDMA mode 2, all - other DMA mode 2 limited chipsets are unsupported to date. + This driver ensures (U)DMA support for SIS5513 chipset family based + mainboards. + The following chipsets are supported: + ATA16: SiS5511, SiS5513 + ATA33: SiS5591, SiS5597, SiS5598, SiS5600 + ATA66: SiS530, SiS540, SiS620, SiS630, SiS640 + ATA100: SiS635, SiS645, SiS650, SiS730, SiS735, SiS740, + SiS745, SiS750 + If you say Y here, you need to say Y to "Use DMA by default when available" as well. 
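The pcihpfs hunk above shows the conversion pattern for filesystems that used FS_LITTER: drop the flag and point the new kill_sb method at kill_litter_super. Below is a minimal sketch of the same conversion for a hypothetical filesystem; the name examplefs and its get_sb routine are illustrative only, while the kill_sb/kill_litter_super usage and the initializer style are taken from the pcihpfs change in this patch.

#include <linux/fs.h>
#include <linux/module.h>

/* Assumed to be defined elsewhere in the filesystem, as pcihpfs_get_sb is. */
extern struct super_block *examplefs_get_sb(struct file_system_type *fs_type,
					    int flags, char *dev_name, void *data);

static struct file_system_type examplefs_fs_type = {
	owner:		THIS_MODULE,
	name:		"examplefs",
	get_sb:		examplefs_get_sb,
	/* was: fs_flags: FS_LITTER */
	kill_sb:	kill_litter_super,	/* shuts down and frees "litter"-style superblocks */
};

Registration via register_filesystem()/unregister_filesystem() is unchanged by this conversion; only the teardown path moves from the FS_LITTER flag to the explicit kill_sb method.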
diff -Nru a/drivers/ide/Config.in b/drivers/ide/Config.in --- a/drivers/ide/Config.in Tue Mar 12 13:58:16 2002 +++ b/drivers/ide/Config.in Tue Mar 12 13:58:16 2002 @@ -4,7 +4,7 @@ # Andre Hedrick # mainmenu_option next_comment -comment 'IDE, ATA and ATAPI Block devices' +comment 'ATA and ATAPI Block devices' dep_tristate 'Enhanced IDE/MFM/RLL disk/cdrom/tape/floppy support' CONFIG_BLK_DEV_IDE $CONFIG_IDE comment 'Please see Documentation/ide.txt for help/info on IDE drives' @@ -34,121 +34,120 @@ dep_tristate ' SCSI emulation support' CONFIG_BLK_DEV_IDESCSI $CONFIG_BLK_DEV_IDE $CONFIG_SCSI comment 'IDE chipset support' - if [ "$CONFIG_BLK_DEV_IDE" != "n" ]; then - dep_bool ' CMD640 chipset bugfix/support' CONFIG_BLK_DEV_CMD640 $CONFIG_X86 - dep_bool ' CMD640 enhanced support' CONFIG_BLK_DEV_CMD640_ENHANCED $CONFIG_BLK_DEV_CMD640 - dep_bool ' ISA-PNP EIDE support' CONFIG_BLK_DEV_ISAPNP $CONFIG_ISAPNP - if [ "$CONFIG_PCI" = "y" ]; then - dep_bool ' RZ1000 chipset bugfix/support' CONFIG_BLK_DEV_RZ1000 $CONFIG_X86 - bool ' Generic PCI IDE chipset support' CONFIG_BLK_DEV_IDEPCI - if [ "$CONFIG_BLK_DEV_IDEPCI" = "y" ]; then - bool ' Sharing PCI IDE interrupts support' CONFIG_IDEPCI_SHARE_IRQ - bool ' Generic PCI bus-master DMA support' CONFIG_BLK_DEV_IDEDMA_PCI - bool ' Boot off-board chipsets first support' CONFIG_BLK_DEV_OFFBOARD - dep_bool ' Use PCI DMA by default when available' CONFIG_IDEDMA_PCI_AUTO $CONFIG_BLK_DEV_IDEDMA_PCI - dep_bool ' Enable DMA only for disks ' CONFIG_IDEDMA_ONLYDISK $CONFIG_IDEDMA_PCI_AUTO - define_bool CONFIG_BLK_DEV_IDEDMA $CONFIG_BLK_DEV_IDEDMA_PCI - dep_bool ' ATA Work(s) In Progress (EXPERIMENTAL)' CONFIG_IDEDMA_PCI_WIP $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_EXPERIMENTAL - dep_bool ' Attempt to HACK around Chipsets that TIMEOUT (WIP)' CONFIG_BLK_DEV_IDEDMA_TIMEOUT $CONFIG_IDEDMA_PCI_WIP - dep_bool ' Good-Bad DMA Model-Firmware (WIP)' CONFIG_IDEDMA_NEW_DRIVE_LISTINGS $CONFIG_IDEDMA_PCI_WIP -# dep_bool ' Asynchronous DMA support (WIP) (EXPERIMENTAL)' CONFIG_BLK_DEV_ADMA $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_IDEDMA_PCI_WIP - define_bool CONFIG_BLK_DEV_ADMA $CONFIG_BLK_DEV_IDEDMA_PCI -# dep_bool ' Tag Command Queue DMA support (WIP) (EXPERIMENTAL)' CONFIG_BLK_DEV_IDEDMA_TCQ $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_IDEDMA_PCI_WIP - - dep_bool ' AEC62XX chipset support' CONFIG_BLK_DEV_AEC62XX $CONFIG_BLK_DEV_IDEDMA_PCI - dep_mbool ' AEC62XX Tuning support' CONFIG_AEC62XX_TUNING $CONFIG_BLK_DEV_AEC62XX - dep_bool ' ALI M15x3 chipset support' CONFIG_BLK_DEV_ALI15X3 $CONFIG_BLK_DEV_IDEDMA_PCI - dep_mbool ' ALI M15x3 WDC support (DANGEROUS)' CONFIG_WDC_ALI15X3 $CONFIG_BLK_DEV_ALI15X3 - dep_bool ' AMD Viper support' CONFIG_BLK_DEV_AMD74XX $CONFIG_BLK_DEV_IDEDMA_PCI - dep_mbool ' AMD Viper ATA-66 Override (WIP)' CONFIG_AMD74XX_OVERRIDE $CONFIG_BLK_DEV_AMD74XX $CONFIG_IDEDMA_PCI_WIP - dep_bool ' CMD64X chipset support' CONFIG_BLK_DEV_CMD64X $CONFIG_BLK_DEV_IDEDMA_PCI - dep_bool ' CY82C693 chipset support' CONFIG_BLK_DEV_CY82C693 $CONFIG_BLK_DEV_IDEDMA_PCI - dep_bool ' Cyrix CS5530 MediaGX chipset support' CONFIG_BLK_DEV_CS5530 $CONFIG_BLK_DEV_IDEDMA_PCI - dep_bool ' HPT34X chipset support' CONFIG_BLK_DEV_HPT34X $CONFIG_BLK_DEV_IDEDMA_PCI - dep_mbool ' HPT34X AUTODMA support (WIP)' CONFIG_HPT34X_AUTODMA $CONFIG_BLK_DEV_HPT34X $CONFIG_IDEDMA_PCI_WIP - dep_bool ' HPT366 chipset support' CONFIG_BLK_DEV_HPT366 $CONFIG_BLK_DEV_IDEDMA_PCI - if [ "$CONFIG_X86" = "y" -o "$CONFIG_IA64" = "y" ]; then - dep_mbool ' Intel PIIXn chipsets support' CONFIG_BLK_DEV_PIIX $CONFIG_BLK_DEV_IDEDMA_PCI - 
dep_mbool ' PIIXn Tuning support' CONFIG_PIIX_TUNING $CONFIG_BLK_DEV_PIIX $CONFIG_IDEDMA_PCI_AUTO - fi - if [ "$CONFIG_MIPS_ITE8172" = "y" -o "$CONFIG_MIPS_IVR" = "y" ]; then - dep_mbool ' IT8172 IDE support' CONFIG_BLK_DEV_IT8172 $CONFIG_BLK_DEV_IDEDMA_PCI - dep_mbool ' IT8172 IDE Tuning support' CONFIG_IT8172_TUNING $CONFIG_BLK_DEV_IT8172 $CONFIG_IDEDMA_PCI_AUTO - fi - dep_bool ' NS87415 chipset support (EXPERIMENTAL)' CONFIG_BLK_DEV_NS87415 $CONFIG_BLK_DEV_IDEDMA_PCI - dep_bool ' OPTi 82C621 chipset enhanced support (EXPERIMENTAL)' CONFIG_BLK_DEV_OPTI621 $CONFIG_EXPERIMENTAL - dep_mbool ' Pacific Digital A-DMA support (EXPERIMENTAL)' CONFIG_BLK_DEV_PDC_ADMA $CONFIG_BLK_DEV_ADMA $CONFIG_IDEDMA_PCI_WIP - dep_bool ' PROMISE PDC202{46|62|65|67|68|69|70} support' CONFIG_BLK_DEV_PDC202XX $CONFIG_BLK_DEV_IDEDMA_PCI - dep_bool ' Special UDMA Feature' CONFIG_PDC202XX_BURST $CONFIG_BLK_DEV_PDC202XX - dep_bool ' Special FastTrak Feature' CONFIG_PDC202XX_FORCE $CONFIG_BLK_DEV_PDC202XX - dep_bool ' ServerWorks OSB4/CSB5 chipsets support' CONFIG_BLK_DEV_SVWKS $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_X86 - dep_bool ' SiS5513 chipset support' CONFIG_BLK_DEV_SIS5513 $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_X86 - dep_bool ' SLC90E66 chipset support' CONFIG_BLK_DEV_SLC90E66 $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_X86 - dep_bool ' Tekram TRM290 chipset support (EXPERIMENTAL)' CONFIG_BLK_DEV_TRM290 $CONFIG_BLK_DEV_IDEDMA_PCI - dep_bool ' VIA82CXXX chipset support' CONFIG_BLK_DEV_VIA82CXXX $CONFIG_BLK_DEV_IDEDMA_PCI - fi - - if [ "$CONFIG_PPC" = "y" -o "$CONFIG_ARM" = "y" ]; then - bool ' Winbond SL82c105 support' CONFIG_BLK_DEV_SL82C105 - fi - fi - if [ "$CONFIG_ALL_PPC" = "y" ]; then - bool ' Builtin PowerMac IDE support' CONFIG_BLK_DEV_IDE_PMAC - dep_bool ' PowerMac IDE DMA support' CONFIG_BLK_DEV_IDEDMA_PMAC $CONFIG_BLK_DEV_IDE_PMAC - dep_bool ' Use DMA by default' CONFIG_BLK_DEV_IDEDMA_PMAC_AUTO $CONFIG_BLK_DEV_IDEDMA_PMAC - if [ "$CONFIG_BLK_DEV_IDE_PMAC" = "y" ]; then - define_bool CONFIG_BLK_DEV_IDEDMA $CONFIG_BLK_DEV_IDEDMA_PMAC + dep_bool ' CMD640 chipset bugfix/support' CONFIG_BLK_DEV_CMD640 $CONFIG_X86 + dep_bool ' CMD640 enhanced support' CONFIG_BLK_DEV_CMD640_ENHANCED $CONFIG_BLK_DEV_CMD640 + dep_bool ' ISA-PNP EIDE support' CONFIG_BLK_DEV_ISAPNP $CONFIG_ISAPNP + if [ "$CONFIG_PCI" = "y" ]; then + dep_bool ' RZ1000 chipset bugfix/support' CONFIG_BLK_DEV_RZ1000 $CONFIG_X86 + bool ' Generic PCI IDE chipset support' CONFIG_BLK_DEV_IDEPCI + if [ "$CONFIG_BLK_DEV_IDEPCI" = "y" ]; then + bool ' Boot off-board chipsets first support' CONFIG_BLK_DEV_OFFBOARD + bool ' Sharing PCI IDE interrupts support' CONFIG_IDEPCI_SHARE_IRQ + bool ' Generic PCI bus-master DMA support' CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' Use PCI DMA by default when available' CONFIG_IDEDMA_PCI_AUTO $CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' Enable DMA only for disks ' CONFIG_IDEDMA_ONLYDISK $CONFIG_IDEDMA_PCI_AUTO + define_bool CONFIG_BLK_DEV_IDEDMA $CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' ATA Work(s) In Progress (EXPERIMENTAL)' CONFIG_IDEDMA_PCI_WIP $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_EXPERIMENTAL + dep_bool ' Attempt to HACK around Chipsets that TIMEOUT (WIP)' CONFIG_BLK_DEV_IDEDMA_TIMEOUT $CONFIG_IDEDMA_PCI_WIP + dep_bool ' Good-Bad DMA Model-Firmware (WIP)' CONFIG_IDEDMA_NEW_DRIVE_LISTINGS $CONFIG_IDEDMA_PCI_WIP + dep_bool ' AEC62XX chipset support' CONFIG_BLK_DEV_AEC62XX $CONFIG_BLK_DEV_IDEDMA_PCI + dep_mbool ' AEC62XX Tuning support' CONFIG_AEC62XX_TUNING $CONFIG_BLK_DEV_AEC62XX + dep_bool ' ALI M15x3 chipset support' CONFIG_BLK_DEV_ALI15X3 
$CONFIG_BLK_DEV_IDEDMA_PCI + dep_mbool ' ALI M15x3 WDC support (DANGEROUS)' CONFIG_WDC_ALI15X3 $CONFIG_BLK_DEV_ALI15X3 + dep_bool ' AMD Viper support' CONFIG_BLK_DEV_AMD74XX $CONFIG_BLK_DEV_IDEDMA_PCI + dep_mbool ' AMD Viper ATA-66 Override (WIP)' CONFIG_AMD74XX_OVERRIDE $CONFIG_BLK_DEV_AMD74XX $CONFIG_IDEDMA_PCI_WIP + dep_bool ' CMD64X chipset support' CONFIG_BLK_DEV_CMD64X $CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' CY82C693 chipset support' CONFIG_BLK_DEV_CY82C693 $CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' Cyrix CS5530 MediaGX chipset support' CONFIG_BLK_DEV_CS5530 $CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' HPT34X chipset support' CONFIG_BLK_DEV_HPT34X $CONFIG_BLK_DEV_IDEDMA_PCI + dep_mbool ' HPT34X AUTODMA support (WIP)' CONFIG_HPT34X_AUTODMA $CONFIG_BLK_DEV_HPT34X $CONFIG_IDEDMA_PCI_WIP + dep_bool ' HPT366 chipset support' CONFIG_BLK_DEV_HPT366 $CONFIG_BLK_DEV_IDEDMA_PCI + if [ "$CONFIG_X86" = "y" -o "$CONFIG_IA64" = "y" ]; then + dep_mbool ' Intel PIIXn chipsets support' CONFIG_BLK_DEV_PIIX $CONFIG_BLK_DEV_IDEDMA_PCI + dep_mbool ' PIIXn Tuning support' CONFIG_PIIX_TUNING $CONFIG_BLK_DEV_PIIX $CONFIG_IDEDMA_PCI_AUTO fi - if [ "$CONFIG_BLK_DEV_IDEDMA_PMAC" = "y" ]; then - define_bool CONFIG_BLK_DEV_IDEPCI $CONFIG_BLK_DEV_IDEDMA_PMAC + if [ "$CONFIG_MIPS_ITE8172" = "y" -o "$CONFIG_MIPS_IVR" = "y" ]; then + dep_mbool ' IT8172 IDE support' CONFIG_BLK_DEV_IT8172 $CONFIG_BLK_DEV_IDEDMA_PCI + dep_mbool ' IT8172 IDE Tuning support' CONFIG_IT8172_TUNING $CONFIG_BLK_DEV_IT8172 $CONFIG_IDEDMA_PCI_AUTO fi + dep_bool ' NS87415 chipset support (EXPERIMENTAL)' CONFIG_BLK_DEV_NS87415 $CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' OPTi 82C621 chipset enhanced support (EXPERIMENTAL)' CONFIG_BLK_DEV_OPTI621 $CONFIG_EXPERIMENTAL + dep_mbool ' Pacific Digital A-DMA support (EXPERIMENTAL)' CONFIG_BLK_DEV_PDC_ADMA $CONFIG_IDEDMA_PCI_WIP + dep_bool ' PROMISE PDC202{46|62|65|67|68|69|70} support' CONFIG_BLK_DEV_PDC202XX $CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' Special UDMA Feature' CONFIG_PDC202XX_BURST $CONFIG_BLK_DEV_PDC202XX + dep_bool ' Special FastTrak Feature' CONFIG_PDC202XX_FORCE $CONFIG_BLK_DEV_PDC202XX + dep_bool ' ServerWorks OSB4/CSB5 chipsets support' CONFIG_BLK_DEV_SVWKS $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_X86 + dep_bool ' SiS5513 chipset support' CONFIG_BLK_DEV_SIS5513 $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_X86 + dep_bool ' SLC90E66 chipset support' CONFIG_BLK_DEV_SLC90E66 $CONFIG_BLK_DEV_IDEDMA_PCI $CONFIG_X86 + dep_bool ' Tekram TRM290 chipset support (EXPERIMENTAL)' CONFIG_BLK_DEV_TRM290 $CONFIG_BLK_DEV_IDEDMA_PCI + dep_bool ' VIA82CXXX chipset support' CONFIG_BLK_DEV_VIA82CXXX $CONFIG_BLK_DEV_IDEDMA_PCI fi - if [ "$CONFIG_ARCH_ACORN" = "y" ]; then - dep_bool ' ICS IDE interface support' CONFIG_BLK_DEV_IDE_ICSIDE $CONFIG_ARCH_ACORN - dep_bool ' ICS DMA support' CONFIG_BLK_DEV_IDEDMA_ICS $CONFIG_BLK_DEV_IDE_ICSIDE - dep_bool ' Use ICS DMA by default' CONFIG_IDEDMA_ICS_AUTO $CONFIG_BLK_DEV_IDEDMA_ICS - define_bool CONFIG_BLK_DEV_IDEDMA $CONFIG_BLK_DEV_IDEDMA_ICS - dep_bool ' RapIDE interface support' CONFIG_BLK_DEV_IDE_RAPIDE $CONFIG_ARCH_ACORN - fi - if [ "$CONFIG_AMIGA" = "y" ]; then - dep_bool ' Amiga Gayle IDE interface support' CONFIG_BLK_DEV_GAYLE $CONFIG_AMIGA - dep_mbool ' Amiga IDE Doubler support (EXPERIMENTAL)' CONFIG_BLK_DEV_IDEDOUBLER $CONFIG_BLK_DEV_GAYLE $CONFIG_EXPERIMENTAL - fi - if [ "$CONFIG_ZORRO" = "y" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then - dep_mbool ' Buddha/Catweasel IDE interface support (EXPERIMENTAL)' CONFIG_BLK_DEV_BUDDHA $CONFIG_ZORRO $CONFIG_EXPERIMENTAL - fi - if [ "$CONFIG_ATARI" = 
"y" ]; then - dep_bool ' Falcon IDE interface support' CONFIG_BLK_DEV_FALCON_IDE $CONFIG_ATARI - fi - if [ "$CONFIG_MAC" = "y" ]; then - dep_bool ' Macintosh Quadra/Powerbook IDE interface support' CONFIG_BLK_DEV_MAC_IDE $CONFIG_MAC + + if [ "$CONFIG_PPC" = "y" -o "$CONFIG_ARM" = "y" ]; then + bool ' Winbond SL82c105 support' CONFIG_BLK_DEV_SL82C105 fi - if [ "$CONFIG_Q40" = "y" ]; then - dep_bool ' Q40/Q60 IDE interface support' CONFIG_BLK_DEV_Q40IDE $CONFIG_Q40 + fi + if [ "$CONFIG_ALL_PPC" = "y" ]; then + bool ' Builtin PowerMac IDE support' CONFIG_BLK_DEV_IDE_PMAC + dep_bool ' PowerMac IDE DMA support' CONFIG_BLK_DEV_IDEDMA_PMAC $CONFIG_BLK_DEV_IDE_PMAC + dep_bool ' Use DMA by default' CONFIG_BLK_DEV_IDEDMA_PMAC_AUTO $CONFIG_BLK_DEV_IDEDMA_PMAC + if [ "$CONFIG_BLK_DEV_IDE_PMAC" = "y" ]; then + define_bool CONFIG_BLK_DEV_IDEDMA $CONFIG_BLK_DEV_IDEDMA_PMAC fi - if [ "$CONFIG_8xx" = "y" ]; then - dep_bool ' MPC8xx IDE support' CONFIG_BLK_DEV_MPC8xx_IDE $CONFIG_8xx + if [ "$CONFIG_BLK_DEV_IDEDMA_PMAC" = "y" ]; then + define_bool CONFIG_BLK_DEV_IDEPCI $CONFIG_BLK_DEV_IDEDMA_PMAC fi + fi + if [ "$CONFIG_ARCH_ACORN" = "y" ]; then + dep_bool ' ICS IDE interface support' CONFIG_BLK_DEV_IDE_ICSIDE $CONFIG_ARCH_ACORN + dep_bool ' ICS DMA support' CONFIG_BLK_DEV_IDEDMA_ICS $CONFIG_BLK_DEV_IDE_ICSIDE + dep_bool ' Use ICS DMA by default' CONFIG_IDEDMA_ICS_AUTO $CONFIG_BLK_DEV_IDEDMA_ICS + define_bool CONFIG_BLK_DEV_IDEDMA $CONFIG_BLK_DEV_IDEDMA_ICS + dep_bool ' RapIDE interface support' CONFIG_BLK_DEV_IDE_RAPIDE $CONFIG_ARCH_ACORN + fi + if [ "$CONFIG_AMIGA" = "y" ]; then + dep_bool ' Amiga Gayle IDE interface support' CONFIG_BLK_DEV_GAYLE $CONFIG_AMIGA + dep_mbool ' Amiga IDE Doubler support (EXPERIMENTAL)' CONFIG_BLK_DEV_IDEDOUBLER $CONFIG_BLK_DEV_GAYLE $CONFIG_EXPERIMENTAL + fi + if [ "$CONFIG_ZORRO" = "y" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then + dep_mbool ' Buddha/Catweasel IDE interface support (EXPERIMENTAL)' CONFIG_BLK_DEV_BUDDHA $CONFIG_ZORRO $CONFIG_EXPERIMENTAL + fi + if [ "$CONFIG_ATARI" = "y" ]; then + dep_bool ' Falcon IDE interface support' CONFIG_BLK_DEV_FALCON_IDE $CONFIG_ATARI + fi + if [ "$CONFIG_MAC" = "y" ]; then + dep_bool ' Macintosh Quadra/Powerbook IDE interface support' CONFIG_BLK_DEV_MAC_IDE $CONFIG_MAC + fi + if [ "$CONFIG_Q40" = "y" ]; then + dep_bool ' Q40/Q60 IDE interface support' CONFIG_BLK_DEV_Q40IDE $CONFIG_Q40 + fi + if [ "$CONFIG_8xx" = "y" ]; then + dep_bool ' MPC8xx IDE support' CONFIG_BLK_DEV_MPC8xx_IDE $CONFIG_8xx + fi - if [ "$CONFIG_BLK_DEV_MPC8xx_IDE" = "y" ]; then - choice 'Type of MPC8xx IDE interface' \ - "8xx_PCCARD CONFIG_IDE_8xx_PCCARD \ - 8xx_DIRECT CONFIG_IDE_8xx_DIRECT \ - EXT_DIRECT CONFIG_IDE_EXT_DIRECT" 8xx_PCCARD - fi + if [ "$CONFIG_BLK_DEV_MPC8xx_IDE" = "y" ]; then + choice 'Type of MPC8xx IDE interface' \ + "8xx_PCCARD CONFIG_IDE_8xx_PCCARD \ + 8xx_DIRECT CONFIG_IDE_8xx_DIRECT \ + EXT_DIRECT CONFIG_IDE_EXT_DIRECT" 8xx_PCCARD + fi - bool ' Other IDE chipset support' CONFIG_IDE_CHIPSETS - if [ "$CONFIG_IDE_CHIPSETS" = "y" ]; then - comment 'Note: most of these also require special kernel boot parameters' - bool ' ALI M14xx support' CONFIG_BLK_DEV_ALI14XX - bool ' DTC-2278 support' CONFIG_BLK_DEV_DTC2278 - bool ' Holtek HT6560B support' CONFIG_BLK_DEV_HT6560B - if [ "$CONFIG_BLK_DEV_IDEDISK" = "y" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then - bool ' PROMISE DC4030 support (EXPERIMENTAL)' CONFIG_BLK_DEV_PDC4030 - fi - bool ' QDI QD65xx support' CONFIG_BLK_DEV_QD65XX - bool ' UMC-8672 support' CONFIG_BLK_DEV_UMC8672 + bool ' Other IDE chipset 
support' CONFIG_IDE_CHIPSETS + if [ "$CONFIG_IDE_CHIPSETS" = "y" ]; then + comment 'Note: most of these also require special kernel boot parameters' + bool ' ALI M14xx support' CONFIG_BLK_DEV_ALI14XX + bool ' DTC-2278 support' CONFIG_BLK_DEV_DTC2278 + bool ' Holtek HT6560B support' CONFIG_BLK_DEV_HT6560B + if [ "$CONFIG_BLK_DEV_IDEDISK" = "y" -a "$CONFIG_EXPERIMENTAL" = "y" ]; then + bool ' PROMISE DC4030 support (EXPERIMENTAL)' CONFIG_BLK_DEV_PDC4030 fi + bool ' QDI QD65xx support' CONFIG_BLK_DEV_QD65XX + bool ' UMC-8672 support' CONFIG_BLK_DEV_UMC8672 + fi + if [ "$CONFIG_BLK_DEV_IDEDMA_PCI" = "y" -o \ + "$CONFIG_BLK_DEV_IDEDMA_PMAC" = "y" -o \ + "$CONFIG_BLK_DEV_IDEDMA_ICS" = "y" ]; then + bool ' IGNORE word93 Validation BITS' CONFIG_IDEDMA_IVB fi else bool 'Old hard disk (MFM/RLL/IDE) driver' CONFIG_BLK_DEV_HD_ONLY @@ -161,12 +160,6 @@ define_bool CONFIG_IDEDMA_AUTO y else define_bool CONFIG_IDEDMA_AUTO n -fi - -if [ "$CONFIG_BLK_DEV_IDEDMA_PCI" = "y" -o \ - "$CONFIG_BLK_DEV_IDEDMA_PMAC" = "y" -o \ - "$CONFIG_BLK_DEV_IDEDMA_ICS" = "y" ]; then - bool ' IGNORE word93 Validation BITS' CONFIG_IDEDMA_IVB fi if [ "$CONFIG_BLK_DEV_TIVO" = "y" ]; then diff -Nru a/drivers/ide/Makefile b/drivers/ide/Makefile --- a/drivers/ide/Makefile Tue Mar 12 13:58:14 2002 +++ b/drivers/ide/Makefile Tue Mar 12 13:58:14 2002 @@ -44,7 +44,6 @@ ide-obj-$(CONFIG_BLK_DEV_HPT366) += hpt366.o ide-obj-$(CONFIG_BLK_DEV_HT6560B) += ht6560b.o ide-obj-$(CONFIG_BLK_DEV_IDE_ICSIDE) += icside.o -ide-obj-$(CONFIG_BLK_DEV_ADMA) += ide-adma.o ide-obj-$(CONFIG_BLK_DEV_IDEDMA_PCI) += ide-dma.o ide-obj-$(CONFIG_BLK_DEV_IDEPCI) += ide-pci.o ide-obj-$(CONFIG_BLK_DEV_ISAPNP) += ide-pnp.o diff -Nru a/drivers/ide/amd74xx.c b/drivers/ide/amd74xx.c --- a/drivers/ide/amd74xx.c Tue Mar 12 13:58:15 2002 +++ b/drivers/ide/amd74xx.c Tue Mar 12 13:58:15 2002 @@ -1,485 +1,451 @@ /* - * linux/drivers/ide/amd74xx.c Version 0.05 June 9, 2000 + * $Id: amd74xx.c,v 2.7 2002/09/01 17:37:00 vojtech Exp $ * - * Copyright (C) 1999-2000 Andre Hedrick - * May be copied or modified under the terms of the GNU General Public License + * Copyright (c) 2000-2002 Vojtech Pavlik * + * Based on the work of: + * Andre Hedrick + */ + +/* + * AMD 755/756/766/8111 IDE driver for Linux. + * + * UDMA66 and higher modes are autoenabled only in case the BIOS has detected a + * 80 wire cable. To ignore the BIOS data and assume the cable is present, use + * 'ide0=ata66' or 'ide1=ata66' on the kernel command line. + */ + +/* + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + * Should you need to contact me, the author, you can do so either by + * e-mail - mail your message to , or by paper mail: + * Vojtech Pavlik, Simunkova 1594, Prague 8, 182 00 Czech Republic */ #include -#include #include -#include -#include -#include #include #include -#include - -#include -#include #include +#include #include - #include -#include -#include "ide_modes.h" +#include "ide-timing.h" + +#define AMD_IDE_ENABLE 0x40 +#define AMD_IDE_CONFIG 0x41 +#define AMD_CABLE_DETECT 0x42 +#define AMD_DRIVE_TIMING 0x48 +#define AMD_8BIT_TIMING 0x4e +#define AMD_ADDRESS_SETUP 0x4c +#define AMD_UDMA_TIMING 0x50 + +#define AMD_UDMA 0x07 +#define AMD_UDMA_33 0x01 +#define AMD_UDMA_66 0x02 +#define AMD_UDMA_100 0x03 +#define AMD_BAD_SWDMA 0x08 +#define AMD_BAD_FIFO 0x10 -#define DISPLAY_VIPER_TIMINGS +/* + * AMD SouthBridge chips. + */ + +static struct amd_ide_chip { + char *name; + unsigned short id; + unsigned char rev; + unsigned char flags; +} amd_ide_chips[] = { + { "8111", PCI_DEVICE_ID_AMD_8111_IDE, 0x00, AMD_UDMA_100 }, + { "768 Opus", PCI_DEVICE_ID_AMD_OPUS_7441, 0x00, AMD_UDMA_100 }, + { "766 Viper", PCI_DEVICE_ID_AMD_VIPER_7411, 0x00, AMD_UDMA_100 | AMD_BAD_FIFO }, + { "756/c4+ Viper", PCI_DEVICE_ID_AMD_VIPER_7409, 0x07, AMD_UDMA_66 }, + { "756 Viper", PCI_DEVICE_ID_AMD_VIPER_7409, 0x00, AMD_UDMA_66 | AMD_BAD_SWDMA }, + { "755 Cobra", PCI_DEVICE_ID_AMD_COBRA_7401, 0x00, AMD_UDMA_33 | AMD_BAD_SWDMA }, + { NULL } +}; + +static struct amd_ide_chip *amd_config; +static unsigned char amd_enabled; +static unsigned int amd_80w; +static unsigned int amd_clock; + +static unsigned char amd_cyc2udma[] = { 6, 6, 5, 4, 0, 1, 1, 2, 2, 3, 3 }; +static unsigned char amd_udma2cyc[] = { 4, 6, 8, 10, 3, 2, 1, 1 }; +static char *amd_dma[] = { "MWDMA16", "UDMA33", "UDMA66", "UDMA100" }; + +/* + * AMD /proc entry. + */ + +#ifdef CONFIG_PROC_FS -#if defined(DISPLAY_VIPER_TIMINGS) && defined(CONFIG_PROC_FS) #include #include -static int amd74xx_get_info(char *, char **, off_t, int); -extern int (*amd74xx_display_info)(char *, char **, off_t, int); /* ide-proc.c */ +byte amd74xx_proc; +int amd_base; static struct pci_dev *bmide_dev; +extern int (*amd74xx_display_info)(char *, char **, off_t, int); /* ide-proc.c */ -static int amd74xx_get_info (char *buffer, char **addr, off_t offset, int count) -{ +#define amd_print(format, arg...) p += sprintf(p, format "\n" , ## arg) +#define amd_print_drive(name, format, arg...)\ + p += sprintf(p, name); for (i = 0; i < 4; i++) p += sprintf(p, format, ## arg); p += sprintf(p, "\n"); + +static int amd_get_info(char *buffer, char **addr, off_t offset, int count) +{ + int speed[4], cycle[4], setup[4], active[4], recover[4], den[4], + uen[4], udma[4], active8b[4], recover8b[4]; + struct pci_dev *dev = bmide_dev; + unsigned int v, u, i; + unsigned short c, w; + unsigned char t; char *p = buffer; - u32 bibma = pci_resource_start(bmide_dev, 4); - u8 c0 = 0, c1 = 0; - /* - * at that point bibma+0x2 et bibma+0xa are byte registers - * to investigate: - */ - c0 = inb_p((unsigned short)bibma + 0x02); - c1 = inb_p((unsigned short)bibma + 0x0a); - - p += sprintf(p, "\n AMD %04X VIPER Chipset.\n", bmide_dev->device); - p += sprintf(p, "--------------- Primary Channel ---------------- Secondary Channel -------------\n"); - p += sprintf(p, " %sabled %sabled\n", - (c0&0x80) ? 
"dis" : " en", - (c1&0x80) ? "dis" : " en"); - p += sprintf(p, "--------------- drive0 --------- drive1 -------- drive0 ---------- drive1 ------\n"); - p += sprintf(p, "DMA enabled: %s %s %s %s\n", - (c0&0x20) ? "yes" : "no ", (c0&0x40) ? "yes" : "no ", - (c1&0x20) ? "yes" : "no ", (c1&0x40) ? "yes" : "no " ); - p += sprintf(p, "UDMA\n"); - p += sprintf(p, "DMA\n"); - p += sprintf(p, "PIO\n"); + amd_print("----------AMD BusMastering IDE Configuration----------------"); - return p-buffer; /* => must be less than 4k! */ -} -#endif /* defined(DISPLAY_VIPER_TIMINGS) && defined(CONFIG_PROC_FS) */ + amd_print("Driver Version: 2.7"); + amd_print("South Bridge: AMD-%s", amd_config->name); -byte amd74xx_proc = 0; + pci_read_config_byte(dev, PCI_REVISION_ID, &t); + amd_print("Revision: IDE %#x", t); + amd_print("Highest DMA rate: %s", amd_dma[amd_config->flags & AMD_UDMA]); -extern char *ide_xfer_verbose (byte xfer_rate); + amd_print("BM-DMA base: %#x", amd_base); + amd_print("PCI clock: %d.%dMHz", amd_clock / 1000, amd_clock / 100 % 10); + + amd_print("-----------------------Primary IDE-------Secondary IDE------"); -static unsigned int amd74xx_swdma_check (struct pci_dev *dev) -{ - unsigned int class_rev; + pci_read_config_byte(dev, AMD_IDE_CONFIG, &t); + amd_print("Prefetch Buffer: %10s%20s", (t & 0x80) ? "yes" : "no", (t & 0x20) ? "yes" : "no"); + amd_print("Post Write Buffer: %10s%20s", (t & 0x40) ? "yes" : "no", (t & 0x10) ? "yes" : "no"); - if ((dev->device == PCI_DEVICE_ID_AMD_VIPER_7411) || - (dev->device == PCI_DEVICE_ID_AMD_VIPER_7441)) - return 0; - - pci_read_config_dword(dev, PCI_CLASS_REVISION, &class_rev); - class_rev &= 0xff; - return ((int) (class_rev >= 7) ? 1 : 0); -} + pci_read_config_byte(dev, AMD_IDE_ENABLE, &t); + amd_print("Enabled: %10s%20s", (t & 0x02) ? "yes" : "no", (t & 0x01) ? "yes" : "no"); -static int amd74xx_swdma_error(ide_drive_t *drive) -{ - printk("%s: single-word DMA not support (revision < C4)\n", drive->name); - return 0; -} + c = inb(amd_base + 0x02) | (inb(amd_base + 0x0a) << 8); + amd_print("Simplex only: %10s%20s", (c & 0x80) ? "yes" : "no", (c & 0x8000) ? "yes" : "no"); -/* - * Here is where all the hard work goes to program the chipset. - * - */ -static int amd74xx_tune_chipset (ide_drive_t *drive, byte speed) -{ - ide_hwif_t *hwif = HWIF(drive); - struct pci_dev *dev = hwif->pci_dev; - int err = 0; - byte unit = (drive->select.b.unit & 0x01); -#ifdef CONFIG_BLK_DEV_IDEDMA - unsigned long dma_base = hwif->dma_base; -#endif /* CONFIG_BLK_DEV_IDEDMA */ - byte drive_pci = 0x00; - byte drive_pci2 = 0x00; - byte ultra_timing = 0x00; - byte dma_pio_timing = 0x00; - byte pio_timing = 0x00; - - switch (drive->dn) { - case 0: drive_pci = 0x53; drive_pci2 = 0x4b; break; - case 1: drive_pci = 0x52; drive_pci2 = 0x4a; break; - case 2: drive_pci = 0x51; drive_pci2 = 0x49; break; - case 3: drive_pci = 0x50; drive_pci2 = 0x48; break; - default: - return -1; - } - - pci_read_config_byte(dev, drive_pci, &ultra_timing); - pci_read_config_byte(dev, drive_pci2, &dma_pio_timing); - pci_read_config_byte(dev, 0x4c, &pio_timing); - -#ifdef DEBUG - printk("%s:%d: Speed 0x%02x UDMA 0x%02x DMAPIO 0x%02x PIO 0x%02x\n", - drive->name, drive->dn, speed, ultra_timing, dma_pio_timing, pio_timing); -#endif + amd_print("Cable Type: %10s%20s", (amd_80w & 1) ? "80w" : "40w", (amd_80w & 2) ? 
"80w" : "40w"); - ultra_timing &= ~0xC7; - dma_pio_timing &= ~0xFF; - pio_timing &= ~(0x03 << drive->dn); - -#ifdef DEBUG - printk("%s: UDMA 0x%02x DMAPIO 0x%02x PIO 0x%02x\n", - drive->name, ultra_timing, dma_pio_timing, pio_timing); -#endif + if (!amd_clock) + return p - buffer; - switch(speed) { -#ifdef CONFIG_BLK_DEV_IDEDMA - case XFER_UDMA_7: - case XFER_UDMA_6: - speed = XFER_UDMA_5; - case XFER_UDMA_5: - ultra_timing |= 0x46; - dma_pio_timing |= 0x20; - break; - case XFER_UDMA_4: - ultra_timing |= 0x45; - dma_pio_timing |= 0x20; - break; - case XFER_UDMA_3: - ultra_timing |= 0x44; - dma_pio_timing |= 0x20; - break; - case XFER_UDMA_2: - ultra_timing |= 0x40; - dma_pio_timing |= 0x20; - break; - case XFER_UDMA_1: - ultra_timing |= 0x41; - dma_pio_timing |= 0x20; - break; - case XFER_UDMA_0: - ultra_timing |= 0x42; - dma_pio_timing |= 0x20; - break; - case XFER_MW_DMA_2: - dma_pio_timing |= 0x20; - break; - case XFER_MW_DMA_1: - dma_pio_timing |= 0x21; - break; - case XFER_MW_DMA_0: - dma_pio_timing |= 0x77; - break; - case XFER_SW_DMA_2: - if (!amd74xx_swdma_check(dev)) - return amd74xx_swdma_error(drive); - dma_pio_timing |= 0x42; - break; - case XFER_SW_DMA_1: - if (!amd74xx_swdma_check(dev)) - return amd74xx_swdma_error(drive); - dma_pio_timing |= 0x65; - break; - case XFER_SW_DMA_0: - if (!amd74xx_swdma_check(dev)) - return amd74xx_swdma_error(drive); - dma_pio_timing |= 0xA8; - break; -#endif /* CONFIG_BLK_DEV_IDEDMA */ - case XFER_PIO_4: - dma_pio_timing |= 0x20; - break; - case XFER_PIO_3: - dma_pio_timing |= 0x22; - break; - case XFER_PIO_2: - dma_pio_timing |= 0x42; - break; - case XFER_PIO_1: - dma_pio_timing |= 0x65; - break; - case XFER_PIO_0: - default: - dma_pio_timing |= 0xA8; - break; - } + amd_print("-------------------drive0----drive1----drive2----drive3-----"); - pio_timing |= (0x03 << drive->dn); + pci_read_config_byte(dev, AMD_ADDRESS_SETUP, &t); + pci_read_config_dword(dev, AMD_DRIVE_TIMING, &v); + pci_read_config_word(dev, AMD_8BIT_TIMING, &w); + pci_read_config_dword(dev, AMD_UDMA_TIMING, &u); - if (!drive->init_speed) - drive->init_speed = speed; + for (i = 0; i < 4; i++) { + setup[i] = ((t >> ((3 - i) << 1)) & 0x3) + 1; + recover8b[i] = ((w >> ((1 - (i >> 1)) << 3)) & 0xf) + 1; + active8b[i] = ((w >> (((1 - (i >> 1)) << 3) + 4)) & 0xf) + 1; + active[i] = ((v >> (((3 - i) << 3) + 4)) & 0xf) + 1; + recover[i] = ((v >> ((3 - i) << 3)) & 0xf) + 1; -#ifdef CONFIG_BLK_DEV_IDEDMA - pci_write_config_byte(dev, drive_pci, ultra_timing); -#endif /* CONFIG_BLK_DEV_IDEDMA */ - pci_write_config_byte(dev, drive_pci2, dma_pio_timing); - pci_write_config_byte(dev, 0x4c, pio_timing); + udma[i] = amd_udma2cyc[((u >> ((3 - i) << 3)) & 0x7)]; + uen[i] = ((u >> ((3 - i) << 3)) & 0x40) ? 1 : 0; + den[i] = (c & ((i & 1) ? 0x40 : 0x20) << ((i & 2) << 2)); + + if (den[i] && uen[i] && udma[i] == 1) { + speed[i] = amd_clock * 3; + cycle[i] = 666666 / amd_clock; + continue; + } + + speed[i] = 4 * amd_clock / ((den[i] && uen[i]) ? udma[i] : (active[i] + recover[i]) * 2); + cycle[i] = 1000000 * ((den[i] && uen[i]) ? udma[i] : (active[i] + recover[i]) * 2) / amd_clock / 2; + } + + amd_print_drive("Transfer Mode: ", "%10s", den[i] ? (uen[i] ? 
"UDMA" : "DMA") : "PIO"); + + amd_print_drive("Address Setup: ", "%8dns", 1000000 * setup[i] / amd_clock); + amd_print_drive("Cmd Active: ", "%8dns", 1000000 * active8b[i] / amd_clock); + amd_print_drive("Cmd Recovery: ", "%8dns", 1000000 * recover8b[i] / amd_clock); + amd_print_drive("Data Active: ", "%8dns", 1000000 * active[i] / amd_clock); + amd_print_drive("Data Recovery: ", "%8dns", 1000000 * recover[i] / amd_clock); + amd_print_drive("Cycle Time: ", "%8dns", cycle[i]); + amd_print_drive("Transfer Rate: ", "%4d.%dMB/s", speed[i] / 1000, speed[i] / 100 % 10); + + return p - buffer; /* hoping it is less than 4K... */ +} -#ifdef DEBUG - printk("%s: UDMA 0x%02x DMAPIO 0x%02x PIO 0x%02x\n", - drive->name, ultra_timing, dma_pio_timing, pio_timing); #endif -#ifdef CONFIG_BLK_DEV_IDEDMA - if (speed > XFER_PIO_4) { - outb(inb(dma_base+2)|(1<<(5+unit)), dma_base+2); - } else { - outb(inb(dma_base+2) & ~(1<<(5+unit)), dma_base+2); +/* + * amd_set_speed() writes timing values to the chipset registers + */ + +static void amd_set_speed(struct pci_dev *dev, unsigned char dn, struct ide_timing *timing) +{ + unsigned char t; + + pci_read_config_byte(dev, AMD_ADDRESS_SETUP, &t); + t = (t & ~(3 << ((3 - dn) << 1))) | ((FIT(timing->setup, 1, 4) - 1) << ((3 - dn) << 1)); + pci_write_config_byte(dev, AMD_ADDRESS_SETUP, t); + + pci_write_config_byte(dev, AMD_8BIT_TIMING + (1 - (dn >> 1)), + ((FIT(timing->act8b, 1, 16) - 1) << 4) | (FIT(timing->rec8b, 1, 16) - 1)); + + pci_write_config_byte(dev, AMD_DRIVE_TIMING + (3 - dn), + ((FIT(timing->active, 1, 16) - 1) << 4) | (FIT(timing->recover, 1, 16) - 1)); + + switch (amd_config->flags & AMD_UDMA) { + case AMD_UDMA_33: t = timing->udma ? (0xc0 | (FIT(timing->udma, 2, 5) - 2)) : 0x03; break; + case AMD_UDMA_66: t = timing->udma ? (0xc0 | amd_cyc2udma[FIT(timing->udma, 2, 10)]) : 0x03; break; + case AMD_UDMA_100: t = timing->udma ? (0xc0 | amd_cyc2udma[FIT(timing->udma, 1, 10)]) : 0x03; break; + default: return; } -#endif /* CONFIG_BLK_DEV_IDEDMA */ - err = ide_config_drive_speed(drive, speed); - drive->current_speed = speed; - return (err); + pci_write_config_byte(dev, AMD_UDMA_TIMING + (3 - dn), t); } -static void config_chipset_for_pio (ide_drive_t *drive) +/* + * amd_set_drive() computes timing values configures the drive and + * the chipset to a desired transfer mode. It also can be called + * by upper layers. + */ + +static int amd_set_drive(ide_drive_t *drive, unsigned char speed) { - unsigned short eide_pio_timing[6] = {960, 480, 240, 180, 120, 90}; - unsigned short xfer_pio = drive->id->eide_pio_modes; - byte timing, speed, pio; - - pio = ide_get_best_pio_mode(drive, 255, 5, NULL); - - if (xfer_pio> 4) - xfer_pio = 0; - - if (drive->id->eide_pio_iordy > 0) { - for (xfer_pio = 5; - xfer_pio>0 && - drive->id->eide_pio_iordy>eide_pio_timing[xfer_pio]; - xfer_pio--); - } else { - xfer_pio = (drive->id->eide_pio_modes & 4) ? 0x05 : - (drive->id->eide_pio_modes & 2) ? 0x04 : - (drive->id->eide_pio_modes & 1) ? 0x03 : - (drive->id->tPIO & 2) ? 0x02 : - (drive->id->tPIO & 1) ? 0x01 : xfer_pio; - } - - timing = (xfer_pio >= pio) ? xfer_pio : pio; - - switch(timing) { - case 4: speed = XFER_PIO_4;break; - case 3: speed = XFER_PIO_3;break; - case 2: speed = XFER_PIO_2;break; - case 1: speed = XFER_PIO_1;break; - default: - speed = (!drive->id->tPIO) ? 
XFER_PIO_0 : XFER_PIO_SLOW; - break; + ide_drive_t *peer = HWIF(drive)->drives + (~drive->dn & 1); + struct ide_timing t, p; + int T, UT; + + if (speed != XFER_PIO_SLOW && speed != drive->current_speed) + if (ide_config_drive_speed(drive, speed)) + printk(KERN_WARNING "ide%d: Drive %d didn't accept speed setting. Oh, well.\n", + drive->dn >> 1, drive->dn & 1); + + T = 1000000000 / amd_clock; + UT = T / MIN(MAX(amd_config->flags & AMD_UDMA, 1), 2); + + ide_timing_compute(drive, speed, &t, T, UT); + + if (peer->present) { + ide_timing_compute(peer, peer->current_speed, &p, T, UT); + ide_timing_merge(&p, &t, &t, IDE_TIMING_8BIT); } - (void) amd74xx_tune_chipset(drive, speed); + + if (speed == XFER_UDMA_5 && amd_clock <= 33333) t.udma = 1; + + amd_set_speed(HWIF(drive)->pci_dev, drive->dn, &t); + + if (!drive->init_speed) + drive->init_speed = speed; drive->current_speed = speed; + + return 0; } -static void amd74xx_tune_drive (ide_drive_t *drive, byte pio) +/* + * amd74xx_tune_drive() is a callback from upper layers for + * PIO-only tuning. + */ + +static void amd74xx_tune_drive(ide_drive_t *drive, unsigned char pio) { - byte speed; - switch(pio) { - case 4: speed = XFER_PIO_4;break; - case 3: speed = XFER_PIO_3;break; - case 2: speed = XFER_PIO_2;break; - case 1: speed = XFER_PIO_1;break; - default: speed = XFER_PIO_0;break; + if (!((amd_enabled >> HWIF(drive)->channel) & 1)) + return; + + if (pio == 255) { + amd_set_drive(drive, ide_find_best_mode(drive, XFER_PIO | XFER_EPIO)); + return; } - (void) amd74xx_tune_chipset(drive, speed); + + amd_set_drive(drive, XFER_PIO_0 + MIN(pio, 5)); } #ifdef CONFIG_BLK_DEV_IDEDMA + /* - * This allows the configuration of ide_pci chipset registers - * for cards that learn about the drive's UDMA, DMA, PIO capabilities - * after the drive is reported by the OS. + * amd74xx_dmaproc() is a callback from upper layers that can do + * a lot, but we use it for DMA/PIO tuning only, delegating everything + * else to the default ide_dmaproc(). */ -static int config_chipset_for_dma (ide_drive_t *drive) + +int amd74xx_dmaproc(ide_dma_action_t func, ide_drive_t *drive) { - ide_hwif_t *hwif = HWIF(drive); - struct pci_dev *dev = hwif->pci_dev; - struct hd_driveid *id = drive->id; - byte udma_66 = eighty_ninty_three(drive); - byte udma_100 = ((dev->device==PCI_DEVICE_ID_AMD_VIPER_7411)|| - (dev->device==PCI_DEVICE_ID_AMD_VIPER_7441)) ? 1 : 0; - byte speed = 0x00; - int rval; - - if ((id->dma_ultra & 0x0020) && (udma_66) && (udma_100)) { - speed = XFER_UDMA_5; - } else if ((id->dma_ultra & 0x0010) && (udma_66)) { - speed = XFER_UDMA_4; - } else if ((id->dma_ultra & 0x0008) && (udma_66)) { - speed = XFER_UDMA_3; - } else if (id->dma_ultra & 0x0004) { - speed = XFER_UDMA_2; - } else if (id->dma_ultra & 0x0002) { - speed = XFER_UDMA_1; - } else if (id->dma_ultra & 0x0001) { - speed = XFER_UDMA_0; - } else if (id->dma_mword & 0x0004) { - speed = XFER_MW_DMA_2; - } else if (id->dma_mword & 0x0002) { - speed = XFER_MW_DMA_1; - } else if (id->dma_mword & 0x0001) { - speed = XFER_MW_DMA_0; - } else { - return ((int) ide_dma_off_quietly); - } - - (void) amd74xx_tune_chipset(drive, speed); - - rval = (int)( ((id->dma_ultra >> 11) & 7) ? ide_dma_on : - ((id->dma_ultra >> 8) & 7) ? ide_dma_on : - ((id->dma_mword >> 8) & 7) ? ide_dma_on : - ide_dma_off_quietly); - return rval; + if (func == ide_dma_check) { + + short w80 = HWIF(drive)->udma_four; + + short speed = ide_find_best_mode(drive, + XFER_PIO | XFER_EPIO | XFER_MWDMA | XFER_UDMA | + ((amd_config->flags & AMD_BAD_SWDMA) ? 
0 : XFER_SWDMA) | + (w80 && (amd_config->flags & AMD_UDMA) >= AMD_UDMA_66 ? XFER_UDMA_66 : 0) | + (w80 && (amd_config->flags & AMD_UDMA) >= AMD_UDMA_100 ? XFER_UDMA_100 : 0)); + + amd_set_drive(drive, speed); + + func = (HWIF(drive)->autodma && (speed & XFER_MODE) != XFER_PIO) + ? ide_dma_on : ide_dma_off_quietly; + } + + return ide_dmaproc(func, drive); } -static int config_drive_xfer_rate (ide_drive_t *drive) +#endif /* CONFIG_BLK_DEV_IDEDMA */ + +/* + * The initialization callback. Here we determine the IDE chip type + * and initialize its drive independent registers. + */ + +unsigned int __init pci_init_amd74xx(struct pci_dev *dev, const char *name) { - struct hd_driveid *id = drive->id; - ide_dma_action_t dma_func = ide_dma_on; + unsigned char t; + unsigned int u; + int i; - if (id && (id->capability & 1) && HWIF(drive)->autodma) { - /* Consult the list of known "bad" drives */ - if (ide_dmaproc(ide_dma_bad_drive, drive)) { - dma_func = ide_dma_off; - goto fast_ata_pio; - } - dma_func = ide_dma_off_quietly; - if (id->field_valid & 4) { - if (id->dma_ultra & 0x003F) { - /* Force if Capable UltraDMA */ - dma_func = config_chipset_for_dma(drive); - if ((id->field_valid & 2) && - (dma_func != ide_dma_on)) - goto try_dma_modes; - } - } else if (id->field_valid & 2) { -try_dma_modes: - if ((id->dma_mword & 0x0007) || - ((id->dma_1word & 0x007) && - (amd74xx_swdma_check(HWIF(drive)->pci_dev)))) { - /* Force if Capable regular DMA modes */ - dma_func = config_chipset_for_dma(drive); - if (dma_func != ide_dma_on) - goto no_dma_set; - } - - } else if (ide_dmaproc(ide_dma_good_drive, drive)) { - if (id->eide_dma_time > 150) { - goto no_dma_set; - } - /* Consult the list of known "good" drives */ - dma_func = config_chipset_for_dma(drive); - if (dma_func != ide_dma_on) - goto no_dma_set; - } else { - goto fast_ata_pio; +/* + * Find out what AMD IDE is this. + */ + + for (amd_config = amd_ide_chips; amd_config->id; amd_config++) { + pci_read_config_byte(dev, PCI_REVISION_ID, &t); + if (dev->device == amd_config->id && t >= amd_config->rev) + break; } - } else if ((id->capability & 8) || (id->field_valid & 2)) { -fast_ata_pio: - dma_func = ide_dma_off_quietly; -no_dma_set: - config_chipset_for_pio(drive); + if (!amd_config->id) { + printk(KERN_WARNING "AMD_IDE: Unknown AMD IDE Chip, contact Vojtech Pavlik \n"); + return -ENODEV; } - return HWIF(drive)->dmaproc(dma_func, drive); -} /* - * amd74xx_dmaproc() initiates/aborts (U)DMA read/write operations on a drive. + * Check 80-wire cable presence. */ -int amd74xx_dmaproc (ide_dma_action_t func, ide_drive_t *drive) -{ - switch (func) { - case ide_dma_check: - return config_drive_xfer_rate(drive); - default: + switch (amd_config->flags & AMD_UDMA) { + + case AMD_UDMA_100: + pci_read_config_byte(dev, AMD_CABLE_DETECT, &t); + amd_80w = ((u & 0x3) ? 1 : 0) | ((u & 0xc) ? 2 : 0); + for (i = 24; i >= 0; i -= 8) + if (((u >> i) & 4) && !(amd_80w & (1 << (1 - (i >> 4))))) { + printk(KERN_WARNING "AMD_IDE: Bios didn't set cable bits corectly. 
Enabling workaround.\n"); + amd_80w |= (1 << (1 - (i >> 4))); + } + break; + + case AMD_UDMA_66: + pci_read_config_dword(dev, AMD_UDMA_TIMING, &u); + for (i = 24; i >= 0; i -= 8) + if ((u >> i) & 4) + amd_80w |= (1 << (1 - (i >> 4))); break; } - return ide_dmaproc(func, drive); /* use standard DMA stuff */ -} -#endif /* CONFIG_BLK_DEV_IDEDMA */ -unsigned int __init pci_init_amd74xx(struct pci_dev *dev) -{ - unsigned long fixdma_base = pci_resource_start(dev, 4); + pci_read_config_dword(dev, AMD_IDE_ENABLE, &u); + amd_enabled = ((u & 1) ? 2 : 0) | ((u & 2) ? 1 : 0); -#ifdef CONFIG_BLK_DEV_IDEDMA - if (!amd74xx_swdma_check(dev)) - printk("%s: disabling single-word DMA support (revision < C4)\n", dev->name); -#endif /* CONFIG_BLK_DEV_IDEDMA */ +/* + * Take care of prefetch & postwrite. + */ - if (!fixdma_base) { - /* - * - */ - } else { - /* - * enable DMA capable bit, and "not" simplex only - */ - outb(inb(fixdma_base+2) & 0x60, fixdma_base+2); + pci_read_config_byte(dev, AMD_IDE_CONFIG, &t); + pci_write_config_byte(dev, AMD_IDE_CONFIG, + (amd_config->flags & AMD_BAD_FIFO) ? (t & 0x0f) : (t | 0xf0)); - if (inb(fixdma_base+2) & 0x80) - printk("%s: simplex device: DMA will fail!!\n", dev->name); +/* + * Determine the system bus clock. + */ + + amd_clock = system_bus_speed * 1000; + + switch (amd_clock) { + case 33000: amd_clock = 33333; break; + case 37000: amd_clock = 37500; break; + case 41000: amd_clock = 41666; break; } -#if defined(DISPLAY_VIPER_TIMINGS) && defined(CONFIG_PROC_FS) + + if (amd_clock < 20000 || amd_clock > 50000) { + printk(KERN_WARNING "AMD_IDE: User given PCI clock speed impossible (%d), using 33 MHz instead.\n", amd_clock); + printk(KERN_WARNING "AMD_IDE: Use ide0=ata66 if you want to assume 80-wire cable\n"); + amd_clock = 33333; + } + +/* + * Print the boot message. + */ + + pci_read_config_byte(dev, PCI_REVISION_ID, &t); + printk(KERN_INFO "AMD_IDE: AMD-%s (rev %02x) IDE %s controller on pci%s\n", + amd_config->name, t, amd_dma[amd_config->flags & AMD_UDMA], dev->slot_name); + +/* + * Register /proc/ide/amd74xx entry + */ + +#ifdef CONFIG_PROC_FS if (!amd74xx_proc) { - amd74xx_proc = 1; + amd_base = pci_resource_start(dev, 4); bmide_dev = dev; - amd74xx_display_info = &amd74xx_get_info; + amd74xx_display_info = &amd_get_info; + amd74xx_proc = 1; } -#endif /* DISPLAY_VIPER_TIMINGS && CONFIG_PROC_FS */ +#endif return 0; } -unsigned int __init ata66_amd74xx (ide_hwif_t *hwif) +unsigned int __init ata66_amd74xx(ide_hwif_t *hwif) { -#ifdef CONFIG_AMD74XX_OVERRIDE - byte ata66 = 1; -#else - byte ata66 = 0; -#endif /* CONFIG_AMD74XX_OVERRIDE */ - -#if 0 - pci_read_config_byte(hwif->pci_dev, 0x48, &ata66); - return ((ata66 & 0x02) ? 
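The cable-detection loop above (the AMD_UDMA_66 branch) folds the per-drive hint bits of the UDMA timing dword into one flag bit per channel, which ata66_amd74xx() later returns per interface. A standalone decode of that bit layout; the sample register value is hypothetical:

#include <stdio.h>

/*
 * Decoding of the per-channel 80-wire flags from the UDMA timing dword,
 * mirroring the AMD_UDMA_66 branch above. Bit 2 of each drive's timing
 * byte is treated as the BIOS "cable OK" hint; bytes 3..2 map to the
 * primary channel, bytes 1..0 to the secondary.
 */
int main(void)
{
	unsigned int u = 0x04040000;	/* sample AMD_UDMA_TIMING contents */
	int amd_80w = 0, i;

	for (i = 24; i >= 0; i -= 8)
		if ((u >> i) & 4)
			amd_80w |= 1 << (1 - (i >> 4));

	printf("primary:   %s\n", (amd_80w & 1) ? "80-wire" : "40-wire");
	printf("secondary: %s\n", (amd_80w & 2) ? "80-wire" : "40-wire");
	return 0;
}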
0 : 1); -#endif - return ata66; + return ((amd_enabled & amd_80w) >> hwif->channel) & 1; } -void __init ide_init_amd74xx (ide_hwif_t *hwif) +void __init ide_init_amd74xx(ide_hwif_t *hwif) { - hwif->tuneproc = &amd74xx_tune_drive; - hwif->speedproc = &amd74xx_tune_chipset; + int i; - hwif->highmem = 1; - -#ifndef CONFIG_BLK_DEV_IDEDMA - hwif->drives[0].autotune = 1; - hwif->drives[1].autotune = 1; + hwif->tuneproc = &amd74xx_tune_drive; + hwif->speedproc = &amd_set_drive; hwif->autodma = 0; - return; -#else + for (i = 0; i < 2; i++) { + hwif->drives[i].io_32bit = 1; + hwif->drives[i].unmask = 1; + hwif->drives[i].autotune = 1; + hwif->drives[i].dn = hwif->channel * 2 + i; + } + +#ifdef CONFIG_BLK_DEV_IDEDMA if (hwif->dma_base) { + hwif->highmem = 1; hwif->dmaproc = &amd74xx_dmaproc; +#ifdef CONFIG_IDEDMA_AUTO if (!noautodma) hwif->autodma = 1; - } else { - hwif->autodma = 0; - hwif->drives[0].autotune = 1; - hwif->drives[1].autotune = 1; +#endif } #endif /* CONFIG_BLK_DEV_IDEDMA */ } -void __init ide_dmacapable_amd74xx (ide_hwif_t *hwif, unsigned long dmabase) +/* + * We allow the BM-DMA driver to work only on enabled interfaces. + */ + +void __init ide_dmacapable_amd74xx(ide_hwif_t *hwif, unsigned long dmabase) { - ide_setup_dma(hwif, dmabase, 8); + if ((amd_enabled >> hwif->channel) & 1) + ide_setup_dma(hwif, dmabase, 8); } diff -Nru a/drivers/ide/ide-adma.c b/drivers/ide/ide-adma.c --- a/drivers/ide/ide-adma.c Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,9 +0,0 @@ -/* - * linux/drivers/ide/ide-adma.c Version 0.00 June 24, 2001 - * - * Copyright (c) 2001 Andre Hedrick - * - * Asynchronous DMA -- TBA, this is a holding file. - * - */ - diff -Nru a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c --- a/drivers/ide/ide-cd.c Tue Mar 12 13:58:15 2002 +++ b/drivers/ide/ide-cd.c Tue Mar 12 13:58:15 2002 @@ -2508,6 +2508,11 @@ if (!CDROM_CONFIG_FLAGS (drive)->close_tray) devinfo->mask |= CDC_CLOSE_TRAY; + /* FIXME: I'm less than sure that this is the proper thing to do, since + * we are already adding the devices to devfs in ide.c upon device + * registration. + */ + devinfo->de = devfs_register(drive->de, "cd", DEVFS_FL_DEFAULT, HWIF(drive)->major, minor, S_IFBLK | S_IRUGO | S_IWUGO, diff -Nru a/drivers/ide/ide-cs.c b/drivers/ide/ide-cs.c --- a/drivers/ide/ide-cs.c Tue Mar 12 13:58:15 2002 +++ b/drivers/ide/ide-cs.c Tue Mar 12 13:58:15 2002 @@ -401,7 +401,7 @@ DEBUG(0, "ide_release(0x%p)\n", link); if (info->ndev) { - ide_unregister(info->hd); + ide_unregister(&ide_hwifs[info->hd]); MOD_DEC_USE_COUNT; } diff -Nru a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c --- a/drivers/ide/ide-disk.c Tue Mar 12 13:58:14 2002 +++ b/drivers/ide/ide-disk.c Tue Mar 12 13:58:14 2002 @@ -117,6 +117,8 @@ */ static ide_startstop_t do_rw_disk (ide_drive_t *drive, struct request *rq, unsigned long block) { + if (drive->blocked) + panic("ide: Request while drive blocked? 
You don't like your data intact?"); if (!(rq->flags & REQ_CMD)) { blk_dump_rq_flags(rq, "do_rw_disk, bad command"); ide_end_request(drive, 0); @@ -903,13 +905,36 @@ ide_add_setting(drive, "max_failures", SETTING_RW, -1, -1, TYPE_INT, 0, 65535, 1, 1, &drive->max_failures, NULL); } +static int idedisk_suspend(struct device *dev, u32 state, u32 level) +{ + int i; + ide_drive_t *drive = dev->driver_data; + + printk("ide_disk_suspend()\n"); + while (HWGROUP(drive)->handler) + schedule(); + drive->blocked = 1; +} + +static int idedisk_resume(struct device *dev, u32 level) +{ + ide_drive_t *drive = dev->driver_data; + if (!drive->blocked) + panic("ide: Resume but not suspended?\n"); + drive->blocked = 0; +} + + /* This is just a hook for the overall driver tree. * * FIXME: This is soon goig to replace the custom linked list games played up * to great extend between the different components of the IDE drivers. */ -static struct device_driver idedisk_devdrv = {}; +static struct device_driver idedisk_devdrv = { + suspend: idedisk_suspend, + resume: idedisk_resume, +}; static void idedisk_setup(ide_drive_t *drive) { @@ -956,6 +981,7 @@ sprintf(drive->device.name, "ide-disk"); drive->device.driver = &idedisk_devdrv; drive->device.parent = &HWIF(drive)->device; + drive->device.driver_data = drive; device_register(&drive->device); } diff -Nru a/drivers/ide/ide-dma.c b/drivers/ide/ide-dma.c --- a/drivers/ide/ide-dma.c Tue Mar 12 13:58:15 2002 +++ b/drivers/ide/ide-dma.c Tue Mar 12 13:58:15 2002 @@ -707,8 +707,11 @@ /* * Needed for allowing full modular support of ide-driver */ -int ide_release_dma (ide_hwif_t *hwif) +int ide_release_dma(ide_hwif_t *hwif) { + if (!hwif->dma_base) + return; + if (hwif->dmatable_cpu) { pci_free_consistent(hwif->pci_dev, PRD_ENTRIES * PRD_BYTES, @@ -723,6 +726,8 @@ if ((hwif->dma_extra) && (hwif->channel == 0)) release_region((hwif->dma_base + 16), hwif->dma_extra); release_region(hwif->dma_base, 8); + hwif->dma_base = 0; + return 1; } diff -Nru a/drivers/ide/ide-pci.c b/drivers/ide/ide-pci.c --- a/drivers/ide/ide-pci.c Tue Mar 12 13:58:16 2002 +++ b/drivers/ide/ide-pci.c Tue Mar 12 13:58:16 2002 @@ -6,15 +6,8 @@ */ /* - * This module provides support for automatic detection and - * configuration of all PCI IDE interfaces present in a system. - */ - -/* - * Chipsets that are on the IDE_IGNORE list because of problems of not being - * set at compile time. - * - * CONFIG_BLK_DEV_PDC202XX + * This module provides support for automatic detection and configuration of + * all PCI ATA host chip channel interfaces present in a system. */ #include @@ -34,7 +27,14 @@ #define PCI_VENDOR_ID_HINT 0x3388 #define PCI_DEVICE_ID_HINT 0x8013 -#define IDE_IGNORE ((void *)-1) +/* + * Some combi chips, which can be used on the PCI bus or the VL bus, can be in + * some systems accessed either through the PCI config space or through the + * host's IO bus. If the corresponding initialization driver is using the host + * IO space to deal with them, please define the following.
+ */ + +#define ATA_PCI_IGNORE ((void *)-1) #define IDE_NO_DRIVER ((void *)-2) #ifdef CONFIG_BLK_DEV_AEC62XX @@ -284,10 +284,11 @@ {PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5530_IDE, pci_init_cs5530, NULL, ide_init_cs5530, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0, ATA_F_DMA }, #endif #ifdef CONFIG_BLK_DEV_AMD74XX - {PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_COBRA_7401, NULL, NULL, NULL, ide_dmacapable_amd74xx, {{0x40,0x01,0x01}, {0x40,0x02,0x02}}, ON_BOARD, 0, 0 }, + {PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_COBRA_7401, pci_init_amd74xx, ata66_amd74xx, ide_init_amd74xx, ide_dmacapable_amd74xx, {{0x40,0x01,0x01}, {0x40,0x02,0x02}}, ON_BOARD, 0, 0 }, {PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7409, pci_init_amd74xx, ata66_amd74xx, ide_init_amd74xx, ide_dmacapable_amd74xx, {{0x40,0x01,0x01}, {0x40,0x02,0x02}}, ON_BOARD, 0, 0 }, {PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7411, pci_init_amd74xx, ata66_amd74xx, ide_init_amd74xx, ide_dmacapable_amd74xx, {{0x40,0x01,0x01}, {0x40,0x02,0x02}}, ON_BOARD, 0, 0 }, - {PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7441, pci_init_amd74xx, ata66_amd74xx, ide_init_amd74xx, ide_dmacapable_amd74xx, {{0x40,0x01,0x01}, {0x40,0x02,0x02}}, ON_BOARD, 0, 0 }, + {PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_OPUS_7441, pci_init_amd74xx, ata66_amd74xx, ide_init_amd74xx, ide_dmacapable_amd74xx, {{0x40,0x01,0x01}, {0x40,0x02,0x02}}, ON_BOARD, 0, 0 }, + {PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8111_IDE, pci_init_amd74xx, ata66_amd74xx, ide_init_amd74xx, ide_dmacapable_amd74xx, {{0x40,0x01,0x01}, {0x40,0x02,0x02}}, ON_BOARD, 0, 0 }, #endif #ifdef CONFIG_BLK_DEV_PDC_ADMA {PCI_VENDOR_ID_PDC, PCI_DEVICE_ID_PDC_1841, pci_init_pdcadma, ata66_pdcadma, ide_init_pdcadma, ide_dmacapable_pdcadma, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, OFF_BOARD, 0, ATA_F_NODMA }, @@ -306,7 +307,7 @@ * but which still need some generic quirk handling. */ {PCI_VENDOR_ID_PCTECH, PCI_DEVICE_ID_PCTECH_SAMURAI_IDE, NULL, NULL, NULL, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0, 0 }, - {PCI_VENDOR_ID_CMD, PCI_DEVICE_ID_CMD_640, NULL, NULL, IDE_IGNORE, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0, 0 }, + {PCI_VENDOR_ID_CMD, PCI_DEVICE_ID_CMD_640, NULL, NULL, ATA_PCI_IGNORE, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0, 0 }, {PCI_VENDOR_ID_NS, PCI_DEVICE_ID_NS_87410, NULL, NULL, NULL, NULL, {{0x43,0x08,0x08}, {0x47,0x08,0x08}}, ON_BOARD, 0, 0 }, {PCI_VENDOR_ID_HINT, PCI_DEVICE_ID_HINT, NULL, NULL, NULL, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0, 0 }, {PCI_VENDOR_ID_HOLTEK, PCI_DEVICE_ID_HOLTEK_6565, NULL, NULL, NULL, NULL, {{0x00,0x00,0x00}, {0x00,0x00,0x00}}, ON_BOARD, 0, 0 }, @@ -883,11 +884,11 @@ * This finds all PCI IDE controllers and calls appropriate initialization * functions for them. 
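As the scan code that follows shows, each PCI device is matched against the ide_pci_device_t table, which ends in a zero sentinel, and entries carrying the ATA_PCI_IGNORE cookie are reported but never initialized. A toy, self-contained model of that lookup; the structure is simplified and the table entries are for illustration only:

#include <stdio.h>

/*
 * Toy model of the table lookup used by the scan below: walk to the
 * zero sentinel, and treat the ATA_PCI_IGNORE cookie as "known but do
 * not initialize". Simplified fields, illustrative entries only.
 */
#define ATA_PCI_IGNORE ((void *)-1)

struct toy_pci_device {
	unsigned short vendor, device;
	void *init_hwif;
};

static struct toy_pci_device table[] = {
	{ 0x1095, 0x0640, ATA_PCI_IGNORE },	/* e.g. CMD640, handled elsewhere */
	{ 0x1022, 0x7409, (void *)1 },		/* e.g. an AMD IDE function */
	{ 0, 0, NULL }				/* sentinel */
};

static void toy_scan(unsigned short vendor, unsigned short device)
{
	struct toy_pci_device *d = table;

	while (d->vendor && !(d->vendor == vendor && d->device == device))
		++d;
	if (d->init_hwif == ATA_PCI_IGNORE)
		printf("%04x:%04x ignored by PCI bus scan\n", vendor, device);
	else if (d->vendor)
		printf("%04x:%04x initialized from its table entry\n", vendor, device);
	else
		printf("%04x:%04x not listed, generic quirk handling\n", vendor, device);
}

int main(void)
{
	toy_scan(0x1095, 0x0640);
	toy_scan(0x1022, 0x7409);
	toy_scan(0x1106, 0x0571);
	return 0;
}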
*/ -static void __init ide_scan_pcidev(struct pci_dev *dev) +static void __init scan_pcidev(struct pci_dev *dev) { unsigned short vendor; unsigned short device; - ide_pci_device_t *d; + ide_pci_device_t *d; vendor = dev->vendor; device = dev->device; @@ -898,7 +899,7 @@ while (d->vendor && !(d->vendor == vendor && d->device == device)) ++d; - if (d->init_hwif == IDE_IGNORE) + if (d->init_hwif == ATA_PCI_IGNORE) printk("%s: has been ignored by PCI bus scan\n", dev->name); else if ((d->vendor == PCI_VENDOR_ID_OPTI && d->device == PCI_DEVICE_ID_OPTI_82C558) && !(PCI_FUNC(dev->devfn) & 1)) return; @@ -922,17 +923,17 @@ } } -void __init ide_scan_pcibus (int scan_direction) +void __init ide_scan_pcibus(int scan_direction) { struct pci_dev *dev; if (!scan_direction) { pci_for_each_dev(dev) { - ide_scan_pcidev(dev); + scan_pcidev(dev); } } else { pci_for_each_dev_reverse(dev) { - ide_scan_pcidev(dev); + scan_pcidev(dev); } } } diff -Nru a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c --- a/drivers/ide/ide-probe.c Tue Mar 12 13:58:14 2002 +++ b/drivers/ide/ide-probe.c Tue Mar 12 13:58:14 2002 @@ -575,19 +575,17 @@ static void ide_init_queue(ide_drive_t *drive) { request_queue_t *q = &drive->queue; - int max_sectors; -#ifdef CONFIG_BLK_DEV_PDC4030 - int is_pdc4030_chipset = (HWIF(drive)->chipset == ide_pdc4030); -#else - const int is_pdc4030_chipset = 0; -#endif + int max_sectors = 255; q->queuedata = HWGROUP(drive); blk_init_queue(q, do_ide_request, &ide_lock); blk_queue_segment_boundary(q, 0xffff); /* IDE can do up to 128K per request, pdc4030 needs smaller limit */ - max_sectors = (is_pdc4030_chipset ? 127 : 255); +#ifdef CONFIG_BLK_DEV_PDC4030 + if (HWIF(drive)->chipset == ide_pdc4030) + max_sectors = 127; +#endif blk_queue_max_sectors(q, max_sectors); /* IDE DMA can do PRD_ENTRIES number of segments. */ diff -Nru a/drivers/ide/ide.c b/drivers/ide/ide.c --- a/drivers/ide/ide.c Tue Mar 12 13:58:15 2002 +++ b/drivers/ide/ide.c Tue Mar 12 13:58:15 2002 @@ -194,6 +194,7 @@ extern void pnpide_init(int); #endif +#ifdef CONFIG_BLK_DEV_IDE_MODES /* * Constant tables for PIO mode programming: */ @@ -282,6 +283,7 @@ { "QUANTUM FIREBALL_1280", 3 }, { NULL, 0 } }; +#endif /* default maximum number of failures */ #define IDE_DEFAULT_MAX_FAILURES 1 @@ -314,7 +316,7 @@ */ ide_hwif_t ide_hwifs[MAX_HWIFS]; /* master data repository */ - +#ifdef CONFIG_BLK_DEV_IDE_MODES /* * This routine searches the ide_pio_blacklist for an entry * matching the start/whole of the supplied model name. @@ -332,6 +334,7 @@ } return -1; } +#endif /* * This routine returns the recommended PIO settings for a given drive, @@ -445,7 +448,7 @@ /* * Do not even *think* about calling this! 
*/ -static void init_hwif_data (unsigned int index) +static void init_hwif_data(ide_hwif_t *hwif, unsigned int index) { static const byte ide_major[] = { IDE0_MAJOR, IDE1_MAJOR, IDE2_MAJOR, IDE3_MAJOR, IDE4_MAJOR, @@ -454,7 +457,6 @@ unsigned int unit; hw_regs_t hw; - ide_hwif_t *hwif = &ide_hwifs[index]; /* bulk initialize hwif & drive info with zeros */ memset(hwif, 0, sizeof(ide_hwif_t)); @@ -507,7 +509,7 @@ #define MAGIC_COOKIE 0x12345678 static void __init init_ide_data (void) { - unsigned int index; + unsigned int h; static unsigned long magic_cookie = MAGIC_COOKIE; if (magic_cookie != MAGIC_COOKIE) @@ -515,8 +517,8 @@ magic_cookie = 0; /* Initialize all interface structures */ - for (index = 0; index < MAX_HWIFS; ++index) - init_hwif_data(index); + for (h = 0; h < MAX_HWIFS; ++h) + init_hwif_data(&ide_hwifs[h], h); /* Add default hw interfaces */ ide_init_default_hwifs(); @@ -1629,7 +1631,7 @@ * But note that it can also be invoked as a result of a "sleep" operation * triggered by the mod_timer() call in ide_do_request. */ -void ide_timer_expiry (unsigned long data) +void ide_timer_expiry(unsigned long data) { ide_hwgroup_t *hwgroup = (ide_hwgroup_t *) data; ide_handler_t *handler; @@ -1667,7 +1669,7 @@ if ((expiry = hwgroup->expiry) != NULL) { /* continue */ if ((wait = expiry(drive)) != 0) { - /* reset timer */ + /* reengage timer */ hwgroup->timer.expires = jiffies + wait; add_timer(&hwgroup->timer); spin_unlock_irqrestore(&ide_lock, flags); @@ -1867,15 +1869,15 @@ * get_info_ptr() returns the (ide_drive_t *) for a given device number. * It returns NULL if the given device number does not match any present drives. */ -ide_drive_t *get_info_ptr (kdev_t i_rdev) +ide_drive_t *get_info_ptr(kdev_t i_rdev) { - int major = major(i_rdev); - unsigned int h; + unsigned int major = major(i_rdev); + int h; for (h = 0; h < MAX_HWIFS; ++h) { ide_hwif_t *hwif = &ide_hwifs[h]; if (hwif->present && major == hwif->major) { - unsigned unit = DEVICE_NR(i_rdev); + int unit = DEVICE_NR(i_rdev); if (unit < MAX_DRIVES) { ide_drive_t *drive = &hwif->drives[unit]; if (drive->present) @@ -2012,13 +2014,13 @@ { ide_hwif_t *hwif; ide_drive_t *drive; - int index; - int unit; + int h; - for (index = 0; index < MAX_HWIFS; ++index) { - hwif = &ide_hwifs[index]; + for (h = 0; h < MAX_HWIFS; ++h) { + int unit; + hwif = &ide_hwifs[h]; for (unit = 0; unit < MAX_DRIVES; ++unit) { - drive = &ide_hwifs[index].drives[unit]; + drive = &ide_hwifs[h].drives[unit]; if (drive->revalidate) { drive->revalidate = 0; if (!initializing) @@ -2164,22 +2166,18 @@ #endif } -void ide_unregister (unsigned int index) +void ide_unregister(ide_hwif_t *hwif) { struct gendisk *gd; ide_drive_t *drive, *d; - ide_hwif_t *hwif, *g; + ide_hwif_t *g; ide_hwgroup_t *hwgroup; int irq_count = 0, unit, i; unsigned long flags; unsigned int p, minor; ide_hwif_t old_hwif; - if (index >= MAX_HWIFS) - return; - save_flags(flags); /* all CPUs */ - cli(); /* all CPUs */ - hwif = &ide_hwifs[index]; + spin_lock_irqsave(&ide_lock, flags); if (!hwif->present) goto abort; put_device(&hwif->device); @@ -2202,7 +2200,7 @@ /* * All clear? 
Then blow away the buffer cache */ - sti(); + spin_unlock_irqrestore(&ide_lock, flags); for (unit = 0; unit < MAX_DRIVES; ++unit) { drive = &hwif->drives[unit]; if (!drive->present) @@ -2214,11 +2212,11 @@ invalidate_device(devp, 0); } } + } #ifdef CONFIG_PROC_FS - destroy_proc_ide_drives(hwif); + destroy_proc_ide_drives(hwif); #endif - } - cli(); + spin_lock_irqsave(&ide_lock, flags); hwgroup = hwif->hwgroup; /* @@ -2271,11 +2269,8 @@ hwgroup->hwif = HWIF(hwgroup->drive); #if defined(CONFIG_BLK_DEV_IDEDMA) && !defined(CONFIG_DMA_NONPCI) - if (hwif->dma_base) { - (void) ide_release_dma(hwif); - hwif->dma_base = 0; - } -#endif /* (CONFIG_BLK_DEV_IDEDMA) && !(CONFIG_DMA_NONPCI) */ + ide_release_dma(hwif); +#endif /* * Remove us from the kernel's knowledge @@ -2297,8 +2292,14 @@ kfree(gd); hwif->gd = NULL; } + + /* + * Reinitialize the hwif handler, but preserve any special methods for + * it. + */ + old_hwif = *hwif; - init_hwif_data(index); /* restore hwif data to pristine status */ + init_hwif_data(hwif, hwif->index); hwif->hwgroup = old_hwif.hwgroup; hwif->tuneproc = old_hwif.tuneproc; hwif->speedproc = old_hwif.speedproc; @@ -2329,7 +2330,7 @@ #endif hwif->straight8 = old_hwif.straight8; abort: - restore_flags(flags); /* all CPUs */ + spin_unlock_irqrestore(&ide_lock, flags); } /* @@ -2374,28 +2375,27 @@ */ int ide_register_hw(hw_regs_t *hw, ide_hwif_t **hwifp) { - int index, retry = 1; + int h, retry = 1; ide_hwif_t *hwif; do { - for (index = 0; index < MAX_HWIFS; ++index) { - hwif = &ide_hwifs[index]; + for (h = 0; h < MAX_HWIFS; ++h) { + hwif = &ide_hwifs[h]; if (hwif->hw.io_ports[IDE_DATA_OFFSET] == hw->io_ports[IDE_DATA_OFFSET]) goto found; } - for (index = 0; index < MAX_HWIFS; ++index) { - hwif = &ide_hwifs[index]; + for (h = 0; h < MAX_HWIFS; ++h) { + hwif = &ide_hwifs[h]; if ((!hwif->present && !hwif->mate && !initializing) || (!hwif->hw.io_ports[IDE_DATA_OFFSET] && initializing)) goto found; } - for (index = 0; index < MAX_HWIFS; index++) - ide_unregister(index); + for (h = 0; h < MAX_HWIFS; ++h) + ide_unregister(&ide_hwifs[h]); } while (retry--); return -1; found: - if (hwif->present) - ide_unregister(index); + ide_unregister(hwif); if (hwif->present) return -1; memcpy(&hwif->hw, hw, sizeof(*hw)); @@ -2415,7 +2415,7 @@ if (hwifp) *hwifp = hwif; - return (initializing || hwif->present) ? index : -1; + return (initializing || hwif->present) ? 
h : -1; } /* @@ -2756,21 +2756,6 @@ return -EACCES; return ide_task_ioctl(drive, inode, file, cmd, arg); - case HDIO_SCAN_HWIF: - { - int args[3]; - if (!capable(CAP_SYS_ADMIN)) return -EACCES; - if (copy_from_user(args, (void *)arg, 3 * sizeof(int))) - return -EFAULT; - if (ide_register(args[0], args[1], args[2]) == -1) - return -EIO; - return 0; - } - case HDIO_UNREGISTER_HWIF: - if (!capable(CAP_SYS_ADMIN)) return -EACCES; - /* (arg > MAX_HWIFS) checked in function */ - ide_unregister(arg); - return 0; case HDIO_SET_NICE: if (!capable(CAP_SYS_ADMIN)) return -EACCES; if (arg != (arg & ((1 << IDE_NICE_DSC_OVERLAP) | (1 << IDE_NICE_1)))) @@ -3479,6 +3464,7 @@ revalidate: ide_revalidate_disk }}; +EXPORT_SYMBOL(ide_fops); EXPORT_SYMBOL(ide_hwifs); EXPORT_SYMBOL(ide_spin_wait_hwgroup); EXPORT_SYMBOL(revalidate_drives); @@ -3584,7 +3570,7 @@ */ static int __init ata_module_init(void) { - int i; + int h; printk(KERN_INFO "Uniform Multi-Platform E-IDE driver ver.:" VERSION "\n"); @@ -3666,7 +3652,7 @@ pnpide_init(1); #endif -#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_IDE_MODULES) +#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_IDE_MODULE) # if defined(__mc68000__) || defined(CONFIG_APUS) if (ide_hwifs[0].io_ports[IDE_DATA_OFFSET]) { ide_get_lock(&ide_intr_lock, NULL, NULL);/* for atari only */ @@ -3714,8 +3700,8 @@ initializing = 0; - for (i = 0; i < MAX_HWIFS; ++i) { - ide_hwif_t *hwif = &ide_hwifs[i]; + for (h = 0; h < MAX_HWIFS; ++h) { + ide_hwif_t *hwif = &ide_hwifs[h]; if (hwif->present) ide_geninit(hwif); } @@ -3750,21 +3736,17 @@ static void __exit cleanup_ata (void) { - int index; + int h; unregister_reboot_notifier(&ide_notifier); - for (index = 0; index < MAX_HWIFS; ++index) { - ide_unregister(index); -# if defined(CONFIG_BLK_DEV_IDEDMA) && !defined(CONFIG_DMA_NONPCI) - if (ide_hwifs[index].dma_base) - ide_release_dma(&ide_hwifs[index]); -# endif /* (CONFIG_BLK_DEV_IDEDMA) && !(CONFIG_DMA_NONPCI) */ + for (h = 0; h < MAX_HWIFS; ++h) { + ide_unregister(&ide_hwifs[h]); } # ifdef CONFIG_PROC_FS proc_ide_destroy(); # endif - devfs_unregister (ide_devfs_handle); + devfs_unregister(ide_devfs_handle); } module_init(init_ata); diff -Nru a/drivers/ide/sis5513.c b/drivers/ide/sis5513.c --- a/drivers/ide/sis5513.c Tue Mar 12 13:58:15 2002 +++ b/drivers/ide/sis5513.c Tue Mar 12 13:58:15 2002 @@ -1,11 +1,35 @@ /* - * linux/drivers/ide/sis5513.c Version 0.11 June 9, 2000 + * linux/drivers/ide/sis5513.c Version 0.13 March 4, 2002 * * Copyright (C) 1999-2000 Andre Hedrick + * Copyright (C) 2002 Lionel Bouton , Maintainer * May be copied or modified under the terms of the GNU General Public License * - * Thanks to SIS Taiwan for direct support and hardware. - * Tested and designed on the SiS620/5513 chipset. +*/ + +/* Thanks : + * For direct support and hardware : SiS Taiwan. + * For ATA100 support advice : Daniela Engert. + * For checking code correctness, providing patches : + * John Fremlin, Manfred Spraul + */ + +/* + * Original tests and design on the SiS620/5513 chipset. + * ATA100 tests and design on the SiS735/5513 chipset. + * ATA16/33 design from specs + */ + +/* + * TODO: + * - Get ridden of SisHostChipInfo[] completness dependancy. + * - Get ATA-133 datasheets, implement ATA-133 init code. + * - Are there pre-ATA_16 SiS chips ? -> tune init code for them + * or remove ATA_00 define + * - More checks in the config registers (force values instead of + * relying on the BIOS setting them correctly). + * - Further optimisations ? + * . 
for example ATA66+ regs 0x48 & 0x4A */ #include @@ -28,88 +52,184 @@ #include "ide_modes.h" +// #define DEBUG +/* if BROKEN_LEVEL is defined it limits the DMA mode + at boot time to its value */ +// #define BROKEN_LEVEL XFER_SW_DMA_0 #define DISPLAY_SIS_TIMINGS -#define SIS5513_DEBUG_DRIVE_INFO 0 -static struct pci_dev *host_dev = NULL; +/* Miscellaneaous flags */ +#define SIS5513_LATENCY 0x01 +/* ATA transfer mode capabilities */ +#define ATA_00 0x00 +#define ATA_16 0x01 +#define ATA_33 0x02 +#define ATA_66 0x03 +#define ATA_100a 0x04 +#define ATA_100 0x05 +#define ATA_133 0x06 + +static unsigned char dma_capability = 0x00; -#define SIS5513_FLAG_ATA_00 0x00000000 -#define SIS5513_FLAG_ATA_16 0x00000001 -#define SIS5513_FLAG_ATA_33 0x00000002 -#define SIS5513_FLAG_ATA_66 0x00000004 -#define SIS5513_FLAG_LATENCY 0x00000010 +/* + * Debug code: following IDE config registers' changes + */ +#ifdef DEBUG +/* Copy of IDE Config registers 0x00 -> 0x58 + Fewer might be used depending on the actual chipset */ +static unsigned char ide_regs_copy[] = { + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0 +}; + +static byte sis5513_max_config_register(void) { + switch(dma_capability) { + case ATA_00: + case ATA_16: return 0x4f; + case ATA_33: return 0x52; + case ATA_66: + case ATA_100a: + case ATA_100: + case ATA_133: + default: return 0x57; + } +} + +/* Read config registers, print differences from previous read */ +static void sis5513_load_verify_registers(struct pci_dev* dev, char* info) { + int i; + byte reg_val; + byte changed=0; + byte max = sis5513_max_config_register(); + + printk("SIS5513: %s, changed registers:\n", info); + for(i=0; i<=max; i++) { + pci_read_config_byte(dev, i, ®_val); + if (reg_val != ide_regs_copy[i]) { + printk("%0#x: %0#x -> %0#x\n", + i, ide_regs_copy[i], reg_val); + ide_regs_copy[i]=reg_val; + changed=1; + } + } + + if (!changed) { + printk("none\n"); + } +} + +/* Load config registers, no printing */ +static void sis5513_load_registers(struct pci_dev* dev) { + int i; + byte max = sis5513_max_config_register(); + + for(i=0; i<=max; i++) { + pci_read_config_byte(dev, i, &(ide_regs_copy[i])); + } +} + +/* Print a register */ +static void sis5513_print_register(int reg) { + printk(" %0#x:%0#x", reg, ide_regs_copy[reg]); +} + +/* Print valuable registers */ +static void sis5513_print_registers(struct pci_dev* dev, char* marker) { + int i; + byte max = sis5513_max_config_register(); + + sis5513_load_registers(dev); + printk("SIS5513 %s\n", marker); + printk("SIS5513 dump:"); + for(i=0x00; i<0x40; i++) { + if ((i % 0x10)==0) printk("\n "); + sis5513_print_register(i); + } + for(; i<49; i++) { + sis5513_print_register(i); + } + printk("\n "); + + for(; i<=max; i++) { + sis5513_print_register(i); + } + printk("\n"); +} +#endif + + +/* + * Devices supported + */ static const struct { const char *name; unsigned short host_id; - unsigned int flags; + unsigned char dma_capability; + unsigned char flags; } SiSHostChipInfo[] = { - { "SiS530", PCI_DEVICE_ID_SI_530, SIS5513_FLAG_ATA_66, }, - { "SiS540", PCI_DEVICE_ID_SI_540, SIS5513_FLAG_ATA_66, }, - { "SiS620", 
PCI_DEVICE_ID_SI_620, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS630", PCI_DEVICE_ID_SI_630, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS635", PCI_DEVICE_ID_SI_635, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS640", PCI_DEVICE_ID_SI_640, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS645", PCI_DEVICE_ID_SI_645, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS650", PCI_DEVICE_ID_SI_650, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS730", PCI_DEVICE_ID_SI_730, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS735", PCI_DEVICE_ID_SI_735, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS740", PCI_DEVICE_ID_SI_740, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS745", PCI_DEVICE_ID_SI_745, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS750", PCI_DEVICE_ID_SI_750, SIS5513_FLAG_ATA_66|SIS5513_FLAG_LATENCY, }, - { "SiS5591", PCI_DEVICE_ID_SI_5591, SIS5513_FLAG_ATA_33, }, - { "SiS5597", PCI_DEVICE_ID_SI_5597, SIS5513_FLAG_ATA_33, }, - { "SiS5600", PCI_DEVICE_ID_SI_5600, SIS5513_FLAG_ATA_33, }, - { "SiS5511", PCI_DEVICE_ID_SI_5511, SIS5513_FLAG_ATA_16, }, -}; - -#if 0 - -static struct _pio_mode_mapping { - byte data_active; - byte recovery; - byte pio_mode; -} pio_mode_mapping[] = { - { 8, 12, 0 }, - { 6, 7, 1 }, - { 4, 4, 2 }, - { 3, 3, 3 }, - { 3, 1, 4 } + { "SiS750", PCI_DEVICE_ID_SI_750, ATA_100, SIS5513_LATENCY }, + { "SiS745", PCI_DEVICE_ID_SI_745, ATA_100, SIS5513_LATENCY }, + { "SiS740", PCI_DEVICE_ID_SI_740, ATA_100, SIS5513_LATENCY }, + { "SiS735", PCI_DEVICE_ID_SI_735, ATA_100, SIS5513_LATENCY }, + { "SiS730", PCI_DEVICE_ID_SI_730, ATA_100a, SIS5513_LATENCY }, + { "SiS650", PCI_DEVICE_ID_SI_650, ATA_100, SIS5513_LATENCY }, + { "SiS645", PCI_DEVICE_ID_SI_645, ATA_100, SIS5513_LATENCY }, + { "SiS635", PCI_DEVICE_ID_SI_635, ATA_100, SIS5513_LATENCY }, + { "SiS640", PCI_DEVICE_ID_SI_640, ATA_66, SIS5513_LATENCY }, + { "SiS630", PCI_DEVICE_ID_SI_630, ATA_66, SIS5513_LATENCY }, + { "SiS620", PCI_DEVICE_ID_SI_620, ATA_66, SIS5513_LATENCY }, + { "SiS540", PCI_DEVICE_ID_SI_540, ATA_66, 0}, + { "SiS530", PCI_DEVICE_ID_SI_530, ATA_66, 0}, + { "SiS5600", PCI_DEVICE_ID_SI_5600, ATA_33, 0}, + { "SiS5598", PCI_DEVICE_ID_SI_5598, ATA_33, 0}, + { "SiS5597", PCI_DEVICE_ID_SI_5597, ATA_33, 0}, + { "SiS5591", PCI_DEVICE_ID_SI_5591, ATA_33, 0}, + { "SiS5513", PCI_DEVICE_ID_SI_5513, ATA_16, 0}, + { "SiS5511", PCI_DEVICE_ID_SI_5511, ATA_16, 0}, }; -static struct _dma_mode_mapping { - byte data_active; - byte recovery; - byte dma_mode; -} dma_mode_mapping[] = { - { 8, 8, 0 }, - { 3, 2, 1 }, - { 3, 1, 2 } +/* Cycle time bits and values vary accross chip dma capabilities + These three arrays hold the register layout and the values to set. 
+ Indexed by dma_capability and (dma_mode - XFER_UDMA_0) */ +static byte cycle_time_offset[] = {0,0,5,4,4,0,0}; +static byte cycle_time_range[] = {0,0,2,3,3,4,4}; +static byte cycle_time_value[][XFER_UDMA_5 - XFER_UDMA_0 + 1] = { + {0,0,0,0,0,0}, /* no udma */ + {0,0,0,0,0,0}, /* no udma */ + {3,2,1,0,0,0}, + {7,5,3,2,1,0}, + {7,5,3,2,1,0}, + {11,7,5,4,2,1}, + {0,0,0,0,0,0} /* not yet known, ask SiS */ }; -static struct _udma_mode_mapping { - byte cycle_time; - char * udma_mode; -} udma_mode_mapping[] = { - { 8, "Mode 0" }, - { 6, "Mode 1" }, - { 4, "Mode 2" }, - { 3, "Mode 3" }, - { 2, "Mode 4" }, - { 0, "Mode 5" } -}; +static struct pci_dev *host_dev = NULL; -static __inline__ char * find_udma_mode (byte cycle_time) -{ - int n; - - for (n = 0; n <= 4; n++) - if (udma_mode_mapping[n].cycle_time <= cycle_time) - return udma_mode_mapping[n].udma_mode; - return udma_mode_mapping[4].udma_mode; -} -#endif +/* + * Printing configuration + */ #if defined(DISPLAY_SIS_TIMINGS) && defined(CONFIG_PROC_FS) #include #include @@ -118,12 +238,12 @@ extern int (*sis_display_info)(char *, char **, off_t, int); /* ide-proc.c */ static struct pci_dev *bmide_dev; -static char *cable_type[] = { +static char* cable_type[] = { "80 pins", "40 pins" }; -static char *recovery_time [] ={ +static char* recovery_time[] ={ "12 PCICLK", "1 PCICLK", "2 PCICLK", "3 PCICLK", "4 PCICLK", "5 PCICLCK", @@ -134,101 +254,184 @@ "15 PCICLK", "15 PCICLK" }; -static char * cycle_time [] = { - "2 CLK", "2 CLK", - "3 CLK", "4 CLK", - "5 CLK", "6 CLK", - "7 CLK", "8 CLK" -}; - -static char * active_time [] = { +static char* active_time[] = { "8 PCICLK", "1 PCICLCK", - "2 PCICLK", "2 PCICLK", + "2 PCICLK", "3 PCICLK", "4 PCICLK", "5 PCICLK", "6 PCICLK", "12 PCICLK" }; +static char* cycle_time[] = { + "Reserved", "2 CLK", + "3 CLK", "4 CLK", + "5 CLK", "6 CLK", + "7 CLK", "8 CLK", + "9 CLK", "10 CLK", + "11 CLK", "12 CLK", + "Reserved", "Reserved", + "Reserved", "Reserved" +}; + +/* Generic add master or slave info function */ +static char* get_drives_info (char *buffer, byte pos) +{ + byte reg00, reg01, reg10, reg11; /* timing registers */ + char* p = buffer; + +/* Postwrite/Prefetch */ + pci_read_config_byte(bmide_dev, 0x4b, ®00); + p += sprintf(p, "Drive %d: Postwrite %s \t \t Postwrite %s\n", + pos, (reg00 & (0x10 << pos)) ? "Enabled" : "Disabled", + (reg00 & (0x40 << pos)) ? "Enabled" : "Disabled"); + p += sprintf(p, " Prefetch %s \t \t Prefetch %s\n", + (reg00 & (0x01 << pos)) ? "Enabled" : "Disabled", + (reg00 & (0x04 << pos)) ? "Enabled" : "Disabled"); + + pci_read_config_byte(bmide_dev, 0x40+2*pos, ®00); + pci_read_config_byte(bmide_dev, 0x41+2*pos, ®01); + pci_read_config_byte(bmide_dev, 0x44+2*pos, ®10); + pci_read_config_byte(bmide_dev, 0x45+2*pos, ®11); + +/* UDMA */ + if (dma_capability >= ATA_33) { + p += sprintf(p, " UDMA %s \t \t \t UDMA %s\n", + (reg01 & 0x80) ? "Enabled" : "Disabled", + (reg11 & 0x80) ? 
"Enabled" : "Disabled"); + + p += sprintf(p, " UDMA Cycle Time "); + switch(dma_capability) { + case ATA_33: p += sprintf(p, cycle_time[(reg01 & 0x60) >> 5]); break; + case ATA_66: + case ATA_100a: p += sprintf(p, cycle_time[(reg01 & 0x70) >> 4]); break; + case ATA_100: p += sprintf(p, cycle_time[reg01 & 0x0F]); break; + case ATA_133: + default: p += sprintf(p, "133+ ?"); break; + } + p += sprintf(p, " \t UDMA Cycle Time "); + switch(dma_capability) { + case ATA_33: p += sprintf(p, cycle_time[(reg11 & 0x60) >> 5]); break; + case ATA_66: + case ATA_100a: p += sprintf(p, cycle_time[(reg11 & 0x70) >> 4]); break; + case ATA_100: p += sprintf(p, cycle_time[reg11 & 0x0F]); break; + case ATA_133: + default: p += sprintf(p, "133+ ?"); break; + } + p += sprintf(p, "\n"); + } + +/* Data Active */ + p += sprintf(p, " Data Active Time "); + switch(dma_capability) { + case ATA_00: + case ATA_16: /* confirmed */ + case ATA_33: + case ATA_66: + case ATA_100a: p += sprintf(p, active_time[reg01 & 0x07]); break; + case ATA_100: p += sprintf(p, active_time[(reg00 & 0x70) >> 4]); break; + case ATA_133: + default: p += sprintf(p, "133+ ?"); break; + } + p += sprintf(p, " \t Data Active Time "); + switch(dma_capability) { + case ATA_00: + case ATA_16: + case ATA_33: + case ATA_66: + case ATA_100a: p += sprintf(p, active_time[reg11 & 0x07]); break; + case ATA_100: p += sprintf(p, active_time[(reg10 & 0x70) >> 4]); break; + case ATA_133: + default: p += sprintf(p, "133+ ?"); break; + } + p += sprintf(p, "\n"); + +/* Data Recovery */ + /* warning: may need (reg&0x07) for pre ATA66 chips */ + p += sprintf(p, " Data Recovery Time %s \t Data Recovery Time %s\n", + recovery_time[reg00 & 0x0f], recovery_time[reg10 & 0x0f]); + + return p; +} + +static char* get_masters_info(char* buffer) +{ + return get_drives_info(buffer, 0); +} + +static char* get_slaves_info(char* buffer) +{ + return get_drives_info(buffer, 1); +} + +/* Main get_info, called on /proc/ide/sis reads */ static int sis_get_info (char *buffer, char **addr, off_t offset, int count) { - int rc; char *p = buffer; - byte reg,reg1; + byte reg; u16 reg2, reg3; + p += sprintf(p, "\nSiS 5513 "); + switch(dma_capability) { + case ATA_00: p += sprintf(p, "Unknown???"); break; + case ATA_16: p += sprintf(p, "DMA 16"); break; + case ATA_33: p += sprintf(p, "Ultra 33"); break; + case ATA_66: p += sprintf(p, "Ultra 66"); break; + case ATA_100a: + case ATA_100: p += sprintf(p, "Ultra 100"); break; + case ATA_133: + default: p+= sprintf(p, "Ultra 133+"); break; + } + p += sprintf(p, " chipset\n"); p += sprintf(p, "--------------- Primary Channel ---------------- Secondary Channel -------------\n"); - rc = pci_read_config_byte(bmide_dev, 0x4a, ®); - p += sprintf(p, "Channel Status: %s \t \t \t \t %s \n", - (reg & 0x02) ? "On" : "Off", - (reg & 0x04) ? "On" : "Off"); - - rc = pci_read_config_byte(bmide_dev, 0x09, ®); + +/* Status */ + pci_read_config_byte(bmide_dev, 0x4a, ®); + p += sprintf(p, "Channel Status: "); + if (dma_capability < ATA_66) { + p += sprintf(p, "%s \t \t \t \t %s\n", + (reg & 0x04) ? "On" : "Off", + (reg & 0x02) ? "On" : "Off"); + } else { + p += sprintf(p, "%s \t \t \t \t %s \n", + (reg & 0x02) ? "On" : "Off", + (reg & 0x04) ? "On" : "Off"); + } + +/* Operation Mode */ + pci_read_config_byte(bmide_dev, 0x09, ®); p += sprintf(p, "Operation Mode: %s \t \t \t %s \n", (reg & 0x01) ? "Native" : "Compatible", (reg & 0x04) ? 
"Native" : "Compatible"); - - rc = pci_read_config_byte(bmide_dev, 0x48, ®); - p += sprintf(p, "Cable Type: %s \t \t \t %s\n", - (reg & 0x10) ? cable_type[1] : cable_type[0], - (reg & 0x20) ? cable_type[1] : cable_type[0]); - - rc = pci_read_config_word(bmide_dev, 0x4c, ®2); - rc = pci_read_config_word(bmide_dev, 0x4e, ®3); - p += sprintf(p, "Prefetch Count: %d \t \t \t \t %d\n", - reg2, reg3); - - rc = pci_read_config_byte(bmide_dev, 0x4b, ®); - p += sprintf(p, "Drive 0: Postwrite %s \t \t Postwrite %s\n", - (reg & 0x10) ? "Enabled" : "Disabled", - (reg & 0x40) ? "Enabled" : "Disabled"); - p += sprintf(p, " Prefetch %s \t \t Prefetch %s\n", - (reg & 0x01) ? "Enabled" : "Disabled", - (reg & 0x04) ? "Enabled" : "Disabled"); - - rc = pci_read_config_byte(bmide_dev, 0x41, ®); - rc = pci_read_config_byte(bmide_dev, 0x45, ®1); - p += sprintf(p, " UDMA %s \t \t \t UDMA %s\n", - (reg & 0x80) ? "Enabled" : "Disabled", - (reg1 & 0x80) ? "Enabled" : "Disabled"); - p += sprintf(p, " UDMA Cycle Time %s \t UDMA Cycle Time %s\n", - cycle_time[(reg & 0x70) >> 4], cycle_time[(reg1 & 0x70) >> 4]); - p += sprintf(p, " Data Active Time %s \t Data Active Time %s\n", - active_time[(reg & 0x07)], active_time[(reg1 &0x07)] ); - - rc = pci_read_config_byte(bmide_dev, 0x40, ®); - rc = pci_read_config_byte(bmide_dev, 0x44, ®1); - p += sprintf(p, " Data Recovery Time %s \t Data Recovery Time %s\n", - recovery_time[(reg & 0x0f)], recovery_time[(reg1 & 0x0f)]); +/* 80-pin cable ? */ + if (dma_capability > ATA_33) { + pci_read_config_byte(bmide_dev, 0x48, ®); + p += sprintf(p, "Cable Type: %s \t \t \t %s\n", + (reg & 0x10) ? cable_type[1] : cable_type[0], + (reg & 0x20) ? cable_type[1] : cable_type[0]); + } - rc = pci_read_config_byte(bmide_dev, 0x4b, ®); - p += sprintf(p, "Drive 1: Postwrite %s \t \t Postwrite %s\n", - (reg & 0x20) ? "Enabled" : "Disabled", - (reg & 0x80) ? "Enabled" : "Disabled"); - p += sprintf(p, " Prefetch %s \t \t Prefetch %s\n", - (reg & 0x02) ? "Enabled" : "Disabled", - (reg & 0x08) ? "Enabled" : "Disabled"); +/* Prefetch Count */ + pci_read_config_word(bmide_dev, 0x4c, ®2); + pci_read_config_word(bmide_dev, 0x4e, ®3); + p += sprintf(p, "Prefetch Count: %d \t \t \t \t %d\n", + reg2, reg3); - rc = pci_read_config_byte(bmide_dev, 0x43, ®); - rc = pci_read_config_byte(bmide_dev, 0x47, ®1); - p += sprintf(p, " UDMA %s \t \t \t UDMA %s\n", - (reg & 0x80) ? "Enabled" : "Disabled", - (reg1 & 0x80) ? 
"Enabled" : "Disabled"); - p += sprintf(p, " UDMA Cycle Time %s \t UDMA Cycle Time %s\n", - cycle_time[(reg & 0x70) >> 4], cycle_time[(reg1 & 0x70) >> 4]); - p += sprintf(p, " Data Active Time %s \t Data Active Time %s\n", - active_time[(reg & 0x07)], active_time[(reg1 &0x07)] ); + p = get_masters_info(p); + p = get_slaves_info(p); - rc = pci_read_config_byte(bmide_dev, 0x42, ®); - rc = pci_read_config_byte(bmide_dev, 0x46, ®1); - p += sprintf(p, " Data Recovery Time %s \t Data Recovery Time %s\n", - recovery_time[(reg & 0x0f)], recovery_time[(reg1 & 0x0f)]); return p-buffer; } #endif /* defined(DISPLAY_SIS_TIMINGS) && defined(CONFIG_PROC_FS) */ + byte sis_proc = 0; extern char *ide_xfer_verbose (byte xfer_rate); + +/* + * Configuration functions + */ +/* Enables per-drive prefetch and postwrite */ static void config_drive_art_rwp (ide_drive_t *drive) { ide_hwif_t *hwif = HWIF(drive); @@ -237,14 +440,24 @@ byte reg4bh = 0; byte rw_prefetch = (0x11 << drive->dn); - pci_read_config_byte(dev, 0x4b, ®4bh); +#ifdef DEBUG + printk("SIS5513: config_drive_art_rwp, drive %d\n", drive->dn); + sis5513_load_verify_registers(dev, "config_drive_art_rwp start"); +#endif + if (drive->type != ATA_DISK) return; - + pci_read_config_byte(dev, 0x4b, ®4bh); + if ((reg4bh & rw_prefetch) != rw_prefetch) pci_write_config_byte(dev, 0x4b, reg4bh|rw_prefetch); +#ifdef DEBUG + sis5513_load_verify_registers(dev, "config_drive_art_rwp end"); +#endif } + +/* Set per-drive active and recovery time */ static void config_art_rwp_pio (ide_drive_t *drive, byte pio) { ide_hwif_t *hwif = HWIF(drive); @@ -255,6 +468,10 @@ unsigned short eide_pio_timing[6] = {600, 390, 240, 180, 120, 90}; unsigned short xfer_pio = drive->id->eide_pio_modes; +#ifdef DEBUG + sis5513_load_verify_registers(dev, "config_drive_art_rwp_pio start"); +#endif + config_drive_art_rwp(drive); pio = ide_get_best_pio_mode(drive, 255, pio, NULL); @@ -263,8 +480,8 @@ if (drive->id->eide_pio_iordy > 0) { for (xfer_pio = 5; - xfer_pio>0 && - drive->id->eide_pio_iordy>eide_pio_timing[xfer_pio]; + (xfer_pio > 0) && + (drive->id->eide_pio_iordy > eide_pio_timing[xfer_pio]); xfer_pio--); } else { xfer_pio = (drive->id->eide_pio_modes & 4) ? 0x05 : @@ -274,14 +491,10 @@ timing = (xfer_pio >= pio) ? xfer_pio : pio; -/* - * Mode 0 Mode 1 Mode 2 Mode 3 Mode 4 - * Active time 8T (240ns) 6T (180ns) 4T (120ns) 3T (90ns) 3T (90ns) - * 0x41 2:0 bits 000 110 100 011 011 - * Recovery time 12T (360ns) 7T (210ns) 4T (120ns) 3T (90ns) 1T (30ns) - * 0x40 3:0 bits 0000 0111 0100 0011 0001 - * Cycle time 20T (600ns) 13T (390ns) 8T (240ns) 6T (180ns) 4T (120ns) - */ +#ifdef DEBUG + printk("SIS5513: config_drive_art_rwp_pio, drive %d, pio %d, timing %d\n", + drive->dn, pio, timing); +#endif switch(drive->dn) { case 0: drive_pci = 0x40; break; @@ -291,31 +504,43 @@ default: return; } - pci_read_config_byte(dev, drive_pci, &test1); - pci_read_config_byte(dev, drive_pci|0x01, &test2); - - /* - * Do a blanket clear of active and recovery timings. 
- */ - - test1 &= ~0x07; - test2 &= ~0x0F; - - switch(timing) { - case 4: test1 |= 0x01; test2 |= 0x03; break; - case 3: test1 |= 0x03; test2 |= 0x03; break; - case 2: test1 |= 0x04; test2 |= 0x04; break; - case 1: test1 |= 0x07; test2 |= 0x06; break; - default: break; + /* register layout changed with newer ATA100 chips */ + if (dma_capability < ATA_100) { + pci_read_config_byte(dev, drive_pci, &test1); + pci_read_config_byte(dev, drive_pci+1, &test2); + + /* Clear active and recovery timings */ + test1 &= ~0x0F; + test2 &= ~0x07; + + switch(timing) { + case 4: test1 |= 0x01; test2 |= 0x03; break; + case 3: test1 |= 0x03; test2 |= 0x03; break; + case 2: test1 |= 0x04; test2 |= 0x04; break; + case 1: test1 |= 0x07; test2 |= 0x06; break; + default: break; + } + pci_write_config_byte(dev, drive_pci, test1); + pci_write_config_byte(dev, drive_pci+1, test2); + } else { + switch(timing) { /* active recovery + v v */ + case 4: test1 = 0x30|0x01; break; + case 3: test1 = 0x30|0x03; break; + case 2: test1 = 0x40|0x04; break; + case 1: test1 = 0x60|0x07; break; + default: break; + } + pci_write_config_byte(dev, drive_pci, test1); } - pci_write_config_byte(dev, drive_pci, test1); - pci_write_config_byte(dev, drive_pci|0x01, test2); +#ifdef DEBUG + sis5513_load_verify_registers(dev, "config_drive_art_rwp_pio start"); +#endif } static int config_chipset_for_pio (ide_drive_t *drive, byte pio) { - int err; byte speed; switch(pio) { @@ -328,8 +553,7 @@ config_art_rwp_pio(drive, pio); drive->current_speed = speed; - err = ide_config_drive_speed(drive, speed); - return err; + return ide_config_drive_speed(drive, speed); } static int sis5513_tune_chipset (ide_drive_t *drive, byte speed) @@ -337,82 +561,73 @@ ide_hwif_t *hwif = HWIF(drive); struct pci_dev *dev = hwif->pci_dev; - byte drive_pci, test1, test2; - byte unmask, four_two, mask = 0; - - if (host_dev) { - switch(host_dev->device) { - case PCI_DEVICE_ID_SI_530: - case PCI_DEVICE_ID_SI_540: - case PCI_DEVICE_ID_SI_620: - case PCI_DEVICE_ID_SI_630: - case PCI_DEVICE_ID_SI_635: - case PCI_DEVICE_ID_SI_640: - case PCI_DEVICE_ID_SI_645: - case PCI_DEVICE_ID_SI_650: - case PCI_DEVICE_ID_SI_730: - case PCI_DEVICE_ID_SI_735: - case PCI_DEVICE_ID_SI_740: - case PCI_DEVICE_ID_SI_745: - case PCI_DEVICE_ID_SI_750: - unmask = 0xF0; - four_two = 0x01; - break; - default: - unmask = 0xE0; - four_two = 0x00; - break; - } - } else { - unmask = 0xE0; - four_two = 0x00; - } + byte drive_pci, reg; +#ifdef DEBUG + sis5513_load_verify_registers(dev, "sis5513_tune_chipset start"); + printk("SIS5513: sis5513_tune_chipset, drive %d, speed %d\n", + drive->dn, speed); +#endif switch(drive->dn) { - case 0: drive_pci = 0x40;break; - case 1: drive_pci = 0x42;break; - case 2: drive_pci = 0x44;break; - case 3: drive_pci = 0x46;break; + case 0: drive_pci = 0x40; break; + case 1: drive_pci = 0x42; break; + case 2: drive_pci = 0x44; break; + case 3: drive_pci = 0x46; break; default: return ide_dma_off; } - pci_read_config_byte(dev, drive_pci, &test1); - pci_read_config_byte(dev, drive_pci|0x01, &test2); +#ifdef BROKEN_LEVEL +#ifdef DEBUG + printk("SIS5513: BROKEN_LEVEL activated, speed=%d -> speed=%d\n", speed, BROKEN_LEVEL); +#endif + if (speed > BROKEN_LEVEL) speed = BROKEN_LEVEL; +#endif - if ((speed <= XFER_MW_DMA_2) && (test2 & 0x80)) { - pci_write_config_byte(dev, drive_pci|0x01, test2 & ~0x80); - pci_read_config_byte(dev, drive_pci|0x01, &test2); - } else { - pci_write_config_byte(dev, drive_pci|0x01, test2 & ~unmask); + pci_read_config_byte(dev, drive_pci+1, ®); + /* Disable 
UDMA bit for non UDMA modes on UDMA chips */ + if ((speed < XFER_UDMA_0) && (dma_capability > ATA_16)) { + reg &= 0x7F; + pci_write_config_byte(dev, drive_pci+1, reg); } + /* Config chip for mode */ switch(speed) { #ifdef CONFIG_BLK_DEV_IDEDMA - case XFER_UDMA_5: mask = 0x80; break; - case XFER_UDMA_4: mask = 0x90; break; - case XFER_UDMA_3: mask = 0xA0; break; - case XFER_UDMA_2: mask = (four_two) ? 0xB0 : 0xA0; break; - case XFER_UDMA_1: mask = (four_two) ? 0xD0 : 0xC0; break; - case XFER_UDMA_0: mask = unmask; break; + case XFER_UDMA_5: + case XFER_UDMA_4: + case XFER_UDMA_3: + case XFER_UDMA_2: + case XFER_UDMA_1: + case XFER_UDMA_0: + /* Force the UDMA bit on if we want to use UDMA */ + reg |= 0x80; + /* clean reg cycle time bits */ + reg &= ~((0xFF >> (8 - cycle_time_range[dma_capability])) + << cycle_time_offset[dma_capability]); + /* set reg cycle time bits */ + reg |= cycle_time_value[dma_capability-ATA_00][speed-XFER_UDMA_0] + << cycle_time_offset[dma_capability]; + pci_write_config_byte(dev, drive_pci+1, reg); + break; case XFER_MW_DMA_2: case XFER_MW_DMA_1: case XFER_MW_DMA_0: case XFER_SW_DMA_2: case XFER_SW_DMA_1: - case XFER_SW_DMA_0: break; + case XFER_SW_DMA_0: + break; #endif /* CONFIG_BLK_DEV_IDEDMA */ case XFER_PIO_4: return((int) config_chipset_for_pio(drive, 4)); case XFER_PIO_3: return((int) config_chipset_for_pio(drive, 3)); case XFER_PIO_2: return((int) config_chipset_for_pio(drive, 2)); case XFER_PIO_1: return((int) config_chipset_for_pio(drive, 1)); case XFER_PIO_0: - default: return((int) config_chipset_for_pio(drive, 0)); + default: return((int) config_chipset_for_pio(drive, 0)); } - - if (speed > XFER_MW_DMA_2) - pci_write_config_byte(dev, drive_pci|0x01, test2|mask); - drive->current_speed = speed; +#ifdef DEBUG + sis5513_load_verify_registers(dev, "sis5513_tune_chipset end"); +#endif return ((int) ide_config_drive_speed(drive, speed)); } @@ -430,47 +645,27 @@ struct hd_driveid *id = drive->id; ide_hwif_t *hwif = HWIF(drive); - byte four_two = 0, speed = 0; - int err; + byte speed = 0; byte unit = (drive->select.b.unit & 0x01); byte udma_66 = eighty_ninty_three(drive); - byte ultra_100 = 0; - if (host_dev) { - switch(host_dev->device) { - case PCI_DEVICE_ID_SI_635: - case PCI_DEVICE_ID_SI_640: - case PCI_DEVICE_ID_SI_645: - case PCI_DEVICE_ID_SI_650: - case PCI_DEVICE_ID_SI_730: - case PCI_DEVICE_ID_SI_735: - case PCI_DEVICE_ID_SI_740: - case PCI_DEVICE_ID_SI_745: - case PCI_DEVICE_ID_SI_750: - ultra_100 = 1; - case PCI_DEVICE_ID_SI_530: - case PCI_DEVICE_ID_SI_540: - case PCI_DEVICE_ID_SI_620: - case PCI_DEVICE_ID_SI_630: - four_two = 0x01; - break; - default: - four_two = 0x00; break; - } - } +#ifdef DEBUG + printk("SIS5513: config_chipset_for_dma, drive %d, ultra %d\n", + drive->dn, ultra); +#endif - if ((id->dma_ultra & 0x0020) && (ultra) && (udma_66) && (four_two) && (ultra_100)) + if ((id->dma_ultra & 0x0020) && ultra && udma_66 && (dma_capability >= ATA_100a)) speed = XFER_UDMA_5; - else if ((id->dma_ultra & 0x0010) && (ultra) && (udma_66) && (four_two)) + else if ((id->dma_ultra & 0x0010) && ultra && udma_66 && (dma_capability >= ATA_66)) speed = XFER_UDMA_4; - else if ((id->dma_ultra & 0x0008) && (ultra) && (udma_66) && (four_two)) + else if ((id->dma_ultra & 0x0008) && ultra && udma_66 && (dma_capability >= ATA_66)) speed = XFER_UDMA_3; - else if ((id->dma_ultra & 0x0004) && (ultra)) + else if ((id->dma_ultra & 0x0004) && ultra && (dma_capability >= ATA_33)) speed = XFER_UDMA_2; - else if ((id->dma_ultra & 0x0002) && (ultra)) + else if 
((id->dma_ultra & 0x0002) && ultra && (dma_capability >= ATA_33)) speed = XFER_UDMA_1; - else if ((id->dma_ultra & 0x0001) && (ultra)) + else if ((id->dma_ultra & 0x0001) && ultra && (dma_capability >= ATA_33)) speed = XFER_UDMA_0; else if (id->dma_mword & 0x0004) speed = XFER_MW_DMA_2; @@ -489,11 +684,7 @@ outb(inb(hwif->dma_base+2)|(1<<(5+unit)), hwif->dma_base+2); - err = sis5513_tune_chipset(drive, speed); - -#if SIS5513_DEBUG_DRIVE_INFO - printk("%s: %s drive%d\n", drive->name, ide_xfer_verbose(speed), drive->dn); -#endif /* SIS5513_DEBUG_DRIVE_INFO */ + sis5513_tune_chipset(drive, speed); return ((int) ((id->dma_ultra >> 11) & 7) ? ide_dma_on : ((id->dma_ultra >> 8) & 7) ? ide_dma_on : @@ -550,9 +741,7 @@ return HWIF(drive)->dmaproc(dma_func, drive); } -/* - * sis5513_dmaproc() initiates/aborts (U)DMA read/write operations on a drive. - */ +/* initiates/aborts (U)DMA read/write operations on a drive. */ int sis5513_dmaproc (ide_dma_action_t func, ide_drive_t *drive) { switch (func) { @@ -567,15 +756,18 @@ } #endif /* CONFIG_BLK_DEV_IDEDMA */ +/* Chip detection and general config */ unsigned int __init pci_init_sis5513(struct pci_dev *dev) { struct pci_dev *host; int i = 0; - byte latency = 0; - pci_read_config_byte(dev, PCI_LATENCY_TIMER, &latency); +#ifdef DEBUG + sis5513_print_registers(dev, "pci_init_sis5513 start"); +#endif - for (i = 0; i < ARRAY_SIZE (SiSHostChipInfo) && !host_dev; i++) { + /* Find the chip */ + for (i = 0; i < ARRAY_SIZE(SiSHostChipInfo) && !host_dev; i++) { host = pci_find_device (PCI_VENDOR_ID_SI, SiSHostChipInfo[i].host_id, NULL); @@ -583,30 +775,67 @@ continue; host_dev = host; + dma_capability = SiSHostChipInfo[i].dma_capability; printk(SiSHostChipInfo[i].name); printk("\n"); - if (SiSHostChipInfo[i].flags & SIS5513_FLAG_LATENCY) { - if (latency != 0x10) - pci_write_config_byte(dev, PCI_LATENCY_TIMER, 0x10); + + if (SiSHostChipInfo[i].flags & SIS5513_LATENCY) { + byte latency = (dma_capability == ATA_100)? 
0x80 : 0x10; /* Lacking specs */ + pci_write_config_byte(dev, PCI_LATENCY_TIMER, latency); } } + /* Make general config ops here + 1/ tell IDE channels to operate in Compabitility mode only + 2/ tell old chips to allow per drive IDE timings */ if (host_dev) { - byte reg52h = 0; - - pci_read_config_byte(dev, 0x52, ®52h); - if (!(reg52h & 0x04)) { - /* set IDE controller to operate in Compabitility mode only */ - pci_write_config_byte(dev, 0x52, reg52h|0x04); + byte reg; + switch(dma_capability) { + case ATA_133: + case ATA_100: + /* Set compatibility bit */ + pci_read_config_byte(dev, 0x49, ®); + if (!(reg & 0x01)) { + pci_write_config_byte(dev, 0x49, reg|0x01); + } + break; + case ATA_100a: + case ATA_66: + /* On ATA_66 chips the bit was elsewhere */ + pci_read_config_byte(dev, 0x52, ®); + if (!(reg & 0x04)) { + pci_write_config_byte(dev, 0x52, reg|0x04); + } + break; + case ATA_33: + /* On ATA_33 we didn't have a single bit to set */ + pci_read_config_byte(dev, 0x09, ®); + if ((reg & 0x0f) != 0x00) { + pci_write_config_byte(dev, 0x09, reg&0xf0); + } + case ATA_16: + /* force per drive recovery and active timings + needed on ATA_33 and below chips */ + pci_read_config_byte(dev, 0x52, ®); + if (!(reg & 0x08)) { + pci_write_config_byte(dev, 0x52, reg|0x08); + } + break; + case ATA_00: + default: break; } + #if defined(DISPLAY_SIS_TIMINGS) && defined(CONFIG_PROC_FS) if (!sis_proc) { sis_proc = 1; bmide_dev = dev; sis_display_info = &sis_get_info; } -#endif /* defined(DISPLAY_SIS_TIMINGS) && defined(CONFIG_PROC_FS) */ +#endif } +#ifdef DEBUG + sis5513_load_verify_registers(dev, "pci_init_sis5513 end"); +#endif return 0; } @@ -616,27 +845,10 @@ byte mask = hwif->channel ? 0x20 : 0x10; pci_read_config_byte(hwif->pci_dev, 0x48, ®48h); - if (host_dev) { - switch(host_dev->device) { - case PCI_DEVICE_ID_SI_530: - case PCI_DEVICE_ID_SI_540: - case PCI_DEVICE_ID_SI_620: - case PCI_DEVICE_ID_SI_630: - case PCI_DEVICE_ID_SI_635: - case PCI_DEVICE_ID_SI_640: - case PCI_DEVICE_ID_SI_645: - case PCI_DEVICE_ID_SI_650: - case PCI_DEVICE_ID_SI_730: - case PCI_DEVICE_ID_SI_735: - case PCI_DEVICE_ID_SI_740: - case PCI_DEVICE_ID_SI_745: - case PCI_DEVICE_ID_SI_750: - ata66 = (reg48h & mask) ? 0 : 1; - default: - break; - } + if (dma_capability >= ATA_66) { + ata66 = (reg48h & mask) ? 0 : 1; } - return (ata66); + return ata66; } void __init ide_init_sis5513 (ide_hwif_t *hwif) @@ -651,34 +863,17 @@ return; if (host_dev) { - switch(host_dev->device) { #ifdef CONFIG_BLK_DEV_IDEDMA - case PCI_DEVICE_ID_SI_530: - case PCI_DEVICE_ID_SI_540: - case PCI_DEVICE_ID_SI_620: - case PCI_DEVICE_ID_SI_630: - case PCI_DEVICE_ID_SI_635: - case PCI_DEVICE_ID_SI_640: - case PCI_DEVICE_ID_SI_645: - case PCI_DEVICE_ID_SI_650: - case PCI_DEVICE_ID_SI_730: - case PCI_DEVICE_ID_SI_735: - case PCI_DEVICE_ID_SI_740: - case PCI_DEVICE_ID_SI_745: - case PCI_DEVICE_ID_SI_750: - case PCI_DEVICE_ID_SI_5600: - case PCI_DEVICE_ID_SI_5597: - case PCI_DEVICE_ID_SI_5591: - if (!noautodma) - hwif->autodma = 1; - hwif->highmem = 1; - hwif->dmaproc = &sis5513_dmaproc; - break; -#endif /* CONFIG_BLK_DEV_IDEDMA */ - default: - hwif->autodma = 0; - break; + if (dma_capability > ATA_16) { + hwif->autodma = noautodma ? 
0 : 1; + hwif->highmem = 1; + hwif->dmaproc = &sis5513_dmaproc; + } else { +#endif + hwif->autodma = 0; +#ifdef CONFIG_BLK_DEV_IDEDMA } +#endif } return; } diff -Nru a/drivers/ide/via82cxxx.c b/drivers/ide/via82cxxx.c --- a/drivers/ide/via82cxxx.c Tue Mar 12 13:58:15 2002 +++ b/drivers/ide/via82cxxx.c Tue Mar 12 13:58:15 2002 @@ -1,5 +1,5 @@ /* - * $Id: via82cxxx.c,v 3.33 2001/12/23 22:46:12 vojtech Exp $ + * $Id: via82cxxx.c,v 3.34 2002/02/12 11:26:11 vojtech Exp $ * * Copyright (c) 2000-2001 Vojtech Pavlik * @@ -163,7 +163,7 @@ via_print("----------VIA BusMastering IDE Configuration----------------"); - via_print("Driver Version: 3.33"); + via_print("Driver Version: 3.34"); via_print("South Bridge: VIA %s", via_config->name); pci_read_config_byte(isa_dev, PCI_REVISION_ID, &t); @@ -495,7 +495,7 @@ if (via_clock < 20000 || via_clock > 50000) { printk(KERN_WARNING "VP_IDE: User given PCI clock speed impossible (%d), using 33 MHz instead.\n", via_clock); printk(KERN_WARNING "VP_IDE: Use ide0=ata66 if you want to assume 80-wire cable.\n"); - via_clock = 33; + via_clock = 33333; } /* diff -Nru a/drivers/isdn/avmb1/capifs.c b/drivers/isdn/avmb1/capifs.c --- a/drivers/isdn/avmb1/capifs.c Tue Mar 12 13:58:15 2002 +++ b/drivers/isdn/avmb1/capifs.c Tue Mar 12 13:58:15 2002 @@ -387,6 +387,7 @@ owner: THIS_MODULE, name: "capifs", get_sb: capifs_get_sb, + kill_sb: kill_anon_super, }; void capifs_new_ncci(char type, unsigned int num, kdev_t device) diff -Nru a/drivers/media/radio/miropcm20-radio.c b/drivers/media/radio/miropcm20-radio.c --- a/drivers/media/radio/miropcm20-radio.c Tue Mar 12 13:58:15 2002 +++ b/drivers/media/radio/miropcm20-radio.c Tue Mar 12 13:58:15 2002 @@ -22,7 +22,7 @@ #include #include #include -#include "../../sound/aci.h" +#include "../../../sound/oss/aci.h" #include "miropcm20-rds-core.h" static int users = 0; diff -Nru a/drivers/media/radio/miropcm20-rds-core.c b/drivers/media/radio/miropcm20-rds-core.c --- a/drivers/media/radio/miropcm20-rds-core.c Tue Mar 12 13:58:16 2002 +++ b/drivers/media/radio/miropcm20-rds-core.c Tue Mar 12 13:58:16 2002 @@ -21,7 +21,7 @@ #include #include #include -#include "../../sound/aci.h" +#include "../../../sound/oss/aci.h" #include "miropcm20-rds-core.h" #define DEBUG 0 diff -Nru a/drivers/media/video/videodev.c b/drivers/media/video/videodev.c --- a/drivers/media/video/videodev.c Tue Mar 12 13:58:15 2002 +++ b/drivers/media/video/videodev.c Tue Mar 12 13:58:15 2002 @@ -25,15 +25,13 @@ #include #include #include -#include #include - +#include #include #include #include -#include - +#include #define VIDEO_NUM_DEVICES 256 @@ -42,6 +40,7 @@ */ static struct video_device *video_device[VIDEO_NUM_DEVICES]; +static DECLARE_MUTEX(videodev_lock); #if defined(CONFIG_PROC_FS) && defined(CONFIG_VIDEO_PROC_FS) @@ -62,155 +61,138 @@ #endif /* CONFIG_PROC_FS && CONFIG_VIDEO_PROC_FS */ - -/* - * Read will do some smarts later on. Buffer pin etc. - */ - -static ssize_t video_read(struct file *file, - char *buf, size_t count, loff_t *ppos) +struct video_device* video_devdata(struct file *file) { - struct video_device *vfl=video_device[minor(file->f_dentry->d_inode->i_rdev)]; - if(vfl->read) - return vfl->read(vfl, buf, count, file->f_flags&O_NONBLOCK); - else - return -EINVAL; + return video_device[minor(file->f_dentry->d_inode->i_rdev)]; } - -/* - * Write for now does nothing. No reason it shouldnt do overlay setting - * for some boards I guess.. 
- */ - -static ssize_t video_write(struct file *file, const char *buf, - size_t count, loff_t *ppos) -{ - struct video_device *vfl=video_device[minor(file->f_dentry->d_inode->i_rdev)]; - if(vfl->write) - return vfl->write(vfl, buf, count, file->f_flags&O_NONBLOCK); - else - return 0; -} - -/* - * Poll to see if we're readable, can probably be used for timing on incoming - * frames, etc.. - */ - -static unsigned int video_poll(struct file *file, poll_table * wait) -{ - struct video_device *vfl=video_device[minor(file->f_dentry->d_inode->i_rdev)]; - if(vfl->poll) - return vfl->poll(vfl, file, wait); - else - return 0; -} - - /* * Open a video device. */ - static int video_open(struct inode *inode, struct file *file) { unsigned int minor = minor(inode->i_rdev); - int err, retval = 0; + int err = 0; struct video_device *vfl; + struct file_operations *old_fops; if(minor>=VIDEO_NUM_DEVICES) return -ENODEV; - lock_kernel(); + down(&videodev_lock); vfl=video_device[minor]; if(vfl==NULL) { char modname[20]; + up(&videodev_lock); sprintf (modname, "char-major-%d-%d", VIDEO_MAJOR, minor); request_module(modname); + down(&videodev_lock); vfl=video_device[minor]; if (vfl==NULL) { - retval = -ENODEV; - goto error_out; - } - } - if(vfl->busy) { - retval = -EBUSY; - goto error_out; - } - vfl->busy=1; /* In case vfl->open sleeps */ - - if(vfl->owner) - __MOD_INC_USE_COUNT(vfl->owner); - - if(vfl->open) - { - err=vfl->open(vfl,0); /* Tell the device it is open */ - if(err) - { - vfl->busy=0; - if(vfl->owner) - __MOD_DEC_USE_COUNT(vfl->owner); - - unlock_kernel(); - return err; + up(&videodev_lock); + return -ENODEV; } } - unlock_kernel(); - return 0; -error_out: - unlock_kernel(); - return retval; + old_fops = file->f_op; + file->f_op = fops_get(vfl->fops); + if(file->f_op->open) + err = file->f_op->open(inode,file); + if (err) { + fops_put(file->f_op); + file->f_op = fops_get(old_fops); + } + fops_put(old_fops); + up(&videodev_lock); + return err; } /* - * Last close of a video for Linux device + * ioctl helper function -- handles userspace copying */ +int +video_generic_ioctl(struct inode *inode, struct file *file, + unsigned int cmd, unsigned long arg) +{ + struct video_device *vfl = video_devdata(file); + char sbuf[128]; + void *mbuf = NULL; + void *parg = NULL; + int err = -EINVAL; -static int video_release(struct inode *inode, struct file *file) -{ - struct video_device *vfl; - lock_kernel(); - vfl=video_device[minor(inode->i_rdev)]; - if(vfl->close) - vfl->close(vfl); - vfl->busy=0; - if(vfl->owner) - __MOD_DEC_USE_COUNT(vfl->owner); - unlock_kernel(); - return 0; -} + if (vfl->kernel_ioctl == NULL) + return -EINVAL; -static int video_ioctl(struct inode *inode, struct file *file, - unsigned int cmd, unsigned long arg) -{ - struct video_device *vfl=video_device[minor(inode->i_rdev)]; - int err=vfl->ioctl(vfl, cmd, (void *)arg); + /* Copy arguments into temp kernel buffer */ + switch (_IOC_DIR(cmd)) { + case _IOC_NONE: + parg = (void *)arg; + break; + case _IOC_READ: /* some v4l ioctls are marked wrong ... 
*/ + case _IOC_WRITE: + case (_IOC_WRITE | _IOC_READ): + if (_IOC_SIZE(cmd) <= sizeof(sbuf)) { + parg = sbuf; + } else { + /* too big to allocate from stack */ + mbuf = kmalloc(_IOC_SIZE(cmd),GFP_KERNEL); + if (NULL == mbuf) + return -ENOMEM; + parg = mbuf; + } + + err = -EFAULT; + if (copy_from_user(parg, (void *)arg, _IOC_SIZE(cmd))) + goto out; + break; + } - if(err!=-ENOIOCTLCMD) - return err; - - switch(cmd) + /* call driver */ + err = vfl->kernel_ioctl(inode, file, cmd, parg); + if (err == -ENOIOCTLCMD) + err = -EINVAL; + if (err < 0) + goto out; + + /* Copy results into user buffer */ + switch (_IOC_DIR(cmd)) { - default: - return -EINVAL; + case _IOC_READ: + case (_IOC_WRITE | _IOC_READ): + if (copy_to_user((void *)arg, parg, _IOC_SIZE(cmd))) + err = -EFAULT; + break; } + +out: + if (mbuf) + kfree(mbuf); + return err; } /* - * We need to do MMAP support + * open/release helper functions -- handle exclusive opens */ - -int video_mmap(struct file *file, struct vm_area_struct *vma) +extern int video_exclusive_open(struct inode *inode, struct file *file) { - int ret = -EINVAL; - struct video_device *vfl=video_device[minor(file->f_dentry->d_inode->i_rdev)]; - if(vfl->mmap) { - lock_kernel(); - ret = vfl->mmap(vma, vfl, (char *)vma->vm_start, - (unsigned long)(vma->vm_end-vma->vm_start)); - unlock_kernel(); + struct video_device *vfl = video_devdata(file); + int retval = 0; + + down(&vfl->lock); + if (vfl->users) { + retval = -EBUSY; + } else { + vfl->users++; } - return ret; + up(&vfl->lock); + return retval; +} + +extern int video_exclusive_release(struct inode *inode, struct file *file) +{ + struct video_device *vfl = video_devdata(file); + + vfl->users--; + return 0; } /* @@ -392,13 +374,10 @@ * %VFL_TYPE_RADIO - A radio card */ -static DECLARE_MUTEX(videodev_register_lock); - int video_register_device(struct video_device *vfd, int type, int nr) { int i=0; int base; - int err; int end; char *name_base; char name[16]; @@ -430,50 +409,36 @@ } /* pick a minor number */ - down(&videodev_register_lock); + down(&videodev_lock); if (-1 == nr) { /* use first free */ for(i=base;iminor=i; - up(&videodev_register_lock); + up(&videodev_lock); - /* The init call may sleep so we book the slot out - then call */ - MOD_INC_USE_COUNT; - if(vfd->initialize) { - err=vfd->initialize(vfd); - if(err<0) { - video_device[i]=NULL; - MOD_DEC_USE_COUNT; - return err; - } - } sprintf (name, "v4l/%s%d", name_base, i - base); - /* - * Start the device root only. Anything else - * has serious privacy issues. 
- */ vfd->devfs_handle = devfs_register (NULL, name, DEVFS_FL_DEFAULT, VIDEO_MAJOR, vfd->minor, S_IFCHR | S_IRUSR | S_IWUSR, &video_fops, NULL); + init_MUTEX(&vfd->lock); #if defined(CONFIG_PROC_FS) && defined(CONFIG_VIDEO_PROC_FS) sprintf (name, "%s%d", name_base, i - base); @@ -492,8 +457,9 @@ void video_unregister_device(struct video_device *vfd) { + down(&videodev_lock); if(video_device[vfd->minor]!=vfd) - panic("vfd: bad unregister"); + panic("videodev: bad unregister"); #if defined(CONFIG_PROC_FS) && defined(CONFIG_VIDEO_PROC_FS) videodev_proc_destroy_dev (vfd); @@ -501,7 +467,7 @@ devfs_unregister (vfd->devfs_handle); video_device[vfd->minor]=NULL; - MOD_DEC_USE_COUNT; + up(&videodev_lock); } @@ -509,13 +475,7 @@ { owner: THIS_MODULE, llseek: no_llseek, - read: video_read, - write: video_write, - ioctl: video_ioctl, - mmap: video_mmap, open: video_open, - release: video_release, - poll: video_poll, }; /* @@ -540,12 +500,9 @@ static void __exit videodev_exit(void) { -#ifdef MODULE #if defined(CONFIG_PROC_FS) && defined(CONFIG_VIDEO_PROC_FS) videodev_proc_destroy (); #endif -#endif - devfs_unregister_chrdev(VIDEO_MAJOR, "video_capture"); } @@ -554,6 +511,10 @@ EXPORT_SYMBOL(video_register_device); EXPORT_SYMBOL(video_unregister_device); +EXPORT_SYMBOL(video_devdata); +EXPORT_SYMBOL(video_generic_ioctl); +EXPORT_SYMBOL(video_exclusive_open); +EXPORT_SYMBOL(video_exclusive_release); MODULE_AUTHOR("Alan Cox"); MODULE_DESCRIPTION("Device registrar for Video4Linux drivers"); diff -Nru a/drivers/net/pppoe.c b/drivers/net/pppoe.c --- a/drivers/net/pppoe.c Tue Mar 12 13:58:14 2002 +++ b/drivers/net/pppoe.c Tue Mar 12 13:58:14 2002 @@ -635,7 +635,7 @@ sk->state = PPPOX_CONNECTED; } - sk->num = sp->sa_addr.pppoe.sid; + po->num = sp->sa_addr.pppoe.sid; end: release_sock(sk); @@ -788,7 +788,7 @@ hdr.ver = 1; hdr.type = 1; hdr.code = 0; - hdr.sid = sk->num; + hdr.sid = po->num; lock_sock(sk); @@ -862,7 +862,7 @@ hdr.ver = 1; hdr.type = 1; hdr.code = 0; - hdr.sid = sk->num; + hdr.sid = po->num; hdr.length = htons(skb->len); if (!dev) diff -Nru a/drivers/net/sunhme.c b/drivers/net/sunhme.c --- a/drivers/net/sunhme.c Tue Mar 12 13:58:15 2002 +++ b/drivers/net/sunhme.c Tue Mar 12 13:58:15 2002 @@ -1611,12 +1611,12 @@ /* Set the RX and TX ring ptrs. */ HMD(("ring ptrs rxr[%08x] txr[%08x]\n", - (hp->hblock_dvma + hblock_offset(happy_meal_rxd, 0)), - (hp->hblock_dvma + hblock_offset(happy_meal_txd, 0)))); + ((__u32)hp->hblock_dvma + hblock_offset(happy_meal_rxd, 0)), + ((__u32)hp->hblock_dvma + hblock_offset(happy_meal_txd, 0)))); hme_write32(hp, erxregs + ERX_RING, - (hp->hblock_dvma + hblock_offset(happy_meal_rxd, 0))); + ((__u32)hp->hblock_dvma + hblock_offset(happy_meal_rxd, 0))); hme_write32(hp, etxregs + ETX_RING, - (hp->hblock_dvma + hblock_offset(happy_meal_txd, 0))); + ((__u32)hp->hblock_dvma + hblock_offset(happy_meal_txd, 0))); /* Set the supported burst sizes. 
*/ HMD(("happy_meal_init: old[%08x] bursts<", @@ -2643,21 +2643,23 @@ struct happy_meal *hp; struct net_device *dev; int i, qfe_slot = -1; + int err = -ENODEV; if (is_qfe) { qp = quattro_sbus_find(sdev); if (qp == NULL) - return -ENODEV; + goto err_out; for (qfe_slot = 0; qfe_slot < 4; qfe_slot++) if (qp->happy_meals[qfe_slot] == NULL) break; if (qfe_slot == 4) - return -ENODEV; + goto err_out; } + err = -ENOMEM; dev = init_etherdev(NULL, sizeof(struct happy_meal)); if (!dev) - return -ENOMEM; + goto err_out; SET_MODULE_OWNER(dev); if (hme_version_printed++ == 0) @@ -2701,11 +2703,12 @@ spin_lock_init(&hp->happy_lock); + err = -ENODEV; if (sdev->num_registers != 5) { printk(KERN_ERR "happymeal: Device does not have 5 regs, it has %d.\n", sdev->num_registers); printk(KERN_ERR "happymeal: Would you like that for here or to go?\n"); - return -ENODEV; + goto err_out_free_netdev; } if (qp != NULL) { @@ -2719,35 +2722,35 @@ GREG_REG_SIZE, "HME Global Regs"); if (!hp->gregs) { printk(KERN_ERR "happymeal: Cannot map Happy Meal global registers.\n"); - return -ENODEV; + goto err_out_free_netdev; } hp->etxregs = sbus_ioremap(&sdev->resource[1], 0, ETX_REG_SIZE, "HME TX Regs"); if (!hp->etxregs) { printk(KERN_ERR "happymeal: Cannot map Happy Meal MAC Transmit registers.\n"); - return -ENODEV; + goto err_out_iounmap; } hp->erxregs = sbus_ioremap(&sdev->resource[2], 0, ERX_REG_SIZE, "HME RX Regs"); if (!hp->erxregs) { printk(KERN_ERR "happymeal: Cannot map Happy Meal MAC Receive registers.\n"); - return -ENODEV; + goto err_out_iounmap; } hp->bigmacregs = sbus_ioremap(&sdev->resource[3], 0, BMAC_REG_SIZE, "HME BIGMAC Regs"); if (!hp->bigmacregs) { printk(KERN_ERR "happymeal: Cannot map Happy Meal BIGMAC registers.\n"); - return -ENODEV; + goto err_out_iounmap; } hp->tcvregs = sbus_ioremap(&sdev->resource[4], 0, TCVR_REG_SIZE, "HME Tranceiver Regs"); if (!hp->tcvregs) { printk(KERN_ERR "happymeal: Cannot map Happy Meal Tranceiver registers.\n"); - return -ENODEV; + goto err_out_iounmap; } hp->hm_revision = prom_getintdefault(sdev->prom_node, "hm-rev", 0xff); @@ -2770,6 +2773,11 @@ hp->happy_block = sbus_alloc_consistent(hp->happy_dev, PAGE_SIZE, &hp->hblock_dvma); + err = -ENOMEM; + if (!hp->happy_block) { + printk(KERN_ERR "happymeal: Cannot allocate descriptors.\n"); + goto err_out_iounmap; + } /* Force check of the link first time we are brought up. */ hp->linkcheck = 0; @@ -2822,6 +2830,25 @@ root_happy_dev = hp; return 0; + +err_out_iounmap: + if (hp->gregs) + sbus_iounmap(hp->gregs, GREG_REG_SIZE); + if (hp->etxregs) + sbus_iounmap(hp->etxregs, ETX_REG_SIZE); + if (hp->erxregs) + sbus_iounmap(hp->erxregs, ERX_REG_SIZE); + if (hp->bigmacregs) + sbus_iounmap(hp->bigmacregs, BMAC_REG_SIZE); + if (hp->tcvregs) + sbus_iounmap(hp->tcvregs, TCVR_REG_SIZE); + +err_out_free_netdev: + unregister_netdev(dev); + kfree(dev); + +err_out: + return err; } #endif @@ -2838,6 +2865,7 @@ unsigned long hpreg_base; int i, qfe_slot = -1; char prom_name[64]; + int err; /* Now make sure pci_dev cookie is there. 
*/ #ifdef __sparc__ @@ -2854,20 +2882,22 @@ strcpy(prom_name, "qfe"); #endif + err = -ENODEV; if (!strcmp(prom_name, "SUNW,qfe") || !strcmp(prom_name, "qfe")) { qp = quattro_pci_find(pdev); if (qp == NULL) - return -ENODEV; + goto err_out; for (qfe_slot = 0; qfe_slot < 4; qfe_slot++) if (qp->happy_meals[qfe_slot] == NULL) break; if (qfe_slot == 4) - return -ENODEV; + goto err_out; } dev = init_etherdev(NULL, sizeof(struct happy_meal)); + err = -ENOMEM; if (!dev) - return -ENOMEM; + goto err_out; SET_MODULE_OWNER(dev); if (hme_version_printed++ == 0) @@ -2912,9 +2942,10 @@ } hpreg_base = pci_resource_start(pdev, 0); + err = -ENODEV; if ((pci_resource_flags(pdev, 0) & IORESOURCE_IO) != 0) { printk(KERN_ERR "happymeal(PCI): Cannot find proper PCI device base address.\n"); - return -ENODEV; + goto err_out_clear_quattro; } if ((hpreg_base = (unsigned long) ioremap(hpreg_base, 0x8000)) == 0) { printk(KERN_ERR "happymeal(PCI): Unable to remap card memory.\n"); @@ -2983,9 +3014,10 @@ hp->happy_block = (struct hmeal_init_block *) pci_alloc_consistent(pdev, PAGE_SIZE, &hp->hblock_dvma); + err = -ENODEV; if (!hp->happy_block) { printk(KERN_ERR "happymeal(PCI): Cannot get hme init block.\n"); - return -ENODEV; + goto err_out_iounmap; } hp->linkcheck = 0; @@ -3034,6 +3066,19 @@ root_happy_dev = hp; return 0; + +err_out_iounmap: + iounmap((void *)hp->gregs); + +err_out_clear_quattro: + if (qp != NULL) + qp->happy_meals[qfe_slot] = NULL; + + unregister_netdev(dev); + kfree(dev); + +err_out: + return err; } #endif diff -Nru a/drivers/net/sunhme.h b/drivers/net/sunhme.h --- a/drivers/net/sunhme.h Tue Mar 12 13:58:15 2002 +++ b/drivers/net/sunhme.h Tue Mar 12 13:58:15 2002 @@ -431,7 +431,7 @@ unsigned long bigmacregs; /* BIGMAC core regs */ unsigned long tcvregs; /* MIF transceiver regs */ - __u32 hblock_dvma; /* DVMA visible address happy block */ + dma_addr_t hblock_dvma; /* DVMA visible address happy block */ unsigned int happy_flags; /* Driver state flags */ enum happy_transceiver tcvr_type; /* Kind of transceiver in use */ unsigned int happy_bursts; /* Get your mind out of the gutter */ diff -Nru a/drivers/parport/parport_cs.c b/drivers/parport/parport_cs.c --- a/drivers/parport/parport_cs.c Tue Mar 12 13:58:15 2002 +++ b/drivers/parport/parport_cs.c Tue Mar 12 13:58:15 2002 @@ -45,6 +45,7 @@ #include #include #include +#include #include #include @@ -106,7 +107,6 @@ static dev_link_t *dev_list = NULL; extern struct parport_operations parport_pc_ops; -static struct parport_operations parport_cs_ops; /*====================================================================*/ @@ -458,13 +458,6 @@ "does not match!\n"); return -1; } - -#if (LINUX_VERSION_CODE < VERSION(2,3,6)) - /* This is to protect against unloading modules out of order */ - parport_cs_ops = parport_pc_ops; - parport_cs_ops.inc_use_count = &inc_use_count; - parport_cs_ops.dec_use_count = &dec_use_count; -#endif register_pccard_driver(&dev_info, &parport_attach, &parport_detach); return 0; diff -Nru a/drivers/pci/pci.c b/drivers/pci/pci.c --- a/drivers/pci/pci.c Tue Mar 12 13:58:15 2002 +++ b/drivers/pci/pci.c Tue Mar 12 13:58:15 2002 @@ -2001,16 +2001,16 @@ int map, block; if ((page = pool_find_page (pool, dma)) == 0) { - printk (KERN_ERR "pci_pool_free %s/%s, %p/%x (bad dma)\n", + printk (KERN_ERR "pci_pool_free %s/%s, %p/%lx (bad dma)\n", pool->dev ? 
pool->dev->slot_name : NULL, - pool->name, vaddr, (int) (dma & 0xffffffff)); + pool->name, vaddr, (unsigned long) dma); return; } #ifdef CONFIG_PCIPOOL_DEBUG if (((dma - page->dma) + (void *)page->vaddr) != vaddr) { - printk (KERN_ERR "pci_pool_free %s/%s, %p (bad vaddr)/%x\n", + printk (KERN_ERR "pci_pool_free %s/%s, %p (bad vaddr)/%lx\n", pool->dev ? pool->dev->slot_name : NULL, - pool->name, vaddr, (int) (dma & 0xffffffff)); + pool->name, vaddr, (unsigned long) dma); return; } #endif diff -Nru a/drivers/pcmcia/sa1100_generic.c b/drivers/pcmcia/sa1100_generic.c --- a/drivers/pcmcia/sa1100_generic.c Tue Mar 12 13:58:14 2002 +++ b/drivers/pcmcia/sa1100_generic.c Tue Mar 12 13:58:14 2002 @@ -83,6 +83,68 @@ static struct tq_struct sa1100_pcmcia_task; /* + * sa1100_pcmcia_default_mecr_timing + * ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + * + * Calculate MECR clock wait states for given CPU clock + * speed and command wait state. This function can be over- + * written by a board specific version. + * + * The default is to simply calculate the BS values as specified in + * the INTEL SA1100 development manual + * "Expansion Memory (PCMCIA) Configuration Register (MECR)" + * that's section 10.2.5 in _my_ version of the manuial ;) + */ +static int sa1100_pcmcia_default_mecr_timing(unsigned int sock, unsigned int cpu_speed, + unsigned int cmd_time ) +{ + return sa1100_pcmcia_mecr_bs( cmd_time, cpu_speed ); +} + +/* sa1100_pcmcia_set_mecr() + * ^^^^^^^^^^^^^^^^^^^^^^^^^^^ + * + * set MECR value for socket based on this sockets + * io, mem and attribute space access speed. + * Call board specific BS value calculation to allow boards + * to tweak the BS values. + */ +static int sa1100_pcmcia_set_mecr( int sock ) +{ + struct sa1100_pcmcia_socket *skt; + u32 mecr; + int clock; + long flags; + unsigned int bs; + + if ( sock<0 || sock>SA1100_PCMCIA_MAX_SOCK ) + return -1; + + skt = PCMCIA_SOCKET( sock ); + + local_irq_save(flags); + + clock = cpufreq_get(0); + bs = pcmcia_low_level->socket_get_timing( sock, clock, skt->speed_io); + + mecr = MECR; + MECR_FAST_SET(mecr, sock, 0); + MECR_BSIO_SET(mecr, sock, bs ); + MECR_BSA_SET(mecr, sock, bs ); + MECR_BSM_SET(mecr, sock, bs ); + MECR = mecr; + + local_irq_restore(flags); + + DEBUG(4, "%s(): FAST%u %lx BSM%u %lx BSA%u %lx BSIO%u %lx\n", + __FUNCTION__, sock, MECR_FAST_GET(mecr, sock), sock, + MECR_BSM_GET(mecr, sock), sock, MECR_BSA_GET(mecr, sock), + sock, MECR_BSIO_GET(mecr, sock)); + + return 0; +} + +/* * sa1100_pcmcia_state_to_config * ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ * @@ -586,26 +648,10 @@ } if (map->flags & MAP_ACTIVE) { - unsigned int clock, speed = map->speed; - unsigned long mecr; - - if (speed == 0) - speed = SA1100_PCMCIA_IO_ACCESS; + if ( map->speed == 0) + map->speed = SA1100_PCMCIA_IO_ACCESS; - clock = cpufreq_get(0); - - mecr = MECR; - - MECR_BSIO_SET(mecr, sock, sa1100_pcmcia_mecr_bs(speed, clock)); - - skt->speed_io = speed; - - DEBUG(4, "%s(): FAST%u %lx BSM%u %lx BSA%u %lx BSIO%u %lx\n", - __FUNCTION__, sock, MECR_FAST_GET(mecr, sock), sock, - MECR_BSM_GET(mecr, sock), sock, MECR_BSA_GET(mecr, sock), - sock, MECR_BSIO_GET(mecr, sock)); - - MECR = mecr; + sa1100_pcmcia_set_mecr( sock ); } if (map->stop == 1) @@ -683,39 +729,19 @@ } if (map->flags & MAP_ACTIVE) { - unsigned int clock, speed = map->speed; - unsigned long mecr; - - /* - * When clients issue RequestMap, the access speed is not always - * properly configured. Choose some sensible defaults. 
- */ - if (speed == 0) { - if (skt->cs_state.Vcc == 33) - speed = SA1100_PCMCIA_3V_MEM_ACCESS; - else - speed = SA1100_PCMCIA_5V_MEM_ACCESS; - } - - clock = cpufreq_get(0); + /* + * When clients issue RequestMap, the access speed is not always + * properly configured. Choose some sensible defaults. + */ + if (map->speed == 0) { + if (skt->cs_state.Vcc == 33) + map->speed = SA1100_PCMCIA_3V_MEM_ACCESS; + else + map->speed = SA1100_PCMCIA_5V_MEM_ACCESS; + } - /* Fixme: MECR is not pre-empt safe. */ - mecr = MECR; + sa1100_pcmcia_set_mecr( sock ); - if (map->flags & MAP_ATTRIB) { - MECR_BSA_SET(mecr, sock, sa1100_pcmcia_mecr_bs(speed, clock)); - skt->speed_attr = speed; - } else { - MECR_BSM_SET(mecr, sock, sa1100_pcmcia_mecr_bs(speed, clock)); - skt->speed_mem = speed; - } - - DEBUG(4, "%s(): FAST%u %lx BSM%u %lx BSA%u %lx BSIO%u %lx\n", - __FUNCTION__, sock, MECR_FAST_GET(mecr, sock), sock, - MECR_BSM_GET(mecr, sock), sock, MECR_BSA_GET(mecr, sock), - sock, MECR_BSIO_GET(mecr, sock)); - - MECR = mecr; } start = (map->flags & MAP_ATTRIB) ? skt->phys_attr : skt->phys_mem; @@ -857,20 +883,10 @@ static void sa1100_pcmcia_update_mecr(unsigned int clock) { unsigned int sock; - unsigned long mecr = MECR; - for(sock = 0; sock < SA1100_PCMCIA_MAX_SOCK; ++sock){ - struct sa1100_pcmcia_socket *skt = PCMCIA_SOCKET(sock); - - MECR_BSIO_SET(mecr, sock, - sa1100_pcmcia_mecr_bs(skt->speed_io, clock)); - MECR_BSA_SET(mecr, sock, - sa1100_pcmcia_mecr_bs(skt->speed_attr, clock)); - MECR_BSM_SET(mecr, sock, - sa1100_pcmcia_mecr_bs(skt->speed_mem, clock)); + for (sock = 0; sock < SA1100_PCMCIA_MAX_SOCK; ++sock) { + sa1100_pcmcia_set_mecr( sock ); } - - MECR = mecr; } /* sa1100_pcmcia_notifier() @@ -929,8 +945,7 @@ struct pcmcia_init pcmcia_init; struct pcmcia_state state[SA1100_PCMCIA_MAX_SOCK]; struct pcmcia_state_array state_array; - unsigned int i, clock; - unsigned long mecr; + unsigned int i; int ret; /* @@ -941,6 +956,13 @@ pcmcia_low_level = ops; + /* + * set default MECR calculation if the board specific + * code did not specify one... + */ + if (!pcmcia_low_level->socket_get_timing) + pcmcia_low_level->socket_get_timing = sa1100_pcmcia_default_mecr_timing; + pcmcia_init.handler = sa1100_pcmcia_interrupt; ret = ops->init(&pcmcia_init); if (ret < 0) { @@ -967,10 +989,6 @@ * We initialize the MECR to default values here, because we are * not guaranteed to see a SetIOMap operation at runtime. */ - mecr = 0; - - clock = cpufreq_get(0); - for (i = 0; i < sa1100_pcmcia_socket_count; i++) { struct sa1100_pcmcia_socket *skt = PCMCIA_SOCKET(i); struct pcmcia_irq_info irq_info; @@ -1000,13 +1018,9 @@ goto out_err; } - MECR_FAST_SET(mecr, i, 0); - MECR_BSIO_SET(mecr, i, sa1100_pcmcia_mecr_bs(skt->speed_io, clock)); - MECR_BSA_SET(mecr, i, sa1100_pcmcia_mecr_bs(skt->speed_attr, clock)); - MECR_BSM_SET(mecr, i, sa1100_pcmcia_mecr_bs(skt->speed_mem, clock)); + sa1100_pcmcia_set_mecr( i ); } - MECR = mecr; /* Only advertise as many sockets as we can detect */ ret = register_ss_entry(sa1100_pcmcia_socket_count, diff -Nru a/drivers/pcmcia/sa1100_generic.h b/drivers/pcmcia/sa1100_generic.h --- a/drivers/pcmcia/sa1100_generic.h Tue Mar 12 13:58:14 2002 +++ b/drivers/pcmcia/sa1100_generic.h Tue Mar 12 13:58:14 2002 @@ -69,6 +69,12 @@ * Disable card status IRQs and PCMCIA bus on suspend. 
*/ int (*socket_suspend)(int sock); + + /* + * Calculate MECR timing clock wait states + */ + int (*socket_get_timing)(unsigned int sock, unsigned int cpu_speed, + unsigned int cmd_time ); }; extern int sa1100_register_pcmcia(struct pcmcia_low_level *); diff -Nru a/drivers/scsi/ide-scsi.c b/drivers/scsi/ide-scsi.c --- a/drivers/scsi/ide-scsi.c Tue Mar 12 13:58:14 2002 +++ b/drivers/scsi/ide-scsi.c Tue Mar 12 13:58:14 2002 @@ -290,7 +290,7 @@ if (!test_bit(PC_WRITING, &pc->flags) && pc->actually_transferred && pc->actually_transferred <= 1024 && pc->buffer) { printk(", rst = "); scsi_buf = pc->scsi_cmd->request_buffer; - hexdump(scsi_buf, min(16, pc->scsi_cmd->request_bufflen)); + hexdump(scsi_buf, min(16U, pc->scsi_cmd->request_bufflen)); } else printk("\n"); } } @@ -307,7 +307,7 @@ static inline unsigned long get_timeout(idescsi_pc_t *pc) { - return max(WAIT_CMD, pc->timeout - jiffies); + return max((unsigned long) WAIT_CMD, pc->timeout - jiffies); } /* * idescsi_init will register the driver for each scsi. */ -static int idescsi_init(void) +int idescsi_init(void) { ide_drive_t *drive; idescsi_scsi_t *scsi; diff -Nru a/drivers/usb/inode.c b/drivers/usb/inode.c --- a/drivers/usb/inode.c Tue Mar 12 13:58:14 2002 +++ b/drivers/usb/inode.c Tue Mar 12 13:58:14 2002 @@ -489,12 +489,14 @@ owner: THIS_MODULE, name: "usbdevfs", get_sb: usb_get_sb, + kill_sb: kill_anon_super, }; static struct file_system_type usb_fs_type = { owner: THIS_MODULE, name: "usbfs", get_sb: usb_get_sb, + kill_sb: kill_anon_super, }; /* --------------------------------------------------------------------- */ diff -Nru a/fs/Config.help b/fs/Config.help --- a/fs/Config.help Tue Mar 12 13:58:16 2002 +++ b/fs/Config.help Tue Mar 12 13:58:16 2002 @@ -561,6 +561,10 @@ If you would like to include the NFSv3 server as well as the NFSv2 server, say Y here. If unsure, say Y. +CONFIG_NFSD_TCP + Enable NFS service over TCP connections. This is officially + still experimental, but seems to work well.
+ CONFIG_HPFS_FS OS/2 is IBM's operating system for PC's, the same as Warp, and HPFS is the file system used for organizing files on OS/2 hard disk diff -Nru a/fs/Config.in b/fs/Config.in --- a/fs/Config.in Tue Mar 12 13:58:14 2002 +++ b/fs/Config.in Tue Mar 12 13:58:14 2002 @@ -44,7 +44,8 @@ fi dep_tristate 'Journalling Flash File System v2 (JFFS2) support' CONFIG_JFFS2_FS $CONFIG_MTD if [ "$CONFIG_JFFS2_FS" = "y" -o "$CONFIG_JFFS2_FS" = "m" ] ; then - int 'JFFS2 debugging verbosity (0 = quiet, 2 = noisy)' CONFIG_JFFS2_FS_DEBUG 0 + int ' JFFS2 debugging verbosity (0 = quiet, 2 = noisy)' CONFIG_JFFS2_FS_DEBUG 0 + dep_bool ' JFFS2 support for NAND flash' CONFIG_JFFS2_FS_NAND $CONFIG_EXPERIMENTAL fi tristate 'Compressed ROM file system support' CONFIG_CRAMFS bool 'Virtual memory file system support (former shm fs)' CONFIG_TMPFS diff -Nru a/fs/adfs/super.c b/fs/adfs/super.c --- a/fs/adfs/super.c Tue Mar 12 13:58:14 2002 +++ b/fs/adfs/super.c Tue Mar 12 13:58:14 2002 @@ -485,6 +485,7 @@ owner: THIS_MODULE, name: "adfs", get_sb: adfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/affs/super.c b/fs/affs/super.c --- a/fs/affs/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/affs/super.c Tue Mar 12 13:58:15 2002 @@ -536,6 +536,7 @@ owner: THIS_MODULE, name: "affs", get_sb: affs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/autofs/init.c b/fs/autofs/init.c --- a/fs/autofs/init.c Tue Mar 12 13:58:14 2002 +++ b/fs/autofs/init.c Tue Mar 12 13:58:14 2002 @@ -24,6 +24,7 @@ owner: THIS_MODULE, name: "autofs", get_sb: autofs_get_sb, + kill_sb: kill_anon_super, }; static int __init init_autofs_fs(void) diff -Nru a/fs/autofs4/init.c b/fs/autofs4/init.c --- a/fs/autofs4/init.c Tue Mar 12 13:58:15 2002 +++ b/fs/autofs4/init.c Tue Mar 12 13:58:15 2002 @@ -24,6 +24,7 @@ owner: THIS_MODULE, name: "autofs", get_sb: autofs_get_sb, + kill_sb: kill_anon_super, }; static int __init init_autofs4_fs(void) diff -Nru a/fs/bfs/inode.c b/fs/bfs/inode.c --- a/fs/bfs/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/bfs/inode.c Tue Mar 12 13:58:15 2002 @@ -371,6 +371,7 @@ owner: THIS_MODULE, name: "bfs", get_sb: bfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/binfmt_misc.c b/fs/binfmt_misc.c --- a/fs/binfmt_misc.c Tue Mar 12 13:58:15 2002 +++ b/fs/binfmt_misc.c Tue Mar 12 13:58:15 2002 @@ -721,7 +721,7 @@ owner: THIS_MODULE, name: "binfmt_misc", get_sb: bm_get_sb, - fs_flags: FS_LITTER, + kill_sb: kill_litter_super, }; static int __init init_misc_binfmt(void) diff -Nru a/fs/block_dev.c b/fs/block_dev.c --- a/fs/block_dev.c Tue Mar 12 13:58:15 2002 +++ b/fs/block_dev.c Tue Mar 12 13:58:15 2002 @@ -260,6 +260,7 @@ static struct file_system_type bd_type = { name: "bdev", get_sb: bd_get_sb, + kill_sb: kill_anon_super, fs_flags: FS_NOMOUNT, }; diff -Nru a/fs/coda/inode.c b/fs/coda/inode.c --- a/fs/coda/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/coda/inode.c Tue Mar 12 13:58:15 2002 @@ -316,4 +316,5 @@ owner: THIS_MODULE, name: "coda", get_sb: coda_get_sb, + kill_sb: kill_anon_super, }; diff -Nru a/fs/cramfs/inode.c b/fs/cramfs/inode.c --- a/fs/cramfs/inode.c Tue Mar 12 13:58:14 2002 +++ b/fs/cramfs/inode.c Tue Mar 12 13:58:14 2002 @@ -20,16 +20,12 @@ #include #include #include +#include +#include #include #include -#define CRAMFS_SB_MAGIC u.cramfs_sb.magic -#define CRAMFS_SB_SIZE u.cramfs_sb.size -#define CRAMFS_SB_BLOCKS u.cramfs_sb.blocks -#define CRAMFS_SB_FILES u.cramfs_sb.files -#define CRAMFS_SB_FLAGS u.cramfs_sb.flags - static 
struct super_operations cramfs_ops; static struct inode_operations cramfs_dir_inode_operations; static struct file_operations cramfs_directory_operations; @@ -188,12 +184,23 @@ return read_buffers[buffer] + offset; } +static void cramfs_put_super(struct super_block *sb) +{ + kfree(sb->u.generic_sbp); + sb->u.generic_sbp = NULL; +} static int cramfs_fill_super(struct super_block *sb, void *data, int silent) { int i; struct cramfs_super super; unsigned long root_offset; + struct cramfs_sb_info *sbi; + + sbi = kmalloc(sizeof(struct cramfs_sb_info), GFP_KERNEL); + if (!sbi) + return -ENOMEM; + sb->u.generic_sbp = sbi; sb_set_blocksize(sb, PAGE_CACHE_SIZE); @@ -229,16 +236,16 @@ } root_offset = super.root.offset << 2; if (super.flags & CRAMFS_FLAG_FSID_VERSION_2) { - sb->CRAMFS_SB_SIZE=super.size; - sb->CRAMFS_SB_BLOCKS=super.fsid.blocks; - sb->CRAMFS_SB_FILES=super.fsid.files; + sbi->size=super.size; + sbi->blocks=super.fsid.blocks; + sbi->files=super.fsid.files; } else { - sb->CRAMFS_SB_SIZE=1<<28; - sb->CRAMFS_SB_BLOCKS=0; - sb->CRAMFS_SB_FILES=0; + sbi->size=1<<28; + sbi->blocks=0; + sbi->files=0; } - sb->CRAMFS_SB_MAGIC=super.magic; - sb->CRAMFS_SB_FLAGS=super.flags; + sbi->magic=super.magic; + sbi->flags=super.flags; if (root_offset == 0) printk(KERN_INFO "cramfs: empty filesystem"); else if (!(super.flags & CRAMFS_FLAG_SHIFTED_ROOT_OFFSET) && @@ -254,6 +261,8 @@ sb->s_root = d_alloc_root(get_cramfs_inode(sb, &super.root)); return 0; out: + kfree(sbi); + sb->u.generic_sbp = NULL; return -EINVAL; } @@ -261,10 +270,10 @@ { buf->f_type = CRAMFS_MAGIC; buf->f_bsize = PAGE_CACHE_SIZE; - buf->f_blocks = sb->CRAMFS_SB_BLOCKS; + buf->f_blocks = CRAMFS_SB(sb)->blocks; buf->f_bfree = 0; buf->f_bavail = 0; - buf->f_files = sb->CRAMFS_SB_FILES; + buf->f_files = CRAMFS_SB(sb)->files; buf->f_ffree = 0; buf->f_namelen = CRAMFS_MAXPATHLEN; return 0; @@ -334,7 +343,7 @@ int sorted; lock_kernel(); - sorted = dir->i_sb->CRAMFS_SB_FLAGS & CRAMFS_FLAG_SORTED_DIRS; + sorted = CRAMFS_SB(dir->i_sb)->flags & CRAMFS_FLAG_SORTED_DIRS; while (offset < dir->i_size) { struct cramfs_inode *de; char *name; @@ -445,6 +454,7 @@ }; static struct super_operations cramfs_ops = { + put_super: cramfs_put_super, statfs: cramfs_statfs, }; @@ -458,6 +468,7 @@ owner: THIS_MODULE, name: "cramfs", get_sb: cramfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/devfs/base.c b/fs/devfs/base.c --- a/fs/devfs/base.c Tue Mar 12 13:58:15 2002 +++ b/fs/devfs/base.c Tue Mar 12 13:58:15 2002 @@ -3323,6 +3323,7 @@ static struct file_system_type devfs_fs_type = { name: DEVFS_NAME, get_sb: devfs_get_sb, + kill_sb: kill_anon_super, }; /* File operations for devfsd follow */ diff -Nru a/fs/devpts/inode.c b/fs/devpts/inode.c --- a/fs/devpts/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/devpts/inode.c Tue Mar 12 13:58:15 2002 @@ -189,6 +189,7 @@ owner: THIS_MODULE, name: "devpts", get_sb: devpts_get_sb, + kill_sb: kill_anon_super, }; void devpts_pty_new(int number, kdev_t device) diff -Nru a/fs/dnotify.c b/fs/dnotify.c --- a/fs/dnotify.c Tue Mar 12 13:58:15 2002 +++ b/fs/dnotify.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * Directory notifications for Linux. 
* - * Copyright (C) 2000 Stephen Rothwell + * Copyright (C) 2000,2001,2002 Stephen Rothwell * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the @@ -59,7 +59,7 @@ write_lock(&dn_lock); prev = &inode->i_dnotify; for (odn = *prev; odn != NULL; prev = &odn->dn_next, odn = *prev) - if (odn->dn_filp == filp) + if ((odn->dn_owner == current->files) && (odn->dn_filp == filp)) break; if (odn != NULL) { if (turning_off) { @@ -82,6 +82,7 @@ dn->dn_mask = arg; dn->dn_fd = fd; dn->dn_filp = filp; + dn->dn_owner = current->files; inode->i_dnotify_mask |= arg & ~DN_MULTISHOT; dn->dn_next = inode->i_dnotify; inode->i_dnotify = dn; diff -Nru a/fs/driverfs/inode.c b/fs/driverfs/inode.c --- a/fs/driverfs/inode.c Tue Mar 12 13:58:14 2002 +++ b/fs/driverfs/inode.c Tue Mar 12 13:58:14 2002 @@ -433,7 +433,7 @@ owner: THIS_MODULE, name: "driverfs", get_sb: driverfs_get_sb, - fs_flags: FS_LITTER, + kill_sb: kill_litter_super, }; static int get_mount(void) diff -Nru a/fs/efs/super.c b/fs/efs/super.c --- a/fs/efs/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/efs/super.c Tue Mar 12 13:58:15 2002 @@ -24,6 +24,7 @@ owner: THIS_MODULE, name: "efs", get_sb: efs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/exec.c b/fs/exec.c --- a/fs/exec.c Tue Mar 12 13:58:15 2002 +++ b/fs/exec.c Tue Mar 12 13:58:15 2002 @@ -154,7 +154,7 @@ } /* - * count() counts the number of arguments/envelopes + * count() counts the number of strings in array ARGV. */ static int count(char ** argv, int max) { @@ -177,7 +177,7 @@ } /* - * 'copy_strings()' copies argument/envelope strings from user + * 'copy_strings()' copies argument/environment strings from user * memory to free pages in kernel mem. These are in a format ready * to be put directly into the top of new user memory. 
*/ diff -Nru a/fs/ext2/super.c b/fs/ext2/super.c --- a/fs/ext2/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/ext2/super.c Tue Mar 12 13:58:15 2002 @@ -853,6 +853,7 @@ owner: THIS_MODULE, name: "ext2", get_sb: ext2_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/ext3/super.c b/fs/ext3/super.c --- a/fs/ext3/super.c Tue Mar 12 13:58:16 2002 +++ b/fs/ext3/super.c Tue Mar 12 13:58:16 2002 @@ -1778,6 +1778,7 @@ owner: THIS_MODULE, name: "ext3", get_sb: ext3_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/freevxfs/vxfs_super.c b/fs/freevxfs/vxfs_super.c --- a/fs/freevxfs/vxfs_super.c Tue Mar 12 13:58:15 2002 +++ b/fs/freevxfs/vxfs_super.c Tue Mar 12 13:58:15 2002 @@ -237,6 +237,7 @@ owner: THIS_MODULE, name: "vxfs", get_sb: vxfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/hfs/super.c b/fs/hfs/super.c --- a/fs/hfs/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/hfs/super.c Tue Mar 12 13:58:15 2002 @@ -105,6 +105,7 @@ owner: THIS_MODULE, name: "hfs", get_sb: hfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/hpfs/super.c b/fs/hpfs/super.c --- a/fs/hpfs/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/hpfs/super.c Tue Mar 12 13:58:15 2002 @@ -619,6 +619,7 @@ owner: THIS_MODULE, name: "hpfs", get_sb: hpfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/inode.c b/fs/inode.c --- a/fs/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/inode.c Tue Mar 12 13:58:15 2002 @@ -467,8 +467,7 @@ if (sb) { spin_lock(&inode_lock); - while (inode->i_state & I_DIRTY) - sync_one(inode, sync); + sync_one(inode, sync); spin_unlock(&inode_lock); if (sync) wait_on_inode(inode); diff -Nru a/fs/isofs/inode.c b/fs/isofs/inode.c --- a/fs/isofs/inode.c Tue Mar 12 13:58:16 2002 +++ b/fs/isofs/inode.c Tue Mar 12 13:58:16 2002 @@ -1404,6 +1404,7 @@ owner: THIS_MODULE, name: "iso9660", get_sb: isofs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/jffs/inode-v23.c b/fs/jffs/inode-v23.c --- a/fs/jffs/inode-v23.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs/inode-v23.c Tue Mar 12 13:58:14 2002 @@ -1763,6 +1763,7 @@ owner: THIS_MODULE, name: "jffs", get_sb: jffs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/jffs2/Makefile b/fs/jffs2/Makefile --- a/fs/jffs2/Makefile Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/Makefile Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ # # Makefile for the linux Journalling Flash FileSystem (JFFS) routines. # -# $Id: Makefile,v 1.25 2001/09/25 20:59:41 dwmw2 Exp $ +# $Id: Makefile,v 1.34 2002/03/08 11:27:59 dwmw2 Exp $ # # Note! Dependencies are done automagically by 'make dep', which also # removes any old dependencies. DON'T put your own dependencies here @@ -10,15 +10,20 @@ # Note 2! The CFLAGS definitions are now in the main makefile... 
-COMPR_OBJS := compr.o compr_rubin.o compr_rtime.o pushpull.o \ - compr_zlib.o +COMPR_OBJS := compr.o compr_rubin.o compr_rtime.o compr_zlib.o JFFS2_OBJS := dir.o file.o ioctl.o nodelist.o malloc.o \ - read.o nodemgmt.o readinode.o super.o write.o scan.o gc.o \ - symlink.o build.o erase.o background.o + read.o nodemgmt.o readinode.o write.o scan.o gc.o \ + symlink.o build.o erase.o background.o fs.o writev.o + +LINUX_OBJS-24 := super-v24.o crc32.o +LINUX_OBJS-25 := super.o + +NAND_OBJS-$(CONFIG_JFFS2_FS_NAND) := wbuf.o O_TARGET := jffs2.o -obj-y := $(COMPR_OBJS) $(JFFS2_OBJS) +obj-y := $(COMPR_OBJS) $(JFFS2_OBJS) $(VERS_OBJS) $(NAND_OBJS-y) \ + $(LINUX_OBJS-$(VERSION)$(PATCHLEVEL)) obj-m := $(O_TARGET) include $(TOPDIR)/Rules.make diff -Nru a/fs/jffs2/README.Locking b/fs/jffs2/README.Locking --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/fs/jffs2/README.Locking Tue Mar 12 13:58:16 2002 @@ -0,0 +1,116 @@ + $Id: README.Locking,v 1.4 2002/03/08 16:20:06 dwmw2 Exp $ + + JFFS2 LOCKING DOCUMENTATION + --------------------------- + +At least theoretically, JFFS2 does not require the Big Kernel Lock +(BKL), which was always helpfully obtained for it by Linux 2.4 VFS +code. It has its own locking, as described below. + +This document attempts to describe the existing locking rules for +JFFS2. It is not expected to remain perfectly up to date, but ought to +be fairly close. + + + alloc_sem + --------- + +The alloc_sem is a per-filesystem semaphore, used primarily to ensure +contiguous allocation of space on the medium. It is automatically +obtained during space allocations (jffs2_reserve_space()) and freed +upon write completion (jffs2_complete_reservation()). Note that +the garbage collector will obtain this right at the beginning of +jffs2_garbage_collect_pass() and release it at the end, thereby +preventing any other write activity on the file system during a +garbage collect pass. + +When writing new nodes, the alloc_sem must be held until the new nodes +have been properly linked into the data structures for the inode to +which they belong. This is for the benefit of NAND flash - adding new +nodes to an inode may obsolete old ones, and by holding the alloc_sem +until this happens we ensure that any data in the write-buffer at the +time this happens are part of the new node, not just something that +was written afterwards. Hence, we can ensure the newly-obsoleted nodes +don't actually get erased until the write-buffer has been flushed to +the medium. + +With the introduction of NAND flash support and the write-buffer, +the alloc_sem is also used to protect the wbuf-related members of the +jffs2_sb_info structure. Atomically reading the wbuf_len member to see +if the wbuf is currently holding any data is permitted, though. + +Ordering constraints: See f->sem. + + + File Semaphore f->sem + --------------------- + +This is the JFFS2-internal equivalent of the inode semaphore i->i_sem. +It protects the contents of the jffs2_inode_info private inode data, +including the linked list of node fragments (but see the notes below on +erase_completion_lock), etc. + +The reason that the i_sem itself isn't used for this purpose is to +avoid deadlocks with garbage collection -- the VFS will lock the i_sem +before calling a function which may need to allocate space. The +allocation may trigger garbage-collection, which may need to move a +node belonging to the inode which was locked in the first place by the +VFS. 
If the garbage collection code were to attempt to lock the i_sem +of the inode from which it's garbage-collecting a physical node, this +lead to deadlock, unless we played games with unlocking the i_sem +before calling the space allocation functions. + +Instead of playing such games, we just have an extra internal +semaphore, which is obtained by the garbage collection code and also +by the normal file system code _after_ allocation of space. + +Ordering constraints: + + 1. Never attempt to allocate space or lock alloc_sem with + any f->sem held. + 2. Never attempt to lock two file semaphores in one thread. + No ordering rules have been made for doing so. + + + erase_completion_lock spinlock + ------------------------------ + +This is used to serialise access to the eraseblock lists, to the +per-eraseblock lists of physical jffs2_raw_node_ref structures, and +(NB) the per-inode list of physical nodes. The latter is a special +case - see below. + +As the MTD API permits erase-completion callback functions to be +called from bottom-half (timer) context, and these functions access +the data structures protected by this lock, it must be locked with +spin_lock_bh(). + +Note that the per-inode list of physical nodes (f->nodes) is a special +case. Any changes to _valid_ nodes (i.e. ->flash_offset & 1 == 0) in +the list are protected by the file semaphore f->sem. But the erase +code may remove _obsolete_ nodes from the list while holding only the +erase_completion_lock. So you can walk the list only while holding the +erase_completion_lock, and can drop the lock temporarily mid-walk as +long as the pointer you're holding is to a _valid_ node, not an +obsolete one. + +The erase_completion_lock is also used to protect the c->gc_task +pointer when the garbage collection thread exits. The code to kill the +GC thread locks it, sends the signal, then unlocks it - while the GC +thread itself locks it, zeroes c->gc_task, then unlocks on the exit path. + + node_free_sem + ------------- + +This semaphore is only used by the erase code which frees obsolete +node references and the jffs2_garbage_collect_deletion_dirent() +function. The latter function on NAND flash must read _obsolete_ nodes +to determine whether the 'deletion dirent' under consideration can be +discarded or whether it is still required to show that an inode has +been unlinked. Because reading from the flash may sleep, the +erase_completion_lock cannot be held, so an alternative, more +heavyweight lock was required to prevent the erase code from freeing +the jffs2_raw_node_ref structures in question while the garbage +collection code is looking at them. + +Suggestions for alternative solutions to this problem would be welcomed. diff -Nru a/fs/jffs2/TODO b/fs/jffs2/TODO --- a/fs/jffs2/TODO Tue Mar 12 13:58:16 2002 +++ b/fs/jffs2/TODO Tue Mar 12 13:58:16 2002 @@ -1,8 +1,7 @@ -$Id: TODO,v 1.3 2001/03/01 23:26:48 dwmw2 Exp $ +$Id: TODO,v 1.7 2002/03/11 12:36:59 dwmw2 Exp $ - - disable compression in commit_write()? Or at least optimise the 'always write - whole page' bit. - - fix zlib. It's ugly as hell and there are at least three copies in the kernel tree + - Locking audit. Even more so now 2.5 took away the BKL. + - disable compression in commit_write()? - fine-tune the allocation / GC thresholds - chattr support - turning on/off and tuning compression per-inode - checkpointing (do we need this? scan is quite fast) @@ -10,11 +9,16 @@ mount doesn't have to read the flash twice for large files. 
Make this a per-inode option, changable with chattr, so you can decide which inodes should be in-core immediately after mount. - - stop it depending on a block device. mount(8) needs a change for this. - - make it work on NAND flash. We need to know when we can GC - deletion dirents, etc. And think about holes/truncation. It can - all be done reasonably simply, but it need implementing. - - NAND flash will require new dirent/dnode structures on the medium with - ECC data in rather than just the CRC we're using ATM. + - stop it depending on a block device. - test, test, test + + - NAND flash support: + - flush_wbuf using GC to fill it, don't just pad. + - Deal with write errors. Data don't get lost - we just have to write + the affected node(s) out again somewhere else. + - make fsync flush only if actually required + - make sys_sync() work. + - reboot notifier + - timed flush of old wbuf + - fix magical second arg of jffs2_flush_wbuf(). Split into two or more functions instead. diff -Nru a/fs/jffs2/background.c b/fs/jffs2/background.c --- a/fs/jffs2/background.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/background.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,19 +31,19 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: background.c,v 1.16 2001/10/08 09:22:38 dwmw2 Exp $ + * $Id: background.c,v 1.23 2002/03/06 12:37:08 dwmw2 Exp $ * */ #define __KERNEL_SYSCALLS__ #include -#include #include #include #include #include #include +#include /* recalc_sigpending() */ #include "nodelist.h" @@ -106,10 +106,7 @@ sprintf(current->comm, "jffs2_gcd_mtd%d", c->mtd->index); -#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0) - /* FIXME in the 2.2 backport */ - current->nice = 10; -#endif + set_user_nice(current, 10); for (;;) { spin_lock_irq(&current->sigmask_lock); diff -Nru a/fs/jffs2/build.c b/fs/jffs2/build.c --- a/fs/jffs2/build.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/build.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,12 +31,11 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL.
* - * $Id: build.c,v 1.16 2001/03/15 15:38:23 dwmw2 Exp $ + * $Id: build.c,v 1.32 2002/03/08 15:11:24 dwmw2 Exp $ * */ #include -#include #include #include "nodelist.h" @@ -51,7 +50,7 @@ - Scan directory tree from top down, setting nlink in inocaches - Scan inocaches for inodes with nlink==0 */ -int jffs2_build_filesystem(struct jffs2_sb_info *c) +static int jffs2_build_filesystem(struct jffs2_sb_info *c) { int ret; int i; @@ -59,7 +58,11 @@ /* First, scan the medium and build all the inode caches with lists of physical nodes */ + + c->flags |= JFFS2_SB_FLAG_MOUNTING; ret = jffs2_scan_medium(c); + c->flags &= ~JFFS2_SB_FLAG_MOUNTING; + if (ret) return ret; @@ -147,6 +150,7 @@ D1(printk(KERN_DEBUG "jffs2_build_inode_pass1 ignoring old metadata at 0x%08x\n", metadata->fn->raw->flash_offset &~3)); + jffs2_mark_node_obsolete(c, metadata->fn->raw); jffs2_free_full_dnode(metadata->fn); jffs2_free_tmp_dnode_info(metadata); metadata = NULL; @@ -159,15 +163,25 @@ if (!metadata) { metadata = tn; } else { + /* This will only happen if it has the _same_ version + number as the existing metadata node. */ D1(printk(KERN_DEBUG "jffs2_build_inode_pass1 ignoring new metadata at 0x%08x\n", tn->fn->raw->flash_offset &~3)); + jffs2_mark_node_obsolete(c, tn->fn->raw); jffs2_free_full_dnode(tn->fn); jffs2_free_tmp_dnode_info(tn); } } } - + + if (ic->scan->version) { + /* It's a regular file, so truncate it to the last known + i_size, if necessary */ + D1(printk(KERN_DEBUG "jffs2_build_inode_pass1 truncating fraglist to 0x%08x\n", ic->scan->isize)); + jffs2_truncate_fraglist(c, &fraglist, ic->scan->isize); + } + /* OK. Now clear up */ if (metadata) { jffs2_free_full_dnode(metadata->fn); @@ -201,6 +215,10 @@ if (child_ic->nlink++ && fd->type == DT_DIR) { printk(KERN_NOTICE "Child dir \"%s\" (ino #%u) of dir ino #%u appears to be a hard link\n", fd->name, fd->ino, ic->ino); + if (fd->ino == 1 && ic->ino == 1) { + printk(KERN_NOTICE "This is mostly harmless, and probably caused by creating a JFFS2 image\n"); + printk(KERN_NOTICE "using a buggy version of mkfs.jffs2. Use at least v1.17.\n"); + } /* What do we do about it? */ } D1(printk(KERN_DEBUG "Increased nlink for child \"%s\" (ino #%u)\n", fd->name, fd->ino)); @@ -251,7 +269,58 @@ } kfree(ic->scan); ic->scan = NULL; - // jffs2_del_ino_cache(c, ic); - // jffs2_free_inode_cache(ic); + + /* + We don't delete the inocache from the hash list and free it yet. + The erase code will do that, when all the nodes are completely gone. 
+ */ + return ret; +} + +int jffs2_do_mount_fs(struct jffs2_sb_info *c) +{ + int i; + + c->free_size = c->flash_size; + c->nr_blocks = c->flash_size / c->sector_size; + c->blocks = kmalloc(sizeof(struct jffs2_eraseblock) * c->nr_blocks, GFP_KERNEL); + if (!c->blocks) + return -ENOMEM; + for (i=0; inr_blocks; i++) { + INIT_LIST_HEAD(&c->blocks[i].list); + c->blocks[i].offset = i * c->sector_size; + c->blocks[i].free_size = c->sector_size; + c->blocks[i].dirty_size = 0; + c->blocks[i].used_size = 0; + c->blocks[i].first_node = NULL; + c->blocks[i].last_node = NULL; + } + + init_MUTEX(&c->alloc_sem); + init_MUTEX(&c->erase_free_sem); + init_waitqueue_head(&c->erase_wait); + spin_lock_init(&c->erase_completion_lock); + spin_lock_init(&c->inocache_lock); + + INIT_LIST_HEAD(&c->clean_list); + INIT_LIST_HEAD(&c->dirty_list); + INIT_LIST_HEAD(&c->erasable_list); + INIT_LIST_HEAD(&c->erasing_list); + INIT_LIST_HEAD(&c->erase_pending_list); + INIT_LIST_HEAD(&c->erasable_pending_wbuf_list); + INIT_LIST_HEAD(&c->erase_complete_list); + INIT_LIST_HEAD(&c->free_list); + INIT_LIST_HEAD(&c->bad_list); + INIT_LIST_HEAD(&c->bad_used_list); + c->highest_ino = 1; + + if (jffs2_build_filesystem(c)) { + D1(printk(KERN_DEBUG "build_fs failed\n")); + jffs2_free_ino_caches(c); + jffs2_free_raw_node_refs(c); + kfree(c->blocks); + return -EIO; + } + return 0; } diff -Nru a/fs/jffs2/compr.c b/fs/jffs2/compr.c --- a/fs/jffs2/compr.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/compr.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by Arjan van de Ven * @@ -31,24 +31,34 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. 
* - * $Id: compr.c,v 1.17 2001/09/23 09:56:46 dwmw2 Exp $ + * $Id: compr.c,v 1.23 2002/01/25 01:49:26 dwmw2 Exp $ * */ +#ifdef __KERNEL__ #include #include -#include #include +#else +#define KERN_DEBUG +#define KERN_NOTICE +#define KERN_WARNING +#define printk printf +#include +#include +#include +#endif + #include -int zlib_compress(unsigned char *data_in, unsigned char *cpage_out, __u32 *sourcelen, __u32 *dstlen); -void zlib_decompress(unsigned char *data_in, unsigned char *cpage_out, __u32 srclen, __u32 destlen); -int rtime_compress(unsigned char *data_in, unsigned char *cpage_out, __u32 *sourcelen, __u32 *dstlen); -void rtime_decompress(unsigned char *data_in, unsigned char *cpage_out, __u32 srclen, __u32 destlen); -int rubinmips_compress(unsigned char *data_in, unsigned char *cpage_out, __u32 *sourcelen, __u32 *dstlen); -void rubinmips_decompress(unsigned char *data_in, unsigned char *cpage_out, __u32 srclen, __u32 destlen); -int dynrubin_compress(unsigned char *data_in, unsigned char *cpage_out, __u32 *sourcelen, __u32 *dstlen); -void dynrubin_decompress(unsigned char *data_in, unsigned char *cpage_out, __u32 srclen, __u32 destlen); +int jffs2_zlib_compress(unsigned char *data_in, unsigned char *cpage_out, uint32_t *sourcelen, uint32_t *dstlen); +void jffs2_zlib_decompress(unsigned char *data_in, unsigned char *cpage_out, uint32_t srclen, uint32_t destlen); +int jffs2_rtime_compress(unsigned char *data_in, unsigned char *cpage_out, uint32_t *sourcelen, uint32_t *dstlen); +void jffs2_rtime_decompress(unsigned char *data_in, unsigned char *cpage_out, uint32_t srclen, uint32_t destlen); +int jffs2_rubinmips_compress(unsigned char *data_in, unsigned char *cpage_out, uint32_t *sourcelen, uint32_t *dstlen); +void jffs2_rubinmips_decompress(unsigned char *data_in, unsigned char *cpage_out, uint32_t srclen, uint32_t destlen); +int jffs2_dynrubin_compress(unsigned char *data_in, unsigned char *cpage_out, uint32_t *sourcelen, uint32_t *dstlen); +void jffs2_dynrubin_decompress(unsigned char *data_in, unsigned char *cpage_out, uint32_t srclen, uint32_t destlen); /* jffs2_compress: @@ -69,28 +79,28 @@ * *datalen accordingly to show the amount of data which were compressed. */ unsigned char jffs2_compress(unsigned char *data_in, unsigned char *cpage_out, - __u32 *datalen, __u32 *cdatalen) + uint32_t *datalen, uint32_t *cdatalen) { int ret; - ret = zlib_compress(data_in, cpage_out, datalen, cdatalen); + ret = jffs2_zlib_compress(data_in, cpage_out, datalen, cdatalen); if (!ret) { return JFFS2_COMPR_ZLIB; } #if 0 /* Disabled 23/9/1. With zlib it hardly ever gets a look in */ - ret = dynrubin_compress(data_in, cpage_out, datalen, cdatalen); + ret = jffs2_dynrubin_compress(data_in, cpage_out, datalen, cdatalen); if (!ret) { return JFFS2_COMPR_DYNRUBIN; } #endif #if 0 /* Disabled 26/2/1. 
Obsoleted by dynrubin */ - ret = rubinmips_compress(data_in, cpage_out, datalen, cdatalen); + ret = jffs2_rubinmips_compress(data_in, cpage_out, datalen, cdatalen); if (!ret) { return JFFS2_COMPR_RUBINMIPS; } #endif /* rtime does manage to recompress already-compressed data */ - ret = rtime_compress(data_in, cpage_out, datalen, cdatalen); + ret = jffs2_rtime_compress(data_in, cpage_out, datalen, cdatalen); if (!ret) { return JFFS2_COMPR_RTIME; } @@ -108,7 +118,7 @@ int jffs2_decompress(unsigned char comprtype, unsigned char *cdata_in, - unsigned char *data_out, __u32 cdatalen, __u32 datalen) + unsigned char *data_out, uint32_t cdatalen, uint32_t datalen) { switch (comprtype) { case JFFS2_COMPR_NONE: @@ -121,23 +131,23 @@ break; case JFFS2_COMPR_ZLIB: - zlib_decompress(cdata_in, data_out, cdatalen, datalen); + jffs2_zlib_decompress(cdata_in, data_out, cdatalen, datalen); break; case JFFS2_COMPR_RTIME: - rtime_decompress(cdata_in, data_out, cdatalen, datalen); + jffs2_rtime_decompress(cdata_in, data_out, cdatalen, datalen); break; case JFFS2_COMPR_RUBINMIPS: #if 0 /* Disabled 23/9/1 */ - rubinmips_decompress(cdata_in, data_out, cdatalen, datalen); + jffs2_rubinmips_decompress(cdata_in, data_out, cdatalen, datalen); #else printk(KERN_WARNING "JFFS2: Rubinmips compression encountered but support not compiled in!\n"); #endif break; case JFFS2_COMPR_DYNRUBIN: #if 1 /* Phase this one out */ - dynrubin_decompress(cdata_in, data_out, cdatalen, datalen); + jffs2_dynrubin_decompress(cdata_in, data_out, cdatalen, datalen); #else printk(KERN_WARNING "JFFS2: Dynrubin compression encountered but support not compiled in!\n"); #endif diff -Nru a/fs/jffs2/compr_rtime.c b/fs/jffs2/compr_rtime.c --- a/fs/jffs2/compr_rtime.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/compr_rtime.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by Arjan van de Ven * @@ -31,7 +31,7 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: compr_rtime.c,v 1.5 2001/03/15 15:38:23 dwmw2 Exp $ + * $Id: compr_rtime.c,v 1.8 2002/01/25 01:49:26 dwmw2 Exp $ * * * Very simple lz77-ish encoder. @@ -51,8 +51,8 @@ #include /* _compress returns the compressed size, -1 if bigger */ -int rtime_compress(unsigned char *data_in, unsigned char *cpage_out, - __u32 *sourcelen, __u32 *dstlen) +int jffs2_rtime_compress(unsigned char *data_in, unsigned char *cpage_out, + uint32_t *sourcelen, uint32_t *dstlen) { int positions[256]; int outpos = 0; @@ -91,8 +91,8 @@ } -void rtime_decompress(unsigned char *data_in, unsigned char *cpage_out, - __u32 srclen, __u32 destlen) +void jffs2_rtime_decompress(unsigned char *data_in, unsigned char *cpage_out, + uint32_t srclen, uint32_t destlen) { int positions[256]; int outpos = 0; diff -Nru a/fs/jffs2/compr_rubin.c b/fs/jffs2/compr_rubin.c --- a/fs/jffs2/compr_rubin.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/compr_rubin.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by Arjan van de Ven * @@ -31,7 +31,7 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. 
* - * $Id: compr_rubin.c,v 1.13 2001/09/23 10:06:05 rmk Exp $ + * $Id: compr_rubin.c,v 1.16 2002/01/25 01:49:26 dwmw2 Exp $ * */ @@ -43,7 +43,7 @@ -void init_rubin(struct rubin_state *rs, int div, int *bits) +static void init_rubin(struct rubin_state *rs, int div, int *bits) { int c; @@ -56,7 +56,7 @@ } -int encode(struct rubin_state *rs, long A, long B, int symbol) +static int encode(struct rubin_state *rs, long A, long B, int symbol) { long i0, i1; @@ -91,7 +91,7 @@ } -void end_rubin(struct rubin_state *rs) +static void end_rubin(struct rubin_state *rs) { int i; @@ -104,7 +104,7 @@ } -void init_decode(struct rubin_state *rs, int div, int *bits) +static void init_decode(struct rubin_state *rs, int div, int *bits) { init_rubin(rs, div, bits); @@ -151,7 +151,7 @@ rs->rec_q = rec_q; } -int decode(struct rubin_state *rs, long A, long B) +static int decode(struct rubin_state *rs, long A, long B) { unsigned long p = rs->p, q = rs->q; long i0, threshold; @@ -212,8 +212,8 @@ -int rubin_do_compress(int bit_divider, int *bits, unsigned char *data_in, - unsigned char *cpage_out, __u32 *sourcelen, __u32 *dstlen) +static int rubin_do_compress(int bit_divider, int *bits, unsigned char *data_in, + unsigned char *cpage_out, uint32_t *sourcelen, uint32_t *dstlen) { int outpos = 0; int pos=0; @@ -246,20 +246,20 @@ } #if 0 /* _compress returns the compressed size, -1 if bigger */ -int rubinmips_compress(unsigned char *data_in, unsigned char *cpage_out, - __u32 *sourcelen, __u32 *dstlen) +int jffs2_rubinmips_compress(unsigned char *data_in, unsigned char *cpage_out, + uint32_t *sourcelen, uint32_t *dstlen) { return rubin_do_compress(BIT_DIVIDER_MIPS, bits_mips, data_in, cpage_out, sourcelen, dstlen); } #endif -int dynrubin_compress(unsigned char *data_in, unsigned char *cpage_out, - __u32 *sourcelen, __u32 *dstlen) +int jffs2_dynrubin_compress(unsigned char *data_in, unsigned char *cpage_out, + uint32_t *sourcelen, uint32_t *dstlen) { int bits[8]; unsigned char histo[256]; int i; int ret; - __u32 mysrclen, mydstlen; + uint32_t mysrclen, mydstlen; mysrclen = *sourcelen; mydstlen = *dstlen - 8; @@ -315,8 +315,8 @@ return 0; } -void rubin_do_decompress(int bit_divider, int *bits, unsigned char *cdata_in, - unsigned char *page_out, __u32 srclen, __u32 destlen) +static void rubin_do_decompress(int bit_divider, int *bits, unsigned char *cdata_in, + unsigned char *page_out, uint32_t srclen, uint32_t destlen) { int outpos = 0; struct rubin_state rs; @@ -330,14 +330,14 @@ } -void rubinmips_decompress(unsigned char *data_in, unsigned char *cpage_out, - __u32 sourcelen, __u32 dstlen) +void jffs2_rubinmips_decompress(unsigned char *data_in, unsigned char *cpage_out, + uint32_t sourcelen, uint32_t dstlen) { rubin_do_decompress(BIT_DIVIDER_MIPS, bits_mips, data_in, cpage_out, sourcelen, dstlen); } -void dynrubin_decompress(unsigned char *data_in, unsigned char *cpage_out, - __u32 sourcelen, __u32 dstlen) +void jffs2_dynrubin_decompress(unsigned char *data_in, unsigned char *cpage_out, + uint32_t sourcelen, uint32_t dstlen) { int bits[8]; int c; diff -Nru a/fs/jffs2/compr_rubin.h b/fs/jffs2/compr_rubin.h --- a/fs/jffs2/compr_rubin.h Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/compr_rubin.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* Rubin encoder/decoder header */ /* work started at : aug 3, 1994 */ /* last modification : aug 15, 1994 */ -/* $Id: compr_rubin.h,v 1.5 2001/02/26 13:50:01 dwmw2 Exp $ */ +/* $Id: compr_rubin.h,v 1.6 2002/01/25 01:49:26 dwmw2 Exp $ */ #include "pushpull.h" @@ -19,10 +19,3 @@ int bit_divider; int 
bits[8]; }; - - -void init_rubin (struct rubin_state *rs, int div, int *bits); -int encode (struct rubin_state *, long, long, int); -void end_rubin (struct rubin_state *); -void init_decode (struct rubin_state *, int div, int *bits); -int decode (struct rubin_state *, long, long); diff -Nru a/fs/jffs2/compr_zlib.c b/fs/jffs2/compr_zlib.c --- a/fs/jffs2/compr_zlib.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/compr_zlib.c Tue Mar 12 13:58:14 2002 @@ -85,7 +85,7 @@ vfree(inflate_workspace); } -int zlib_compress(unsigned char *data_in, unsigned char *cpage_out, +int jffs2_zlib_compress(unsigned char *data_in, unsigned char *cpage_out, uint32_t *sourcelen, uint32_t *dstlen) { z_stream strm; @@ -145,7 +145,7 @@ return 0; } -void zlib_decompress(unsigned char *data_in, unsigned char *cpage_out, +void jffs2_zlib_decompress(unsigned char *data_in, unsigned char *cpage_out, uint32_t srclen, uint32_t destlen) { z_stream strm; diff -Nru a/fs/jffs2/comprtest.c b/fs/jffs2/comprtest.c --- a/fs/jffs2/comprtest.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/comprtest.c Tue Mar 12 13:58:14 2002 @@ -1,4 +1,4 @@ -/* $Id: comprtest.c,v 1.4 2001/02/21 14:03:20 dwmw2 Exp $ */ +/* $Id: comprtest.c,v 1.5 2002/01/03 15:20:44 dwmw2 Exp $ */ #include #include @@ -266,13 +266,13 @@ static unsigned char decomprbuf[TESTDATA_LEN]; int jffs2_decompress(unsigned char comprtype, unsigned char *cdata_in, - unsigned char *data_out, __u32 cdatalen, __u32 datalen); + unsigned char *data_out, uint32_t cdatalen, uint32_t datalen); unsigned char jffs2_compress(unsigned char *data_in, unsigned char *cpage_out, - __u32 *datalen, __u32 *cdatalen); + uint32_t *datalen, uint32_t *cdatalen); int init_module(void ) { unsigned char comprtype; - __u32 c, d; + uint32_t c, d; int ret; printk("Original data: %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x\n", diff -Nru a/fs/jffs2/dir.c b/fs/jffs2/dir.c --- a/fs/jffs2/dir.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/dir.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,12 +31,13 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. 
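The compr_rubin.c and compr_rubin.h hunks above are purely about linkage: the arithmetic-coder helpers (init_rubin, encode, decode and friends) become static and their declarations leave the header, so only the jffs2_-prefixed entry points stay visible to the rest of the tree. A small sketch of the same pattern, with hypothetical names:

/* codec.h: the public surface is just the prefixed entry point. */
#ifndef CODEC_H
#define CODEC_H
#include <stdint.h>
int my_codec_compress(unsigned char *in, unsigned char *out,
                      uint32_t *srclen, uint32_t *dstlen);
#endif

/* codec.c: helpers are file-local, so they cannot clash with other
   generically named symbols linked into the same image. */
static void init_state(void)
{
	/* set up coder state here */
}

static int encode_byte(unsigned char c)
{
	return c;	/* placeholder for the real coder */
}

int my_codec_compress(unsigned char *in, unsigned char *out,
                      uint32_t *srclen, uint32_t *dstlen)
{
	uint32_t i;

	init_state();
	for (i = 0; i < *srclen && i < *dstlen; i++)
		out[i] = (unsigned char)encode_byte(in[i]);
	*dstlen = i;
	return 0;	/* 0 means success, matching the JFFS2 convention */
}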
* - * $Id: dir.c,v 1.45.2.5 2002/02/23 14:31:09 dwmw2 Exp $ + * $Id: dir.c,v 1.68 2002/03/11 12:36:59 dwmw2 Exp $ * */ #include #include +#include #include #include #include /* For completion */ @@ -44,7 +45,6 @@ #include #include #include -#include #include "nodelist.h" static int jffs2_readdir (struct file *, void *, filldir_t); @@ -65,7 +65,7 @@ read: generic_read_dir, readdir: jffs2_readdir, ioctl: jffs2_ioctl, - fsync: jffs2_null_fsync + fsync: jffs2_fsync }; @@ -95,12 +95,11 @@ struct jffs2_inode_info *dir_f; struct jffs2_sb_info *c; struct jffs2_full_dirent *fd = NULL, *fd_list; - __u32 ino = 0; + uint32_t ino = 0; struct inode *inode = NULL; D1(printk(KERN_DEBUG "jffs2_lookup()\n")); - lock_kernel(); dir_f = JFFS2_INODE_INFO(dir_i); c = JFFS2_SB_INFO(dir_i->i_sb); @@ -121,12 +120,10 @@ if (ino) { inode = iget(dir_i->i_sb, ino); if (!inode) { - unlock_kernel(); printk(KERN_WARNING "iget() failed for ino #%u\n", ino); return (ERR_PTR(-EIO)); } } - unlock_kernel(); d_add(target, inode); @@ -158,8 +155,9 @@ offset++; } if (offset == 1) { - D1(printk(KERN_DEBUG "Dirent 1: \"..\", ino #%lu\n", parent_ino(filp->f_dentry))); - if (filldir(dirent, "..", 2, 1, parent_ino(filp->f_dentry), DT_DIR) < 0) + unsigned long pino = parent_ino(filp->f_dentry); + D1(printk(KERN_DEBUG "Dirent 1: \"..\", ino #%lu\n", pino)); + if (filldir(dirent, "..", 2, 1, pino, DT_DIR) < 0) goto out; offset++; } @@ -193,50 +191,28 @@ /***********************************************************************/ + static int jffs2_create(struct inode *dir_i, struct dentry *dentry, int mode) { + struct jffs2_raw_inode *ri; struct jffs2_inode_info *f, *dir_f; struct jffs2_sb_info *c; struct inode *inode; - struct jffs2_raw_inode *ri; - struct jffs2_raw_dirent *rd; - struct jffs2_full_dnode *fn; - struct jffs2_full_dirent *fd; - int namelen; - __u32 alloclen, phys_ofs; - __u32 writtenlen; int ret; - lock_kernel(); ri = jffs2_alloc_raw_inode(); - if (!ri) { - unlock_kernel(); + if (!ri) return -ENOMEM; - } c = JFFS2_SB_INFO(dir_i->i_sb); D1(printk(KERN_DEBUG "jffs2_create()\n")); - /* Try to reserve enough space for both node and dirent. - * Just the node will do for now, though - */ - namelen = dentry->d_name.len; - ret = jffs2_reserve_space(c, sizeof(*ri), &phys_ofs, &alloclen, ALLOC_NORMAL); - D1(printk(KERN_DEBUG "jffs2_create(): reserved 0x%x bytes\n", alloclen)); - if (ret) { - jffs2_free_raw_inode(ri); - unlock_kernel(); - return ret; - } - inode = jffs2_new_inode(dir_i, mode, ri); if (IS_ERR(inode)) { D1(printk(KERN_DEBUG "jffs2_new_inode() failed\n")); jffs2_free_raw_inode(ri); - jffs2_complete_reservation(c); - unlock_kernel(); return PTR_ERR(inode); } @@ -246,278 +222,71 @@ inode->i_mapping->nrpages = 0; f = JFFS2_INODE_INFO(inode); - - ri->data_crc = 0; - ri->node_crc = crc32(0, ri, sizeof(*ri)-8); - - fn = jffs2_write_dnode(inode, ri, NULL, 0, phys_ofs, &writtenlen); - D1(printk(KERN_DEBUG "jffs2_create created file with mode 0x%x\n", ri->mode)); - jffs2_free_raw_inode(ri); - - if (IS_ERR(fn)) { - D1(printk(KERN_DEBUG "jffs2_write_dnode() failed\n")); - /* Eeek. Wave bye bye */ - up(&f->sem); - jffs2_complete_reservation(c); - jffs2_clear_inode(inode); - unlock_kernel(); - return PTR_ERR(fn); - } - /* No data here. Only a metadata node, which will be - obsoleted by the first data write - */ - f->metadata = fn; - - /* Work out where to put the dirent node now. 
*/ - writtenlen = PAD(writtenlen); - phys_ofs += writtenlen; - alloclen -= writtenlen; - up(&f->sem); - - if (alloclen < sizeof(*rd)+namelen) { - /* Not enough space left in this chunk. Get some more */ - jffs2_complete_reservation(c); - ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &phys_ofs, &alloclen, ALLOC_NORMAL); - - if (ret) { - /* Eep. */ - D1(printk(KERN_DEBUG "jffs2_reserve_space() for dirent failed\n")); - jffs2_clear_inode(inode); - unlock_kernel(); - return ret; - } - } - - rd = jffs2_alloc_raw_dirent(); - if (!rd) { - /* Argh. Now we treat it like a normal delete */ - jffs2_complete_reservation(c); - jffs2_clear_inode(inode); - unlock_kernel(); - return -ENOMEM; - } - dir_f = JFFS2_INODE_INFO(dir_i); - down(&dir_f->sem); - - rd->magic = JFFS2_MAGIC_BITMASK; - rd->nodetype = JFFS2_NODETYPE_DIRENT; - rd->totlen = sizeof(*rd) + namelen; - rd->hdr_crc = crc32(0, rd, sizeof(struct jffs2_unknown_node)-4); - - rd->pino = dir_i->i_ino; - rd->version = ++dir_f->highest_version; - rd->ino = inode->i_ino; - rd->mctime = CURRENT_TIME; - rd->nsize = namelen; - rd->type = DT_REG; - rd->node_crc = crc32(0, rd, sizeof(*rd)-8); - rd->name_crc = crc32(0, dentry->d_name.name, namelen); - fd = jffs2_write_dirent(dir_i, rd, dentry->d_name.name, namelen, phys_ofs, &writtenlen); + ret = jffs2_do_create(c, dir_f, f, ri, + dentry->d_name.name, dentry->d_name.len); - jffs2_complete_reservation(c); - - if (IS_ERR(fd)) { - /* dirent failed to write. Delete the inode normally - as if it were the final unlink() */ - jffs2_free_raw_dirent(rd); - up(&dir_f->sem); + if (ret) { jffs2_clear_inode(inode); - unlock_kernel(); - return PTR_ERR(fd); + make_bad_inode(inode); + iput(inode); + jffs2_free_raw_inode(ri); + return ret; } - dir_i->i_mtime = dir_i->i_ctime = rd->mctime; - - jffs2_free_raw_dirent(rd); - - /* Link the fd into the inode's list, obsoleting an old - one if necessary. */ - jffs2_add_fd_to_list(c, fd, &dir_f->dents); - up(&dir_f->sem); + dir_i->i_mtime = dir_i->i_ctime = ri->ctime; + jffs2_free_raw_inode(ri); d_instantiate(dentry, inode); D1(printk(KERN_DEBUG "jffs2_create: Created ino #%lu with mode %o, nlink %d(%d). 
nrpages %ld\n", inode->i_ino, inode->i_mode, inode->i_nlink, f->inocache->nlink, inode->i_mapping->nrpages)); - unlock_kernel(); return 0; } /***********************************************************************/ -static int jffs2_do_unlink(struct inode *dir_i, struct dentry *dentry, int rename) -{ - struct jffs2_inode_info *dir_f, *f; - struct jffs2_sb_info *c; - struct jffs2_raw_dirent *rd; - struct jffs2_full_dirent *fd; - __u32 alloclen, phys_ofs; - int ret; - - c = JFFS2_SB_INFO(dir_i->i_sb); - - rd = jffs2_alloc_raw_dirent(); - if (!rd) - return -ENOMEM; - - ret = jffs2_reserve_space(c, sizeof(*rd)+dentry->d_name.len, &phys_ofs, &alloclen, ALLOC_DELETION); - if (ret) { - jffs2_free_raw_dirent(rd); - return ret; - } - - dir_f = JFFS2_INODE_INFO(dir_i); - down(&dir_f->sem); - - /* Build a deletion node */ - rd->magic = JFFS2_MAGIC_BITMASK; - rd->nodetype = JFFS2_NODETYPE_DIRENT; - rd->totlen = sizeof(*rd) + dentry->d_name.len; - rd->hdr_crc = crc32(0, rd, sizeof(struct jffs2_unknown_node)-4); - - rd->pino = dir_i->i_ino; - rd->version = ++dir_f->highest_version; - rd->ino = 0; - rd->mctime = CURRENT_TIME; - rd->nsize = dentry->d_name.len; - rd->type = DT_UNKNOWN; - rd->node_crc = crc32(0, rd, sizeof(*rd)-8); - rd->name_crc = crc32(0, dentry->d_name.name, dentry->d_name.len); - - fd = jffs2_write_dirent(dir_i, rd, dentry->d_name.name, dentry->d_name.len, phys_ofs, NULL); - - jffs2_complete_reservation(c); - jffs2_free_raw_dirent(rd); - - if (IS_ERR(fd)) { - up(&dir_f->sem); - return PTR_ERR(fd); - } - - /* File it. This will mark the old one obsolete. */ - jffs2_add_fd_to_list(c, fd, &dir_f->dents); - up(&dir_f->sem); - - if (!rename) { - f = JFFS2_INODE_INFO(dentry->d_inode); - down(&f->sem); - - while (f->dents) { - /* There can be only deleted ones */ - fd = f->dents; - - f->dents = fd->next; - - if (fd->ino) { - printk(KERN_WARNING "Deleting inode #%u with active dentry \"%s\"->ino #%u\n", - f->inocache->ino, fd->name, fd->ino); - } else { - D1(printk(KERN_DEBUG "Removing deletion dirent for \"%s\" from dir ino #%u\n", fd->name, f->inocache->ino)); - } - jffs2_mark_node_obsolete(c, fd->raw); - jffs2_free_full_dirent(fd); - } - - f->inocache->nlink--; - dentry->d_inode->i_nlink--; - up(&f->sem); - } - - return 0; -} static int jffs2_unlink(struct inode *dir_i, struct dentry *dentry) { - int res; - lock_kernel(); - res = jffs2_do_unlink(dir_i, dentry, 0); - unlock_kernel(); - return res; -} -/***********************************************************************/ - -static int jffs2_do_link (struct dentry *old_dentry, struct inode *dir_i, struct dentry *dentry, int rename) -{ - struct jffs2_inode_info *dir_f, *f; - struct jffs2_sb_info *c; - struct jffs2_raw_dirent *rd; - struct jffs2_full_dirent *fd; - __u32 alloclen, phys_ofs; + struct jffs2_sb_info *c = JFFS2_SB_INFO(dir_i->i_sb); + struct jffs2_inode_info *dir_f = JFFS2_INODE_INFO(dir_i); + struct jffs2_inode_info *dead_f = JFFS2_INODE_INFO(dentry->d_inode); int ret; - c = JFFS2_SB_INFO(dir_i->i_sb); - - rd = jffs2_alloc_raw_dirent(); - if (!rd) - return -ENOMEM; - - ret = jffs2_reserve_space(c, sizeof(*rd)+dentry->d_name.len, &phys_ofs, &alloclen, ALLOC_NORMAL); - if (ret) { - jffs2_free_raw_dirent(rd); - return ret; - } - - dir_f = JFFS2_INODE_INFO(dir_i); - down(&dir_f->sem); - - /* Build a deletion node */ - rd->magic = JFFS2_MAGIC_BITMASK; - rd->nodetype = JFFS2_NODETYPE_DIRENT; - rd->totlen = sizeof(*rd) + dentry->d_name.len; - rd->hdr_crc = crc32(0, rd, sizeof(struct jffs2_unknown_node)-4); - - rd->pino = dir_i->i_ino; - 
rd->version = ++dir_f->highest_version; - rd->ino = old_dentry->d_inode->i_ino; - rd->mctime = CURRENT_TIME; - rd->nsize = dentry->d_name.len; - - /* XXX: This is ugly. */ - rd->type = (old_dentry->d_inode->i_mode & S_IFMT) >> 12; - if (!rd->type) rd->type = DT_REG; - - rd->node_crc = crc32(0, rd, sizeof(*rd)-8); - rd->name_crc = crc32(0, dentry->d_name.name, dentry->d_name.len); - - fd = jffs2_write_dirent(dir_i, rd, dentry->d_name.name, dentry->d_name.len, phys_ofs, NULL); - - jffs2_complete_reservation(c); - jffs2_free_raw_dirent(rd); - - if (IS_ERR(fd)) { - up(&dir_f->sem); - return PTR_ERR(fd); - } - - /* File it. This will mark the old one obsolete. */ - jffs2_add_fd_to_list(c, fd, &dir_f->dents); - up(&dir_f->sem); - - if (!rename) { - f = JFFS2_INODE_INFO(old_dentry->d_inode); - down(&f->sem); - old_dentry->d_inode->i_nlink = ++f->inocache->nlink; - up(&f->sem); - } - return 0; + ret = jffs2_do_unlink(c, dir_f, dentry->d_name.name, + dentry->d_name.len, dead_f); + dentry->d_inode->i_nlink = dead_f->inocache->nlink; + return ret; } +/***********************************************************************/ + static int jffs2_link (struct dentry *old_dentry, struct inode *dir_i, struct dentry *dentry) { + struct jffs2_sb_info *c = JFFS2_SB_INFO(old_dentry->d_inode->i_sb); + struct jffs2_inode_info *f = JFFS2_INODE_INFO(old_dentry->d_inode); + struct jffs2_inode_info *dir_f = JFFS2_INODE_INFO(dir_i); int ret; + uint8_t type; if (S_ISDIR(old_dentry->d_inode->i_mode)) return -EPERM; - lock_kernel(); - ret = jffs2_do_link(old_dentry, dir_i, dentry, 0); + /* XXX: This is ugly */ + type = (old_dentry->d_inode->i_mode & S_IFMT) >> 12; + if (!type) type = DT_REG; + + ret = jffs2_do_link(c, dir_f, f->inocache->ino, type, dentry->d_name.name, dentry->d_name.len); + if (!ret) { + down(&f->sem); + old_dentry->d_inode->i_nlink = ++f->inocache->nlink; + up(&f->sem); d_instantiate(dentry, old_dentry->d_inode); atomic_inc(&old_dentry->d_inode->i_count); } - unlock_kernel(); return ret; } @@ -533,8 +302,8 @@ struct jffs2_full_dnode *fn; struct jffs2_full_dirent *fd; int namelen; - __u32 alloclen, phys_ofs; - __u32 writtenlen; + uint32_t alloclen, phys_ofs; + uint32_t writtenlen; int ret; /* FIXME: If you care. We'd need to use frags for the target @@ -542,7 +311,6 @@ if (strlen(target) > 254) return -EINVAL; - lock_kernel(); ri = jffs2_alloc_raw_inode(); if (!ri) @@ -558,7 +326,6 @@ if (ret) { jffs2_free_raw_inode(ri); - unlock_kernel(); return ret; } @@ -567,7 +334,6 @@ if (IS_ERR(inode)) { jffs2_free_raw_inode(ri); jffs2_complete_reservation(c); - unlock_kernel(); return PTR_ERR(inode); } @@ -583,7 +349,7 @@ ri->data_crc = crc32(0, target, strlen(target)); ri->node_crc = crc32(0, ri, sizeof(*ri)-8); - fn = jffs2_write_dnode(inode, ri, target, strlen(target), phys_ofs, &writtenlen); + fn = jffs2_write_dnode(c, f, ri, target, strlen(target), phys_ofs, &writtenlen); jffs2_free_raw_inode(ri); @@ -592,7 +358,6 @@ up(&f->sem); jffs2_complete_reservation(c); jffs2_clear_inode(inode); - unlock_kernel(); return PTR_ERR(fn); } /* No data here. Only a metadata node, which will be @@ -613,7 +378,6 @@ if (ret) { /* Eep. */ jffs2_clear_inode(inode); - unlock_kernel(); return ret; } } @@ -623,7 +387,6 @@ /* Argh. 
Now we treat it like a normal delete */ jffs2_complete_reservation(c); jffs2_clear_inode(inode); - unlock_kernel(); return -ENOMEM; } @@ -644,19 +407,18 @@ rd->node_crc = crc32(0, rd, sizeof(*rd)-8); rd->name_crc = crc32(0, dentry->d_name.name, namelen); - fd = jffs2_write_dirent(dir_i, rd, dentry->d_name.name, namelen, phys_ofs, &writtenlen); - - jffs2_complete_reservation(c); - + fd = jffs2_write_dirent(c, dir_f, rd, dentry->d_name.name, namelen, phys_ofs, &writtenlen); + if (IS_ERR(fd)) { /* dirent failed to write. Delete the inode normally as if it were the final unlink() */ + jffs2_complete_reservation(c); jffs2_free_raw_dirent(rd); up(&dir_f->sem); jffs2_clear_inode(inode); - unlock_kernel(); return PTR_ERR(fd); } + dir_i->i_mtime = dir_i->i_ctime = rd->mctime; jffs2_free_raw_dirent(rd); @@ -664,10 +426,11 @@ /* Link the fd into the inode's list, obsoleting an old one if necessary. */ jffs2_add_fd_to_list(c, fd, &dir_f->dents); + up(&dir_f->sem); + jffs2_complete_reservation(c); d_instantiate(dentry, inode); - unlock_kernel(); return 0; } @@ -682,18 +445,15 @@ struct jffs2_full_dnode *fn; struct jffs2_full_dirent *fd; int namelen; - __u32 alloclen, phys_ofs; - __u32 writtenlen; + uint32_t alloclen, phys_ofs; + uint32_t writtenlen; int ret; mode |= S_IFDIR; - lock_kernel(); ri = jffs2_alloc_raw_inode(); - if (!ri) { - unlock_kernel(); + if (!ri) return -ENOMEM; - } c = JFFS2_SB_INFO(dir_i->i_sb); @@ -705,7 +465,6 @@ if (ret) { jffs2_free_raw_inode(ri); - unlock_kernel(); return ret; } @@ -714,19 +473,20 @@ if (IS_ERR(inode)) { jffs2_free_raw_inode(ri); jffs2_complete_reservation(c); - unlock_kernel(); return PTR_ERR(inode); } inode->i_op = &jffs2_dir_inode_operations; inode->i_fop = &jffs2_dir_operations; + /* Directories get nlink 2 at start */ + inode->i_nlink = 2; f = JFFS2_INODE_INFO(inode); ri->data_crc = 0; ri->node_crc = crc32(0, ri, sizeof(*ri)-8); - fn = jffs2_write_dnode(inode, ri, NULL, 0, phys_ofs, &writtenlen); + fn = jffs2_write_dnode(c, f, ri, NULL, 0, phys_ofs, &writtenlen); jffs2_free_raw_inode(ri); @@ -735,7 +495,6 @@ up(&f->sem); jffs2_complete_reservation(c); jffs2_clear_inode(inode); - unlock_kernel(); return PTR_ERR(fn); } /* No data here. Only a metadata node, which will be @@ -756,7 +515,6 @@ if (ret) { /* Eep. */ jffs2_clear_inode(inode); - unlock_kernel(); return ret; } } @@ -766,7 +524,6 @@ /* Argh. Now we treat it like a normal delete */ jffs2_complete_reservation(c); jffs2_clear_inode(inode); - unlock_kernel(); return -ENOMEM; } @@ -787,31 +544,31 @@ rd->node_crc = crc32(0, rd, sizeof(*rd)-8); rd->name_crc = crc32(0, dentry->d_name.name, namelen); - fd = jffs2_write_dirent(dir_i, rd, dentry->d_name.name, namelen, phys_ofs, &writtenlen); - - jffs2_complete_reservation(c); + fd = jffs2_write_dirent(c, dir_f, rd, dentry->d_name.name, namelen, phys_ofs, &writtenlen); if (IS_ERR(fd)) { /* dirent failed to write. Delete the inode normally as if it were the final unlink() */ + jffs2_complete_reservation(c); jffs2_free_raw_dirent(rd); up(&dir_f->sem); jffs2_clear_inode(inode); - unlock_kernel(); return PTR_ERR(fd); } dir_i->i_mtime = dir_i->i_ctime = rd->mctime; + dir_i->i_nlink++; jffs2_free_raw_dirent(rd); /* Link the fd into the inode's list, obsoleting an old one if necessary. 
*/ jffs2_add_fd_to_list(c, fd, &dir_f->dents); + up(&dir_f->sem); + jffs2_complete_reservation(c); d_instantiate(dentry, inode); - unlock_kernel(); return 0; } @@ -819,16 +576,16 @@ { struct jffs2_inode_info *f = JFFS2_INODE_INFO(dentry->d_inode); struct jffs2_full_dirent *fd; + int ret; - lock_kernel(); for (fd = f->dents ; fd; fd = fd->next) { - if (fd->ino) { - unlock_kernel(); + if (fd->ino) return -ENOTEMPTY; - } } - unlock_kernel(); - return jffs2_unlink(dir_i, dentry); + ret = jffs2_unlink(dir_i, dentry); + if (!ret) + dir_i->i_nlink--; + return ret; } static int jffs2_mknod (struct inode *dir_i, struct dentry *dentry, int mode, int rdev) @@ -843,16 +600,13 @@ int namelen; unsigned short dev; int devlen = 0; - __u32 alloclen, phys_ofs; - __u32 writtenlen; + uint32_t alloclen, phys_ofs; + uint32_t writtenlen; int ret; - lock_kernel(); ri = jffs2_alloc_raw_inode(); - if (!ri) { - unlock_kernel(); + if (!ri) return -ENOMEM; - } c = JFFS2_SB_INFO(dir_i->i_sb); @@ -869,7 +623,6 @@ if (ret) { jffs2_free_raw_inode(ri); - unlock_kernel(); return ret; } @@ -878,7 +631,6 @@ if (IS_ERR(inode)) { jffs2_free_raw_inode(ri); jffs2_complete_reservation(c); - unlock_kernel(); return PTR_ERR(inode); } inode->i_op = &jffs2_file_inode_operations; @@ -894,7 +646,7 @@ ri->data_crc = crc32(0, &dev, devlen); ri->node_crc = crc32(0, ri, sizeof(*ri)-8); - fn = jffs2_write_dnode(inode, ri, (char *)&dev, devlen, phys_ofs, &writtenlen); + fn = jffs2_write_dnode(c, f, ri, (char *)&dev, devlen, phys_ofs, &writtenlen); jffs2_free_raw_inode(ri); @@ -903,7 +655,6 @@ up(&f->sem); jffs2_complete_reservation(c); jffs2_clear_inode(inode); - unlock_kernel(); return PTR_ERR(fn); } /* No data here. Only a metadata node, which will be @@ -924,7 +675,6 @@ if (ret) { /* Eep. */ jffs2_clear_inode(inode); - unlock_kernel(); return ret; } } @@ -934,7 +684,6 @@ /* Argh. Now we treat it like a normal delete */ jffs2_complete_reservation(c); jffs2_clear_inode(inode); - unlock_kernel(); return -ENOMEM; } @@ -958,17 +707,15 @@ rd->node_crc = crc32(0, rd, sizeof(*rd)-8); rd->name_crc = crc32(0, dentry->d_name.name, namelen); - fd = jffs2_write_dirent(dir_i, rd, dentry->d_name.name, namelen, phys_ofs, &writtenlen); - - jffs2_complete_reservation(c); + fd = jffs2_write_dirent(c, dir_f, rd, dentry->d_name.name, namelen, phys_ofs, &writtenlen); if (IS_ERR(fd)) { /* dirent failed to write. Delete the inode normally as if it were the final unlink() */ + jffs2_complete_reservation(c); jffs2_free_raw_dirent(rd); up(&dir_f->sem); jffs2_clear_inode(inode); - unlock_kernel(); return PTR_ERR(fd); } @@ -979,8 +726,9 @@ /* Link the fd into the inode's list, obsoleting an old one if necessary. */ jffs2_add_fd_to_list(c, fd, &dir_f->dents); + up(&dir_f->sem); - unlock_kernel(); + jffs2_complete_reservation(c); d_instantiate(dentry, inode); @@ -991,35 +739,54 @@ struct inode *new_dir_i, struct dentry *new_dentry) { int ret; + struct jffs2_sb_info *c = JFFS2_SB_INFO(old_dir_i->i_sb); + uint8_t type; + /* XXX: We probably ought to alloc enough space for both nodes at the same time. 
Writing the new link, then getting -ENOSPC, is quite bad :) */ /* Make a hard link */ - lock_kernel(); - ret = jffs2_do_link(old_dentry, new_dir_i, new_dentry, 1); - if (ret) { - unlock_kernel(); + + /* XXX: This is ugly */ + type = (old_dentry->d_inode->i_mode & S_IFMT) >> 12; + if (!type) type = DT_REG; + + ret = jffs2_do_link(c, JFFS2_INODE_INFO(new_dir_i), + old_dentry->d_inode->i_ino, type, + new_dentry->d_name.name, new_dentry->d_name.len); + + if (ret) return ret; - } + + if (S_ISDIR(old_dentry->d_inode->i_mode)) + new_dir_i->i_nlink++; /* Unlink the original */ - ret = jffs2_do_unlink(old_dir_i, old_dentry, 1); - + ret = jffs2_do_unlink(c, JFFS2_INODE_INFO(old_dir_i), + old_dentry->d_name.name, old_dentry->d_name.len, NULL); + + /* We don't touch inode->i_nlink */ + if (ret) { /* Oh shit. We really ought to make a single node which can do both atomically */ struct jffs2_inode_info *f = JFFS2_INODE_INFO(old_dentry->d_inode); down(&f->sem); - old_dentry->d_inode->i_nlink = f->inocache->nlink++; + old_dentry->d_inode->i_nlink++; + f->inocache->nlink++; up(&f->sem); - + printk(KERN_NOTICE "jffs2_rename(): Link succeeded, unlink failed (err %d). You now have a hard link\n", ret); /* Might as well let the VFS know */ d_instantiate(new_dentry, old_dentry->d_inode); atomic_inc(&old_dentry->d_inode->i_count); + return ret; } - unlock_kernel(); - return ret; + + if (S_ISDIR(old_dentry->d_inode->i_mode)) + old_dir_i->i_nlink--; + + return 0; } diff -Nru a/fs/jffs2/erase.c b/fs/jffs2/erase.c --- a/fs/jffs2/erase.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/erase.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,14 +31,15 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: erase.c,v 1.24 2001/12/06 16:38:38 dwmw2 Exp $ + * $Id: erase.c,v 1.35 2002/03/08 15:11:24 dwmw2 Exp $ * */ + #include #include #include -#include #include +#include #include #include "nodelist.h" @@ -48,12 +49,13 @@ }; static void jffs2_erase_callback(struct erase_info *); +static void jffs2_erase_succeeded(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); static void jffs2_free_all_node_refs(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); void jffs2_erase_block(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb) { - struct erase_info *instr; int ret; + struct erase_info *instr; instr = kmalloc(sizeof(struct erase_info) + sizeof(struct erase_priv_struct), GFP_KERNEL); if (!instr) { @@ -77,10 +79,16 @@ ((struct erase_priv_struct *)instr->priv)->jeb = jeb; ((struct erase_priv_struct *)instr->priv)->c = c; + /* NAND , read out the fail counter, if possible */ + if (!jffs2_can_mark_obsolete(c)) + jffs2_nand_read_failcnt(c,jeb); + ret = c->mtd->erase(c->mtd, instr); - if (!ret) { + if (!ret) return; - } + + kfree(instr); + if (ret == -ENOMEM || ret == -EAGAIN) { /* Erase failed immediately. Refile it on the list */ D1(printk(KERN_DEBUG "Erase at 0x%08x failed: %d. Refiling on erase_pending_list\n", jeb->offset, ret)); @@ -89,7 +97,6 @@ list_add(&jeb->list, &c->erase_pending_list); c->erasing_size -= c->sector_size; spin_unlock_bh(&c->erase_completion_lock); - kfree(instr); return; } @@ -97,6 +104,7 @@ printk(KERN_WARNING "Erase at 0x%08x failed immediately: -EROFS. 
Is the sector locked?\n", jeb->offset); else printk(KERN_WARNING "Erase at 0x%08x failed immediately: errno %d\n", jeb->offset, ret); + spin_lock_bh(&c->erase_completion_lock); list_del(&jeb->list); list_add(&jeb->list, &c->bad_list); @@ -105,13 +113,14 @@ c->erasing_size -= c->sector_size; spin_unlock_bh(&c->erase_completion_lock); wake_up(&c->erase_wait); - kfree(instr); } void jffs2_erase_pending_blocks(struct jffs2_sb_info *c) { struct jffs2_eraseblock *jeb; + down(&c->erase_free_sem); + spin_lock_bh(&c->erase_completion_lock); while (!list_empty(&c->erase_pending_list)) { @@ -130,35 +139,50 @@ spin_unlock_bh(&c->erase_completion_lock); jffs2_erase_block(c, jeb); + /* Be nice */ cond_resched(); + spin_lock_bh(&c->erase_completion_lock); } spin_unlock_bh(&c->erase_completion_lock); D1(printk(KERN_DEBUG "jffs2_erase_pending_blocks completed\n")); + + up(&c->erase_free_sem); } +static void jffs2_erase_succeeded(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb) +{ + D1(printk(KERN_DEBUG "Erase completed successfully at 0x%08x\n", jeb->offset)); + spin_lock(&c->erase_completion_lock); + list_del(&jeb->list); + list_add_tail(&jeb->list, &c->erase_complete_list); + spin_unlock(&c->erase_completion_lock); +} + + +static inline void jffs2_erase_failed(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb) +{ + spin_lock(&c->erase_completion_lock); + c->erasing_size -= c->sector_size; + c->bad_size += c->sector_size; + list_del(&jeb->list); + list_add(&jeb->list, &c->bad_list); + c->nr_erasing_blocks--; + spin_unlock(&c->erase_completion_lock); + wake_up(&c->erase_wait); +} + static void jffs2_erase_callback(struct erase_info *instr) { struct erase_priv_struct *priv = (void *)instr->priv; if(instr->state != MTD_ERASE_DONE) { printk(KERN_WARNING "Erase at 0x%08x finished, but state != MTD_ERASE_DONE. 
State is 0x%x instead.\n", instr->addr, instr->state); - spin_lock(&priv->c->erase_completion_lock); - priv->c->erasing_size -= priv->c->sector_size; - priv->c->bad_size += priv->c->sector_size; - list_del(&priv->jeb->list); - list_add(&priv->jeb->list, &priv->c->bad_list); - priv->c->nr_erasing_blocks--; - spin_unlock(&priv->c->erase_completion_lock); - wake_up(&priv->c->erase_wait); + jffs2_erase_failed(priv->c, priv->jeb); } else { - D1(printk(KERN_DEBUG "Erase completed successfully at 0x%08x\n", instr->addr)); - spin_lock(&priv->c->erase_completion_lock); - list_del(&priv->jeb->list); - list_add_tail(&priv->jeb->list, &priv->c->erase_complete_list); - spin_unlock(&priv->c->erase_completion_lock); + jffs2_erase_succeeded(priv->c, priv->jeb); } /* Make sure someone picks up the block off the erase_complete list */ OFNI_BS_2SFFJ(priv->c)->s_dirt = 1; @@ -263,17 +287,18 @@ void jffs2_mark_erased_blocks(struct jffs2_sb_info *c) { static struct jffs2_unknown_node marker = { - magic: JFFS2_MAGIC_BITMASK, - nodetype: JFFS2_NODETYPE_CLEANMARKER, - totlen: sizeof(struct jffs2_unknown_node) + magic: JFFS2_MAGIC_BITMASK, + nodetype: JFFS2_NODETYPE_CLEANMARKER, + totlen: sizeof(struct jffs2_unknown_node) }; struct jffs2_eraseblock *jeb; - struct jffs2_raw_node_ref *marker_ref; + struct jffs2_raw_node_ref *marker_ref = NULL; unsigned char *ebuf; - ssize_t retlen; + size_t retlen; int ret; - marker.hdr_crc = crc32(0, &marker, sizeof(struct jffs2_unknown_node)-4); + if (unlikely(!marker.hdr_crc)) + marker.hdr_crc = crc32(0, &marker, sizeof(struct jffs2_unknown_node)-4); spin_lock_bh(&c->erase_completion_lock); while (!list_empty(&c->erase_complete_list)) { @@ -281,27 +306,28 @@ list_del(&jeb->list); spin_unlock_bh(&c->erase_completion_lock); - marker_ref = jffs2_alloc_raw_node_ref(); - if (!marker_ref) { - printk(KERN_WARNING "Failed to allocate raw node ref for clean marker\n"); - /* Come back later */ - jffs2_erase_pending_trigger(c); - return; + if (!jffs2_cleanmarker_oob(c)) { + marker_ref = jffs2_alloc_raw_node_ref(); + if (!marker_ref) { + printk(KERN_WARNING "Failed to allocate raw node ref for clean marker\n"); + /* Come back later */ + jffs2_erase_pending_trigger(c); + return; + } } - ebuf = kmalloc(PAGE_SIZE, GFP_KERNEL); if (!ebuf) { printk(KERN_WARNING "Failed to allocate page buffer for verifying erase at 0x%08x. Assuming it worked\n", jeb->offset); } else { - __u32 ofs = jeb->offset; + uint32_t ofs = jeb->offset; D1(printk(KERN_DEBUG "Verifying erase at 0x%08x\n", jeb->offset)); while(ofs < jeb->offset + c->sector_size) { - __u32 readlen = min((__u32)PAGE_SIZE, jeb->offset + c->sector_size - ofs); + uint32_t readlen = min((uint32_t)PAGE_SIZE, jeb->offset + c->sector_size - ofs); int i; - ret = c->mtd->read(c->mtd, ofs, readlen, &retlen, ebuf); - if (ret < 0) { + ret = jffs2_flash_read(c, ofs, readlen, &retlen, ebuf); + if (ret) { printk(KERN_WARNING "Read of newly-erased block at 0x%08x failed: %d. 
Putting on bad_list\n", ofs, ret); goto bad; } @@ -315,7 +341,10 @@ if (datum + 1) { printk(KERN_WARNING "Newly-erased block contained word 0x%lx at offset 0x%08x\n", datum, ofs + i); bad: - jffs2_free_raw_node_ref(marker_ref); + if (!jffs2_cleanmarker_oob(c)) + jffs2_free_raw_node_ref(marker_ref); + else + jffs2_write_nand_badblock( c ,jeb ); kfree(ebuf); bad2: spin_lock_bh(&c->erase_completion_lock); @@ -336,28 +365,40 @@ /* Write the erase complete marker */ D1(printk(KERN_DEBUG "Writing erased marker to block at 0x%08x\n", jeb->offset)); - ret = c->mtd->write(c->mtd, jeb->offset, sizeof(marker), &retlen, (char *)&marker); - if (ret) { - printk(KERN_WARNING "Write clean marker to block at 0x%08x failed: %d\n", - jeb->offset, ret); - goto bad2; - } - if (retlen != sizeof(marker)) { - printk(KERN_WARNING "Short write to newly-erased block at 0x%08x: Wanted %d, got %d\n", - jeb->offset, sizeof(marker), retlen); - goto bad2; - } + if (jffs2_cleanmarker_oob(c)) { - marker_ref->next_in_ino = NULL; - marker_ref->next_phys = NULL; - marker_ref->flash_offset = jeb->offset; - marker_ref->totlen = PAD(sizeof(marker)); - - jeb->first_node = jeb->last_node = marker_ref; - - jeb->free_size = c->sector_size - marker_ref->totlen; - jeb->used_size = marker_ref->totlen; - jeb->dirty_size = 0; + if (jffs2_write_nand_cleanmarker(c, jeb)) + goto bad2; + + jeb->first_node = jeb->last_node = NULL; + + jeb->free_size = c->sector_size; + jeb->used_size = 0; + jeb->dirty_size = 0; + } else { + ret = jffs2_flash_write(c, jeb->offset, sizeof(marker), &retlen, (char *)&marker); + if (ret) { + printk(KERN_WARNING "Write clean marker to block at 0x%08x failed: %d\n", + jeb->offset, ret); + goto bad2; + } + if (retlen != sizeof(marker)) { + printk(KERN_WARNING "Short write to newly-erased block at 0x%08x: Wanted %d, got %d\n", + jeb->offset, sizeof(marker), retlen); + goto bad2; + } + + marker_ref->next_in_ino = NULL; + marker_ref->next_phys = NULL; + marker_ref->flash_offset = jeb->offset; + marker_ref->totlen = PAD(sizeof(marker)); + + jeb->first_node = jeb->last_node = marker_ref; + + jeb->free_size = c->sector_size - marker_ref->totlen; + jeb->used_size = marker_ref->totlen; + jeb->dirty_size = 0; + } spin_lock_bh(&c->erase_completion_lock); c->erasing_size -= c->sector_size; diff -Nru a/fs/jffs2/file.c b/fs/jffs2/file.c --- a/fs/jffs2/file.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/file.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,7 +31,7 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: file.c,v 1.58.2.1 2002/02/23 14:25:36 dwmw2 Exp $ + * $Id: file.c,v 1.70 2002/03/05 09:55:07 dwmw2 Exp $ * */ @@ -39,6 +39,7 @@ #include /* for min() */ #include #include +#include #include #include #include @@ -48,10 +49,27 @@ extern loff_t generic_file_llseek(struct file *file, loff_t offset, int origin) __attribute__((weak)); -int jffs2_null_fsync(struct file *filp, struct dentry *dentry, int datasync) +int jffs2_fsync(struct file *filp, struct dentry *dentry, int datasync) { - /* Move along. 
Nothing to see here */ - return 0; + struct inode *inode = dentry->d_inode; + struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); + struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb); + if (!c->wbuf || !c->wbuf_len) + return 0; + + /* flush write buffer and update c->nextblock */ + + /* FIXME NAND */ + /* At the moment we flush the buffer, to make sure + * that every thing is on the flash. + * maybe we have to think about it to find a smarter + * solution. + */ + down(&f->sem); + jffs2_flush_wbuf(c,2); + up(&f->sem); + + return 0; } struct file_operations jffs2_file_operations = @@ -62,7 +80,7 @@ write: generic_file_write, ioctl: jffs2_ioctl, mmap: generic_file_mmap, - fsync: jffs2_null_fsync + fsync: jffs2_fsync }; /* jffs2_file_inode_operations */ @@ -90,7 +108,7 @@ unsigned char *mdata = NULL; int mdatalen = 0; unsigned int ivalid; - __u32 phys_ofs, alloclen; + uint32_t phys_ofs, alloclen; int ret; D1(printk(KERN_DEBUG "jffs2_setattr(): ino #%lu\n", inode->i_ino)); ret = inode_change_ok(inode, iattr); @@ -132,12 +150,12 @@ ret = jffs2_reserve_space(c, sizeof(*ri) + mdatalen, &phys_ofs, &alloclen, ALLOC_NORMAL); if (ret) { jffs2_free_raw_inode(ri); - if (S_ISLNK(inode->i_mode)) + if (S_ISLNK(inode->i_mode & S_IFMT)) kfree(mdata); return ret; } down(&f->sem); - ivalid = iattr->ia_valid; + ivalid = iattr->ia_valid; ri->magic = JFFS2_MAGIC_BITMASK; ri->nodetype = JFFS2_NODETYPE_INODE; @@ -175,13 +193,12 @@ else ri->data_crc = 0; - new_metadata = jffs2_write_dnode(inode, ri, mdata, mdatalen, phys_ofs, NULL); + new_metadata = jffs2_write_dnode(c, f, ri, mdata, mdatalen, phys_ofs, NULL); if (S_ISLNK(inode->i_mode)) kfree(mdata); - - jffs2_complete_reservation(c); if (IS_ERR(new_metadata)) { + jffs2_complete_reservation(c); jffs2_free_raw_inode(ri); up(&f->sem); return PTR_ERR(new_metadata); @@ -214,7 +231,10 @@ jffs2_free_full_dnode(old_metadata); } jffs2_free_raw_inode(ri); + up(&f->sem); + jffs2_complete_reservation(c); + return 0; } @@ -222,85 +242,30 @@ { struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb); - struct jffs2_node_frag *frag = f->fraglist; - __u32 offset = pg->index << PAGE_CACHE_SHIFT; - __u32 end = offset + PAGE_CACHE_SIZE; unsigned char *pg_buf; int ret; - D1(printk(KERN_DEBUG "jffs2_do_readpage_nolock(): ino #%lu, page at offset 0x%x\n", inode->i_ino, offset)); + D1(printk(KERN_DEBUG "jffs2_do_readpage_nolock(): ino #%lu, page at offset 0x%lx\n", inode->i_ino, pg->index << PAGE_CACHE_SHIFT)); if (!PageLocked(pg)) PAGE_BUG(pg); - while(frag && frag->ofs + frag->size <= offset) { - // D1(printk(KERN_DEBUG "skipping frag %d-%d; before the region we care about\n", frag->ofs, frag->ofs + frag->size)); - frag = frag->next; - } - pg_buf = kmap(pg); + /* FIXME: Can kmap fail? */ - /* XXX FIXME: Where a single physical node actually shows up in two - frags, we read it twice. Don't do that. */ - /* Now we're pointing at the first frag which overlaps our page */ - while(offset < end) { - D2(printk(KERN_DEBUG "jffs2_readpage: offset %d, end %d\n", offset, end)); - if (!frag || frag->ofs > offset) { - __u32 holesize = end - offset; - if (frag) { - D1(printk(KERN_NOTICE "Eep. Hole in ino %ld fraglist. 
frag->ofs = 0x%08x, offset = 0x%08x\n", inode->i_ino, frag->ofs, offset)); - holesize = min(holesize, frag->ofs - offset); - D1(jffs2_print_frag_list(f)); - } - D1(printk(KERN_DEBUG "Filling non-frag hole from %d-%d\n", offset, offset+holesize)); - memset(pg_buf, 0, holesize); - pg_buf += holesize; - offset += holesize; - continue; - } else if (frag->ofs < offset && (offset & (PAGE_CACHE_SIZE-1)) != 0) { - D1(printk(KERN_NOTICE "Eep. Overlap in ino #%ld fraglist. frag->ofs = 0x%08x, offset = 0x%08x\n", - inode->i_ino, frag->ofs, offset)); - D1(jffs2_print_frag_list(f)); - memset(pg_buf, 0, end - offset); - ClearPageUptodate(pg); - SetPageError(pg); - kunmap(pg); - return -EIO; - } else if (!frag->node) { - __u32 holeend = min(end, frag->ofs + frag->size); - D1(printk(KERN_DEBUG "Filling frag hole from %d-%d (frag 0x%x 0x%x)\n", offset, holeend, frag->ofs, frag->ofs + frag->size)); - memset(pg_buf, 0, holeend - offset); - pg_buf += holeend - offset; - offset = holeend; - frag = frag->next; - continue; - } else { - __u32 readlen; - readlen = min(frag->size, end - offset); - D1(printk(KERN_DEBUG "Reading %d-%d from node at 0x%x\n", frag->ofs, frag->ofs+readlen, frag->node->raw->flash_offset & ~3)); - ret = jffs2_read_dnode(c, frag->node, pg_buf, frag->ofs - frag->node->ofs, readlen); - D2(printk(KERN_DEBUG "node read done\n")); - if (ret) { - D1(printk(KERN_DEBUG"jffs2_readpage error %d\n",ret)); - memset(pg_buf, 0, frag->size); - ClearPageUptodate(pg); - SetPageError(pg); - kunmap(pg); - return ret; - } - } - pg_buf += frag->size; - offset += frag->size; - frag = frag->next; - D2(printk(KERN_DEBUG "node read was OK. Looping\n")); - } - D2(printk(KERN_DEBUG "readpage finishing\n")); - SetPageUptodate(pg); - ClearPageError(pg); + ret = jffs2_read_inode_range(c, f, pg_buf, pg->index << PAGE_CACHE_SHIFT, PAGE_CACHE_SIZE); - flush_dcache_page(pg); + if (ret) { + ClearPageUptodate(pg); + SetPageError(pg); + } else { + SetPageUptodate(pg); + ClearPageError(pg); + } + flush_dcache_page(pg); kunmap(pg); + D1(printk(KERN_DEBUG "readpage finished\n")); return 0; } @@ -328,18 +293,18 @@ { struct inode *inode = filp->f_dentry->d_inode; struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); - __u32 pageofs = pg->index << PAGE_CACHE_SHIFT; + uint32_t pageofs = pg->index << PAGE_CACHE_SHIFT; int ret = 0; down(&f->sem); - D1(printk(KERN_DEBUG "jffs2_prepare_write() nrpages %ld\n", inode->i_mapping->nrpages)); + D1(printk(KERN_DEBUG "jffs2_prepare_write()\n")); if (pageofs > inode->i_size) { /* Make new hole frag from old EOF to new page */ struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb); struct jffs2_raw_inode ri; struct jffs2_full_dnode *fn; - __u32 phys_ofs, alloc_len; + uint32_t phys_ofs, alloc_len; D1(printk(KERN_DEBUG "Writing new hole frag 0x%x-0x%x between current EOF and new page\n", (unsigned int)inode->i_size, pageofs)); @@ -361,7 +326,7 @@ ri.mode = inode->i_mode; ri.uid = inode->i_uid; ri.gid = inode->i_gid; - ri.isize = max((__u32)inode->i_size, pageofs); + ri.isize = max((uint32_t)inode->i_size, pageofs); ri.atime = ri.ctime = ri.mtime = CURRENT_TIME; ri.offset = inode->i_size; ri.dsize = pageofs - inode->i_size; @@ -370,10 +335,11 @@ ri.node_crc = crc32(0, &ri, sizeof(ri)-8); ri.data_crc = 0; - fn = jffs2_write_dnode(inode, &ri, NULL, 0, phys_ofs, NULL); - jffs2_complete_reservation(c); + fn = jffs2_write_dnode(c, f, &ri, NULL, 0, phys_ofs, NULL); + if (IS_ERR(fn)) { ret = PTR_ERR(fn); + jffs2_complete_reservation(c); up(&f->sem); return ret; } @@ -387,9 +353,11 @@ D1(printk(KERN_DEBUG 
"Eep. add_full_dnode_to_inode() failed in prepare_write, returned %d\n", ret)); jffs2_mark_node_obsolete(c, fn->raw); jffs2_free_full_dnode(fn); + jffs2_complete_reservation(c); up(&f->sem); return ret; } + jffs2_complete_reservation(c); inode->i_size = pageofs; } @@ -397,7 +365,7 @@ /* Read in the page if it wasn't already present */ if (!Page_Uptodate(pg) && (start || end < PAGE_SIZE)) ret = jffs2_do_readpage_nolock(inode, pg); - D1(printk(KERN_DEBUG "end prepare_write(). nrpages %ld\n", inode->i_mapping->nrpages)); + D1(printk(KERN_DEBUG "end prepare_write()\n")); up(&f->sem); return ret; } @@ -410,119 +378,49 @@ struct inode *inode = filp->f_dentry->d_inode; struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb); - __u32 newsize = max_t(__u32, filp->f_dentry->d_inode->i_size, (pg->index << PAGE_CACHE_SHIFT) + end); - __u32 file_ofs = (pg->index << PAGE_CACHE_SHIFT); - __u32 writelen = min((__u32)PAGE_CACHE_SIZE, newsize - file_ofs); struct jffs2_raw_inode *ri; int ret = 0; - ssize_t writtenlen = 0; + uint32_t writtenlen = 0; - D1(printk(KERN_DEBUG "jffs2_commit_write(): ino #%lu, page at 0x%lx, range %d-%d, nrpages %ld\n", inode->i_ino, pg->index << PAGE_CACHE_SHIFT, start, end, filp->f_dentry->d_inode->i_mapping->nrpages)); + D1(printk(KERN_DEBUG "jffs2_commit_write(): ino #%lu, page at 0x%lx, range %d-%d\n", + inode->i_ino, pg->index << PAGE_CACHE_SHIFT, start, end)); ri = jffs2_alloc_raw_inode(); - if (!ri) - return -ENOMEM; - - while(writelen) { - struct jffs2_full_dnode *fn; - unsigned char *comprbuf = NULL; - unsigned char comprtype = JFFS2_COMPR_NONE; - __u32 phys_ofs, alloclen; - __u32 datalen, cdatalen; - - D2(printk(KERN_DEBUG "jffs2_commit_write() loop: 0x%x to write to 0x%x\n", writelen, file_ofs)); - - ret = jffs2_reserve_space(c, sizeof(*ri) + JFFS2_MIN_DATA_LEN, &phys_ofs, &alloclen, ALLOC_NORMAL); - if (ret) { - SetPageError(pg); - D1(printk(KERN_DEBUG "jffs2_reserve_space returned %d\n", ret)); - break; - } - down(&f->sem); - datalen = writelen; - cdatalen = min(alloclen - sizeof(*ri), writelen); - - comprbuf = kmalloc(cdatalen, GFP_KERNEL); - if (comprbuf) { - comprtype = jffs2_compress(page_address(pg)+ (file_ofs & (PAGE_CACHE_SIZE-1)), comprbuf, &datalen, &cdatalen); - } - if (comprtype == JFFS2_COMPR_NONE) { - /* Either compression failed, or the allocation of comprbuf failed */ - if (comprbuf) - kfree(comprbuf); - comprbuf = page_address(pg) + (file_ofs & (PAGE_CACHE_SIZE -1)); - datalen = cdatalen; - } - /* Now comprbuf points to the data to be written, be it compressed or not. - comprtype holds the compression type, and comprtype == JFFS2_COMPR_NONE means - that the comprbuf doesn't need to be kfree()d. 
- */ - - ri->magic = JFFS2_MAGIC_BITMASK; - ri->nodetype = JFFS2_NODETYPE_INODE; - ri->totlen = sizeof(*ri) + cdatalen; - ri->hdr_crc = crc32(0, ri, sizeof(struct jffs2_unknown_node)-4); - - ri->ino = inode->i_ino; - ri->version = ++f->highest_version; - ri->mode = inode->i_mode; - ri->uid = inode->i_uid; - ri->gid = inode->i_gid; - ri->isize = max((__u32)inode->i_size, file_ofs + datalen); - ri->atime = ri->ctime = ri->mtime = CURRENT_TIME; - ri->offset = file_ofs; - ri->csize = cdatalen; - ri->dsize = datalen; - ri->compr = comprtype; - ri->node_crc = crc32(0, ri, sizeof(*ri)-8); - ri->data_crc = crc32(0, comprbuf, cdatalen); - fn = jffs2_write_dnode(inode, ri, comprbuf, cdatalen, phys_ofs, NULL); + if (!ri) { + D1(printk(KERN_DEBUG "jffs2_commit_write(): Allocation of raw inode failed\n")); + return -ENOMEM; + } - jffs2_complete_reservation(c); + /* Set the fields that the generic jffs2_write_inode_range() code can't find */ + ri->ino = inode->i_ino; + ri->mode = inode->i_mode; + ri->uid = inode->i_uid; + ri->gid = inode->i_gid; + ri->isize = (uint32_t)inode->i_size; + ri->atime = ri->ctime = ri->mtime = CURRENT_TIME; + + /* We rely on the fact that generic_file_write() currently kmaps the page for us. */ + ret = jffs2_write_inode_range(c, f, ri, page_address(pg) + start, + (pg->index << PAGE_CACHE_SHIFT) + start, end - start, &writtenlen); - if (comprtype != JFFS2_COMPR_NONE) - kfree(comprbuf); + if (ret) { + /* There was an error writing. */ + SetPageError(pg); + } - if (IS_ERR(fn)) { - ret = PTR_ERR(fn); - up(&f->sem); - SetPageError(pg); - break; - } - ret = jffs2_add_full_dnode_to_inode(c, f, fn); - if (f->metadata) { - jffs2_mark_node_obsolete(c, f->metadata->raw); - jffs2_free_full_dnode(f->metadata); - f->metadata = NULL; + if (writtenlen) { + if (inode->i_size < (pg->index << PAGE_CACHE_SHIFT) + start + writtenlen) { + inode->i_size = (pg->index << PAGE_CACHE_SHIFT) + start + writtenlen; + inode->i_blocks = (inode->i_size + 511) >> 9; + + inode->i_ctime = inode->i_mtime = ri->ctime; } - up(&f->sem); - if (ret) { - /* Eep */ - D1(printk(KERN_DEBUG "Eep. add_full_dnode_to_inode() failed in commit_write, returned %d\n", ret)); - jffs2_mark_node_obsolete(c, fn->raw); - jffs2_free_full_dnode(fn); - SetPageError(pg); - break; - } - inode->i_size = ri->isize; - inode->i_blocks = (inode->i_size + 511) >> 9; - inode->i_ctime = inode->i_mtime = ri->ctime; - if (!datalen) { - printk(KERN_WARNING "Eep. We didn't actually write any bloody data\n"); - ret = -EIO; - SetPageError(pg); - break; - } - D1(printk(KERN_DEBUG "increasing writtenlen by %d\n", datalen)); - writtenlen += datalen; - file_ofs += datalen; - writelen -= datalen; } jffs2_free_raw_inode(ri); - if (writtenlen < end) { + if (start+writtenlen < end) { /* generic_file_write has written more to the page cache than we've actually written to the medium. Mark the page !Uptodate so that it gets reread */ @@ -530,13 +428,7 @@ SetPageError(pg); ClearPageUptodate(pg); } - if (writtenlen <= start) { - /* We didn't even get to the start of the affected part */ - ret = ret?ret:-ENOSPC; - D1(printk(KERN_DEBUG "jffs2_commit_write(): Only %x bytes written to page. start (%x) not reached, returning %d\n", writtenlen, start, ret)); - } - writtenlen = min(end-start, writtenlen-start); - D1(printk(KERN_DEBUG "jffs2_commit_write() returning %d. 
nrpages is %ld\n",writtenlen?writtenlen:ret, inode->i_mapping->nrpages)); + D1(printk(KERN_DEBUG "jffs2_commit_write() returning %d\n",writtenlen?writtenlen:ret)); return writtenlen?writtenlen:ret; } diff -Nru a/fs/jffs2/fs.c b/fs/jffs2/fs.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/fs/jffs2/fs.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,455 @@ +/* + * JFFS2 -- Journalling Flash File System, Version 2. + * + * Copyright (C) 2001, 2002 Red Hat, Inc. + * + * Created by David Woodhouse + * + * The original JFFS, from which the design for JFFS2 was derived, + * was designed and implemented by Axis Communications AB. + * + * The contents of this file are subject to the Red Hat eCos Public + * License Version 1.1 (the "Licence"); you may not use this file + * except in compliance with the Licence. You may obtain a copy of + * the Licence at http://www.redhat.com/ + * + * Software distributed under the Licence is distributed on an "AS IS" + * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. + * See the Licence for the specific language governing rights and + * limitations under the Licence. + * + * The Original Code is JFFS2 - Journalling Flash File System, version 2 + * + * Alternatively, the contents of this file may be used under the + * terms of the GNU General Public License version 2 (the "GPL"), in + * which case the provisions of the GPL are applicable instead of the + * above. If you wish to allow the use of your version of this file + * only under the terms of the GPL and not to allow others to use your + * version of this file under the RHEPL, indicate your decision by + * deleting the provisions above and replace them with the notice and + * other provisions required by the GPL. If you do not delete the + * provisions above, a recipient may use your version of this file + * under either the RHEPL or the GPL. 
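For context on the jffs2_commit_write() rewrite just above: the compress/reserve/write loop moves into jffs2_write_inode_range(), and the function keeps only the VFS bookkeeping. It grows i_size to the end of what actually reached the flash, recomputes i_blocks in 512-byte units, and flags the page for re-read when the medium took less than the page cache did. A standalone sketch of just that accounting; the structure and helper names below are illustrative, not the kernel API.

#include <stdint.h>
#include <stdio.h>

#define PAGE_CACHE_SHIFT 12	/* assume 4KiB pages for the example */

struct fake_inode {
	uint64_t i_size;
	uint64_t i_blocks;	/* in 512-byte sectors, as the VFS expects */
};

/* Update the inode after 'writtenlen' bytes landed on the medium for a
   write covering bytes [start, end) of page 'index'. Returns non-zero if
   the page must be re-read because the medium took a short write. */
static int commit_accounting(struct fake_inode *inode, unsigned long index,
                             unsigned start, unsigned end, uint32_t writtenlen)
{
	uint64_t file_end = ((uint64_t)index << PAGE_CACHE_SHIFT) + start + writtenlen;

	if (writtenlen && inode->i_size < file_end) {
		inode->i_size = file_end;
		inode->i_blocks = (inode->i_size + 511) >> 9;
	}
	return start + writtenlen < end;	/* page cache holds more than flash */
}

int main(void)
{
	struct fake_inode ino = { 4096, 8 };

	/* write covered bytes 0..4096 of page 1, but only 1000 made it out */
	int reread = commit_accounting(&ino, 1, 0, 4096, 1000);

	printf("i_size=%llu i_blocks=%llu reread=%d\n",
	       (unsigned long long)ino.i_size,
	       (unsigned long long)ino.i_blocks, reread);
	return 0;
}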
+ * + * $Id: fs.c,v 1.4 2002/03/11 12:36:59 dwmw2 Exp $ + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "nodelist.h" + +int jffs2_statfs(struct super_block *sb, struct statfs *buf) +{ + struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); + unsigned long avail; + + buf->f_type = JFFS2_SUPER_MAGIC; + buf->f_bsize = 1 << PAGE_SHIFT; + buf->f_blocks = c->flash_size >> PAGE_SHIFT; + buf->f_files = 0; + buf->f_ffree = 0; + buf->f_namelen = JFFS2_MAX_NAME_LEN; + + spin_lock_bh(&c->erase_completion_lock); + + avail = c->dirty_size + c->free_size; + if (avail > c->sector_size * JFFS2_RESERVED_BLOCKS_WRITE) + avail -= c->sector_size * JFFS2_RESERVED_BLOCKS_WRITE; + else + avail = 0; + + buf->f_bavail = buf->f_bfree = avail >> PAGE_SHIFT; + +#if CONFIG_JFFS2_FS_DEBUG > 0 + printk(KERN_DEBUG "STATFS:\n"); + printk(KERN_DEBUG "flash_size: %08x\n", c->flash_size); + printk(KERN_DEBUG "used_size: %08x\n", c->used_size); + printk(KERN_DEBUG "dirty_size: %08x\n", c->dirty_size); + printk(KERN_DEBUG "free_size: %08x\n", c->free_size); + printk(KERN_DEBUG "erasing_size: %08x\n", c->erasing_size); + printk(KERN_DEBUG "bad_size: %08x\n", c->bad_size); + printk(KERN_DEBUG "sector_size: %08x\n", c->sector_size); + printk(KERN_DEBUG "jffs2_reserved_blocks size: %08x\n",c->sector_size * JFFS2_RESERVED_BLOCKS_WRITE); + + if (c->nextblock) { + printk(KERN_DEBUG "nextblock: 0x%08x\n", c->nextblock->offset); + } else { + printk(KERN_DEBUG "nextblock: NULL\n"); + } + if (c->gcblock) { + printk(KERN_DEBUG "gcblock: 0x%08x\n", c->gcblock->offset); + } else { + printk(KERN_DEBUG "gcblock: NULL\n"); + } + if (list_empty(&c->clean_list)) { + printk(KERN_DEBUG "clean_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->clean_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG "clean_list: %08x\n", jeb->offset); + } + } + if (list_empty(&c->dirty_list)) { + printk(KERN_DEBUG "dirty_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->dirty_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG "dirty_list: %08x\n", jeb->offset); + } + } + if (list_empty(&c->erasable_list)) { + printk(KERN_DEBUG "erasable_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->erasable_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG "erasable_list: %08x\n", jeb->offset); + } + } + if (list_empty(&c->erasing_list)) { + printk(KERN_DEBUG "erasing_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->erasing_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG "erasing_list: %08x\n", jeb->offset); + } + } + if (list_empty(&c->erase_pending_list)) { + printk(KERN_DEBUG "erase_pending_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->erase_pending_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG "erase_pending_list: %08x\n", jeb->offset); + } + } + if (list_empty(&c->erasable_pending_wbuf_list)) { + printk(KERN_DEBUG "erasable_pending_wbuf_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->erasable_pending_wbuf_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG 
"erase_pending_wbuf_list: %08x\n", jeb->offset); + } + } + if (list_empty(&c->free_list)) { + printk(KERN_DEBUG "free_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->free_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG "free_list: %08x\n", jeb->offset); + } + } + if (list_empty(&c->bad_list)) { + printk(KERN_DEBUG "bad_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->bad_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG "bad_list: %08x\n", jeb->offset); + } + } + if (list_empty(&c->bad_used_list)) { + printk(KERN_DEBUG "bad_used_list: empty\n"); + } else { + struct list_head *this; + + list_for_each(this, &c->bad_used_list) { + struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); + printk(KERN_DEBUG "bad_used_list: %08x\n", jeb->offset); + } + } +#endif /* CONFIG_JFFS2_FS_DEBUG */ + + spin_unlock_bh(&c->erase_completion_lock); + + + return 0; +} + + +void jffs2_clear_inode (struct inode *inode) +{ + /* We can forget about this inode for now - drop all + * the nodelists associated with it, etc. + */ + struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb); + struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); + + D1(printk(KERN_DEBUG "jffs2_clear_inode(): ino #%lu mode %o\n", inode->i_ino, inode->i_mode)); + + jffs2_do_clear_inode(c, f); +} + +void jffs2_read_inode (struct inode *inode) +{ + struct jffs2_inode_info *f; + struct jffs2_sb_info *c; + struct jffs2_raw_inode latest_node; + int ret; + + D1(printk(KERN_DEBUG "jffs2_read_inode(): inode->i_ino == %lu\n", inode->i_ino)); + + f = JFFS2_INODE_INFO(inode); + c = JFFS2_SB_INFO(inode->i_sb); + + jffs2_init_inode_info(f); + + ret = jffs2_do_read_inode(c, f, inode->i_ino, &latest_node); + + if (ret) { + make_bad_inode(inode); + up(&f->sem); + return; + } + inode->i_mode = latest_node.mode; + inode->i_uid = latest_node.uid; + inode->i_gid = latest_node.gid; + inode->i_size = latest_node.isize; + inode->i_atime = latest_node.atime; + inode->i_mtime = latest_node.mtime; + inode->i_ctime = latest_node.ctime; + + inode->i_nlink = f->inocache->nlink; + + inode->i_blksize = PAGE_SIZE; + inode->i_blocks = (inode->i_size + 511) >> 9; + + switch (inode->i_mode & S_IFMT) { + unsigned short rdev; + + case S_IFLNK: + inode->i_op = &jffs2_symlink_inode_operations; + break; + + case S_IFDIR: + { + struct jffs2_full_dirent *fd; + + for (fd=f->dents; fd; fd = fd->next) { + if (fd->type == DT_DIR && fd->ino) + inode->i_nlink++; + } + /* and '..' 
*/ + inode->i_nlink++; + /* Root dir gets i_nlink 3 for some reason */ + if (inode->i_ino == 1) + inode->i_nlink++; + + inode->i_op = &jffs2_dir_inode_operations; + inode->i_fop = &jffs2_dir_operations; + break; + } + case S_IFREG: + inode->i_op = &jffs2_file_inode_operations; + inode->i_fop = &jffs2_file_operations; + inode->i_mapping->a_ops = &jffs2_file_address_operations; + inode->i_mapping->nrpages = 0; + break; + + case S_IFBLK: + case S_IFCHR: + /* Read the device numbers from the media */ + D1(printk(KERN_DEBUG "Reading device numbers from flash\n")); + if (jffs2_read_dnode(c, f->metadata, (char *)&rdev, 0, sizeof(rdev)) < 0) { + /* Eep */ + printk(KERN_NOTICE "Read device numbers for inode %lu failed\n", (unsigned long)inode->i_ino); + jffs2_do_clear_inode(c, f); + make_bad_inode(inode); + up(&f->sem); + return; + } + + case S_IFSOCK: + case S_IFIFO: + inode->i_op = &jffs2_file_inode_operations; + init_special_inode(inode, inode->i_mode, kdev_t_to_nr(mk_kdev(rdev>>8, rdev&0xff))); + break; + + default: + printk(KERN_WARNING "jffs2_read_inode(): Bogus imode %o for ino %lu\n", inode->i_mode, (unsigned long)inode->i_ino); + } + + up(&f->sem); + + D1(printk(KERN_DEBUG "jffs2_read_inode() returning\n")); +} + + +int jffs2_remount_fs (struct super_block *sb, int *flags, char *data) +{ + struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); + + if (c->flags & JFFS2_SB_FLAG_RO && !(sb->s_flags & MS_RDONLY)) + return -EROFS; + + /* We stop if it was running, then restart if it needs to. + This also catches the case where it was stopped and this + is just a remount to restart it */ + if (!(sb->s_flags & MS_RDONLY)) + jffs2_stop_garbage_collect_thread(c); + + if (!(*flags & MS_RDONLY)) + jffs2_start_garbage_collect_thread(c); + + sb->s_flags = (sb->s_flags & ~MS_RDONLY)|(*flags & MS_RDONLY); + + return 0; +} + +void jffs2_write_super (struct super_block *sb) +{ + struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); + sb->s_dirt = 0; + + if (sb->s_flags & MS_RDONLY) + return; + + D1(printk("jffs2_write_super(): flush_wbuf before gc-trigger\n")); + jffs2_flush_wbuf(c, 2); + jffs2_garbage_collect_trigger(c); + jffs2_erase_pending_blocks(c); + jffs2_mark_erased_blocks(c); +} + + +/* jffs2_new_inode: allocate a new inode and inocache, add it to the hash, + fill in the raw_inode while you're at it. 
*/ +struct inode *jffs2_new_inode (struct inode *dir_i, int mode, struct jffs2_raw_inode *ri) +{ + struct inode *inode; + struct super_block *sb = dir_i->i_sb; + struct jffs2_sb_info *c; + struct jffs2_inode_info *f; + int ret; + + D1(printk(KERN_DEBUG "jffs2_new_inode(): dir_i %ld, mode 0x%x\n", dir_i->i_ino, mode)); + + c = JFFS2_SB_INFO(sb); + + inode = new_inode(sb); + + if (!inode) + return ERR_PTR(-ENOMEM); + + f = JFFS2_INODE_INFO(inode); + jffs2_init_inode_info(f); + + memset(ri, 0, sizeof(*ri)); + /* Set OS-specific defaults for new inodes */ + ri->uid = current->fsuid; + + if (dir_i->i_mode & S_ISGID) { + ri->gid = dir_i->i_gid; + if (S_ISDIR(mode)) + ri->mode |= S_ISGID; + } else { + ri->gid = current->fsgid; + } + ri->mode = mode; + ret = jffs2_do_new_inode (c, f, mode, ri); + if (ret) { + make_bad_inode(inode); + iput(inode); + return ERR_PTR(ret); + } + inode->i_nlink = 1; + inode->i_ino = ri->ino; + inode->i_mode = ri->mode; + inode->i_gid = ri->gid; + inode->i_uid = ri->uid; + inode->i_atime = inode->i_ctime = inode->i_mtime = + ri->atime = ri->mtime = ri->ctime = CURRENT_TIME; + inode->i_blksize = PAGE_SIZE; + inode->i_blocks = 0; + inode->i_size = 0; + + insert_inode_hash(inode); + + return inode; +} + + +int jffs2_do_fill_super(struct super_block *sb, void *data, int silent) +{ + struct jffs2_sb_info *c; + struct inode *root_i; + + c = JFFS2_SB_INFO(sb); + + c->sector_size = c->mtd->erasesize; + c->flash_size = c->mtd->size; + + if (c->sector_size < 0x10000) { + printk(KERN_INFO "jffs2: Erase block size too small (%dKiB). Using 64KiB instead\n", + c->sector_size / 1024); + c->sector_size = 0x10000; + } + if (c->flash_size < 5*c->sector_size) { + printk(KERN_ERR "jffs2: Too few erase blocks (%d)\n", + c->flash_size / c->sector_size); + return -EINVAL; + } + if (c->mtd->type == MTD_NANDFLASH) { + /* Initialise write buffer */ + c->wbuf_pagesize = c->mtd->oobblock; + c->wbuf_ofs = 0xFFFFFFFF; + c->wbuf = kmalloc(c->wbuf_pagesize, GFP_KERNEL); + if (!c->wbuf) + goto out_mtd; + } + + if (jffs2_do_mount_fs(c)) + goto out_mtd; + + D1(printk(KERN_DEBUG "jffs2_do_fill_super(): Getting root inode\n")); + root_i = iget(sb, 1); + if (is_bad_inode(root_i)) { + D1(printk(KERN_WARNING "get root inode failed\n")); + goto out_nodes; + } + + D1(printk(KERN_DEBUG "jffs2_do_fill_super(): d_alloc_root()\n")); + sb->s_root = d_alloc_root(root_i); + if (!sb->s_root) + goto out_root_i; + +#if LINUX_VERSION_CODE >= 0x20403 + sb->s_maxbytes = 0xFFFFFFFF; +#endif + sb->s_blocksize = PAGE_CACHE_SIZE; + sb->s_blocksize_bits = PAGE_CACHE_SHIFT; + sb->s_magic = JFFS2_SUPER_MAGIC; + if (!(sb->s_flags & MS_RDONLY)) + jffs2_start_garbage_collect_thread(c); + return 0; + + out_root_i: + iput(root_i); + out_nodes: + jffs2_free_ino_caches(c); + jffs2_free_raw_node_refs(c); + kfree(c->blocks); + out_mtd: + return -EINVAL; +} diff -Nru a/fs/jffs2/gc.c b/fs/jffs2/gc.c --- a/fs/jffs2/gc.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/gc.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,45 +31,47 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. 
* - * $Id: gc.c,v 1.52.2.2 2002/02/23 14:25:36 dwmw2 Exp $ + * $Id: gc.c,v 1.68 2002/03/08 15:11:24 dwmw2 Exp $ * */ #include #include #include -#include -#include #include #include #include #include "nodelist.h" static int jffs2_garbage_collect_metadata(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dnode *fd); + struct jffs2_inode_info *f, struct jffs2_full_dnode *fd); static int jffs2_garbage_collect_dirent(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dirent *fd); + struct jffs2_inode_info *f, struct jffs2_full_dirent *fd); static int jffs2_garbage_collect_deletion_dirent(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dirent *fd); + struct jffs2_inode_info *f, struct jffs2_full_dirent *fd); static int jffs2_garbage_collect_hole(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *indeo, struct jffs2_full_dnode *fn, - __u32 start, __u32 end); + struct jffs2_inode_info *f, struct jffs2_full_dnode *fn, + uint32_t start, uint32_t end); static int jffs2_garbage_collect_dnode(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dnode *fn, - __u32 start, __u32 end); + struct jffs2_inode_info *f, struct jffs2_full_dnode *fn, + uint32_t start, uint32_t end); /* Called with erase_completion_lock held */ static struct jffs2_eraseblock *jffs2_find_gc_block(struct jffs2_sb_info *c) { struct jffs2_eraseblock *ret; struct list_head *nextlist = NULL; + int n = jiffies % 128; /* Pick an eraseblock to garbage collect next. This is where we'll put the clever wear-levelling algorithms. Eventually. */ if (!list_empty(&c->bad_used_list) && c->nr_free_blocks > JFFS2_RESERVED_BLOCKS_GCBAD) { D1(printk(KERN_DEBUG "Picking block from bad_used_list to GC next\n")); nextlist = &c->bad_used_list; - } else if (jiffies % 100 && !list_empty(&c->dirty_list)) { + } else if (n < 100 && !list_empty(&c->erasable_list)) { + D1(printk(KERN_DEBUG "Picking block from erasable_list to GC next\n")); + nextlist = &c->erasable_list; + } else if (n < 126 && !list_empty(&c->dirty_list)) { /* Most of the time, pick one off the dirty list */ D1(printk(KERN_DEBUG "Picking block from dirty_list to GC next\n")); nextlist = &c->dirty_list; @@ -80,9 +82,13 @@ D1(printk(KERN_DEBUG "Picking block from dirty_list to GC next (clean_list was empty)\n")); nextlist = &c->dirty_list; + } else if (!list_empty(&c->erasable_list)) { + D1(printk(KERN_DEBUG "Picking block from erasable_list to GC next (clean_list and dirty_list were empty)\n")); + + nextlist = &c->erasable_list; } else { /* Eep. Both were empty */ - printk(KERN_NOTICE "jffs2: No clean _or_ dirty blocks to GC from! Where are they all?\n"); + printk(KERN_NOTICE "jffs2: No clean, dirty _or_ erasable blocks to GC from! Where are they all?\n"); return NULL; } @@ -109,8 +115,8 @@ struct jffs2_node_frag *frag; struct jffs2_full_dnode *fn = NULL; struct jffs2_full_dirent *fd; - __u32 start = 0, end = 0, nrfrags = 0; - __u32 inum; + uint32_t start = 0, end = 0, nrfrags = 0; + uint32_t inum; struct inode *inode; int ret = 0; @@ -154,6 +160,8 @@ D1(printk(KERN_DEBUG "Going to garbage collect node at 0x%08x\n", raw->flash_offset &~3)); if (!raw->next_in_ino) { /* Inode-less node. 
Clean marker, snapshot or something like that */ + /* FIXME: If it's something that needs to be copied, including something + we don't grok that has JFFS2_NODETYPE_RWCOMPAT_COPY, we should do so */ spin_unlock_bh(&c->erase_completion_lock); jffs2_mark_node_obsolete(c, raw); goto eraseit_lock; @@ -177,6 +185,7 @@ f = JFFS2_INODE_INFO(inode); down(&f->sem); + /* Now we have the lock for this inode. Check that it's still the one at the head of the list. */ @@ -188,7 +197,7 @@ /* OK. Looks safe. And nobody can get us now because we have the semaphore. Move the block */ if (f->metadata && f->metadata->raw == raw) { fn = f->metadata; - ret = jffs2_garbage_collect_metadata(c, jeb, inode, fn); + ret = jffs2_garbage_collect_metadata(c, jeb, f, fn); goto upnout; } @@ -206,10 +215,10 @@ /* We found a datanode. Do the GC */ if((start >> PAGE_CACHE_SHIFT) < ((end-1) >> PAGE_CACHE_SHIFT)) { /* It crosses a page boundary. Therefore, it must be a hole. */ - ret = jffs2_garbage_collect_hole(c, jeb, inode, fn, start, end); + ret = jffs2_garbage_collect_hole(c, jeb, f, fn, start, end); } else { /* It could still be a hole. But we GC the page this way anyway */ - ret = jffs2_garbage_collect_dnode(c, jeb, inode, fn, start, end); + ret = jffs2_garbage_collect_dnode(c, jeb, f, fn, start, end); } goto upnout; } @@ -221,11 +230,12 @@ } if (fd && fd->ino) { - ret = jffs2_garbage_collect_dirent(c, jeb, inode, fd); + ret = jffs2_garbage_collect_dirent(c, jeb, f, fd); } else if (fd) { - ret = jffs2_garbage_collect_deletion_dirent(c, jeb, inode, fd); + ret = jffs2_garbage_collect_deletion_dirent(c, jeb, f, fd); } else { - printk(KERN_WARNING "Raw node at 0x%08x wasn't in node lists for ino #%lu\n", raw->flash_offset&~3, inode->i_ino); + printk(KERN_WARNING "Raw node at 0x%08x wasn't in node lists for ino #%u\n", + raw->flash_offset&~3, f->inocache->ino); if (raw->flash_offset & 1) { printk(KERN_WARNING "But it's obsolete so we don't mind too much\n"); } else { @@ -256,24 +266,25 @@ } static int jffs2_garbage_collect_metadata(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dnode *fn) + struct jffs2_inode_info *f, struct jffs2_full_dnode *fn) { - struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_full_dnode *new_fn; struct jffs2_raw_inode ri; unsigned short dev; char *mdata = NULL, mdatalen = 0; - __u32 alloclen, phys_ofs; + uint32_t alloclen, phys_ofs; int ret; - if (S_ISBLK(inode->i_mode) || S_ISCHR(inode->i_mode)) { + if (S_ISBLK(JFFS2_F_I_MODE(f)) || + S_ISCHR(JFFS2_F_I_MODE(f)) ) { /* For these, we don't actually need to read the old node */ - dev = (major(inode->i_rdev) << 8) | - minor(inode->i_rdev); + /* FIXME: for minor or major > 255. 
*/ + dev = ((JFFS2_F_I_RDEV_MAJ(f) << 8) | + JFFS2_F_I_RDEV_MIN(f)); mdata = (char *)&dev; mdatalen = sizeof(dev); D1(printk(KERN_DEBUG "jffs2_garbage_collect_metadata(): Writing %d bytes of kdev_t\n", mdatalen)); - } else if (S_ISLNK(inode->i_mode)) { + } else if (S_ISLNK(JFFS2_F_I_MODE(f))) { mdatalen = fn->size; mdata = kmalloc(fn->size, GFP_KERNEL); if (!mdata) { @@ -303,15 +314,15 @@ ri.totlen = sizeof(ri) + mdatalen; ri.hdr_crc = crc32(0, &ri, sizeof(struct jffs2_unknown_node)-4); - ri.ino = inode->i_ino; + ri.ino = f->inocache->ino; ri.version = ++f->highest_version; - ri.mode = inode->i_mode; - ri.uid = inode->i_uid; - ri.gid = inode->i_gid; - ri.isize = inode->i_size; - ri.atime = inode->i_atime; - ri.ctime = inode->i_ctime; - ri.mtime = inode->i_mtime; + ri.mode = JFFS2_F_I_MODE(f); + ri.uid = JFFS2_F_I_UID(f); + ri.gid = JFFS2_F_I_GID(f); + ri.isize = JFFS2_F_I_SIZE(f); + ri.atime = JFFS2_F_I_ATIME(f); + ri.ctime = JFFS2_F_I_CTIME(f); + ri.mtime = JFFS2_F_I_MTIME(f); ri.offset = 0; ri.csize = mdatalen; ri.dsize = mdatalen; @@ -319,7 +330,7 @@ ri.node_crc = crc32(0, &ri, sizeof(ri)-8); ri.data_crc = crc32(0, mdata, mdatalen); - new_fn = jffs2_write_dnode(inode, &ri, mdata, mdatalen, phys_ofs, NULL); + new_fn = jffs2_write_dnode(c, f, &ri, mdata, mdatalen, phys_ofs, NULL); if (IS_ERR(new_fn)) { printk(KERN_WARNING "Error writing new dnode: %ld\n", PTR_ERR(new_fn)); @@ -330,18 +341,17 @@ jffs2_free_full_dnode(fn); f->metadata = new_fn; out: - if (S_ISLNK(inode->i_mode)) + if (S_ISLNK(JFFS2_F_I_MODE(f))) kfree(mdata); return ret; } static int jffs2_garbage_collect_dirent(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dirent *fd) + struct jffs2_inode_info *f, struct jffs2_full_dirent *fd) { - struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_full_dirent *new_fd; struct jffs2_raw_dirent rd; - __u32 alloclen, phys_ofs; + uint32_t alloclen, phys_ofs; int ret; rd.magic = JFFS2_MAGIC_BITMASK; @@ -350,10 +360,10 @@ rd.totlen = sizeof(rd) + rd.nsize; rd.hdr_crc = crc32(0, &rd, sizeof(struct jffs2_unknown_node)-4); - rd.pino = inode->i_ino; + rd.pino = f->inocache->ino; rd.version = ++f->highest_version; rd.ino = fd->ino; - rd.mctime = max(inode->i_mtime, inode->i_ctime); + rd.mctime = max(JFFS2_F_I_MTIME(f), JFFS2_F_I_CTIME(f)); rd.type = fd->type; rd.node_crc = crc32(0, &rd, sizeof(rd)-8); rd.name_crc = crc32(0, fd->name, rd.nsize); @@ -364,7 +374,7 @@ sizeof(rd)+rd.nsize, ret); return ret; } - new_fd = jffs2_write_dirent(inode, &rd, fd->name, rd.nsize, phys_ofs, NULL); + new_fd = jffs2_write_dirent(c, f, &rd, fd->name, rd.nsize, phys_ofs, NULL); if (IS_ERR(new_fd)) { printk(KERN_WARNING "jffs2_write_dirent in garbage_collect_dirent failed: %ld\n", PTR_ERR(new_fd)); @@ -375,19 +385,119 @@ } static int jffs2_garbage_collect_deletion_dirent(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dirent *fd) + struct jffs2_inode_info *f, struct jffs2_full_dirent *fd) { - struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_full_dirent **fdp = &f->dents; int found = 0; - /* FIXME: When we run on NAND flash, we need to work out whether - this deletion dirent is still needed to actively delete a - 'real' dirent with the same name that's still somewhere else - on the flash. For now, we know that we've actually obliterated - all the older dirents when they became obsolete, so we didn't - really need to write the deletion to flash in the first place. 
- */ + /* On a medium where we can't actually mark nodes obsolete + permanently, such as NAND flash, we need to work out + whether this deletion dirent is still needed to actively + delete a 'real' dirent with the same name that's still + somewhere else on the flash. */ + if (!jffs2_can_mark_obsolete(c)) { + struct jffs2_raw_dirent rd; + struct jffs2_raw_node_ref *raw; + int ret; + size_t retlen; + int name_len = strlen(fd->name); + uint32_t name_crc = crc32(0, fd->name, name_len); + char *namebuf = NULL; + + /* Prevent the erase code from nicking the obsolete node refs while + we're looking at them. I really don't like this extra lock but + can't see any alternative. Suggestions on a postcard to... */ + down(&c->erase_free_sem); + + for (raw = f->inocache->nodes; raw != (void *)f->inocache; raw = raw->next_in_ino) { + /* We only care about obsolete ones */ + if (!(raw->flash_offset & 1)) + continue; + + /* Doesn't matter if there's one in the same erase block. We're going to + delete it too at the same time. */ + if ((raw->flash_offset & ~(c->sector_size-1)) == + (fd->raw->flash_offset & ~(c->sector_size-1))) + continue; + + /* This is an obsolete node belonging to the same directory */ + ret = jffs2_flash_read(c, raw->flash_offset & ~3, sizeof(struct jffs2_unknown_node), &retlen, (char *)&rd); + if (ret) { + printk(KERN_WARNING "jffs2_g_c_deletion_dirent(): Read error (%d) reading header from obsolete node at %08x\n", ret, raw->flash_offset & ~3); + /* If we can't read it, we don't need to continue to obsolete it. Continue */ + continue; + } + if (retlen != sizeof(struct jffs2_unknown_node)) { + printk(KERN_WARNING "jffs2_g_c_deletion_dirent(): Short read (%d not %d) reading header from obsolete node at %08x\n", + retlen, sizeof(struct jffs2_unknown_node), raw->flash_offset & ~3); + continue; + } + if (rd.nodetype != JFFS2_NODETYPE_DIRENT || + PAD(rd.totlen) != PAD(sizeof(rd) + name_len)) + continue; + + /* OK, it's a dirent node, it's the right length. We have to take a + closer look at it... */ + ret = jffs2_flash_read(c, raw->flash_offset & ~3, sizeof(rd), &retlen, (char *)&rd); + if (ret) { + printk(KERN_WARNING "jffs2_g_c_deletion_dirent(): Read error (%d) reading from obsolete node at %08x\n", ret, raw->flash_offset & ~3); + /* If we can't read it, we don't need to continue to obsolete it. Continue */ + continue; + } + if (retlen != sizeof(struct jffs2_unknown_node)) { + printk(KERN_WARNING "jffs2_g_c_deletion_dirent(): Short read (%d not %d) reading from obsolete node at %08x\n", + retlen, sizeof(struct jffs2_unknown_node), raw->flash_offset & ~3); + continue; + } + + /* If the name CRC doesn't match, skip */ + if (rd.name_crc != name_crc) + continue; + /* If the name length doesn't match, or it's another deletion dirent, skip */ + if (rd.nsize != name_len || !rd.ino) + continue; + + /* OK, check the actual name now */ + if (!namebuf) { + namebuf = kmalloc(name_len + 1, GFP_KERNEL); + if (!namebuf) { + up(&c->erase_free_sem); + return -ENOMEM; + } + } + /* We read the extra byte before it so it's a word-aligned read */ + ret = jffs2_flash_read(c, (raw->flash_offset & ~3)+sizeof(rd)-1, name_len+1, &retlen, namebuf); + if (ret) { + printk(KERN_WARNING "jffs2_g_c_deletion_dirent(): Read error (%d) reading name from obsolete node at %08x\n", ret, raw->flash_offset & ~3); + /* If we can't read it, we don't need to continue to obsolete it. 
Continue */ + continue; + } + if (retlen != sizeof(rd)) { + printk(KERN_WARNING "jffs2_g_c_deletion_dirent(): Short read (%d not %d) reading name from obsolete node at %08x\n", + retlen, name_len, raw->flash_offset & ~3); + continue; + } + if (memcmp(namebuf+1, fd->name, name_len)) + continue; + + /* OK. The name really does match. There really is still an older node on + the flash which our deletion dirent obsoletes. So we have to write out + a new deletion dirent to replace it */ + + if (namebuf) + kfree(namebuf); + + up(&c->erase_free_sem); + return jffs2_garbage_collect_dirent(c, jeb, f, fd); + } + + up(&c->erase_free_sem); + + if (namebuf) + kfree(namebuf); + } + + /* No need for it any more. Just mark it obsolete and remove it from the list */ while (*fdp) { if ((*fdp) == fd) { found = 1; @@ -397,7 +507,7 @@ fdp = &(*fdp)->next; } if (!found) { - printk(KERN_WARNING "Deletion dirent \"%s\" not found in list for ino #%lu\n", fd->name, inode->i_ino); + printk(KERN_WARNING "Deletion dirent \"%s\" not found in list for ino #%u\n", fd->name, f->inocache->ino); } jffs2_mark_node_obsolete(c, fd->raw); jffs2_free_full_dirent(fd); @@ -405,27 +515,26 @@ } static int jffs2_garbage_collect_hole(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dnode *fn, - __u32 start, __u32 end) + struct jffs2_inode_info *f, struct jffs2_full_dnode *fn, + uint32_t start, uint32_t end) { - struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_raw_inode ri; struct jffs2_node_frag *frag; struct jffs2_full_dnode *new_fn; - __u32 alloclen, phys_ofs; + uint32_t alloclen, phys_ofs; int ret; - D1(printk(KERN_DEBUG "Writing replacement hole node for ino #%lu from offset 0x%x to 0x%x\n", - inode->i_ino, start, end)); + D1(printk(KERN_DEBUG "Writing replacement hole node for ino #%u from offset 0x%x to 0x%x\n", + f->inocache->ino, start, end)); memset(&ri, 0, sizeof(ri)); if(fn->frags > 1) { size_t readlen; - __u32 crc; + uint32_t crc; /* It's partially obsoleted by a later write. So we have to write it out again with the _same_ version as before */ - ret = c->mtd->read(c->mtd, fn->raw->flash_offset & ~3, sizeof(ri), &readlen, (char *)&ri); + ret = jffs2_flash_read(c, fn->raw->flash_offset & ~3, sizeof(ri), &readlen, (char *)&ri); if (readlen != sizeof(ri) || ret) { printk(KERN_WARNING "Node read failed in jffs2_garbage_collect_hole. Ret %d, retlen %d. 
Data will be lost by writing new hole node\n", ret, readlen); goto fill; @@ -445,14 +554,14 @@ printk(KERN_WARNING "jffs2_garbage_collect_hole: Node at 0x%08x had CRC 0x%08x which doesn't match calculated CRC 0x%08x\n", fn->raw->flash_offset & ~3, ri.node_crc, crc); /* FIXME: We could possibly deal with this by writing new holes for each frag */ - printk(KERN_WARNING "Data in the range 0x%08x to 0x%08x of inode #%lu will be lost\n", - start, end, inode->i_ino); + printk(KERN_WARNING "Data in the range 0x%08x to 0x%08x of inode #%u will be lost\n", + start, end, f->inocache->ino); goto fill; } if (ri.compr != JFFS2_COMPR_ZERO) { printk(KERN_WARNING "jffs2_garbage_collect_hole: Node 0x%08x wasn't a hole node!\n", fn->raw->flash_offset & ~3); - printk(KERN_WARNING "Data in the range 0x%08x to 0x%08x of inode #%lu will be lost\n", - start, end, inode->i_ino); + printk(KERN_WARNING "Data in the range 0x%08x to 0x%08x of inode #%u will be lost\n", + start, end, f->inocache->ino); goto fill; } } else { @@ -462,20 +571,20 @@ ri.totlen = sizeof(ri); ri.hdr_crc = crc32(0, &ri, sizeof(struct jffs2_unknown_node)-4); - ri.ino = inode->i_ino; + ri.ino = f->inocache->ino; ri.version = ++f->highest_version; ri.offset = start; ri.dsize = end - start; ri.csize = 0; ri.compr = JFFS2_COMPR_ZERO; } - ri.mode = inode->i_mode; - ri.uid = inode->i_uid; - ri.gid = inode->i_gid; - ri.isize = inode->i_size; - ri.atime = inode->i_atime; - ri.ctime = inode->i_ctime; - ri.mtime = inode->i_mtime; + ri.mode = JFFS2_F_I_MODE(f); + ri.uid = JFFS2_F_I_UID(f); + ri.gid = JFFS2_F_I_GID(f); + ri.isize = JFFS2_F_I_SIZE(f); + ri.atime = JFFS2_F_I_ATIME(f); + ri.ctime = JFFS2_F_I_CTIME(f); + ri.mtime = JFFS2_F_I_MTIME(f); ri.data_crc = 0; ri.node_crc = crc32(0, &ri, sizeof(ri)-8); @@ -485,7 +594,7 @@ sizeof(ri), ret); return ret; } - new_fn = jffs2_write_dnode(inode, &ri, NULL, 0, phys_ofs, NULL); + new_fn = jffs2_write_dnode(c, f, &ri, NULL, 0, phys_ofs, NULL); if (IS_ERR(new_fn)) { printk(KERN_WARNING "Error writing new hole node: %ld\n", PTR_ERR(new_fn)); @@ -525,23 +634,22 @@ } static int jffs2_garbage_collect_dnode(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, - struct inode *inode, struct jffs2_full_dnode *fn, - __u32 start, __u32 end) + struct jffs2_inode_info *f, struct jffs2_full_dnode *fn, + uint32_t start, uint32_t end) { - struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_full_dnode *new_fn; struct jffs2_raw_inode ri; - __u32 alloclen, phys_ofs, offset, orig_end; + uint32_t alloclen, phys_ofs, offset, orig_end; int ret = 0; unsigned char *comprbuf = NULL, *writebuf; struct page *pg; unsigned char *pg_ptr; - + /* FIXME: */ struct inode *inode = OFNI_EDONI_2SFFJ(f); memset(&ri, 0, sizeof(ri)); - D1(printk(KERN_DEBUG "Writing replacement dnode for ino #%lu from offset 0x%x to 0x%x\n", - inode->i_ino, start, end)); + D1(printk(KERN_DEBUG "Writing replacement dnode for ino #%u from offset 0x%x to 0x%x\n", + f->inocache->ino, start, end)); orig_end = end; @@ -561,7 +669,7 @@ /* Shitloads of space */ /* FIXME: Integrate this properly with GC calculations */ start &= ~(PAGE_CACHE_SIZE-1); - end = min_t(__u32, start + PAGE_CACHE_SIZE, inode->i_size); + end = min_t(uint32_t, start + PAGE_CACHE_SIZE, JFFS2_F_I_SIZE(f)); D1(printk(KERN_DEBUG "Plenty of free space, so expanding to write from offset 0x%x to 0x%x\n", start, end)); if (end < orig_end) { @@ -588,8 +696,8 @@ offset = start; while(offset < orig_end) { - __u32 datalen; - __u32 cdatalen; char comprtype = 
JFFS2_COMPR_NONE; ret = jffs2_reserve_space_gc(c, sizeof(ri) + JFFS2_MIN_DATA_LEN, &phys_ofs, &alloclen); @@ -617,15 +725,15 @@ ri.totlen = sizeof(ri) + cdatalen; ri.hdr_crc = crc32(0, &ri, sizeof(struct jffs2_unknown_node)-4); - ri.ino = inode->i_ino; + ri.ino = f->inocache->ino; ri.version = ++f->highest_version; - ri.mode = inode->i_mode; - ri.uid = inode->i_uid; - ri.gid = inode->i_gid; - ri.isize = inode->i_size; - ri.atime = inode->i_atime; - ri.ctime = inode->i_ctime; - ri.mtime = inode->i_mtime; + ri.mode = JFFS2_F_I_MODE(f); + ri.uid = JFFS2_F_I_UID(f); + ri.gid = JFFS2_F_I_GID(f); + ri.isize = JFFS2_F_I_SIZE(f); + ri.atime = JFFS2_F_I_ATIME(f); + ri.ctime = JFFS2_F_I_CTIME(f); + ri.mtime = JFFS2_F_I_MTIME(f); ri.offset = offset; ri.csize = cdatalen; ri.dsize = datalen; @@ -633,7 +741,7 @@ ri.node_crc = crc32(0, &ri, sizeof(ri)-8); ri.data_crc = crc32(0, writebuf, cdatalen); - new_fn = jffs2_write_dnode(inode, &ri, writebuf, cdatalen, phys_ofs, NULL); + new_fn = jffs2_write_dnode(c, f, &ri, writebuf, cdatalen, phys_ofs, NULL); if (IS_ERR(new_fn)) { printk(KERN_WARNING "Error writing new dnode: %ld\n", PTR_ERR(new_fn)); diff -Nru a/fs/jffs2/malloc.c b/fs/jffs2/malloc.c --- a/fs/jffs2/malloc.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/malloc.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,7 +31,7 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: malloc.c,v 1.16 2001/03/15 15:38:24 dwmw2 Exp $ + * $Id: malloc.c,v 1.21 2002/03/12 17:36:55 dwmw2 Exp $ * */ @@ -49,7 +49,6 @@ /* These are initialised to NULL in the kernel startup code. 
If you're porting to other operating systems, beware */ -static kmem_cache_t *jffs2_inode_cachep; static kmem_cache_t *full_dnode_slab; static kmem_cache_t *raw_dirent_slab; static kmem_cache_t *raw_inode_slab; @@ -58,89 +57,47 @@ static kmem_cache_t *node_frag_slab; static kmem_cache_t *inode_cache_slab; -void jffs2_free_tmp_dnode_info_list(struct jffs2_tmp_dnode_info *tn) -{ - struct jffs2_tmp_dnode_info *next; - - while (tn) { - next = tn; - tn = tn->next; - jffs2_free_full_dnode(next->fn); - jffs2_free_tmp_dnode_info(next); - } -} - -void jffs2_free_full_dirent_list(struct jffs2_full_dirent *fd) -{ - struct jffs2_full_dirent *next; - - while (fd) { - next = fd->next; - jffs2_free_full_dirent(fd); - fd = next; - } -} - -struct inode *jffs2_alloc_inode(struct super_block *sb) -{ - struct jffs2_inode_info *ei; - ei = (struct jffs2_inode_info *)kmem_cache_alloc(jffs2_inode_cachep, SLAB_KERNEL); - if (!ei) - return NULL; - return &ei->vfs_inode; -} - -void jffs2_destroy_inode(struct inode *inode) -{ - kmem_cache_free(jffs2_inode_cachep, JFFS2_INODE_INFO(inode)); -} - -static void init_once(void * foo, kmem_cache_t * cachep, unsigned long flags) -{ - struct jffs2_inode_info *ei = (struct jffs2_inode_info *) foo; - - if ((flags & (SLAB_CTOR_VERIFY|SLAB_CTOR_CONSTRUCTOR)) == - SLAB_CTOR_CONSTRUCTOR) { - init_MUTEX(&ei->sem); - inode_init_once(&ei->vfs_inode); - } -} - int __init jffs2_create_slab_caches(void) { - full_dnode_slab = kmem_cache_create("jffs2_full_dnode", sizeof(struct jffs2_full_dnode), 0, JFFS2_SLAB_POISON, NULL, NULL); + full_dnode_slab = kmem_cache_create("jffs2_full_dnode", + sizeof(struct jffs2_full_dnode), + 0, JFFS2_SLAB_POISON, NULL, NULL); if (!full_dnode_slab) goto err; - raw_dirent_slab = kmem_cache_create("jffs2_raw_dirent", sizeof(struct jffs2_raw_dirent), 0, JFFS2_SLAB_POISON, NULL, NULL); + raw_dirent_slab = kmem_cache_create("jffs2_raw_dirent", + sizeof(struct jffs2_raw_dirent), + 0, JFFS2_SLAB_POISON, NULL, NULL); if (!raw_dirent_slab) goto err; - raw_inode_slab = kmem_cache_create("jffs2_raw_inode", sizeof(struct jffs2_raw_inode), 0, JFFS2_SLAB_POISON, NULL, NULL); + raw_inode_slab = kmem_cache_create("jffs2_raw_inode", + sizeof(struct jffs2_raw_inode), + 0, JFFS2_SLAB_POISON, NULL, NULL); if (!raw_inode_slab) goto err; - tmp_dnode_info_slab = kmem_cache_create("jffs2_tmp_dnode", sizeof(struct jffs2_tmp_dnode_info), 0, JFFS2_SLAB_POISON, NULL, NULL); + tmp_dnode_info_slab = kmem_cache_create("jffs2_tmp_dnode", + sizeof(struct jffs2_tmp_dnode_info), + 0, JFFS2_SLAB_POISON, NULL, NULL); if (!tmp_dnode_info_slab) goto err; - raw_node_ref_slab = kmem_cache_create("jffs2_raw_node_ref", sizeof(struct jffs2_raw_node_ref), 0, JFFS2_SLAB_POISON, NULL, NULL); + raw_node_ref_slab = kmem_cache_create("jffs2_raw_node_ref", + sizeof(struct jffs2_raw_node_ref), + 0, JFFS2_SLAB_POISON, NULL, NULL); if (!raw_node_ref_slab) goto err; - node_frag_slab = kmem_cache_create("jffs2_node_frag", sizeof(struct jffs2_node_frag), 0, JFFS2_SLAB_POISON, NULL, NULL); + node_frag_slab = kmem_cache_create("jffs2_node_frag", + sizeof(struct jffs2_node_frag), + 0, JFFS2_SLAB_POISON, NULL, NULL); if (!node_frag_slab) goto err; - jffs2_inode_cachep = kmem_cache_create("jffs2_inode_cache", - sizeof(struct jffs2_inode_info), - 0, SLAB_HWCACHE_ALIGN, - init_once, NULL); - if (!jffs2_inode_cachep) - goto err; - - inode_cache_slab = kmem_cache_create("jffs2_inode", sizeof(struct jffs2_inode_cache), 0, JFFS2_SLAB_POISON, NULL, NULL); - + inode_cache_slab = kmem_cache_create("jffs2_inode_cache", + 
sizeof(struct jffs2_inode_cache), + 0, JFFS2_SLAB_POISON, NULL, NULL); if (inode_cache_slab) return 0; err: @@ -164,9 +121,6 @@ kmem_cache_destroy(node_frag_slab); if(inode_cache_slab) kmem_cache_destroy(inode_cache_slab); - if(jffs2_inode_cachep) - kmem_cache_destroy(jffs2_inode_cachep); - } struct jffs2_full_dirent *jffs2_alloc_full_dirent(int namesize) diff -Nru a/fs/jffs2/nodelist.c b/fs/jffs2/nodelist.c --- a/fs/jffs2/nodelist.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/nodelist.c Tue Mar 12 13:58:14 2002 @@ -31,14 +31,14 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: nodelist.c,v 1.30.2.3 2002/02/23 14:04:44 dwmw2 Exp $ + * $Id: nodelist.c,v 1.42 2002/03/11 11:17:29 dwmw2 Exp $ * */ #include -#include #include #include +#include #include "nodelist.h" void jffs2_add_fd_to_list(struct jffs2_sb_info *c, struct jffs2_full_dirent *new, struct jffs2_full_dirent **list) @@ -89,13 +89,37 @@ *prev = tn; } +static void jffs2_free_tmp_dnode_info_list(struct jffs2_tmp_dnode_info *tn) +{ + struct jffs2_tmp_dnode_info *next; + + while (tn) { + next = tn; + tn = tn->next; + jffs2_free_full_dnode(next->fn); + jffs2_free_tmp_dnode_info(next); + } +} + +static void jffs2_free_full_dirent_list(struct jffs2_full_dirent *fd) +{ + struct jffs2_full_dirent *next; + + while (fd) { + next = fd->next; + jffs2_free_full_dirent(fd); + fd = next; + } +} + + /* Get tmp_dnode_info and full_dirent for all non-obsolete nodes associated with this ino, returning the former in order of version */ int jffs2_get_inode_nodes(struct jffs2_sb_info *c, ino_t ino, struct jffs2_inode_info *f, struct jffs2_tmp_dnode_info **tnp, struct jffs2_full_dirent **fdp, - __u32 *highest_version, __u32 *latest_mctime, - __u32 *mctime_ver) + uint32_t *highest_version, uint32_t *latest_mctime, + uint32_t *mctime_ver) { struct jffs2_raw_node_ref *ref = f->inocache->nodes; struct jffs2_tmp_dnode_info *tn, *ret_tn = NULL; @@ -111,6 +135,9 @@ if (!f->inocache->nodes) { printk(KERN_WARNING "Eep. no nodes for ino #%lu\n", ino); } + + spin_lock_bh(&c->erase_completion_lock); + for (ref = f->inocache->nodes; ref && ref->next_in_ino; ref = ref->next_in_ino) { /* Work out whether it's a data node or a dirent node */ if (ref->flash_offset & 1) { @@ -118,7 +145,12 @@ D1(printk(KERN_DEBUG "node at 0x%08x is obsoleted. Ignoring.\n", ref->flash_offset &~3)); continue; } - err = c->mtd->read(c->mtd, (ref->flash_offset & ~3), min(ref->totlen, sizeof(node)), &retlen, (void *)&node); + /* We can hold a pointer to a non-obsolete node without the spinlock, + but _obsolete_ nodes may disappear at any time, if the block + they're in gets erased */ + spin_unlock_bh(&c->erase_completion_lock); + + err = jffs2_flash_read(c, (ref->flash_offset & ~3), min(ref->totlen, sizeof(node)), &retlen, (void *)&node); if (err) { printk(KERN_WARNING "error %d reading node at 0x%08x in get_inode_nodes()\n", err, (ref->flash_offset) & ~3); goto free_out; @@ -143,8 +175,10 @@ if (node.d.version > *highest_version) *highest_version = node.d.version; if (ref->flash_offset & 1) { - /* Obsoleted */ - continue; + /* Obsoleted. This cannot happen, surely? 
dwmw2 20020308 */ + printk(KERN_ERR "Dirent node at 0x%08x became obsolete while we weren't looking\n", + ref->flash_offset & ~3); + BUG(); } fd = jffs2_alloc_full_dirent(node.d.nsize+1); if (!fd) { @@ -167,7 +201,7 @@ dirent we've already read from the flash */ if (retlen > sizeof(struct jffs2_raw_dirent)) - memcpy(&fd->name[0], &node.d.name[0], min((__u32)node.d.nsize, (retlen-sizeof(struct jffs2_raw_dirent)))); + memcpy(&fd->name[0], &node.d.name[0], min((uint32_t)node.d.nsize, (retlen-sizeof(struct jffs2_raw_dirent)))); /* Do we need to copy any more of the name directly from the flash? @@ -175,7 +209,7 @@ if (node.d.nsize + sizeof(struct jffs2_raw_dirent) > retlen) { int already = retlen - sizeof(struct jffs2_raw_dirent); - err = c->mtd->read(c->mtd, (ref->flash_offset & ~3) + retlen, + err = jffs2_flash_read(c, (ref->flash_offset & ~3) + retlen, node.d.nsize - already, &retlen, &fd->name[already]); if (!err && retlen != node.d.nsize - already) err = -EIO; @@ -207,9 +241,10 @@ D1(printk(KERN_DEBUG "version %d, highest_version now %d\n", node.d.version, *highest_version)); if (ref->flash_offset & 1) { - D1(printk(KERN_DEBUG "obsoleted\n")); - /* Obsoleted */ - continue; + /* Obsoleted. This cannot happen, surely? dwmw2 20020308 */ + printk(KERN_ERR "Inode node at 0x%08x became obsolete while we weren't looking\n", + ref->flash_offset & ~3); + BUG(); } tn = jffs2_alloc_tmp_dnode_info(); if (!tn) { @@ -254,7 +289,10 @@ break; } } + spin_lock_bh(&c->erase_completion_lock); + } + spin_unlock_bh(&c->erase_completion_lock); *tnp = ret_tn; *fdp = ret_fd; @@ -272,15 +310,19 @@ D2(printk(KERN_DEBUG "jffs2_get_ino_cache(): ino %u\n", ino)); spin_lock (&c->inocache_lock); - ret = c->inocache_list[ino % INOCACHE_HASHSIZE]; - while (ret && ret->ino < ino) { - ret = ret->next; - } - spin_unlock(&c->inocache_lock); + if (c->inocache_last && c->inocache_last->ino == ino) { + ret = c->inocache_last; + } else { + ret = c->inocache_list[ino % INOCACHE_HASHSIZE]; + while (ret && ret->ino < ino) { + ret = ret->next; + } - if (ret && ret->ino != ino) - ret = NULL; + if (ret && ret->ino != ino) + ret = NULL; + } + spin_unlock(&c->inocache_lock); D2(printk(KERN_DEBUG "jffs2_get_ino_cache found %p for ino %u\n", ret, ino)); return ret; @@ -299,6 +341,9 @@ } new->next = *prev; *prev = new; + + c->inocache_last = new; + spin_unlock(&c->inocache_lock); } @@ -316,6 +361,9 @@ if ((*prev) == old) { *prev = old->next; } + if (c->inocache_last == old) + c->inocache_last = NULL; + spin_unlock(&c->inocache_lock); } @@ -334,6 +382,7 @@ } c->inocache_list[i] = NULL; } + c->inocache_last = NULL; } void jffs2_free_raw_node_refs(struct jffs2_sb_info *c) diff -Nru a/fs/jffs2/nodelist.h b/fs/jffs2/nodelist.h --- a/fs/jffs2/nodelist.h Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/nodelist.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,16 +31,21 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. 
* - * $Id: nodelist.h,v 1.46.2.1 2002/02/23 14:04:44 dwmw2 Exp $ - * + zlib_init calls from v1.65 + * $Id: nodelist.h,v 1.68 2002/03/08 11:27:19 dwmw2 Exp $ * */ +#ifndef __JFFS2_NODELIST_H__ +#define __JFFS2_NODELIST_H__ + #include #include +#include /* For min/max in older kernels */ +#include #include #include +#include "os-linux.h" #ifndef CONFIG_JFFS2_FS_DEBUG #define CONFIG_JFFS2_FS_DEBUG 2 @@ -72,10 +77,10 @@ for this inode instead. The inode_cache will have NULL in the first word so you know when you've got there :) */ struct jffs2_raw_node_ref *next_phys; - // __u32 ino; - __u32 flash_offset; - __u32 totlen; -// __u16 nodetype; + // uint32_t ino; + uint32_t flash_offset; + uint32_t totlen; +// uint16_t nodetype; /* flash_offset & 3 always has to be zero, because nodes are always aligned at 4 bytes. So we have a couple of extra bits @@ -108,13 +113,16 @@ chain. */ struct jffs2_inode_cache *next; struct jffs2_raw_node_ref *nodes; - __u32 ino; + uint32_t ino; int nlink; }; struct jffs2_scan_info { struct jffs2_full_dirent *dents; struct jffs2_tmp_dnode_info *tmpnodes; + /* Latest i_size info */ + uint32_t version; + uint32_t isize; }; /* Larger representation of a raw node, kept in-core only when the @@ -124,9 +132,9 @@ struct jffs2_full_dnode { struct jffs2_raw_node_ref *raw; - __u32 ofs; /* Don't really need this, but optimisation */ - __u32 size; - __u32 frags; /* Number of fragments which currently refer + uint32_t ofs; /* Don't really need this, but optimisation */ + uint32_t size; + uint32_t frags; /* Number of fragments which currently refer to this node. When this reaches zero, the node is obsolete. */ @@ -141,15 +149,15 @@ { struct jffs2_tmp_dnode_info *next; struct jffs2_full_dnode *fn; - __u32 version; + uint32_t version; }; struct jffs2_full_dirent { struct jffs2_raw_node_ref *raw; struct jffs2_full_dirent *next; - __u32 version; - __u32 ino; /* == zero for unlink */ + uint32_t version; + uint32_t ino; /* == zero for unlink */ unsigned int nhash; unsigned char type; unsigned char name[0]; @@ -162,20 +170,20 @@ { struct jffs2_node_frag *next; struct jffs2_full_dnode *node; /* NULL for holes */ - __u32 size; - __u32 ofs; /* Don't really need this, but optimisation */ - __u32 node_ofs; /* offset within the physical node */ + uint32_t size; + uint32_t ofs; /* Don't really need this, but optimisation */ + uint32_t node_ofs; /* offset within the physical node */ }; struct jffs2_eraseblock { struct list_head list; int bad_count; - __u32 offset; /* of this block in the MTD */ + uint32_t offset; /* of this block in the MTD */ - __u32 used_size; - __u32 dirty_size; - __u32 free_size; /* Note that sector_size - free_size + uint32_t used_size; + uint32_t dirty_size; + uint32_t free_size; /* Note that sector_size - free_size is the address of the first free space */ struct jffs2_raw_node_ref *first_node; struct jffs2_raw_node_ref *last_node; @@ -207,7 +215,7 @@ } while(0) #define ACCT_PARANOIA_CHECK(jeb) do { \ - __u32 my_used_size = 0; \ + uint32_t my_used_size = 0; \ struct jffs2_raw_node_ref *ref2 = jeb->first_node; \ while (ref2) { \ if (!(ref2->flash_offset & 1)) \ @@ -249,8 +257,8 @@ void jffs2_add_tn_to_list(struct jffs2_tmp_dnode_info *tn, struct jffs2_tmp_dnode_info **list); int jffs2_get_inode_nodes(struct jffs2_sb_info *c, ino_t ino, struct jffs2_inode_info *f, struct jffs2_tmp_dnode_info **tnp, struct jffs2_full_dirent **fdp, - __u32 *highest_version, __u32 *latest_mctime, - __u32 *mctime_ver); + uint32_t *highest_version, uint32_t *latest_mctime, + uint32_t *mctime_ver); 
struct jffs2_inode_cache *jffs2_get_ino_cache(struct jffs2_sb_info *c, int ino); void jffs2_add_ino_cache (struct jffs2_sb_info *c, struct jffs2_inode_cache *new); void jffs2_del_ino_cache(struct jffs2_sb_info *c, struct jffs2_inode_cache *old); @@ -258,28 +266,33 @@ void jffs2_free_raw_node_refs(struct jffs2_sb_info *c); /* nodemgmt.c */ -int jffs2_reserve_space(struct jffs2_sb_info *c, __u32 minsize, __u32 *ofs, __u32 *len, int prio); -int jffs2_reserve_space_gc(struct jffs2_sb_info *c, __u32 minsize, __u32 *ofs, __u32 *len); -int jffs2_add_physical_node_ref(struct jffs2_sb_info *c, struct jffs2_raw_node_ref *new, __u32 len, int dirty); +int jffs2_reserve_space(struct jffs2_sb_info *c, uint32_t minsize, uint32_t *ofs, uint32_t *len, int prio); +int jffs2_reserve_space_gc(struct jffs2_sb_info *c, uint32_t minsize, uint32_t *ofs, uint32_t *len); +int jffs2_add_physical_node_ref(struct jffs2_sb_info *c, struct jffs2_raw_node_ref *new, uint32_t len, int dirty); void jffs2_complete_reservation(struct jffs2_sb_info *c); void jffs2_mark_node_obsolete(struct jffs2_sb_info *c, struct jffs2_raw_node_ref *raw); /* write.c */ -struct inode *jffs2_new_inode (struct inode *dir_i, int mode, struct jffs2_raw_inode *ri); -struct jffs2_full_dnode *jffs2_write_dnode(struct inode *inode, struct jffs2_raw_inode *ri, const unsigned char *data, __u32 datalen, __u32 flash_ofs, __u32 *writelen); -struct jffs2_full_dirent *jffs2_write_dirent(struct inode *inode, struct jffs2_raw_dirent *rd, const unsigned char *name, __u32 namelen, __u32 flash_ofs, __u32 *writelen); +int jffs2_do_new_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, uint32_t mode, struct jffs2_raw_inode *ri); +struct jffs2_full_dnode *jffs2_write_dnode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, struct jffs2_raw_inode *ri, const unsigned char *data, uint32_t datalen, uint32_t flash_ofs, uint32_t *writelen); +struct jffs2_full_dirent *jffs2_write_dirent(struct jffs2_sb_info *c, struct jffs2_inode_info *f, struct jffs2_raw_dirent *rd, const unsigned char *name, uint32_t namelen, uint32_t flash_ofs, uint32_t *writelen); +int jffs2_write_inode_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f, + struct jffs2_raw_inode *ri, unsigned char *buf, + uint32_t offset, uint32_t writelen, uint32_t *retlen); +int jffs2_do_create(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, struct jffs2_inode_info *f, struct jffs2_raw_inode *ri, const char *name, int namelen); +int jffs2_do_unlink(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, const char *name, int namelen, struct jffs2_inode_info *dead_f); +int jffs2_do_link (struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, uint32_t ino, uint8_t type, const char *name, int namelen); + /* readinode.c */ -void jffs2_truncate_fraglist (struct jffs2_sb_info *c, struct jffs2_node_frag **list, __u32 size); +void jffs2_truncate_fraglist (struct jffs2_sb_info *c, struct jffs2_node_frag **list, uint32_t size); int jffs2_add_full_dnode_to_fraglist(struct jffs2_sb_info *c, struct jffs2_node_frag **list, struct jffs2_full_dnode *fn); int jffs2_add_full_dnode_to_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, struct jffs2_full_dnode *fn); -void jffs2_read_inode (struct inode *); -void jffs2_clear_inode (struct inode *); +int jffs2_do_read_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, + uint32_t ino, struct jffs2_raw_inode *latest_node); +void jffs2_do_clear_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f); /* malloc.c */ -void 
jffs2_free_tmp_dnode_info_list(struct jffs2_tmp_dnode_info *tn); -void jffs2_free_full_dirent_list(struct jffs2_full_dirent *fd); - int jffs2_create_slab_caches(void); void jffs2_destroy_slab_caches(void); @@ -303,47 +316,24 @@ /* gc.c */ int jffs2_garbage_collect_pass(struct jffs2_sb_info *c); -/* background.c */ -int jffs2_start_garbage_collect_thread(struct jffs2_sb_info *c); -void jffs2_stop_garbage_collect_thread(struct jffs2_sb_info *c); -void jffs2_garbage_collect_trigger(struct jffs2_sb_info *c); - -/* dir.c */ -extern struct file_operations jffs2_dir_operations; -extern struct inode_operations jffs2_dir_inode_operations; - -/* file.c */ -extern struct file_operations jffs2_file_operations; -extern struct inode_operations jffs2_file_inode_operations; -extern struct address_space_operations jffs2_file_address_operations; -int jffs2_null_fsync(struct file *, struct dentry *, int); -int jffs2_setattr (struct dentry *dentry, struct iattr *iattr); -int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg); -int jffs2_do_readpage_unlock (struct inode *inode, struct page *pg); -int jffs2_readpage (struct file *, struct page *); -int jffs2_prepare_write (struct file *, struct page *, unsigned, unsigned); -int jffs2_commit_write (struct file *, struct page *, unsigned, unsigned); - -/* ioctl.c */ -int jffs2_ioctl(struct inode *, struct file *, unsigned int, unsigned long); - /* read.c */ int jffs2_read_dnode(struct jffs2_sb_info *c, struct jffs2_full_dnode *fd, unsigned char *buf, int ofs, int len); +int jffs2_read_inode_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f, + unsigned char *buf, uint32_t offset, uint32_t len); +char *jffs2_getlink(struct jffs2_sb_info *c, struct jffs2_inode_info *f); + /* compr.c */ unsigned char jffs2_compress(unsigned char *data_in, unsigned char *cpage_out, - __u32 *datalen, __u32 *cdatalen); + uint32_t *datalen, uint32_t *cdatalen); int jffs2_decompress(unsigned char comprtype, unsigned char *cdata_in, - unsigned char *data_out, __u32 cdatalen, __u32 datalen); + unsigned char *data_out, uint32_t cdatalen, uint32_t datalen); /* scan.c */ int jffs2_scan_medium(struct jffs2_sb_info *c); /* build.c */ -int jffs2_build_filesystem(struct jffs2_sb_info *c); - -/* symlink.c */ -extern struct inode_operations jffs2_symlink_inode_operations; +int jffs2_do_mount_fs(struct jffs2_sb_info *c); /* erase.c */ void jffs2_erase_block(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); @@ -351,6 +341,16 @@ void jffs2_mark_erased_blocks(struct jffs2_sb_info *c); void jffs2_erase_pending_trigger(struct jffs2_sb_info *c); +#ifdef CONFIG_JFFS2_FS_NAND +/* wbuf.c */ +int jffs2_flush_wbuf(struct jffs2_sb_info *c, int pad); +int jffs2_check_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); +int jffs2_write_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); +int jffs2_nand_read_failcnt(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); +#endif + /* compr_zlib.c */ int jffs2_zlib_init(void); void jffs2_zlib_exit(void); + +#endif /* __JFFS2_NODELIST_H__ */ diff -Nru a/fs/jffs2/nodemgmt.c b/fs/jffs2/nodemgmt.c --- a/fs/jffs2/nodemgmt.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/nodemgmt.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,13 +31,12 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. 
* - * $Id: nodemgmt.c,v 1.45.2.1 2002/02/23 14:13:34 dwmw2 Exp $ + * $Id: nodemgmt.c,v 1.63 2002/03/08 14:54:09 dwmw2 Exp $ * */ #include #include -#include #include #include #include "nodelist.h" @@ -62,9 +61,9 @@ * for the requested allocation. */ -static int jffs2_do_reserve_space(struct jffs2_sb_info *c, __u32 minsize, __u32 *ofs, __u32 *len); +static int jffs2_do_reserve_space(struct jffs2_sb_info *c, uint32_t minsize, uint32_t *ofs, uint32_t *len); -int jffs2_reserve_space(struct jffs2_sb_info *c, __u32 minsize, __u32 *ofs, __u32 *len, int prio) +int jffs2_reserve_space(struct jffs2_sb_info *c, uint32_t minsize, uint32_t *ofs, uint32_t *len, int prio) { int ret = -EAGAIN; int blocksneeded = JFFS2_RESERVED_BLOCKS_WRITE; @@ -121,7 +120,7 @@ return ret; } -int jffs2_reserve_space_gc(struct jffs2_sb_info *c, __u32 minsize, __u32 *ofs, __u32 *len) +int jffs2_reserve_space_gc(struct jffs2_sb_info *c, uint32_t minsize, uint32_t *ofs, uint32_t *len) { int ret = -EAGAIN; minsize = PAD(minsize); @@ -140,13 +139,21 @@ } /* Called with alloc sem _and_ erase_completion_lock */ -static int jffs2_do_reserve_space(struct jffs2_sb_info *c, __u32 minsize, __u32 *ofs, __u32 *len) +static int jffs2_do_reserve_space(struct jffs2_sb_info *c, uint32_t minsize, uint32_t *ofs, uint32_t *len) { struct jffs2_eraseblock *jeb = c->nextblock; restart: if (jeb && minsize > jeb->free_size) { /* Skip the end of this block and file it as having some dirty space */ + /* If there's a pending write to it, flush now */ + if (c->wbuf_len) { + spin_unlock_bh(&c->erase_completion_lock); + D1(printk(KERN_DEBUG "jffs2_do_reserve_space: Flushing write buffer\n")); + jffs2_flush_wbuf(c, 1); + spin_lock_bh(&c->erase_completion_lock); + /* We know nobody's going to have changed nextblock. Just continue */ + } c->dirty_size += jeb->free_size; c->free_size -= jeb->free_size; jeb->dirty_size += jeb->free_size; @@ -165,20 +172,44 @@ DECLARE_WAITQUEUE(wait, current); + if (!c->nr_erasing_blocks && + !list_empty(&c->erasable_list)) { + struct jffs2_eraseblock *ejeb; + + ejeb = list_entry(c->erasable_list.next, struct jffs2_eraseblock, list); + list_del(&ejeb->list); + list_add_tail(&ejeb->list, &c->erase_pending_list); + c->nr_erasing_blocks++; + D1(printk(KERN_DEBUG "jffs2_do_reserve_space: Triggering erase of erasable block at 0x%08x\n", + ejeb->offset)); + } + + if (!c->nr_erasing_blocks && + !list_empty(&c->erasable_pending_wbuf_list)) { + D1(printk(KERN_DEBUG "jffs2_do_reserve_space: Flushing write buffer\n")); + /* c->nextblock is NULL, no update to c->nextblock allowed */ + spin_unlock_bh(&c->erase_completion_lock); + jffs2_flush_wbuf(c, 1); + spin_lock_bh(&c->erase_completion_lock); + /* Have another go. It'll be on the erasable_list now */ + return -EAGAIN; + } + if (!c->nr_erasing_blocks) { -// if (list_empty(&c->erasing_list) && list_empty(&c->erase_pending_list) && list_empty(c->erase_complete_list)) { /* Ouch. We're in GC, or we wouldn't have got here. And there's no space left. At all. */ - printk(KERN_CRIT "Argh. No free space left for GC. nr_erasing_blocks is %d. nr_free_blocks is %d. (erasingempty: %s, erasependingempty: %s)\n", - c->nr_erasing_blocks, c->nr_free_blocks, list_empty(&c->erasing_list)?"yes":"no", list_empty(&c->erase_pending_list)?"yes":"no"); + printk(KERN_CRIT "Argh. No free space left for GC. nr_erasing_blocks is %d. nr_free_blocks is %d. 
(erasableempty: %s, erasingempty: %s, erasependingempty: %s)\n", + c->nr_erasing_blocks, c->nr_free_blocks, list_empty(&c->erasable_list)?"yes":"no", + list_empty(&c->erasing_list)?"yes":"no", list_empty(&c->erase_pending_list)?"yes":"no"); return -ENOSPC; } /* Make sure this can't deadlock. Someone has to start the erases of erase_pending blocks */ set_current_state(TASK_INTERRUPTIBLE); add_wait_queue(&c->erase_wait, &wait); - D1(printk(KERN_DEBUG "Waiting for erases to complete. erasing_blocks is %d. (erasingempty: %s, erasependingempty: %s)\n", - c->nr_erasing_blocks, list_empty(&c->erasing_list)?"yes":"no", list_empty(&c->erase_pending_list)?"yes":"no")); + D1(printk(KERN_DEBUG "Waiting for erases to complete. erasing_blocks is %d. (erasableempty: %s, erasingempty: %s, erasependingempty: %s)\n", + c->nr_erasing_blocks, list_empty(&c->erasable_list)?"yes":"no", + list_empty(&c->erasing_list)?"yes":"no", list_empty(&c->erase_pending_list)?"yes":"no")); if (!list_empty(&c->erase_pending_list)) { D1(printk(KERN_DEBUG "Triggering pending erases\n")); jffs2_erase_pending_trigger(c); @@ -200,7 +231,11 @@ list_del(next); c->nextblock = jeb = list_entry(next, struct jffs2_eraseblock, list); c->nr_free_blocks--; - if (jeb->free_size != c->sector_size - sizeof(struct jffs2_unknown_node)) { + + /* On NAND free_size == sector_size, cleanmarker is in spare area !*/ + if (jeb->free_size != c->sector_size - + (jffs2_cleanmarker_oob(c)) ? 0 : sizeof(struct jffs2_unknown_node)) { + printk(KERN_WARNING "Eep. Block 0x%08x taken from free_list had free_size of 0x%08x!!\n", jeb->offset, jeb->free_size); goto restart; } @@ -209,6 +244,20 @@ enough space */ *ofs = jeb->offset + (c->sector_size - jeb->free_size); *len = jeb->free_size; + + if (jeb->used_size == PAD(sizeof(struct jffs2_unknown_node)) && + !jeb->first_node->next_in_ino) { + /* Only node in it beforehand was a CLEANMARKER node (we think). + So mark it obsolete now that there's going to be another node + in the block. This will reduce used_size to zero but We've + already set c->nextblock so that jffs2_mark_node_obsolete() + won't try to refile it to the dirty_list. + */ + spin_unlock_bh(&c->erase_completion_lock); + jffs2_mark_node_obsolete(c, jeb->first_node); + spin_lock_bh(&c->erase_completion_lock); + } + D1(printk(KERN_DEBUG "jffs2_do_reserve_space(): Giving 0x%x bytes at 0x%x\n", *len, *ofs)); return 0; } @@ -216,9 +265,9 @@ /** * jffs2_add_physical_node_ref - add a physical node reference to the list * @c: superblock info - * @ofs: physical location of this physical node + * @new: new node reference to add * @len: length of this physical node - * @ino: inode number with which this physical node is associated + * @dirty: dirty flag for new node * * Should only be used to report nodes for which space has been allocated * by jffs2_reserve_space. @@ -226,12 +275,12 @@ * Must be called with the alloc_sem held. 
*/ -int jffs2_add_physical_node_ref(struct jffs2_sb_info *c, struct jffs2_raw_node_ref *new, __u32 len, int dirty) +int jffs2_add_physical_node_ref(struct jffs2_sb_info *c, struct jffs2_raw_node_ref *new, uint32_t len, int dirty) { struct jffs2_eraseblock *jeb; len = PAD(len); - jeb = &c->blocks[(new->flash_offset & ~3) / c->sector_size]; + jeb = &c->blocks[new->flash_offset / c->sector_size]; D1(printk(KERN_DEBUG "jffs2_add_physical_node_ref(): Node at 0x%x, size 0x%x\n", new->flash_offset & ~3, len)); #if 1 if (jeb != c->nextblock || (new->flash_offset & ~3) != jeb->offset + (c->sector_size - jeb->free_size)) { @@ -240,13 +289,14 @@ return -EINVAL; } #endif + spin_lock_bh(&c->erase_completion_lock); + if (!jeb->first_node) jeb->first_node = new; if (jeb->last_node) jeb->last_node->next_phys = new; jeb->last_node = new; - spin_lock_bh(&c->erase_completion_lock); jeb->free_size -= len; c->free_size -= len; if (dirty) { @@ -257,17 +307,26 @@ jeb->used_size += len; c->used_size += len; } - spin_unlock_bh(&c->erase_completion_lock); + if (!jeb->free_size && !jeb->dirty_size) { /* If it lives on the dirty_list, jffs2_reserve_space will put it there */ D1(printk(KERN_DEBUG "Adding full erase block at 0x%08x to clean_list (free 0x%08x, dirty 0x%08x, used 0x%08x\n", jeb->offset, jeb->free_size, jeb->dirty_size, jeb->used_size)); + if (c->wbuf_len) { + /* Flush the last write in the block if it's outstanding */ + spin_unlock_bh(&c->erase_completion_lock); + jffs2_flush_wbuf(c, 1); + spin_lock_bh(&c->erase_completion_lock); + } + list_add_tail(&jeb->list, &c->clean_list); c->nextblock = NULL; } ACCT_SANITY_CHECK(c,jeb); ACCT_PARANOIA_CHECK(jeb); + spin_unlock_bh(&c->erase_completion_lock); + return 0; } @@ -285,7 +344,7 @@ int blocknr; struct jffs2_unknown_node n; int ret; - ssize_t retlen; + size_t retlen; if(!ref) { printk(KERN_NOTICE "EEEEEK. jffs2_mark_node_obsolete called with NULL node\n"); @@ -327,41 +386,62 @@ spin_unlock_bh(&c->erase_completion_lock); return; } + if (jeb == c->nextblock) { D2(printk(KERN_DEBUG "Not moving nextblock 0x%08x to dirty/erase_pending list\n", jeb->offset)); - } else if (jeb == c->gcblock) { - D2(printk(KERN_DEBUG "Not moving gcblock 0x%08x to dirty/erase_pending list\n", jeb->offset)); -#if 0 /* We no longer do this here. It can screw the wear levelling. If you have a lot of static - data and a few blocks free, and you just create new files and keep deleting/overwriting - them, then you'd keep erasing and reusing those blocks without ever moving stuff around. - So we leave completely obsoleted blocks on the dirty_list and let the GC delete them - when it finds them there. That way, we still get the 'once in a while, take a clean block' - to spread out the flash usage */ } else if (!jeb->used_size) { - D1(printk(KERN_DEBUG "Eraseblock at 0x%08x completely dirtied. Removing from (dirty?) list...\n", jeb->offset)); - list_del(&jeb->list); - D1(printk(KERN_DEBUG "...and adding to erase_pending_list\n")); - list_add_tail(&jeb->list, &c->erase_pending_list); - c->nr_erasing_blocks++; - jffs2_erase_pending_trigger(c); - // OFNI_BS_2SFFJ(c)->s_dirt = 1; + if (jeb == c->gcblock) { + D1(printk(KERN_DEBUG "gcblock at 0x%08x completely dirtied. Clearing gcblock...\n", jeb->offset)); + c->gcblock = NULL; + } else { + D1(printk(KERN_DEBUG "Eraseblock at 0x%08x completely dirtied. Removing from (dirty?) 
list...\n", jeb->offset)); + list_del(&jeb->list); + } + if (c->wbuf_len) { + D1(printk(KERN_DEBUG "...and adding to erasable_pending_wbuf_list\n")); + list_add_tail(&jeb->list, &c->erasable_pending_wbuf_list); + + /* We've changed the rules slightly. After + writing a node you now mustn't drop the + alloc_sem before you've finished all the + list management - this is so that when we + get here, we know that no other nodes have + been written, and the above check on wbuf + is valid - wbuf_len is nonzero IFF the node + which obsoletes this node is still in the + wbuf. + + So we BUG() if that new rule is broken, to + make sure we catch it and fix it. + */ + if (!down_trylock(&c->alloc_sem)) { + up(&c->alloc_sem); + printk(KERN_CRIT "jffs2_mark_node_obsolete() called with wbuf active but alloc_sem not locked!\n"); + BUG(); + } + } else { + D1(printk(KERN_DEBUG "...and adding to erasable_list\n")); + list_add_tail(&jeb->list, &c->erasable_list); + } D1(printk(KERN_DEBUG "Done OK\n")); -#endif + } else if (jeb == c->gcblock) { + D2(printk(KERN_DEBUG "Not moving gcblock 0x%08x to dirty_list\n", jeb->offset)); } else if (jeb->dirty_size == ref->totlen) { D1(printk(KERN_DEBUG "Eraseblock at 0x%08x is freshly dirtied. Removing from clean list...\n", jeb->offset)); list_del(&jeb->list); D1(printk(KERN_DEBUG "...and adding to dirty_list\n")); list_add_tail(&jeb->list, &c->dirty_list); } + spin_unlock_bh(&c->erase_completion_lock); - if (c->mtd->type != MTD_NORFLASH && c->mtd->type != MTD_RAM) + if (!jffs2_can_mark_obsolete(c)) return; - if (OFNI_BS_2SFFJ(c)->s_flags & MS_RDONLY) + if (jffs2_is_readonly(c)) return; D1(printk(KERN_DEBUG "obliterating obsoleted node at 0x%08x\n", ref->flash_offset &~3)); - ret = c->mtd->read(c->mtd, ref->flash_offset &~3, sizeof(n), &retlen, (char *)&n); + ret = jffs2_flash_read(c, ref->flash_offset &~3, sizeof(n), &retlen, (char *)&n); if (ret) { printk(KERN_WARNING "Read error reading from obsoleted node at 0x%08x: %d\n", ref->flash_offset &~3, ret); return; @@ -379,7 +459,7 @@ return; } n.nodetype &= ~JFFS2_NODE_ACCURATE; - ret = c->mtd->write(c->mtd, ref->flash_offset&~3, sizeof(n), &retlen, (char *)&n); + ret = jffs2_flash_write(c, ref->flash_offset&~3, sizeof(n), &retlen, (char *)&n); if (ret) { printk(KERN_WARNING "Write error in obliterating obsoleted node at 0x%08x: %d\n", ref->flash_offset &~3, ret); return; diff -Nru a/fs/jffs2/os-linux.h b/fs/jffs2/os-linux.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/fs/jffs2/os-linux.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,170 @@ +/* + * JFFS2 -- Journalling Flash File System, Version 2. + * + * Copyright (C) 2002 Red Hat, Inc. + * + * Created by David Woodhouse + * + * The original JFFS, from which the design for JFFS2 was derived, + * was designed and implemented by Axis Communications AB. + * + * The contents of this file are subject to the Red Hat eCos Public + * License Version 1.1 (the "Licence"); you may not use this file + * except in compliance with the Licence. You may obtain a copy of + * the Licence at http://www.redhat.com/ + * + * Software distributed under the Licence is distributed on an "AS IS" + * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. + * See the Licence for the specific language governing rights and + * limitations under the Licence. 
+ * + * The Original Code is JFFS2 - Journalling Flash File System, version 2 + * + * Alternatively, the contents of this file may be used under the + * terms of the GNU General Public License version 2 (the "GPL"), in + * which case the provisions of the GPL are applicable instead of the + * above. If you wish to allow the use of your version of this file + * only under the terms of the GPL and not to allow others to use your + * version of this file under the RHEPL, indicate your decision by + * deleting the provisions above and replace them with the notice and + * other provisions required by the GPL. If you do not delete the + * provisions above, a recipient may use your version of this file + * under either the RHEPL or the GPL. + * + * $Id: os-linux.h,v 1.15 2002/03/08 11:31:48 dwmw2 Exp $ + * + */ + +#ifndef __JFFS2_OS_LINUX_H__ +#define __JFFS2_OS_LINUX_H__ +#include + +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,5,2) +#define JFFS2_INODE_INFO(i) (list_entry(i, struct jffs2_inode_info, vfs_inode)) +#define OFNI_EDONI_2SFFJ(f) (&(f)->vfs_inode) +#define JFFS2_SB_INFO(sb) (&sb->u.jffs2_sb) +#define OFNI_BS_2SFFJ(c) ((struct super_block *) ( ((char *)c) - ((char *)(&((struct super_block *)NULL)->u)) ) ) +#elif defined(JFFS2_OUT_OF_KERNEL) +#define JFFS2_INODE_INFO(i) ((struct jffs2_inode_info *) &(i)->u) +#define OFNI_EDONI_2SFFJ(f) ((struct inode *) ( ((char *)f) - ((char *)(&((struct inode *)NULL)->u)) ) ) +#define JFFS2_SB_INFO(sb) ((struct jffs2_sb_info *) &(sb)->u) +#define OFNI_BS_2SFFJ(c) ((struct super_block *) ( ((char *)c) - ((char *)(&((struct super_block *)NULL)->u)) ) ) +#else +#define JFFS2_INODE_INFO(i) (&i->u.jffs2_i) +#define OFNI_EDONI_2SFFJ(f) ((struct inode *) ( ((char *)f) - ((char *)(&((struct inode *)NULL)->u)) ) ) +#define JFFS2_SB_INFO(sb) (&sb->u.jffs2_sb) +#define OFNI_BS_2SFFJ(c) ((struct super_block *) ( ((char *)c) - ((char *)(&((struct super_block *)NULL)->u)) ) ) +#endif + + +#define JFFS2_F_I_SIZE(f) (OFNI_EDONI_2SFFJ(f)->i_size) +#define JFFS2_F_I_MODE(f) (OFNI_EDONI_2SFFJ(f)->i_mode) +#define JFFS2_F_I_UID(f) (OFNI_EDONI_2SFFJ(f)->i_uid) +#define JFFS2_F_I_GID(f) (OFNI_EDONI_2SFFJ(f)->i_gid) +#define JFFS2_F_I_CTIME(f) (OFNI_EDONI_2SFFJ(f)->i_ctime) +#define JFFS2_F_I_MTIME(f) (OFNI_EDONI_2SFFJ(f)->i_mtime) +#define JFFS2_F_I_ATIME(f) (OFNI_EDONI_2SFFJ(f)->i_atime) + +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,5,1) +#define JFFS2_F_I_RDEV_MIN(f) (minor(OFNI_EDONI_2SFFJ(f)->i_rdev)) +#define JFFS2_F_I_RDEV_MAJ(f) (major(OFNI_EDONI_2SFFJ(f)->i_rdev)) +#else +#define JFFS2_F_I_RDEV_MIN(f) (MINOR(to_kdev_t(OFNI_EDONI_2SFFJ(f)->i_rdev))) +#define JFFS2_F_I_RDEV_MAJ(f) (MAJOR(to_kdev_t(OFNI_EDONI_2SFFJ(f)->i_rdev))) +#endif + +static inline void jffs2_init_inode_info(struct jffs2_inode_info *f) +{ +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,5,2) + f->highest_version = 0; + f->fraglist = NULL; + f->metadata = NULL; + f->dents = NULL; + f->flags = 0; + f->usercompr = 0; +#else + memset(f, 0, sizeof(*f)); + init_MUTEX_LOCKED(&f->sem); +#endif +} + +#define jffs2_is_readonly(c) (OFNI_BS_2SFFJ(c)->s_flags & MS_RDONLY) + +#ifndef CONFIG_JFFS2_FS_NAND +#define jffs2_can_mark_obsolete(c) (1) +#define jffs2_cleanmarker_oob(c) (0) +#define jffs2_write_nand_cleanmarker(c,jeb) (-EIO) + +#define jffs2_flash_write(c, ofs, len, retlen, buf) ((c)->mtd->write((c)->mtd, ofs, len, retlen, buf)) +#define jffs2_flash_read(c, ofs, len, retlen, buf) ((c)->mtd->read((c)->mtd, ofs, len, retlen, buf)) +#define jffs2_flush_wbuf(c, flag) do { ; } while(0) +#define 
jffs2_nand_read_failcnt(c,jeb) do { ; } while(0) +#define jffs2_write_nand_badblock(c,jeb) do { ; } while(0) +#define jffs2_flash_writev jffs2_flash_direct_writev + +#else /* NAND support present */ + +#define jffs2_can_mark_obsolete(c) (c->mtd->type == MTD_NORFLASH || c->mtd->type == MTD_RAM) +#define jffs2_cleanmarker_oob(c) (c->mtd->type == MTD_NANDFLASH) + +#define jffs2_flash_write_oob(c, ofs, len, retlen, buf) ((c)->mtd->write_oob((c)->mtd, ofs, len, retlen, buf)) +#define jffs2_flash_read_oob(c, ofs, len, retlen, buf) ((c)->mtd->read_oob((c)->mtd, ofs, len, retlen, buf)) + + +/* wbuf.c */ +int jffs2_flash_writev(struct jffs2_sb_info *c, const struct iovec *vecs, unsigned long count, loff_t to, size_t *retlen); +int jffs2_flash_write(struct jffs2_sb_info *c, loff_t ofs, size_t len, size_t *retlen, const u_char *buf); +int jffs2_flash_read(struct jffs2_sb_info *c, loff_t ofs, size_t len, size_t *retlen, u_char *buf); +int jffs2_check_oob_empty(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb,int mode); +int jffs2_check_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); +int jffs2_write_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); +int jffs2_write_nand_badblock(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb); +#endif /* NAND */ + +/* background.c */ +int jffs2_start_garbage_collect_thread(struct jffs2_sb_info *c); +void jffs2_stop_garbage_collect_thread(struct jffs2_sb_info *c); +void jffs2_garbage_collect_trigger(struct jffs2_sb_info *c); + +/* dir.c */ +extern struct file_operations jffs2_dir_operations; +extern struct inode_operations jffs2_dir_inode_operations; + +/* file.c */ +extern struct file_operations jffs2_file_operations; +extern struct inode_operations jffs2_file_inode_operations; +extern struct address_space_operations jffs2_file_address_operations; +int jffs2_fsync(struct file *, struct dentry *, int); +int jffs2_setattr (struct dentry *dentry, struct iattr *iattr); +int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg); +int jffs2_do_readpage_unlock (struct inode *inode, struct page *pg); +int jffs2_readpage (struct file *, struct page *); +int jffs2_prepare_write (struct file *, struct page *, unsigned, unsigned); +int jffs2_commit_write (struct file *, struct page *, unsigned, unsigned); + +/* ioctl.c */ +int jffs2_ioctl(struct inode *, struct file *, unsigned int, unsigned long); + +/* symlink.c */ +extern struct inode_operations jffs2_symlink_inode_operations; + +/* fs.c */ +void jffs2_read_inode (struct inode *); +void jffs2_clear_inode (struct inode *); +struct inode *jffs2_new_inode (struct inode *dir_i, int mode, + struct jffs2_raw_inode *ri); +int jffs2_statfs (struct super_block *, struct statfs *); +void jffs2_write_super (struct super_block *); +int jffs2_remount_fs (struct super_block *, int *, char *); +int jffs2_do_fill_super(struct super_block *sb, void *data, int silent); + +/* writev.c */ +int jffs2_flash_direct_writev(struct jffs2_sb_info *c, const struct iovec *vecs, + unsigned long count, loff_t to, size_t *retlen); + +/* super.c */ + + +#endif /* __JFFS2_OS_LINUX_H__ */ + + diff -Nru a/fs/jffs2/pushpull.c b/fs/jffs2/pushpull.c --- a/fs/jffs2/pushpull.c Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,71 +0,0 @@ -/* - * JFFS2 -- Journalling Flash File System, Version 2. - * - * Copyright (C) 2001 Red Hat, Inc. 
- * - * Created by David Woodhouse - * - * The original JFFS, from which the design for JFFS2 was derived, - * was designed and implemented by Axis Communications AB. - * - * The contents of this file are subject to the Red Hat eCos Public - * License Version 1.1 (the "Licence"); you may not use this file - * except in compliance with the Licence. You may obtain a copy of - * the Licence at http://www.redhat.com/ - * - * Software distributed under the Licence is distributed on an "AS IS" - * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. - * See the Licence for the specific language governing rights and - * limitations under the Licence. - * - * The Original Code is JFFS2 - Journalling Flash File System, version 2 - * - * Alternatively, the contents of this file may be used under the - * terms of the GNU General Public License version 2 (the "GPL"), in - * which case the provisions of the GPL are applicable instead of the - * above. If you wish to allow the use of your version of this file - * only under the terms of the GPL and not to allow others to use your - * version of this file under the RHEPL, indicate your decision by - * deleting the provisions above and replace them with the notice and - * other provisions required by the GPL. If you do not delete the - * provisions above, a recipient may use your version of this file - * under either the RHEPL or the GPL. - * - * $Id: pushpull.c,v 1.7 2001/09/23 10:04:15 rmk Exp $ - * - */ - -#include -#include "pushpull.h" -#include - -void init_pushpull(struct pushpull *pp, char *buf, unsigned buflen, unsigned ofs, unsigned reserve) -{ - pp->buf = buf; - pp->buflen = buflen; - pp->ofs = ofs; - pp->reserve = reserve; -} - - -int pushbit(struct pushpull *pp, int bit, int use_reserved) -{ - if (pp->ofs >= pp->buflen - (use_reserved?0:pp->reserve)) { - return -ENOSPC; - } - - if (bit) { - pp->buf[pp->ofs >> 3] |= (1<<(7-(pp->ofs &7))); - } - else { - pp->buf[pp->ofs >> 3] &= ~(1<<(7-(pp->ofs &7))); - } - pp->ofs++; - - return 0; -} - -int pushedbits(struct pushpull *pp) -{ - return pp->ofs; -} diff -Nru a/fs/jffs2/pushpull.h b/fs/jffs2/pushpull.h --- a/fs/jffs2/pushpull.h Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/pushpull.h Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,12 +31,15 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. 
* - * $Id: pushpull.h,v 1.5 2001/09/23 10:04:15 rmk Exp $ + * $Id: pushpull.h,v 1.7 2002/03/06 12:37:08 dwmw2 Exp $ * */ #ifndef __PUSHPULL_H__ #define __PUSHPULL_H__ + +#include + struct pushpull { unsigned char *buf; unsigned int buflen; @@ -44,9 +47,36 @@ unsigned int reserve; }; -void init_pushpull(struct pushpull *, char *, unsigned, unsigned, unsigned); -int pushbit(struct pushpull *pp, int bit, int use_reserved); -int pushedbits(struct pushpull *pp); + +static inline void init_pushpull(struct pushpull *pp, char *buf, unsigned buflen, unsigned ofs, unsigned reserve) +{ + pp->buf = buf; + pp->buflen = buflen; + pp->ofs = ofs; + pp->reserve = reserve; +} + +static inline int pushbit(struct pushpull *pp, int bit, int use_reserved) +{ + if (pp->ofs >= pp->buflen - (use_reserved?0:pp->reserve)) { + return -ENOSPC; + } + + if (bit) { + pp->buf[pp->ofs >> 3] |= (1<<(7-(pp->ofs &7))); + } + else { + pp->buf[pp->ofs >> 3] &= ~(1<<(7-(pp->ofs &7))); + } + pp->ofs++; + + return 0; +} + +static inline int pushedbits(struct pushpull *pp) +{ + return pp->ofs; +} static inline int pullbit(struct pushpull *pp) { diff -Nru a/fs/jffs2/read.c b/fs/jffs2/read.c --- a/fs/jffs2/read.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/read.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,14 +31,14 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: read.c,v 1.13.2.1 2002/02/01 23:32:33 dwmw2 Exp $ + * $Id: read.c,v 1.22 2002/03/02 22:08:27 dwmw2 Exp $ * */ #include #include #include -#include +#include #include #include "nodelist.h" @@ -46,7 +46,7 @@ { struct jffs2_raw_inode *ri; size_t readlen; - __u32 crc; + uint32_t crc; unsigned char *decomprbuf = NULL; unsigned char *readbuf = NULL; int ret = 0; @@ -55,7 +55,7 @@ if (!ri) return -ENOMEM; - ret = c->mtd->read(c->mtd, fd->raw->flash_offset & ~3, sizeof(*ri), &readlen, (char *)ri); + ret = jffs2_flash_read(c, fd->raw->flash_offset & ~3, sizeof(*ri), &readlen, (char *)ri); if (ret) { jffs2_free_raw_inode(ri); printk(KERN_WARNING "Error reading node from 0x%08x: %d\n", fd->raw->flash_offset & ~3, ret); @@ -124,7 +124,7 @@ } D2(printk(KERN_DEBUG "Read %d bytes to %p\n", ri->csize, readbuf)); - ret = c->mtd->read(c->mtd, (fd->raw->flash_offset &~3) + sizeof(*ri), ri->csize, &readlen, readbuf); + ret = jffs2_flash_read(c, (fd->raw->flash_offset &~3) + sizeof(*ri), ri->csize, &readlen, readbuf); if (!ret && readlen != ri->csize) ret = -EIO; @@ -160,4 +160,100 @@ jffs2_free_raw_inode(ri); return ret; +} + +int jffs2_read_inode_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f, + unsigned char *buf, uint32_t offset, uint32_t len) +{ + uint32_t end = offset + len; + struct jffs2_node_frag *frag = f->fraglist; + int ret; + + D1(printk(KERN_DEBUG "jffs2_read_inode_range: ino #%u, range 0x%08x-0x%08x\n", + f->inocache->ino, offset, offset+len)); + + while(frag && frag->ofs + frag->size <= offset) { + D2(printk(KERN_DEBUG "skipping frag %d-%d; before the region we care about\n", frag->ofs, frag->ofs + frag->size)); + frag = frag->next; + } + /* XXX FIXME: Where a single physical node actually shows up in two + frags, we read it twice. Don't do that. 
*/ + /* Now we're pointing at the first frag which overlaps our page */ + while(offset < end) { + D2(printk(KERN_DEBUG "jffs2_read_inode_range: offset %d, end %d\n", offset, end)); + if (!frag || frag->ofs > offset) { + uint32_t holesize = end - offset; + if (frag) { + D1(printk(KERN_NOTICE "Eep. Hole in ino #%u fraglist. frag->ofs = 0x%08x, offset = 0x%08x\n", f->inocache->ino, frag->ofs, offset)); + holesize = min(holesize, frag->ofs - offset); + D1(jffs2_print_frag_list(f)); + } + D1(printk(KERN_DEBUG "Filling non-frag hole from %d-%d\n", offset, offset+holesize)); + memset(buf, 0, holesize); + buf += holesize; + offset += holesize; + continue; + } else if (frag->ofs < offset && (offset & (PAGE_CACHE_SIZE-1)) != 0) { + D1(printk(KERN_NOTICE "Eep. Overlap in ino #%u fraglist. frag->ofs = 0x%08x, offset = 0x%08x\n", + f->inocache->ino, frag->ofs, offset)); + D1(jffs2_print_frag_list(f)); + memset(buf, 0, end - offset); + return -EIO; + } else if (!frag->node) { + uint32_t holeend = min(end, frag->ofs + frag->size); + D1(printk(KERN_DEBUG "Filling frag hole from %d-%d (frag 0x%x 0x%x)\n", offset, holeend, frag->ofs, frag->ofs + frag->size)); + memset(buf, 0, holeend - offset); + buf += holeend - offset; + offset = holeend; + frag = frag->next; + continue; + } else { + uint32_t readlen; + readlen = min(frag->size, end - offset); + D1(printk(KERN_DEBUG "Reading %d-%d from node at 0x%x\n", frag->ofs, frag->ofs+readlen, frag->node->raw->flash_offset & ~3)); + ret = jffs2_read_dnode(c, frag->node, buf, frag->ofs - frag->node->ofs, readlen); + D2(printk(KERN_DEBUG "node read done\n")); + if (ret) { + D1(printk(KERN_DEBUG"jffs2_read_inode_range error %d\n",ret)); + memset(buf, 0, frag->size); + return ret; + } + } + buf += frag->size; + offset += frag->size; + frag = frag->next; + D2(printk(KERN_DEBUG "node read was OK. Looping\n")); + } + return 0; +} + +/* Core function to read symlink target. */ +char *jffs2_getlink(struct jffs2_sb_info *c, struct jffs2_inode_info *f) +{ + char *buf; + int ret; + + down(&f->sem); + + if (!f->metadata) { + printk(KERN_NOTICE "No metadata for symlink inode #%u\n", f->inocache->ino); + up(&f->sem); + return ERR_PTR(-EINVAL); + } + buf = kmalloc(f->metadata->size+1, GFP_USER); + if (!buf) { + up(&f->sem); + return ERR_PTR(-ENOMEM); + } + buf[f->metadata->size]=0; + + ret = jffs2_read_dnode(c, f->metadata, buf, 0, f->metadata->size); + + up(&f->sem); + + if (ret) { + kfree(buf); + return ERR_PTR(ret); + } + return buf; } diff -Nru a/fs/jffs2/readinode.c b/fs/jffs2/readinode.c --- a/fs/jffs2/readinode.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/readinode.c Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,19 +31,16 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: readinode.c,v 1.58.2.2 2002/02/23 14:25:37 dwmw2 Exp $ + * $Id: readinode.c,v 1.71 2002/03/06 12:25:59 dwmw2 Exp $ * */ -/* Given an inode, probably with existing list of fragments, add the new node - * to the fragment list. - */ #include #include #include #include #include -#include +#include #include "nodelist.h" @@ -64,6 +61,9 @@ }) +/* Given an inode, probably with existing list of fragments, add the new node + * to the fragment list. 
+ */ int jffs2_add_full_dnode_to_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, struct jffs2_full_dnode *fn) { int ret; @@ -101,7 +101,7 @@ struct jffs2_node_frag *this, **prev, *old; struct jffs2_node_frag *newfrag, *newfrag2; - __u32 lastend = 0; + uint32_t lastend = 0; newfrag = jffs2_alloc_node_frag(); @@ -222,7 +222,7 @@ return 0; } -void jffs2_truncate_fraglist (struct jffs2_sb_info *c, struct jffs2_node_frag **list, __u32 size) +void jffs2_truncate_fraglist (struct jffs2_sb_info *c, struct jffs2_node_frag **list, uint32_t size) { D1(printk(KERN_DEBUG "Truncating fraglist to 0x%08x bytes\n", size)); @@ -243,68 +243,53 @@ /* Scan the list of all nodes present for this ino, build map of versions, etc. */ -void jffs2_read_inode (struct inode *inode) +int jffs2_do_read_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, + uint32_t ino, struct jffs2_raw_inode *latest_node) { struct jffs2_tmp_dnode_info *tn_list, *tn; struct jffs2_full_dirent *fd_list; - struct jffs2_inode_info *f; struct jffs2_full_dnode *fn = NULL; - struct jffs2_sb_info *c; - struct jffs2_raw_inode latest_node; - __u32 latest_mctime, mctime_ver; + uint32_t crc; + uint32_t latest_mctime, mctime_ver; + uint32_t mdata_ver = 0; + size_t retlen; int ret; - ssize_t retlen; - D1(printk(KERN_DEBUG "jffs2_read_inode(): inode->i_ino == %lu\n", inode->i_ino)); + D2(printk(KERN_DEBUG "jffs2_do_read_inode(): getting inocache\n")); - f = JFFS2_INODE_INFO(inode); - c = JFFS2_SB_INFO(inode->i_sb); + f->inocache = jffs2_get_ino_cache(c, ino); - f->highest_version = 0; - f->fraglist = NULL; - f->metadata = NULL; - f->dents = NULL; - f->flags = 0; - f->usercompr = 0; - D2(printk(KERN_DEBUG "getting inocache\n")); - f->inocache = jffs2_get_ino_cache(c, inode->i_ino); - D2(printk(KERN_DEBUG "jffs2_read_inode(): Got inocache at %p\n", f->inocache)); + D2(printk(KERN_DEBUG "jffs2_do_read_inode(): Got inocache at %p\n", f->inocache)); - if (!f->inocache && inode->i_ino == 1) { + if (!f->inocache && ino == 1) { /* Special case - no root inode on medium */ f->inocache = jffs2_alloc_inode_cache(); if (!f->inocache) { - printk(KERN_CRIT "jffs2_read_inode(): Cannot allocate inocache for root inode\n"); - make_bad_inode(inode); - return; + printk(KERN_CRIT "jffs2_do_read_inode(): Cannot allocate inocache for root inode\n"); + return -ENOMEM; } - D1(printk(KERN_DEBUG "jffs2_read_inode(): Creating inocache for root inode\n")); + D1(printk(KERN_DEBUG "jffs2_do_read_inode(): Creating inocache for root inode\n")); memset(f->inocache, 0, sizeof(struct jffs2_inode_cache)); f->inocache->ino = f->inocache->nlink = 1; f->inocache->nodes = (struct jffs2_raw_node_ref *)f->inocache; jffs2_add_ino_cache(c, f->inocache); } if (!f->inocache) { - printk(KERN_WARNING "jffs2_read_inode() on nonexistent ino %lu\n", (unsigned long)inode->i_ino); - make_bad_inode(inode); - return; + printk(KERN_WARNING "jffs2_do_read_inode() on nonexistent ino %u\n", ino); + return -ENOENT; } - D1(printk(KERN_DEBUG "jffs2_read_inode(): ino #%lu nlink is %d\n", (unsigned long)inode->i_ino, f->inocache->nlink)); - inode->i_nlink = f->inocache->nlink; + D1(printk(KERN_DEBUG "jffs2_do_read_inode(): ino #%u nlink is %d\n", ino, f->inocache->nlink)); /* Grab all nodes relevant to this ino */ - ret = jffs2_get_inode_nodes(c, inode->i_ino, f, &tn_list, &fd_list, &f->highest_version, &latest_mctime, &mctime_ver); + ret = jffs2_get_inode_nodes(c, ino, f, &tn_list, &fd_list, &f->highest_version, &latest_mctime, &mctime_ver); if (ret) { - printk(KERN_CRIT "jffs2_get_inode_nodes() 
for ino %lu returned %d\n", inode->i_ino, ret); - make_bad_inode(inode); - return; + printk(KERN_CRIT "jffs2_get_inode_nodes() for ino %u returned %d\n", ino, ret); + return ret; } f->dents = fd_list; while (tn_list) { - static __u32 mdata_ver; - tn = tn_list; fn = tn->fn; @@ -331,150 +316,106 @@ } if (!fn) { /* No data nodes for this inode. */ - if (inode->i_ino != 1) { - printk(KERN_WARNING "jffs2_read_inode(): No data nodes found for ino #%lu\n", inode->i_ino); + if (ino != 1) { + printk(KERN_WARNING "jffs2_do_read_inode(): No data nodes found for ino #%u\n", ino); if (!fd_list) { - make_bad_inode(inode); - return; + return -EIO; } - printk(KERN_WARNING "jffs2_read_inode(): But it has children so we fake some modes for it\n"); + printk(KERN_WARNING "jffs2_do_read_inode(): But it has children so we fake some modes for it\n"); } - inode->i_mode = S_IFDIR | S_IRUGO | S_IWUSR | S_IXUGO; - latest_node.version = 0; - inode->i_atime = inode->i_ctime = inode->i_mtime = CURRENT_TIME; - inode->i_nlink = f->inocache->nlink; - inode->i_size = 0; - } else { - __u32 crc; + latest_node->mode = S_IFDIR | S_IRUGO | S_IWUSR | S_IXUGO; + latest_node->version = 0; + latest_node->atime = latest_node->ctime = latest_node->mtime = 0; + latest_node->isize = 0; + latest_node->gid = 0; + latest_node->uid = 0; + return 0; + } - ret = c->mtd->read(c->mtd, fn->raw->flash_offset & ~3, sizeof(latest_node), &retlen, (void *)&latest_node); - if (ret || retlen != sizeof(latest_node)) { - printk(KERN_NOTICE "MTD read in jffs2_read_inode() failed: Returned %d, %ld of %d bytes read\n", - ret, (long)retlen, sizeof(latest_node)); - jffs2_clear_inode(inode); - make_bad_inode(inode); - return; - } - - crc = crc32(0, &latest_node, sizeof(latest_node)-8); - if (crc != latest_node.node_crc) { - printk(KERN_NOTICE "CRC failed for read_inode of inode %ld at physical location 0x%x\n", inode->i_ino, fn->raw->flash_offset & ~3); - jffs2_clear_inode(inode); - make_bad_inode(inode); - return; - } - - inode->i_mode = latest_node.mode; - inode->i_uid = latest_node.uid; - inode->i_gid = latest_node.gid; - inode->i_size = latest_node.isize; - if (S_ISREG(inode->i_mode)) - jffs2_truncate_fraglist(c, &f->fraglist, latest_node.isize); - inode->i_atime = latest_node.atime; - inode->i_mtime = latest_node.mtime; - inode->i_ctime = latest_node.ctime; - } - - /* OK, now the special cases. Certain inode types should - have only one data node, and it's kept as the metadata - node */ - if (S_ISBLK(inode->i_mode) || S_ISCHR(inode->i_mode) || - S_ISLNK(inode->i_mode)) { - if (f->metadata) { - printk(KERN_WARNING "Argh. Special inode #%lu with mode 0%o had metadata node\n", inode->i_ino, inode->i_mode); - jffs2_clear_inode(inode); - make_bad_inode(inode); - return; - } - if (!f->fraglist) { - printk(KERN_WARNING "Argh. Special inode #%lu with mode 0%o has no fragments\n", inode->i_ino, inode->i_mode); - jffs2_clear_inode(inode); - make_bad_inode(inode); - return; - } - /* ASSERT: f->fraglist != NULL */ - if (f->fraglist->next) { - printk(KERN_WARNING "Argh. Special inode #%lu with mode 0%o had more than one node\n", inode->i_ino, inode->i_mode); - /* FIXME: Deal with it - check crc32, check for duplicate node, check times and discard the older one */ - jffs2_clear_inode(inode); - make_bad_inode(inode); - return; - } - /* OK. 
We're happy */ - f->metadata = f->fraglist->node; - jffs2_free_node_frag(f->fraglist); - f->fraglist = NULL; - } - - inode->i_blksize = PAGE_SIZE; - inode->i_blocks = (inode->i_size + 511) >> 9; - - switch (inode->i_mode & S_IFMT) { - unsigned short rdev; + ret = jffs2_flash_read(c, fn->raw->flash_offset & ~3, sizeof(*latest_node), &retlen, (void *)latest_node); + if (ret || retlen != sizeof(*latest_node)) { + printk(KERN_NOTICE "MTD read in jffs2_do_read_inode() failed: Returned %d, %ld of %d bytes read\n", + ret, (long)retlen, sizeof(*latest_node)); + /* FIXME: If this fails, there seems to be a memory leak. Find it. */ + jffs2_do_clear_inode(c, f); + return ret?ret:-EIO; + } + + crc = crc32(0, latest_node, sizeof(*latest_node)-8); + if (crc != latest_node->node_crc) { + printk(KERN_NOTICE "CRC failed for read_inode of inode %u at physical location 0x%x\n", ino, fn->raw->flash_offset & ~3); + jffs2_do_clear_inode(c, f); + return -EIO; + } - case S_IFLNK: - inode->i_op = &jffs2_symlink_inode_operations; - /* Hack to work around broken isize in old symlink code. - Remove this when dwmw2 comes to his senses and stops - symlinks from being an entirely gratuitous special - case. */ - if (!inode->i_size) - inode->i_size = latest_node.dsize; - break; - + switch(latest_node->mode & S_IFMT) { case S_IFDIR: - if (mctime_ver > latest_node.version) { + if (mctime_ver > latest_node->version) { /* The times in the latest_node are actually older than mctime in the latest dirent. Cheat. */ - inode->i_mtime = inode->i_ctime = inode->i_atime = - latest_mctime; + latest_node->ctime = latest_node->mtime = latest_mctime; } - inode->i_op = &jffs2_dir_inode_operations; - inode->i_fop = &jffs2_dir_operations; break; + case S_IFREG: - inode->i_op = &jffs2_file_inode_operations; - inode->i_fop = &jffs2_file_operations; - inode->i_mapping->a_ops = &jffs2_file_address_operations; - inode->i_mapping->nrpages = 0; + /* If it was a regular file, truncate it to the latest node's isize */ + jffs2_truncate_fraglist(c, &f->fraglist, latest_node->isize); break; + case S_IFLNK: + /* Hack to work around broken isize in old symlink code. + Remove this when dwmw2 comes to his senses and stops + symlinks from being an entirely gratuitous special + case. */ + if (!latest_node->isize) + latest_node->isize = latest_node->dsize; + /* fall through... */ + case S_IFBLK: case S_IFCHR: - /* Read the device numbers from the media */ - D1(printk(KERN_DEBUG "Reading device numbers from flash\n")); - if (jffs2_read_dnode(c, f->metadata, (char *)&rdev, 0, sizeof(rdev)) < 0) { - /* Eep */ - printk(KERN_NOTICE "Read device numbers for inode %lu failed\n", (unsigned long)inode->i_ino); - jffs2_clear_inode(inode); - make_bad_inode(inode); - return; - } - - case S_IFSOCK: - case S_IFIFO: - inode->i_op = &jffs2_file_inode_operations; - init_special_inode(inode, inode->i_mode, kdev_t_to_nr(mk_kdev(rdev>>8, rdev&0xff))); + /* Certain inode types should have only one data node, and it's + kept as the metadata node */ + if (f->metadata) { + printk(KERN_WARNING "Argh. Special inode #%u with mode 0%o had metadata node\n", ino, latest_node->mode); + jffs2_do_clear_inode(c, f); + return -EIO; + } + if (!f->fraglist) { + printk(KERN_WARNING "Argh. Special inode #%u with mode 0%o has no fragments\n", ino, latest_node->mode); + jffs2_do_clear_inode(c, f); + return -EIO; + } + /* ASSERT: f->fraglist != NULL */ + if (f->fraglist->next) { + printk(KERN_WARNING "Argh. 
Special inode #%u with mode 0%o had more than one node\n", ino, latest_node->mode); + /* FIXME: Deal with it - check crc32, check for duplicate node, check times and discard the older one */ + jffs2_do_clear_inode(c, f); + return -EIO; + } + /* OK. We're happy */ + f->metadata = f->fraglist->node; + jffs2_free_node_frag(f->fraglist); + f->fraglist = NULL; break; - - default: - printk(KERN_WARNING "jffs2_read_inode(): Bogus imode %o for ino %lu", inode->i_mode, (unsigned long)inode->i_ino); } - D1(printk(KERN_DEBUG "jffs2_read_inode() returning\n")); + + return 0; } -void jffs2_clear_inode (struct inode *inode) + +void jffs2_do_clear_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f) { - /* We can forget about this inode for now - drop all - * the nodelists associated with it, etc. - */ - struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb); struct jffs2_node_frag *frag, *frags; struct jffs2_full_dirent *fd, *fds; - struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); - D1(printk(KERN_DEBUG "jffs2_clear_inode(): ino #%lu mode %o\n", inode->i_ino, inode->i_mode)); + /* If it's a deleted inode, grab the alloc_sem to keep the + (maybe temporary) BUG() in jffs2_mark_node_obsolete() + from triggering */ + if(!f->inocache->nlink) + down(&c->alloc_sem); + + down(&f->sem); frags = f->fraglist; fds = f->dents; @@ -487,7 +428,7 @@ while (frags) { frag = frags; frags = frag->next; - D2(printk(KERN_DEBUG "jffs2_clear_inode: frag at 0x%x-0x%x: node %p, frags %d--\n", frag->ofs, frag->ofs+frag->size, frag->node, frag->node?frag->node->frags:0)); + D2(printk(KERN_DEBUG "jffs2_do_clear_inode: frag at 0x%x-0x%x: node %p, frags %d--\n", frag->ofs, frag->ofs+frag->size, frag->node, frag->node?frag->node->frags:0)); if (frag->node && !(--frag->node->frags)) { /* Not a hole, and it's the final remaining frag of this node. Free the node */ @@ -503,10 +444,12 @@ fds = fd->next; jffs2_free_full_dirent(fd); } - // if (!f->inocache->nlink) { - // D1(printk(KERN_DEBUG "jffs2_clear_inode() deleting inode #%lu\n", inode->i_ino)); - // jffs2_del_ino_cache(JFFS2_SB_INFO(inode->i_sb), f->inocache); - // jffs2_free_inode_cache(f->inocache); - // } -}; + /* Urgh. Is there a nicer way to do this? */ + if(!f->inocache->nlink) { + up(&f->sem); + up(&c->alloc_sem); + } else { + up(&f->sem); + } +} diff -Nru a/fs/jffs2/scan.c b/fs/jffs2/scan.c --- a/fs/jffs2/scan.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/scan.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,12 +31,11 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: scan.c,v 1.51.2.2 2002/02/23 13:34:31 dwmw2 Exp $ + * $Id: scan.c,v 1.69 2002/03/08 11:03:23 dwmw2 Exp $ * */ #include #include -#include #include #include #include @@ -71,15 +70,21 @@ * Returning an error will abort the mount - bad checksums etc. should just mark the space * as dirty. 
*/ -static int jffs2_scan_empty(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, __u32 *ofs, int *noise); -static int jffs2_scan_inode_node(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, __u32 *ofs); -static int jffs2_scan_dirent_node(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, __u32 *ofs); - +static int jffs2_scan_empty(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t *ofs, int *noise); +static int jffs2_scan_inode_node(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t *ofs); +static int jffs2_scan_dirent_node(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t *ofs); + +#define BLK_STATE_ALLFF 0 +#define BLK_STATE_CLEAN 1 +#define BLK_STATE_PARTDIRTY 2 +#define BLK_STATE_CLEANMARKER 3 +#define BLK_STATE_ALLDIRTY 4 +#define BLK_STATE_BADBLOCK 5 int jffs2_scan_medium(struct jffs2_sb_info *c) { int i, ret; - __u32 empty_blocks = 0; + uint32_t empty_blocks = 0, bad_blocks = 0; if (!c->blocks) { printk(KERN_WARNING "EEEK! c->blocks is NULL!\n"); @@ -95,7 +100,8 @@ ACCT_PARANOIA_CHECK(jeb); /* Now decide which list to put it on */ - if (ret == 1) { + switch(ret) { + case BLK_STATE_ALLFF: /* * Empty block. Since we can't be sure it * was entirely erased, we just queue it for erase @@ -103,10 +109,12 @@ * is complete. Meanwhile we still count it as empty * for later checks. */ - list_add(&jeb->list, &c->erase_pending_list); empty_blocks++; + list_add(&jeb->list, &c->erase_pending_list); c->nr_erasing_blocks++; - } else if (jeb->used_size == PAD(sizeof(struct jffs2_unknown_node)) && !jeb->first_node->next_in_ino) { + break; + + case BLK_STATE_CLEANMARKER: /* Only a CLEANMARKER node is valid */ if (!jeb->dirty_size) { /* It's actually free */ @@ -118,10 +126,14 @@ list_add(&jeb->list, &c->erase_pending_list); c->nr_erasing_blocks++; } - } else if (jeb->used_size > c->sector_size - (2*sizeof(struct jffs2_raw_inode))) { + break; + + case BLK_STATE_CLEAN: /* Full (or almost full) of clean data. Clean list */ list_add(&jeb->list, &c->clean_list); - } else if (jeb->used_size) { + break; + + case BLK_STATE_PARTDIRTY: /* Some data, but not full. Dirty list. */ /* Except that we want to remember the block with most free space, and stick it in the 'nextblock' position to start writing to it. @@ -131,26 +143,42 @@ if (jeb->free_size > 2*sizeof(struct jffs2_raw_inode) && (!c->nextblock || c->nextblock->free_size < jeb->free_size)) { /* Better candidate for the next writes to go to */ - if (c->nextblock) - list_add(&c->nextblock->list, &c->dirty_list); + if (c->nextblock) + list_add(&c->nextblock->list, &c->dirty_list); c->nextblock = jeb; } else { list_add(&jeb->list, &c->dirty_list); } - } else { + break; + + case BLK_STATE_ALLDIRTY: /* Nothing valid - not even a clean marker. Needs erasing. */ /* For now we just put it on the erasing list. We'll start the erases later */ - printk(KERN_NOTICE "JFFS2: Erase block at 0x%08x is not formatted. It will be erased\n", jeb->offset); + D1(printk(KERN_NOTICE "JFFS2: Erase block at 0x%08x is not formatted. 
It will be erased\n", jeb->offset)); list_add(&jeb->list, &c->erase_pending_list); c->nr_erasing_blocks++; + break; + + case BLK_STATE_BADBLOCK: + D1(printk(KERN_NOTICE "JFFS2: Block at 0x%08x is bad\n", jeb->offset)); + list_add(&jeb->list, &c->bad_list); + c->bad_size += c->sector_size; + c->free_size -= c->sector_size; + bad_blocks++; + break; + default: + printk("jffs2_scan_medium(): unknown block state\n"); + BUG(); } } + /* Rotate the lists by some number to ensure wear levelling */ jffs2_rotate_lists(c); if (c->nr_erasing_blocks) { - if (!c->used_size && empty_blocks != c->nr_blocks) { + if ( !c->used_size && ((empty_blocks+bad_blocks)!= c->nr_blocks || bad_blocks == c->nr_blocks) ) { printk(KERN_NOTICE "Cowardly refusing to erase blocks on filesystem with no valid JFFS2 nodes\n"); + printk(KERN_NOTICE "empty_blocks %d, bad_blocks %d, c->nr_blocks %d\n",empty_blocks,bad_blocks,c->nr_blocks); return -EIO; } jffs2_erase_pending_trigger(c); @@ -160,27 +188,62 @@ static int jffs2_scan_eraseblock (struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb) { struct jffs2_unknown_node node; - __u32 ofs, prevofs; - __u32 hdr_crc, nodetype; + uint32_t ofs, prevofs; + uint32_t hdr_crc, nodetype; int err; int noise = 0; +#ifdef CONFIG_JFFS2_FS_NAND + int cleanmarkerfound=0; +#endif ofs = jeb->offset; prevofs = jeb->offset - 1; D1(printk(KERN_DEBUG "jffs2_scan_eraseblock(): Scanning block at 0x%x\n", ofs)); +#ifdef CONFIG_JFFS2_FS_NAND + if (jffs2_cleanmarker_oob(c)) { + int ret = jffs2_check_nand_cleanmarker(c, jeb); + D2(printk(KERN_NOTICE "jffs_check_nand_cleanmarker returned %d\n",ret)); + /* Even if it's not found, we still scan to see + if the block is empty. We use this information + to decide whether to erase it or not. */ + switch (ret) { + case 0: cleanmarkerfound = 1; break; + case 1: break; + case 2: return BLK_STATE_BADBLOCK; + case 3: return BLK_STATE_ALLDIRTY; /* Block has failed to erase min. once */ + default: return ret; + } + } +#endif err = jffs2_scan_empty(c, jeb, &ofs, &noise); - if (err) return err; + if (err) + return err; + if (ofs == jeb->offset + c->sector_size) { +#ifdef CONFIG_JFFS2_FS_NAND + if (jffs2_cleanmarker_oob(c)) { + /* scan oob, take care of cleanmarker */ + int ret = jffs2_check_oob_empty(c, jeb, cleanmarkerfound); + D2(printk(KERN_NOTICE "jffs_check_oob_empty returned %d\n",ret)); + switch (ret) { + case 0: return cleanmarkerfound ? BLK_STATE_CLEANMARKER : BLK_STATE_ALLFF; + case 1: return BLK_STATE_ALLDIRTY; + case 2: return BLK_STATE_BADBLOCK; /* case 2/3 are paranoia checks */ + case 3: return BLK_STATE_ALLDIRTY; /* Block has failed to erase min. once */ + default: return ret; + } + } +#endif D1(printk(KERN_DEBUG "Block at 0x%08x is empty (erased)\n", jeb->offset)); - return 1; /* special return code */ + return BLK_STATE_ALLFF; /* OK to erase if all blocks are like this */ } noise = 10; while(ofs < jeb->offset + c->sector_size) { - ssize_t retlen; + size_t retlen; ACCT_PARANOIA_CHECK(jeb); if (ofs & 3) { @@ -202,8 +265,7 @@ break; } - err = c->mtd->read(c->mtd, ofs, sizeof(node), &retlen, (char *)&node); - + err = jffs2_flash_read(c, ofs, sizeof(node), &retlen, (char *)&node); if (err) { D1(printk(KERN_WARNING "mtd->read(0x%x bytes from 0x%x) returned %d\n", sizeof(node), ofs, err)); return err; @@ -261,7 +323,15 @@ continue; } - switch(node.nodetype | JFFS2_NODE_ACCURATE) { + if (!(node.nodetype & JFFS2_NODE_ACCURATE)) { + /* Wheee. This is an obsoleted node */ + D2(printk(KERN_DEBUG "Node at 0x%08x is obsolete. 
Skipping\n", ofs)); + DIRTY_SPACE(PAD(node.totlen)); + ofs += PAD(node.totlen); + continue; + } + + switch(node.nodetype) { case JFFS2_NODETYPE_INODE: err = jffs2_scan_inode_node(c, jeb, &ofs); if (err) return err; @@ -304,7 +374,7 @@ case JFFS2_FEATURE_ROCOMPAT: printk(KERN_NOTICE "Read-only compatible feature node (0x%04x) found at offset 0x%08x\n", node.nodetype, ofs); c->flags |= JFFS2_SB_FLAG_RO; - if (!(OFNI_BS_2SFFJ(c)->s_flags & MS_RDONLY)) + if (!(jffs2_is_readonly(c))) return -EROFS; DIRTY_SPACE(PAD(node.totlen)); ofs += PAD(node.totlen); @@ -315,43 +385,54 @@ return -EINVAL; case JFFS2_FEATURE_RWCOMPAT_DELETE: - printk(KERN_NOTICE "Unknown but compatible feature node (0x%04x) found at offset 0x%08x\n", node.nodetype, ofs); + D1(printk(KERN_NOTICE "Unknown but compatible feature node (0x%04x) found at offset 0x%08x\n", node.nodetype, ofs)); DIRTY_SPACE(PAD(node.totlen)); ofs += PAD(node.totlen); break; case JFFS2_FEATURE_RWCOMPAT_COPY: - printk(KERN_NOTICE "Unknown but compatible feature node (0x%04x) found at offset 0x%08x\n", node.nodetype, ofs); + D1(printk(KERN_NOTICE "Unknown but compatible feature node (0x%04x) found at offset 0x%08x\n", node.nodetype, ofs)); USED_SPACE(PAD(node.totlen)); ofs += PAD(node.totlen); break; } } } + + D1(printk(KERN_DEBUG "Block at 0x%08x: free 0x%08x, dirty 0x%08x, used 0x%08x\n", jeb->offset, jeb->free_size, jeb->dirty_size, jeb->used_size)); - return 0; + + if (jeb->used_size == PAD(sizeof(struct jffs2_unknown_node)) && + !jeb->first_node->next_in_ino && !jeb->dirty_size) + return BLK_STATE_CLEANMARKER; + else if (jeb->used_size > c->sector_size - (2*sizeof(struct jffs2_raw_inode))) + return BLK_STATE_CLEAN; + else if (jeb->used_size) + return BLK_STATE_PARTDIRTY; + else + return BLK_STATE_ALLDIRTY; } /* We're pointing at the first empty word on the flash. Scan and account for the whole dirty region */ -static int jffs2_scan_empty(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, __u32 *startofs, int *noise) +static int jffs2_scan_empty(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t *startofs, int *noise) { - __u32 *buf; - __u32 scanlen = (jeb->offset + c->sector_size) - *startofs; - __u32 curofs = *startofs; + uint32_t *buf; + uint32_t scanlen = (jeb->offset + c->sector_size) - *startofs; + uint32_t curofs = *startofs; - buf = kmalloc(min((__u32)PAGE_SIZE, scanlen), GFP_KERNEL); + buf = kmalloc(min((uint32_t)PAGE_SIZE, scanlen), GFP_KERNEL); if (!buf) { printk(KERN_WARNING "Scan buffer allocation failed\n"); return -ENOMEM; } while(scanlen) { - ssize_t retlen; + size_t retlen; int ret, i; - ret = c->mtd->read(c->mtd, curofs, min((__u32)PAGE_SIZE, scanlen), &retlen, (char *)buf); - if(ret) { - D1(printk(KERN_WARNING "jffs2_scan_empty(): Read 0x%x bytes at 0x%08x returned %d\n", min((__u32)PAGE_SIZE, scanlen), curofs, ret)); + ret = jffs2_flash_read(c, curofs, min((uint32_t)PAGE_SIZE, scanlen), &retlen, (char *)buf); + if (ret) { + D1(printk(KERN_WARNING "jffs2_scan_empty(): Read 0x%x bytes at 0x%08x returned %d\n", min((uint32_t)PAGE_SIZE, scanlen), curofs, ret)); kfree(buf); return ret; } @@ -363,7 +444,6 @@ for (i=0; i<(retlen / 4); i++) { if (buf[i] != 0xffffffff) { curofs += i*4; - noisy_printk(noise, "jffs2_scan_empty(): Empty block at 0x%08x ends at 0x%08x (with 0x%08x)! 
Marking dirty\n", *startofs, curofs, buf[i]); DIRTY_SPACE(curofs - (*startofs)); *startofs = curofs; @@ -381,7 +461,7 @@ return 0; } -static struct jffs2_inode_cache *jffs2_scan_make_ino_cache(struct jffs2_sb_info *c, __u32 ino) +static struct jffs2_inode_cache *jffs2_scan_make_ino_cache(struct jffs2_sb_info *c, uint32_t ino) { struct jffs2_inode_cache *ic; @@ -410,21 +490,21 @@ return ic; } -static int jffs2_scan_inode_node(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, __u32 *ofs) +static int jffs2_scan_inode_node(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t *ofs) { struct jffs2_raw_node_ref *raw; struct jffs2_full_dnode *fn; struct jffs2_tmp_dnode_info *tn, **tn_list; struct jffs2_inode_cache *ic; struct jffs2_raw_inode ri; - __u32 crc; - __u16 oldnodetype; + uint32_t crc; + uint16_t oldnodetype; int ret; - ssize_t retlen; + size_t retlen; D1(printk(KERN_DEBUG "jffs2_scan_inode_node(): Node at 0x%08x\n", *ofs)); - ret = c->mtd->read(c->mtd, *ofs, sizeof(ri), &retlen, (char *)&ri); + ret = jffs2_flash_read(c, *ofs, sizeof(ri), &retlen, (char *)&ri); if (ret) { printk(KERN_NOTICE "jffs2_scan_inode_node(): Read error at 0x%08x: %d\n", *ofs, ret); return ret; @@ -460,14 +540,14 @@ if (ri.csize) { /* Check data CRC too */ unsigned char *dbuf; - __u32 crc; + uint32_t crc; dbuf = kmalloc(PAGE_CACHE_SIZE, GFP_KERNEL); if (!dbuf) { printk(KERN_NOTICE "jffs2_scan_inode_node(): allocation of temporary data buffer for CRC check failed\n"); return -ENOMEM; } - ret = c->mtd->read(c->mtd, *ofs+sizeof(ri), ri.csize, &retlen, dbuf); + ret = jffs2_flash_read(c, *ofs+sizeof(ri), ri.csize, &retlen, dbuf); if (ret) { printk(KERN_NOTICE "jffs2_scan_inode_node(): Read error at 0x%08x: %d\n", *ofs+sizeof(ri), ret); kfree(dbuf); @@ -551,6 +631,13 @@ } if (ri.nodetype & JFFS2_NODE_ACCURATE) { + + /* Only do fraglist truncation in pass1 for S_IFREG inodes */ + if (S_ISREG(ri.mode) && ic->scan->version < ri.version) { + ic->scan->version = ri.version; + ic->scan->isize = ri.isize; + } + memset(fn,0,sizeof(*fn)); fn->ofs = ri.offset; @@ -587,20 +674,20 @@ return 0; } -static int jffs2_scan_dirent_node(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, __u32 *ofs) +static int jffs2_scan_dirent_node(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t *ofs) { struct jffs2_raw_node_ref *raw; struct jffs2_full_dirent *fd; struct jffs2_inode_cache *ic; struct jffs2_raw_dirent rd; - __u16 oldnodetype; + uint16_t oldnodetype; int ret; - __u32 crc; - ssize_t retlen; + uint32_t crc; + size_t retlen; D1(printk(KERN_DEBUG "jffs2_scan_dirent_node(): Node at 0x%08x\n", *ofs)); - ret = c->mtd->read(c->mtd, *ofs, sizeof(rd), &retlen, (char *)&rd); + ret = jffs2_flash_read(c, *ofs, sizeof(rd), &retlen, (char *)&rd); if (ret) { printk(KERN_NOTICE "jffs2_scan_dirent_node(): Read error at 0x%08x: %d\n", *ofs, ret); return ret; @@ -632,8 +719,8 @@ fd = jffs2_alloc_full_dirent(rd.nsize+1); if (!fd) { return -ENOMEM; -} - ret = c->mtd->read(c->mtd, *ofs + sizeof(rd), rd.nsize, &retlen, &fd->name[0]); + } + ret = jffs2_flash_read(c, *ofs + sizeof(rd), rd.nsize, &retlen, &fd->name[0]); if (ret) { jffs2_free_full_dirent(fd); printk(KERN_NOTICE "jffs2_scan_dirent_node(): Read error at 0x%08x: %d\n", @@ -646,6 +733,7 @@ retlen, *ofs + sizeof(rd), rd.nsize); return -EIO; } + crc = crc32(0, fd->name, rd.nsize); if (crc != rd.name_crc) { printk(KERN_NOTICE "jffs2_scan_dirent_node(): Name CRC failed on node at 0x%08x: Read 0x%08x, calculated 0x%08x\n", @@ -690,13 +778,11 @@ fd->name[rd.nsize]=0; 
fd->nhash = full_name_hash(fd->name, rd.nsize); fd->type = rd.type; - USED_SPACE(PAD(rd.totlen)); jffs2_add_fd_to_list(c, fd, &ic->scan->dents); } else { raw->flash_offset |= 1; jffs2_free_full_dirent(fd); - DIRTY_SPACE(PAD(rd.totlen)); } *ofs += PAD(rd.totlen); @@ -721,8 +807,9 @@ struct list_head *n = head->next; list_del(head); - while(count--) + while(count--) { n = n->next; + } list_add(head, n); } @@ -743,4 +830,5 @@ if (c->nr_free_blocks) /* Not that it should ever be zero */ rotate_list((&c->free_list), pseudo_random % c->nr_free_blocks); + } diff -Nru a/fs/jffs2/super.c b/fs/jffs2/super.c --- a/fs/jffs2/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/super.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,8 +31,7 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: super.c,v 1.48.2.1 2002/02/23 14:13:34 dwmw2 Exp $ - * + zlib_init calls from v1.56 + * $Id: super.c,v 1.62 2002/03/12 16:23:41 dwmw2 Exp $ * */ @@ -50,25 +49,41 @@ #include #include "nodelist.h" -#ifndef MTD_BLOCK_MAJOR -#define MTD_BLOCK_MAJOR 31 -#endif - -extern void jffs2_read_inode (struct inode *); void jffs2_put_super (struct super_block *); -void jffs2_write_super (struct super_block *); -static int jffs2_statfs (struct super_block *, struct statfs *); -int jffs2_remount_fs (struct super_block *, int *, char *); -extern void jffs2_clear_inode (struct inode *); -extern void jffs2_destroy_inode (struct inode *); -extern struct inode *jffs2_alloc_inode (struct super_block *); - + + +static kmem_cache_t *jffs2_inode_cachep; + +static struct inode *jffs2_alloc_inode(struct super_block *sb) +{ + struct jffs2_inode_info *ei; + ei = (struct jffs2_inode_info *)kmem_cache_alloc(jffs2_inode_cachep, SLAB_KERNEL); + if (!ei) + return NULL; + return &ei->vfs_inode; +} + +static void jffs2_destroy_inode(struct inode *inode) +{ + kmem_cache_free(jffs2_inode_cachep, JFFS2_INODE_INFO(inode)); +} + +static void jffs2_i_init_once(void * foo, kmem_cache_t * cachep, unsigned long flags) +{ + struct jffs2_inode_info *ei = (struct jffs2_inode_info *) foo; + + if ((flags & (SLAB_CTOR_VERIFY|SLAB_CTOR_CONSTRUCTOR)) == + SLAB_CTOR_CONSTRUCTOR) { + init_MUTEX(&ei->sem); + inode_init_once(&ei->vfs_inode); + } +} + static struct super_operations jffs2_super_operations = { alloc_inode: jffs2_alloc_inode, destroy_inode: jffs2_destroy_inode, read_inode: jffs2_read_inode, -// delete_inode: jffs2_delete_inode, put_super: jffs2_put_super, write_super: jffs2_write_super, statfs: jffs2_statfs, @@ -76,222 +91,37 @@ clear_inode: jffs2_clear_inode }; -static int jffs2_statfs(struct super_block *sb, struct statfs *buf) -{ - struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); - unsigned long avail; - buf->f_type = JFFS2_SUPER_MAGIC; - buf->f_bsize = 1 << PAGE_SHIFT; - buf->f_blocks = c->flash_size >> PAGE_SHIFT; - buf->f_files = 0; - buf->f_ffree = 0; - buf->f_namelen = JFFS2_MAX_NAME_LEN; - - spin_lock_bh(&c->erase_completion_lock); - - avail = c->dirty_size + c->free_size; - if (avail > c->sector_size * JFFS2_RESERVED_BLOCKS_WRITE) - avail -= c->sector_size * JFFS2_RESERVED_BLOCKS_WRITE; - else - avail = 0; - - buf->f_bavail = buf->f_bfree = avail >> PAGE_SHIFT; - -#if CONFIG_JFFS2_FS_DEBUG > 0 - printk(KERN_DEBUG "STATFS:\n"); - printk(KERN_DEBUG "flash_size: %08x\n", c->flash_size); - printk(KERN_DEBUG "used_size: %08x\n", 
c->used_size); - printk(KERN_DEBUG "dirty_size: %08x\n", c->dirty_size); - printk(KERN_DEBUG "free_size: %08x\n", c->free_size); - printk(KERN_DEBUG "erasing_size: %08x\n", c->erasing_size); - printk(KERN_DEBUG "bad_size: %08x\n", c->bad_size); - printk(KERN_DEBUG "sector_size: %08x\n", c->sector_size); - - if (c->nextblock) { - printk(KERN_DEBUG "nextblock: 0x%08x\n", c->nextblock->offset); - } else { - printk(KERN_DEBUG "nextblock: NULL\n"); - } - if (c->gcblock) { - printk(KERN_DEBUG "gcblock: 0x%08x\n", c->gcblock->offset); - } else { - printk(KERN_DEBUG "gcblock: NULL\n"); - } - if (list_empty(&c->clean_list)) { - printk(KERN_DEBUG "clean_list: empty\n"); - } else { - struct list_head *this; - - list_for_each(this, &c->clean_list) { - struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); - printk(KERN_DEBUG "clean_list: %08x\n", jeb->offset); - } - } - if (list_empty(&c->dirty_list)) { - printk(KERN_DEBUG "dirty_list: empty\n"); - } else { - struct list_head *this; - - list_for_each(this, &c->dirty_list) { - struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); - printk(KERN_DEBUG "dirty_list: %08x\n", jeb->offset); - } - } - if (list_empty(&c->erasing_list)) { - printk(KERN_DEBUG "erasing_list: empty\n"); - } else { - struct list_head *this; - - list_for_each(this, &c->erasing_list) { - struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); - printk(KERN_DEBUG "erasing_list: %08x\n", jeb->offset); - } - } - if (list_empty(&c->erase_pending_list)) { - printk(KERN_DEBUG "erase_pending_list: empty\n"); - } else { - struct list_head *this; - - list_for_each(this, &c->erase_pending_list) { - struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); - printk(KERN_DEBUG "erase_pending_list: %08x\n", jeb->offset); - } - } - if (list_empty(&c->free_list)) { - printk(KERN_DEBUG "free_list: empty\n"); - } else { - struct list_head *this; - - list_for_each(this, &c->free_list) { - struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); - printk(KERN_DEBUG "free_list: %08x\n", jeb->offset); - } - } - if (list_empty(&c->bad_list)) { - printk(KERN_DEBUG "bad_list: empty\n"); - } else { - struct list_head *this; - - list_for_each(this, &c->bad_list) { - struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); - printk(KERN_DEBUG "bad_list: %08x\n", jeb->offset); - } - } - if (list_empty(&c->bad_used_list)) { - printk(KERN_DEBUG "bad_used_list: empty\n"); - } else { - struct list_head *this; - - list_for_each(this, &c->bad_used_list) { - struct jffs2_eraseblock *jeb = list_entry(this, struct jffs2_eraseblock, list); - printk(KERN_DEBUG "bad_used_list: %08x\n", jeb->offset); - } - } -#endif /* CONFIG_JFFS2_FS_DEBUG */ - - spin_unlock_bh(&c->erase_completion_lock); - - - return 0; -} - -static int jffs2_fill_super(struct super_block *sb, void *data, int silent) +static int jffs2_blk_fill_super(struct super_block *sb, void *data, int silent) { struct jffs2_sb_info *c; - struct inode *root_i; - int i; + int ret; - D1(printk(KERN_DEBUG "jffs2: read_super for device %s\n", sb->s_id)); + D1(printk(KERN_DEBUG "jffs2: blk_read_super for device %s\n", sb->s_id)); if (major(sb->s_dev) != MTD_BLOCK_MAJOR) { if (!silent) - printk(KERN_DEBUG "jffs2: attempt to mount non-MTD device %s\n", kdevname(sb->s_dev)); + printk(KERN_NOTICE "jffs2: attempt to mount non-MTD device %s\n", + sb->s_id); return -EINVAL; } c = JFFS2_SB_INFO(sb); memset(c, 0, sizeof(*c)); + sb->s_op = 
&jffs2_super_operations; + c->mtd = get_mtd_device(NULL, minor(sb->s_dev)); if (!c->mtd) { D1(printk(KERN_DEBUG "jffs2: MTD device #%u doesn't appear to exist\n", minor(sb->s_dev))); return -EINVAL; } - c->sector_size = c->mtd->erasesize; - c->free_size = c->flash_size = c->mtd->size; - c->nr_blocks = c->mtd->size / c->mtd->erasesize; - c->blocks = kmalloc(sizeof(struct jffs2_eraseblock) * c->nr_blocks, GFP_KERNEL); - if (!c->blocks) - goto out_mtd; - for (i=0; inr_blocks; i++) { - INIT_LIST_HEAD(&c->blocks[i].list); - c->blocks[i].offset = i * c->sector_size; - c->blocks[i].free_size = c->sector_size; - c->blocks[i].dirty_size = 0; - c->blocks[i].used_size = 0; - c->blocks[i].first_node = NULL; - c->blocks[i].last_node = NULL; - } - - spin_lock_init(&c->nodelist_lock); - init_MUTEX(&c->alloc_sem); - init_waitqueue_head(&c->erase_wait); - spin_lock_init(&c->erase_completion_lock); - spin_lock_init(&c->inocache_lock); - - INIT_LIST_HEAD(&c->clean_list); - INIT_LIST_HEAD(&c->dirty_list); - INIT_LIST_HEAD(&c->erasing_list); - INIT_LIST_HEAD(&c->erase_pending_list); - INIT_LIST_HEAD(&c->erase_complete_list); - INIT_LIST_HEAD(&c->free_list); - INIT_LIST_HEAD(&c->bad_list); - INIT_LIST_HEAD(&c->bad_used_list); - c->highest_ino = 1; - - c->flags |= JFFS2_SB_FLAG_MOUNTING; - - if (jffs2_build_filesystem(c)) { - D1(printk(KERN_DEBUG "build_fs failed\n")); - goto out_nodes; - } - c->flags &= ~JFFS2_SB_FLAG_MOUNTING; + ret = jffs2_do_fill_super(sb, data, silent); + if (ret) + put_mtd_device(c->mtd); - sb->s_op = &jffs2_super_operations; - - D1(printk(KERN_DEBUG "jffs2_read_super(): Getting root inode\n")); - root_i = iget(sb, 1); - if (is_bad_inode(root_i)) { - D1(printk(KERN_WARNING "get root inode failed\n")); - goto out_nodes; - } - - D1(printk(KERN_DEBUG "jffs2_read_super(): d_alloc_root()\n")); - sb->s_root = d_alloc_root(root_i); - if (!sb->s_root) - goto out_root_i; - -#if LINUX_VERSION_CODE >= 0x20403 - sb->s_maxbytes = 0xFFFFFFFF; -#endif - sb->s_blocksize = PAGE_CACHE_SIZE; - sb->s_blocksize_bits = PAGE_CACHE_SHIFT; - sb->s_magic = JFFS2_SUPER_MAGIC; - if (!(sb->s_flags & MS_RDONLY)) - jffs2_start_garbage_collect_thread(c); - return 0; - - out_root_i: - iput(root_i); - out_nodes: - jffs2_free_ino_caches(c); - jffs2_free_raw_node_refs(c); - kfree(c->blocks); - out_mtd: - put_mtd_device(c->mtd); - return -EINVAL; + return ret; } void jffs2_put_super (struct super_block *sb) @@ -300,8 +130,10 @@ D2(printk(KERN_DEBUG "jffs2: jffs2_put_super()\n")); + if (!(sb->s_flags & MS_RDONLY)) jffs2_stop_garbage_collect_thread(c); + jffs2_flush_wbuf(c, 1); jffs2_free_ino_caches(c); jffs2_free_raw_node_refs(c); kfree(c->blocks); @@ -312,68 +144,36 @@ D1(printk(KERN_DEBUG "jffs2_put_super returning\n")); } -int jffs2_remount_fs (struct super_block *sb, int *flags, char *data) -{ - struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); - - if (c->flags & JFFS2_SB_FLAG_RO && !(sb->s_flags & MS_RDONLY)) - return -EROFS; - - /* We stop if it was running, then restart if it needs to. 
- This also catches the case where it was stopped and this - is just a remount to restart it */ - if (!(sb->s_flags & MS_RDONLY)) - jffs2_stop_garbage_collect_thread(c); - - if (!(*flags & MS_RDONLY)) - jffs2_start_garbage_collect_thread(c); - - sb->s_flags = (sb->s_flags & ~MS_RDONLY)|(*flags & MS_RDONLY); - - return 0; -} - -void jffs2_write_super (struct super_block *sb) -{ - struct jffs2_sb_info *c = JFFS2_SB_INFO(sb); - sb->s_dirt = 0; - - if (sb->s_flags & MS_RDONLY) - return; - - jffs2_garbage_collect_trigger(c); - jffs2_erase_pending_blocks(c); - jffs2_mark_erased_blocks(c); -} - static struct super_block *jffs2_get_sb(struct file_system_type *fs_type, - int flags, char *dev_name, void *data) + int flags, char *dev_name, void *data) { - return get_sb_bdev(fs_type, flags, dev_name, data, jffs2_fill_super); + return get_sb_bdev(fs_type, flags, dev_name, data, jffs2_blk_fill_super); } - + static struct file_system_type jffs2_fs_type = { owner: THIS_MODULE, name: "jffs2", get_sb: jffs2_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; + + static int __init init_jffs2_fs(void) { int ret; - printk(KERN_NOTICE "JFFS2 version 2.1. (C) 2001 Red Hat, Inc., designed by Axis Communications AB.\n"); + printk(KERN_INFO "JFFS2 version 2.1. (C) 2001, 2002 Red Hat, Inc.\n"); -#ifdef JFFS2_OUT_OF_KERNEL - /* sanity checks. Could we do these at compile time? */ - if (sizeof(struct jffs2_sb_info) > sizeof (((struct super_block *)NULL)->u)) { - printk(KERN_ERR "JFFS2 error: struct jffs2_sb_info (%d bytes) doesn't fit in the super_block union (%d bytes)\n", - sizeof(struct jffs2_sb_info), sizeof (((struct super_block *)NULL)->u)); - return -EIO; + jffs2_inode_cachep = kmem_cache_create("jffs2_i", + sizeof(struct jffs2_inode_info), + 0, SLAB_HWCACHE_ALIGN, + jffs2_i_init_once, NULL); + if (!jffs2_inode_cachep) { + printk(KERN_ERR "JFFS2 error: Failed to initialise inode cache\n"); + return -ENOMEM; } -#endif - ret = jffs2_zlib_init(); if (ret) { printk(KERN_ERR "JFFS2 error: Failed to initialise zlib workspaces\n"); @@ -394,9 +194,10 @@ static void __exit exit_jffs2_fs(void) { + unregister_filesystem(&jffs2_fs_type); jffs2_destroy_slab_caches(); jffs2_zlib_exit(); - unregister_filesystem(&jffs2_fs_type); + kmem_cache_destroy(jffs2_inode_cachep); } module_init(init_jffs2_fs); diff -Nru a/fs/jffs2/symlink.c b/fs/jffs2/symlink.c --- a/fs/jffs2/symlink.c Tue Mar 12 13:58:14 2002 +++ b/fs/jffs2/symlink.c Tue Mar 12 13:58:14 2002 @@ -31,7 +31,7 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. 
* - * $Id: symlink.c,v 1.5.2.1 2002/01/15 10:39:06 dwmw2 Exp $ + * $Id: symlink.c,v 1.9 2002/01/10 09:29:53 dwmw2 Exp $ * */ @@ -39,7 +39,6 @@ #include #include #include -#include #include "nodelist.h" int jffs2_readlink(struct dentry *dentry, char *buffer, int buflen); @@ -52,40 +51,12 @@ setattr: jffs2_setattr }; -static char *jffs2_getlink(struct dentry *dentry) -{ - struct jffs2_inode_info *f = JFFS2_INODE_INFO(dentry->d_inode); - char *buf; - int ret; - - down(&f->sem); - if (!f->metadata) { - up(&f->sem); - printk(KERN_NOTICE "No metadata for symlink inode #%lu\n", dentry->d_inode->i_ino); - return ERR_PTR(-EINVAL); - } - buf = kmalloc(f->metadata->size+1, GFP_USER); - if (!buf) { - up(&f->sem); - return ERR_PTR(-ENOMEM); - } - buf[f->metadata->size]=0; - - ret = jffs2_read_dnode(JFFS2_SB_INFO(dentry->d_inode->i_sb), f->metadata, buf, 0, f->metadata->size); - up(&f->sem); - if (ret) { - kfree(buf); - return ERR_PTR(ret); - } - return buf; - -} int jffs2_readlink(struct dentry *dentry, char *buffer, int buflen) { unsigned char *kbuf; int ret; - kbuf = jffs2_getlink(dentry); + kbuf = jffs2_getlink(JFFS2_SB_INFO(dentry->d_inode->i_sb), JFFS2_INODE_INFO(dentry->d_inode)); if (IS_ERR(kbuf)) return PTR_ERR(kbuf); @@ -99,7 +70,7 @@ unsigned char *buf; int ret; - buf = jffs2_getlink(dentry); + buf = jffs2_getlink(JFFS2_SB_INFO(dentry->d_inode->i_sb), JFFS2_INODE_INFO(dentry->d_inode)); if (IS_ERR(buf)) return PTR_ERR(buf); diff -Nru a/fs/jffs2/wbuf.c b/fs/jffs2/wbuf.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/fs/jffs2/wbuf.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,674 @@ +/* + * JFFS2 -- Journalling Flash File System, Version 2. + * + * Copyright (C) 2001, 2002 Red Hat, Inc. + * + * Created by David Woodhouse + * + * The original JFFS, from which the design for JFFS2 was derived, + * was designed and implemented by Axis Communications AB. + * + * The contents of this file are subject to the Red Hat eCos Public + * License Version 1.1 (the "Licence"); you may not use this file + * except in compliance with the Licence. You may obtain a copy of + * the Licence at http://www.redhat.com/ + * + * Software distributed under the Licence is distributed on an "AS IS" + * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. + * See the Licence for the specific language governing rights and + * limitations under the Licence. + * + * The Original Code is JFFS2 - Journalling Flash File System, version 2 + * + * Alternatively, the contents of this file may be used under the + * terms of the GNU General Public License version 2 (the "GPL"), in + * which case the provisions of the GPL are applicable instead of the + * above. If you wish to allow the use of your version of this file + * only under the terms of the GPL and not to allow others to use your + * version of this file under the RHEPL, indicate your decision by + * deleting the provisions above and replace them with the notice and + * other provisions required by the GPL. If you do not delete the + * provisions above, a recipient may use your version of this file + * under either the RHEPL or the GPL. 
+ * + * $Id: wbuf.c,v 1.7 2002/03/08 11:27:59 dwmw2 Exp $ + * + */ + +#include +#include +#include +#include +#include +#include "nodelist.h" + +/* FIXME duplicated defines in wbuf.c and nand.c + * Constants for out of band layout + */ +#define NAND_JFFS2_OOB_BADBPOS 5 +#define NAND_JFFS2_OOB8_FSDAPOS 6 +#define NAND_JFFS2_OOB16_FSDAPOS 8 +#define NAND_JFFS2_OOB8_FSDALEN 2 +#define NAND_JFFS2_OOB16_FSDALEN 8 + +#define MAX_ERASE_FAILURES 5 + +static inline void jffs2_refile_wbuf_blocks(struct jffs2_sb_info *c) +{ + struct list_head *this, *next; + + if (list_empty(&c->erasable_pending_wbuf_list)) + return; + + list_for_each_safe(this, next, &c->erasable_pending_wbuf_list) { + list_del(this); + list_add_tail(this, &c->erasable_list); + } +} + +int jffs2_flush_wbuf(struct jffs2_sb_info *c, int pad) +{ + int ret; + size_t retlen; + + if(!c->wbuf || !c->wbuf_len) + return 0; + + /* claim remaining space on the page + this happens, if we have a change to a new block, + or if fsync forces us to flush the writebuffer. + if we have a switch to next page, we will not have + enough remaining space for this. + */ + if (pad) { + c->wbuf_len = PAD(c->wbuf_len); + + if ( c->wbuf_len + sizeof(struct jffs2_unknown_node) < c->wbuf_pagesize) { + struct jffs2_unknown_node *padnode = (void *)(c->wbuf + c->wbuf_len); + padnode->magic = JFFS2_MAGIC_BITMASK; + padnode->nodetype = JFFS2_NODETYPE_PADDING; + padnode->totlen = c->wbuf_pagesize - c->wbuf_len; + padnode->hdr_crc = crc32(0, padnode, sizeof(*padnode)-4); + } + } + /* else jffs2_flash_writev has actually filled in the rest of the + buffer for us, and will deal with the node refs etc. later. */ + + ret = c->mtd->write(c->mtd, c->wbuf_ofs, c->wbuf_pagesize, &retlen, c->wbuf); + + if (ret || retlen != c->wbuf_pagesize) { + if (ret) + printk(KERN_CRIT "jffs2_flush_wbuf(): Write failed with %d\n",ret); + else + printk(KERN_CRIT "jffs2_flush_wbuf(): Write was short %d instead of %d\n",retlen,c->wbuf_pagesize); + + ret = -EIO; + /* CHECKME NAND + So that the caller knows what happened. If + we were called from jffs2_flash_writev(), it'll + know to return failure and _its_ caller will + try again. writev gives back to jffs2_write_xxx + in write.c. There are the real fixme's + */ + + /* FIXME NAND + If we were called from GC or fsync, there's no repair kit yet + */ + + return ret; + } + + /* Adjusting free size of next block only, if it's called from fsync ! 
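
As background for the padding step in jffs2_flush_wbuf() above: when the buffered data stops short of the end of a NAND page, the remaining bytes are claimed by a JFFS2_NODETYPE_PADDING node so that a later scan can recognise the gap and skip it. A stand-alone sketch of that idea follows; the structure layout, constant values and checksum here are invented stand-ins, not the real on-flash format.

    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in definitions; field layout and constants are illustrative
       only, NOT the real JFFS2 on-flash format. */
    struct toy_node_header {
        uint16_t magic;
        uint16_t nodetype;
        uint32_t totlen;
        uint32_t hdr_crc;
    };

    #define TOY_MAGIC            0x1985
    #define TOY_NODETYPE_PADDING 0x2004

    /* Placeholder checksum -- not the CRC32 the kernel uses. */
    static uint32_t toy_crc(const void *buf, size_t len)
    {
        const unsigned char *p = buf;
        uint32_t v = 0;
        while (len--)
            v = (v << 1) ^ *p++;
        return v;
    }

    /* Claim the unused tail of one flash page with a padding node, so a
       later scan can recognise and skip it -- the same move
       jffs2_flush_wbuf() makes before the page is written out. */
    static void pad_page_tail(unsigned char *page, size_t pagesize, size_t used)
    {
        struct toy_node_header *pad;

        if (used + sizeof(*pad) >= pagesize)
            return;     /* no room left for even a node header */

        pad = (struct toy_node_header *)(page + used);
        pad->magic    = TOY_MAGIC;
        pad->nodetype = TOY_NODETYPE_PADDING;
        pad->totlen   = (uint32_t)(pagesize - used);
        pad->hdr_crc  = toy_crc(pad, sizeof(*pad) - 4);
    }

The flush routine then hands the whole page, padding included, to mtd->write() in a single call.
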
*/ + if (pad == 2) { + D1(printk(KERN_DEBUG "jffs2_flush_wbuf() adjusting free_size of c->nextblock\n")); + spin_lock_bh(&c->erase_completion_lock); + if (!c->nextblock) + BUG(); + if (c->nextblock->free_size < (c->wbuf_pagesize - c->wbuf_len)) + BUG(); + c->nextblock->free_size -= (c->wbuf_pagesize - c->wbuf_len); + c->nextblock->dirty_size += (c->wbuf_pagesize - c->wbuf_len); + spin_unlock_bh(&c->erase_completion_lock); + } + + /* Stick any now-obsoleted blocks on the erase_pending_list */ + spin_lock_bh(&c->erase_completion_lock); + jffs2_refile_wbuf_blocks(c); + spin_unlock_bh(&c->erase_completion_lock); + + memset(c->wbuf,0xff,c->wbuf_pagesize); + /* adjust write buffer offset, else we get a non contigous write bug */ + c->wbuf_ofs+= c->wbuf_pagesize; + c->wbuf_len = 0; + return 0; +} + +#define PAGE_DIV(x) ( (x) & (~(c->wbuf_pagesize - 1)) ) +#define PAGE_MOD(x) ( (x) & (c->wbuf_pagesize - 1) ) +int jffs2_flash_writev(struct jffs2_sb_info *c, const struct iovec *invecs, unsigned long count, loff_t to, size_t *retlen) +{ + struct iovec outvecs[3]; + uint32_t totlen = 0; + uint32_t split_ofs = 0; + uint32_t old_totlen; + int ret, splitvec = -1; + int invec, outvec; + size_t wbuf_retlen; + unsigned char *wbuf_ptr; + size_t donelen = 0; + uint32_t outvec_to = to; + + /* If not NAND flash, don't bother */ + if (!c->wbuf) + return jffs2_flash_direct_writev(c, invecs, count, to, retlen); + + /* If wbuf_ofs is not initialized, set it to target adress */ + if (c->wbuf_ofs == 0xFFFFFFFF) { + c->wbuf_ofs = PAGE_DIV(to); + c->wbuf_len = PAGE_MOD(to); + memset(c->wbuf,0xff,c->wbuf_pagesize); + } + + /* Sanity checks on target address. + It's permitted to write at PAD(c->wbuf_len+c->wbuf_ofs), + and it's permitted to write at the beginning of a new + erase block. Anything else, and you die. + New block starts at xxx000c (0-b = block header) + */ + if ( (to & ~(c->sector_size-1)) != (c->wbuf_ofs & ~(c->sector_size-1)) ) { + /* It's a write to a new block */ + if (c->wbuf_len) { + D1(printk(KERN_DEBUG "jffs2_flash_writev() to 0x%lx causes flush of wbuf at 0x%08x\n", (unsigned long)to, c->wbuf_ofs)); + ret = jffs2_flush_wbuf(c, 1); + if (ret) { + /* the underlying layer has to check wbuf_len to do the cleanup */ + D1(printk(KERN_WARNING "jffs2_flush_wbuf() called from jffs2_flash_writev() failed %d\n", ret)); + *retlen = 0; + return ret; + } + } + /* set pointer to new block */ + c->wbuf_ofs = PAGE_DIV(to); + c->wbuf_len = PAGE_MOD(to); + } + + if (to != PAD(c->wbuf_ofs + c->wbuf_len)) { + /* We're not writing immediately after the writebuffer. Bad. */ + printk(KERN_CRIT "jffs2_flash_writev(): Non-contiguous write to %08lx\n", (unsigned long)to); + if (c->wbuf_len) + printk(KERN_CRIT "wbuf was previously %08x-%08x\n", + c->wbuf_ofs, c->wbuf_ofs+c->wbuf_len); + BUG(); + } + + /* Note outvecs[3] above. 
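
The PAGE_DIV()/PAGE_MOD() helpers used above depend on the write-buffer page size being a power of two, so a single mask splits any flash offset into a page-aligned base and an offset within that page. A small self-contained illustration, with an arbitrary page size:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Same idea as PAGE_DIV()/PAGE_MOD() above, written out as functions.
       Only valid when pagesize is a power of two. */
    static uint32_t page_div(uint32_t x, uint32_t pagesize) { return x & ~(pagesize - 1); }
    static uint32_t page_mod(uint32_t x, uint32_t pagesize) { return x & (pagesize - 1); }

    int main(void)
    {
        uint32_t pagesize = 512;    /* arbitrary example value */
        uint32_t ofs = 0x1234;

        assert((pagesize & (pagesize - 1)) == 0);   /* power of two */
        printf("offset 0x%x -> page base 0x%x, offset in page 0x%x\n",
               ofs, page_div(ofs, pagesize), page_mod(ofs, pagesize));
        /* prints: offset 0x1234 -> page base 0x1200, offset in page 0x34 */
        return 0;
    }

This is how c->wbuf_ofs and c->wbuf_len are derived from a target address when the write buffer is first set up.
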
We know count is never greater than 2 */ + if (count > 2) { + printk(KERN_CRIT "jffs2_flash_writev(): count is %ld\n", count); + BUG(); + } + + invec = 0; + outvec = 0; + + + /* Fill writebuffer first, if already in use */ + if (c->wbuf_len) { + uint32_t invec_ofs = 0; + + /* adjust alignment offset */ + if (c->wbuf_len != PAGE_MOD(to)) { + c->wbuf_len = PAGE_MOD(to); + /* take care of alignment to next page */ + if (!c->wbuf_len) + c->wbuf_len = c->wbuf_pagesize; + } + + while(c->wbuf_len < c->wbuf_pagesize) { + uint32_t thislen; + + if (invec == count) + goto alldone; + + thislen = c->wbuf_pagesize - c->wbuf_len; + + if (thislen >= invecs[invec].iov_len) + thislen = invecs[invec].iov_len; + + invec_ofs = thislen; + + memcpy(c->wbuf + c->wbuf_len, invecs[invec].iov_base, thislen); + c->wbuf_len += thislen; + donelen += thislen; + /* Get next invec, if actual did not fill the buffer */ + if (c->wbuf_len < c->wbuf_pagesize) + invec++; + } + + /* write buffer is full, flush buffer */ + ret = jffs2_flush_wbuf(c, 0); + if (ret) { + /* the underlying layer has to check wbuf_len to do the cleanup */ + D1(printk(KERN_WARNING "jffs2_flush_wbuf() called from jffs2_flash_writev() failed %d\n", ret)); + *retlen = 0; + return ret; + } + outvec_to += donelen; + c->wbuf_ofs = outvec_to; + + /* All invecs done ? */ + if (invec == count) + goto alldone; + + /* Set up the first outvec, containing the remainder of the + invec we partially used */ + if (invecs[invec].iov_len > invec_ofs) { + outvecs[0].iov_base = invecs[invec].iov_base+invec_ofs; + totlen = outvecs[0].iov_len = invecs[invec].iov_len-invec_ofs; + if (totlen > c->wbuf_pagesize) { + splitvec = outvec; + split_ofs = outvecs[0].iov_len - PAGE_MOD(totlen); + } + outvec++; + } + invec++; + } + + /* OK, now we've flushed the wbuf and the start of the bits + we have been asked to write, now to write the rest.... */ + + /* totlen holds the amount of data still to be written */ + old_totlen = totlen; + for ( ; invec < count; invec++,outvec++ ) { + outvecs[outvec].iov_base = invecs[invec].iov_base; + totlen += outvecs[outvec].iov_len = invecs[invec].iov_len; + if (PAGE_DIV(totlen) != PAGE_DIV(old_totlen)) { + splitvec = outvec; + split_ofs = outvecs[outvec].iov_len - PAGE_MOD(totlen); + old_totlen = totlen; + } + } + + /* Now the outvecs array holds all the remaining data to write */ + /* Up to splitvec,split_ofs is to be written immediately. The rest + goes into the (now-empty) wbuf */ + + if (splitvec != -1) { + uint32_t remainder; + int ret; + + remainder = outvecs[splitvec].iov_len - split_ofs; + outvecs[splitvec].iov_len = split_ofs; + + /* We did cross a page boundary, so we write some now */ + ret = jffs2_flash_direct_writev(c, outvecs, splitvec+1, outvec_to, &wbuf_retlen); + if (ret < 0 || wbuf_retlen != PAGE_DIV(totlen)) { + /* At this point we have no problem, + c->wbuf is empty. 
+ */ + *retlen = donelen; + return ret; + } + + donelen += wbuf_retlen; + c->wbuf_ofs = PAGE_DIV(outvec_to) + PAGE_DIV(totlen); + + if (remainder) { + outvecs[splitvec].iov_base += split_ofs; + outvecs[splitvec].iov_len = remainder; + } else { + splitvec++; + } + + } else { + splitvec = 0; + } + + /* Now splitvec points to the start of the bits we have to copy + into the wbuf */ + wbuf_ptr = c->wbuf; + + for ( ; splitvec < outvec; splitvec++) { + /* Don't copy the wbuf into itself */ + if (outvecs[splitvec].iov_base == c->wbuf) + continue; + memcpy(wbuf_ptr, outvecs[splitvec].iov_base, outvecs[splitvec].iov_len); + wbuf_ptr += outvecs[splitvec].iov_len; + donelen += outvecs[splitvec].iov_len; + } + c->wbuf_len = wbuf_ptr - c->wbuf; + +alldone: + *retlen = donelen; + return 0; +} + +/* + This is the entry for NOR-Flash. We use it also for NAND to flush wbuf +*/ +int jffs2_flash_write(struct jffs2_sb_info *c, loff_t ofs, size_t len, size_t *retlen, const u_char *buf) +{ + return c->mtd->write(c->mtd, ofs, len, retlen, buf); +} + +/* + Handle readback from writebuffer and ECC failure return +*/ +int jffs2_flash_read(struct jffs2_sb_info *c, loff_t ofs, size_t len, size_t *retlen, u_char *buf) +{ + loff_t orbf = 0, owbf = 0, lwbf = 0; + int ret; + + /* Read flash */ + ret = c->mtd->read(c->mtd, ofs, len, retlen, buf); + + if (!jffs2_can_mark_obsolete(c) && (ret == -EIO) && (*retlen == len) ) { + printk(KERN_WARNING "mtd->read(0x%x bytes from 0x%llx) returned ECC error\n", len, ofs); + /* + * We have the raw data without ECC correction in the buffer, maybe + * we are lucky and all data or parts are correct. We check the node. + * If data are corrupted node check will sort it out. + * We keep this block, it will fail on write or erase and the we + * mark it bad. Or should we do that now? But we should give him a chance. + * Maybe we had a system crash or power loss before the ecc write or + * a erase was completed. + * So we return success. :) + */ + ret = 0; + } + + /* if no writebuffer available or write buffer empty, return */ + if (!c->wbuf_pagesize || !c->wbuf_len) + return ret; + + + /* if we read in a different block, return */ + if ( (ofs & ~(c->sector_size-1)) != (c->wbuf_ofs & ~(c->sector_size-1)) ) + return ret; + + if (ofs >= c->wbuf_ofs) { + owbf = (ofs - c->wbuf_ofs); /* offset in write buffer */ + if (owbf > c->wbuf_len) /* is read beyond write buffer ? */ + return ret; + lwbf = c->wbuf_len - owbf; /* number of bytes to copy */ + if (lwbf > len) + lwbf = len; + } else { + orbf = (c->wbuf_ofs - ofs); /* offset in read buffer */ + if (orbf > len) /* is write beyond write buffer ? */ + return ret; + lwbf = len - orbf; /* number of bytes to copy */ + if (lwbf > c->wbuf_len) + lwbf = c->wbuf_len; + } + if (lwbf > 0) + memcpy(buf+orbf,c->wbuf+owbf,lwbf); + + return ret; +} + +/* + * Check, if the out of band area is empty + */ +int jffs2_check_oob_empty( struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, int mode) +{ + unsigned char *buf; + int ret = 0; + int i,len,cnt,page; + size_t retlen; + int fsdata_pos,badblock_pos,oob_size; + + oob_size = c->mtd->oobsize; + + switch(c->mtd->ecctype) { + case MTD_ECC_SW: + fsdata_pos = (c->wbuf_pagesize == 256) ? 
NAND_JFFS2_OOB8_FSDAPOS : NAND_JFFS2_OOB16_FSDAPOS; + badblock_pos = NAND_JFFS2_OOB_BADBPOS; + break; + default: + D1(printk(KERN_WARNING "jffs2_write_oob_empty(): Invalid ECC type\n")); + return -EINVAL; + } + + /* allocate a buffer for all oob data in this sector */ + len = oob_size * (c->sector_size/c->mtd->oobblock); + buf = kmalloc(len, GFP_KERNEL); + if (!buf) { + printk(KERN_NOTICE "jffs2_check_oob_empty(): allocation of temporary data buffer for oob check failed\n"); + return -ENOMEM; + } + /* + * if mode = 0, we scan for a total empty oob area, else we have + * to take care of the cleanmarker in the first page of the block + */ + ret = jffs2_flash_read_oob(c, jeb->offset, len , &retlen, buf); + if (ret) { + D1(printk(KERN_WARNING "jffs2_check_oob_empty(): Read OOB failed %d for block at %08x\n", ret, jeb->offset)); + goto out; + } + + if (retlen < len) { + D1(printk(KERN_WARNING "jffs2_check_oob_empty(): Read OOB return short read " + "(%d bytes not %d) for block at %08x\n", retlen, len, jeb->offset)); + ret = -EIO; + goto out; + } + + /* Special check for first two pages */ + for (page = 0; page < 2; page += oob_size) { + /* Check for bad block marker */ + if (buf[page+badblock_pos] != 0xff) { + D1(printk(KERN_WARNING "jffs2_check_oob_empty(): Bad or failed block at %08x\n",jeb->offset)); + /* Return 2 for bad and 3 for failed block + bad goes to list_bad and failed to list_erase */ + ret = (!page) ? 2 : 3; + goto out; + } + cnt = oob_size; + if (mode) + cnt -= fsdata_pos; + for(i = 0; i < cnt ; i+=sizeof(unsigned short)) { + unsigned short dat = *(unsigned short *)(&buf[page+i]); + if(dat != 0xffff) { + ret = 1; + goto out; + } + } + /* only the first page can contain a cleanmarker !*/ + mode = 0; + } + + /* we know, we are aligned :) */ + for (; page < len; page += sizeof(long)) { + unsigned long dat = *(unsigned long *)(&buf[page]); + if(dat != -1) { + ret = 1; + goto out; + } + } + +out: + kfree(buf); + + return ret; +} + +int jffs2_check_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb) +{ + struct jffs2_unknown_node n; + unsigned char buf[32]; + unsigned char *p; + int ret,i; + size_t retlen; + int fsdata_pos,fsdata_len, oob_size, badblock_pos; + + oob_size = c->mtd->oobsize; + + switch(c->mtd->ecctype) { + case MTD_ECC_SW: + fsdata_pos = (c->wbuf_pagesize == 256) ? NAND_JFFS2_OOB8_FSDAPOS : NAND_JFFS2_OOB16_FSDAPOS; + fsdata_len = (c->wbuf_pagesize == 256) ? NAND_JFFS2_OOB8_FSDALEN : NAND_JFFS2_OOB16_FSDALEN; + badblock_pos = NAND_JFFS2_OOB_BADBPOS; + break; + default: + D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): Invalid ECC type\n")); + return -EINVAL; + } + + /* + * We read oob data from page 0 and 1 of the block. 
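
The scan in jffs2_check_oob_empty() above reduces to one question: does every byte of the inspected OOB region still read 0xff, i.e. is it still in the erased state? The short- and long-sized loops are only a speed optimisation; a byte-wise sketch of the same check is:

    #include <stddef.h>

    /* Return 1 if the region still holds only 0xff (erased flash),
       0 otherwise -- equivalent in effect to the word-sized loops in
       jffs2_check_oob_empty(). */
    static int region_is_erased(const unsigned char *buf, size_t len)
    {
        size_t i;

        for (i = 0; i < len; i++)
            if (buf[i] != 0xff)
                return 0;
        return 1;
    }

Any byte that has been programmed away from 0xff makes the region count as non-empty.
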
+ * page 0 contains cleanmarker and badblock info + * page 2 contains failure count of this block + */ + ret = c->mtd->read_oob(c->mtd, jeb->offset, oob_size << 1 , &retlen, buf); + + if (ret) { + D1(printk(KERN_WARNING "jffs2_check_nand_cleanmarker(): Read OOB failed %d for block at %08x\n", ret, jeb->offset)); + return ret; + } + if (retlen < (oob_size << 1) ) { + D1(printk(KERN_WARNING "jffs2_check_nand_cleanmarker(): Read OOB return short read (%d bytes not %d) for block at %08x\n", retlen, oob_size << 1 , jeb->offset)); + return -EIO; + } + + /* Check for bad block marker */ + if (buf[badblock_pos] != 0xff) { + D1(printk(KERN_WARNING "jffs2_check_nand_cleanmarker(): Bad block at %08x\n",jeb->offset)); + return 2; + } + + /* Check for failure counter in the second page */ + if (buf[badblock_pos+oob_size] != 0xff) { + D1(printk(KERN_WARNING "jffs2_check_nand_cleanmarker(): Block marked as failed at %08x, fail count:%d\n",jeb->offset,buf[badblock_pos+oob_size])); + return 3; + } + + n.magic = JFFS2_MAGIC_BITMASK; + n.nodetype = JFFS2_NODETYPE_CLEANMARKER; + n.totlen = 8; + p = (unsigned char *) &n; + + for (i = 0; i < fsdata_len; i++) { + if (buf[fsdata_pos+i] != p[i]) { + D2(printk(KERN_WARNING "jffs2_check_nand_cleanmarker(): Cleanmarker node not detected in block at %08x\n", jeb->offset)); + return 1; + } + } + + return 0; +} + +int jffs2_write_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb) +{ + struct jffs2_unknown_node n; + int ret; + int fsdata_pos,fsdata_len; + size_t retlen; + + switch(c->mtd->ecctype) { + case MTD_ECC_SW: + fsdata_pos = (c->wbuf_pagesize == 256) ? NAND_JFFS2_OOB8_FSDAPOS : NAND_JFFS2_OOB16_FSDAPOS; + fsdata_len = (c->wbuf_pagesize == 256) ? NAND_JFFS2_OOB8_FSDALEN : NAND_JFFS2_OOB16_FSDALEN; + break; + default: + D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): Invalid ECC type\n")); + return -EINVAL; + } + + n.magic = JFFS2_MAGIC_BITMASK; + n.nodetype = JFFS2_NODETYPE_CLEANMARKER; + n.totlen = 8; + + ret = jffs2_flash_write_oob(c, jeb->offset + fsdata_pos, fsdata_len, &retlen, (unsigned char *)&n); + + if (ret) { + D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): Write failed for block at %08x: error %d\n", jeb->offset, ret)); + return ret; + } + if (retlen != fsdata_len) { + D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): Short write for block at %08x: %d not %d\n", jeb->offset, retlen, fsdata_len)); + return ret; + } + return 0; +} + +/* + * We try to get the failure count of this block. + */ +int jffs2_nand_read_failcnt(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb) { + + unsigned char buf[16]; + int ret; + size_t retlen; + int oob_size, badblock_pos; + + oob_size = c->mtd->oobsize; + + switch(c->mtd->ecctype) { + case MTD_ECC_SW: + badblock_pos = NAND_JFFS2_OOB_BADBPOS; + break; + default: + D1(printk(KERN_WARNING "jffs2_nand_read_failcnt(): Invalid ECC type\n")); + return -EINVAL; + } + + ret = c->mtd->read_oob(c->mtd, jeb->offset + c->mtd->oobblock, oob_size , &retlen, buf); + + if (ret) { + D1(printk(KERN_WARNING "jffs2_nand_read_failcnt(): Read OOB failed %d for block at %08x\n", ret, jeb->offset)); + return ret; + } + + if (retlen < oob_size) { + D1(printk(KERN_WARNING "jffs2_nand_read_failcnt(): Read OOB return short read (%d bytes not %d) for block at %08x\n", retlen, oob_size, jeb->offset)); + return -EIO; + } + + jeb->bad_count = buf[badblock_pos]; + return 0; +} + +/* + * On NAND we try to mark this block bad. 
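
To restate what the two cleanmarker helpers above do: the marker is just the leading bytes of a jffs2_unknown_node (magic, the cleanmarker node type, totlen = 8) parked in the file-system slot of the out-of-band area, so writing it is a byte copy and checking it is a byte compare. A rough stand-alone sketch, with invented names and constant values standing in for the real definitions (and ignoring endianness and structure padding concerns):

    #include <stdint.h>
    #include <string.h>

    /* Invented stand-in for the start of a jffs2_unknown_node. */
    struct toy_marker {
        uint16_t magic;
        uint16_t nodetype;
        uint32_t totlen;
    };

    #define TOY_MAGIC        0x1985   /* illustrative, not authoritative */
    #define TOY_CLEANMARKER  0x2003
    #define TOY_MARKER_LEN   8

    /* Write the marker bytes into the OOB slot reserved for fs data. */
    static void toy_put_cleanmarker(unsigned char *oob, size_t fsdata_pos)
    {
        struct toy_marker m = { TOY_MAGIC, TOY_CLEANMARKER, TOY_MARKER_LEN };
        memcpy(oob + fsdata_pos, &m, TOY_MARKER_LEN);
    }

    /* Return 0 if the marker is present, 1 if it is not (mirroring the
       0/1 convention of jffs2_check_nand_cleanmarker()). */
    static int toy_check_cleanmarker(const unsigned char *oob, size_t fsdata_pos)
    {
        struct toy_marker m = { TOY_MAGIC, TOY_CLEANMARKER, TOY_MARKER_LEN };
        return memcmp(oob + fsdata_pos, &m, TOY_MARKER_LEN) ? 1 : 0;
    }
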
We try to write how often + * the block was erased and mark it finaly bad, if the count + * is > MAX_ERASE_FAILURES. We read this information on mount ! + * jeb->bad_count contains the count before this erase. + * Don't care about failures. This block remains on the erase-pending + * or badblock list as long as nobody manipulates the flash with + * a bootloader or something like that. + */ + +int jffs2_write_nand_badblock(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb) +{ + unsigned char buf = 0x0; + int ret,pos; + size_t retlen; + + switch(c->mtd->ecctype) { + case MTD_ECC_SW: + pos = NAND_JFFS2_OOB_BADBPOS; + break; + default: + D1(printk(KERN_WARNING "jffs2_write_nand_badblock(): Invalid ECC type\n")); + return -EINVAL; + } + + /* if the count is < max, we try to write the counter to the 2nd page oob area */ + if( ++jeb->bad_count < MAX_ERASE_FAILURES) { + buf = (unsigned char)jeb->bad_count; + pos += c->mtd->oobblock; + } + + ret = jffs2_flash_write_oob(c, jeb->offset + pos, 1, &retlen, &buf); + + if (ret) { + D1(printk(KERN_WARNING "jffs2_write_nand_badblock(): Write failed for block at %08x: error %d\n", jeb->offset, ret)); + return ret; + } + if (retlen != 1) { + D1(printk(KERN_WARNING "jffs2_write_nand_badblock(): Short write for block at %08x: %d not 1\n", jeb->offset, retlen)); + return ret; + } + return 0; +} + diff -Nru a/fs/jffs2/write.c b/fs/jffs2/write.c --- a/fs/jffs2/write.c Tue Mar 12 13:58:15 2002 +++ b/fs/jffs2/write.c Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,60 +31,35 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: write.c,v 1.30 2001/12/30 16:01:11 dwmw2 Exp $ + * $Id: write.c,v 1.52 2002/03/08 11:01:43 dwmw2 Exp $ * */ #include #include #include -#include +#include #include #include "nodelist.h" -/* jffs2_new_inode: allocate a new inode and inocache, add it to the hash, - fill in the raw_inode while you're at it. 
*/ -struct inode *jffs2_new_inode (struct inode *dir_i, int mode, struct jffs2_raw_inode *ri) + +int jffs2_do_new_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, uint32_t mode, struct jffs2_raw_inode *ri) { - struct inode *inode; - struct super_block *sb = dir_i->i_sb; struct jffs2_inode_cache *ic; - struct jffs2_sb_info *c; - struct jffs2_inode_info *f; - - D1(printk(KERN_DEBUG "jffs2_new_inode(): dir_i %ld, mode 0x%x\n", dir_i->i_ino, mode)); - - c = JFFS2_SB_INFO(sb); - memset(ri, 0, sizeof(*ri)); ic = jffs2_alloc_inode_cache(); if (!ic) { - return ERR_PTR(-ENOMEM); - } - memset(ic, 0, sizeof(*ic)); - - inode = new_inode(sb); - - if (!inode) { - jffs2_free_inode_cache(ic); - return ERR_PTR(-ENOMEM); + return -ENOMEM; } - /* Alloc jffs2_inode_info when that's split in 2.5 */ + memset(ic, 0, sizeof(*ic)); - f = JFFS2_INODE_INFO(inode); - down(&f->sem); - f->highest_version = 0; - f->fraglist = NULL; - f->metadata = NULL; - f->dents = NULL; - f->flags = 0; - f->usercompr = 0; + init_MUTEX_LOCKED(&f->sem); f->inocache = ic; - inode->i_nlink = f->inocache->nlink = 1; + f->inocache->nlink = 1; f->inocache->nodes = (struct jffs2_raw_node_ref *)f->inocache; - f->inocache->ino = ri->ino = inode->i_ino = ++c->highest_ino; - D1(printk(KERN_DEBUG "jffs2_new_inode(): Assigned ino# %d\n", ri->ino)); + f->inocache->ino = ri->ino = ++c->highest_ino; + D1(printk(KERN_DEBUG "jffs2_do_new_inode(): Assigned ino# %d\n", ri->ino)); jffs2_add_ino_cache(c, f->inocache); ri->magic = JFFS2_MAGIC_BITMASK; @@ -93,65 +68,18 @@ ri->hdr_crc = crc32(0, ri, sizeof(struct jffs2_unknown_node)-4); ri->mode = mode; f->highest_version = ri->version = 1; - ri->uid = current->fsuid; - if (dir_i->i_mode & S_ISGID) { - ri->gid = dir_i->i_gid; - if (S_ISDIR(mode)) - ri->mode |= S_ISGID; - } else { - ri->gid = current->fsgid; - } - inode->i_mode = ri->mode; - inode->i_gid = ri->gid; - inode->i_uid = ri->uid; - inode->i_atime = inode->i_ctime = inode->i_mtime = - ri->atime = ri->mtime = ri->ctime = CURRENT_TIME; - inode->i_blksize = PAGE_SIZE; - inode->i_blocks = 0; - inode->i_size = 0; - - insert_inode_hash(inode); - - return inode; -} - -/* This ought to be in core MTD code. All registered MTD devices without writev should have - this put in place. Bug the MTD maintainer */ -static int mtd_fake_writev(struct mtd_info *mtd, const struct iovec *vecs, unsigned long count, loff_t to, size_t *retlen) -{ - unsigned long i; - size_t totlen = 0, thislen; - int ret = 0; - - for (i=0; iwrite(mtd, to, vecs[i].iov_len, &thislen, vecs[i].iov_base); - totlen += thislen; - if (ret || thislen != vecs[i].iov_len) - break; - to += vecs[i].iov_len; - } - if (retlen) - *retlen = totlen; - return ret; -} - -static inline int mtd_writev(struct mtd_info *mtd, const struct iovec *vecs, unsigned long count, loff_t to, size_t *retlen) -{ - if (mtd->writev) - return mtd->writev(mtd,vecs,count,to,retlen); - else - return mtd_fake_writev(mtd, vecs, count, to, retlen); + return 0; } -static void writecheck(struct mtd_info *mtd, __u32 ofs) +static void writecheck(struct jffs2_sb_info *c, uint32_t ofs) { unsigned char buf[16]; - ssize_t retlen; + size_t retlen; int ret, i; - ret = mtd->read(mtd, ofs, 16, &retlen, buf); - if (ret && retlen != 16) { + ret = jffs2_flash_read(c, ofs, 16, &retlen, buf); + if (ret || (retlen != 16)) { D1(printk(KERN_DEBUG "read failed or short in writecheck(). 
ret %d, retlen %d\n", ret, retlen)); return; } @@ -169,22 +97,20 @@ } } - - + /* jffs2_write_dnode - given a raw_inode, allocate a full_dnode for it, write it to the flash, link it into the existing inode/fragment list */ -struct jffs2_full_dnode *jffs2_write_dnode(struct inode *inode, struct jffs2_raw_inode *ri, const unsigned char *data, __u32 datalen, __u32 flash_ofs, __u32 *writelen) +struct jffs2_full_dnode *jffs2_write_dnode(struct jffs2_sb_info *c, struct jffs2_inode_info *f, struct jffs2_raw_inode *ri, const unsigned char *data, uint32_t datalen, uint32_t flash_ofs, uint32_t *writelen) { - struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb); - struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_raw_node_ref *raw; struct jffs2_full_dnode *fn; - ssize_t retlen; + size_t retlen; struct iovec vecs[2]; int ret; + unsigned long cnt = 2; D1(if(ri->hdr_crc != crc32(0, ri, sizeof(struct jffs2_unknown_node)-4)) { printk(KERN_CRIT "Eep. CRC not correct in jffs2_write_dnode()\n"); @@ -196,7 +122,7 @@ vecs[1].iov_base = (unsigned char *)data; vecs[1].iov_len = datalen; - writecheck(c->mtd, flash_ofs); + writecheck(c, flash_ofs); if (ri->totlen != sizeof(*ri) + datalen) { printk(KERN_WARNING "jffs2_write_dnode: ri->totlen (0x%08x) != sizeof(*ri) (0x%08x) + datalen (0x%08x)\n", ri->totlen, sizeof(*ri), datalen); @@ -219,7 +145,12 @@ fn->frags = 0; fn->raw = raw; - ret = mtd_writev(c->mtd, vecs, 2, flash_ofs, &retlen); + /* check number of valid vecs */ + if (!datalen || !data) + cnt = 1; + + ret = jffs2_flash_writev(c, vecs, cnt, flash_ofs, &retlen); + if (ret || (retlen != sizeof(*ri) + datalen)) { printk(KERN_NOTICE "Write of %d bytes at 0x%08x failed. returned %d, retlen %d\n", sizeof(*ri)+datalen, flash_ofs, ret, retlen); @@ -261,18 +192,16 @@ return fn; } -struct jffs2_full_dirent *jffs2_write_dirent(struct inode *inode, struct jffs2_raw_dirent *rd, const unsigned char *name, __u32 namelen, __u32 flash_ofs, __u32 *writelen) +struct jffs2_full_dirent *jffs2_write_dirent(struct jffs2_sb_info *c, struct jffs2_inode_info *f, struct jffs2_raw_dirent *rd, const unsigned char *name, uint32_t namelen, uint32_t flash_ofs, uint32_t *writelen) { - struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb); - struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode); struct jffs2_raw_node_ref *raw; struct jffs2_full_dirent *fd; - ssize_t retlen; + size_t retlen; struct iovec vecs[2]; int ret; D1(printk(KERN_DEBUG "jffs2_write_dirent(ino #%u, name at *0x%p \"%s\"->ino #%u, name_crc 0x%08x)\n", rd->pino, name, name, rd->ino, rd->name_crc)); - writecheck(c->mtd, flash_ofs); + writecheck(c, flash_ofs); D1(if(rd->hdr_crc != crc32(0, rd, sizeof(struct jffs2_unknown_node)-4)) { printk(KERN_CRIT "Eep. CRC not correct in jffs2_write_dirent()\n"); @@ -309,18 +238,18 @@ fd->name[namelen]=0; fd->raw = raw; - ret = mtd_writev(c->mtd, vecs, 2, flash_ofs, &retlen); - if (ret || (retlen != sizeof(*rd) + namelen)) { - printk(KERN_NOTICE "Write of %d bytes at 0x%08x failed. returned %d, retlen %d\n", + ret = jffs2_flash_writev(c, vecs, 2, flash_ofs, &retlen); + if (ret || (retlen != sizeof(*rd) + namelen)) { + printk(KERN_NOTICE "Write of %d bytes at 0x%08x failed. 
returned %d, retlen %d\n", sizeof(*rd)+namelen, flash_ofs, ret, retlen); /* Mark the space as dirtied */ - if (retlen) { - jffs2_add_physical_node_ref(c, raw, sizeof(*rd)+namelen, 1); - jffs2_mark_node_obsolete(c, raw); - } else { - printk(KERN_NOTICE "Not marking the space at 0x%08x as dirty because the flash driver returned retlen zero\n", raw->flash_offset); - jffs2_free_raw_node_ref(raw); - } + if (retlen) { + jffs2_add_physical_node_ref(c, raw, sizeof(*rd)+namelen, 1); + jffs2_mark_node_obsolete(c, raw); + } else { + printk(KERN_NOTICE "Not marking the space at 0x%08x as dirty because the flash driver returned retlen zero\n", raw->flash_offset); + jffs2_free_raw_node_ref(raw); + } /* Release the full_dnode which is now useless, and return */ jffs2_free_full_dirent(fd); @@ -335,4 +264,344 @@ f->inocache->nodes = raw; return fd; +} + +/* The OS-specific code fills in the metadata in the jffs2_raw_inode for us, so that + we don't have to go digging in struct inode or its equivalent. It should set: + mode, uid, gid, (starting)isize, atime, ctime, mtime */ +int jffs2_write_inode_range(struct jffs2_sb_info *c, struct jffs2_inode_info *f, + struct jffs2_raw_inode *ri, unsigned char *buf, + uint32_t offset, uint32_t writelen, uint32_t *retlen) +{ + int ret = 0; + uint32_t writtenlen = 0; + + D1(printk(KERN_DEBUG "jffs2_write_inode_range(): Ino #%u, ofs 0x%x, len 0x%x\n", + f->inocache->ino, offset, writelen)); + + while(writelen) { + struct jffs2_full_dnode *fn; + unsigned char *comprbuf = NULL; + unsigned char comprtype = JFFS2_COMPR_NONE; + uint32_t phys_ofs, alloclen; + uint32_t datalen, cdatalen; + + D2(printk(KERN_DEBUG "jffs2_commit_write() loop: 0x%x to write to 0x%x\n", writelen, offset)); + + ret = jffs2_reserve_space(c, sizeof(*ri) + JFFS2_MIN_DATA_LEN, &phys_ofs, &alloclen, ALLOC_NORMAL); + if (ret) { + D1(printk(KERN_DEBUG "jffs2_reserve_space returned %d\n", ret)); + break; + } + down(&f->sem); + datalen = writelen; + cdatalen = min(alloclen - sizeof(*ri), writelen); + + comprbuf = kmalloc(cdatalen, GFP_KERNEL); + if (comprbuf) { + comprtype = jffs2_compress(buf, comprbuf, &datalen, &cdatalen); + } + if (comprtype == JFFS2_COMPR_NONE) { + /* Either compression failed, or the allocation of comprbuf failed */ + if (comprbuf) + kfree(comprbuf); + comprbuf = buf; + datalen = cdatalen; + } + /* Now comprbuf points to the data to be written, be it compressed or not. + comprtype holds the compression type, and comprtype == JFFS2_COMPR_NONE means + that the comprbuf doesn't need to be kfree()d. + */ + + ri->magic = JFFS2_MAGIC_BITMASK; + ri->nodetype = JFFS2_NODETYPE_INODE; + ri->totlen = sizeof(*ri) + cdatalen; + ri->hdr_crc = crc32(0, ri, sizeof(struct jffs2_unknown_node)-4); + + ri->ino = f->inocache->ino; + ri->version = ++f->highest_version; + ri->isize = max(ri->isize, offset + datalen); + ri->offset = offset; + ri->csize = cdatalen; + ri->dsize = datalen; + ri->compr = comprtype; + ri->node_crc = crc32(0, ri, sizeof(*ri)-8); + ri->data_crc = crc32(0, comprbuf, cdatalen); + + fn = jffs2_write_dnode(c, f, ri, comprbuf, cdatalen, phys_ofs, NULL); + + if (comprtype != JFFS2_COMPR_NONE) + kfree(comprbuf); + + if (IS_ERR(fn)) { + ret = PTR_ERR(fn); + up(&f->sem); + jffs2_complete_reservation(c); + break; + } + ret = jffs2_add_full_dnode_to_inode(c, f, fn); + if (f->metadata) { + jffs2_mark_node_obsolete(c, f->metadata->raw); + jffs2_free_full_dnode(f->metadata); + f->metadata = NULL; + } + if (ret) { + /* Eep */ + D1(printk(KERN_DEBUG "Eep. 
add_full_dnode_to_inode() failed in commit_write, returned %d\n", ret)); + jffs2_mark_node_obsolete(c, fn->raw); + jffs2_free_full_dnode(fn); + + up(&f->sem); + jffs2_complete_reservation(c); + break; + } + up(&f->sem); + jffs2_complete_reservation(c); + if (!datalen) { + printk(KERN_WARNING "Eep. We didn't actually write any data in jffs2_write_inode_range()\n"); + ret = -EIO; + break; + } + D1(printk(KERN_DEBUG "increasing writtenlen by %d\n", datalen)); + writtenlen += datalen; + offset += datalen; + writelen -= datalen; + buf += datalen; + } + *retlen = writtenlen; + return ret; +} + +int jffs2_do_create(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, struct jffs2_inode_info *f, struct jffs2_raw_inode *ri, const char *name, int namelen) +{ + struct jffs2_raw_dirent *rd; + struct jffs2_full_dnode *fn; + struct jffs2_full_dirent *fd; + uint32_t alloclen, phys_ofs; + uint32_t writtenlen; + int ret; + + /* Try to reserve enough space for both node and dirent. + * Just the node will do for now, though + */ + ret = jffs2_reserve_space(c, sizeof(*ri), &phys_ofs, &alloclen, ALLOC_NORMAL); + D1(printk(KERN_DEBUG "jffs2_do_create(): reserved 0x%x bytes\n", alloclen)); + if (ret) + return ret; + + ri->data_crc = 0; + ri->node_crc = crc32(0, ri, sizeof(*ri)-8); + + fn = jffs2_write_dnode(c, f, ri, NULL, 0, phys_ofs, &writtenlen); + + D1(printk(KERN_DEBUG "jffs2_do_create created file with mode 0x%x\n", ri->mode)); + + if (IS_ERR(fn)) { + D1(printk(KERN_DEBUG "jffs2_write_dnode() failed\n")); + /* Eeek. Wave bye bye */ + up(&f->sem); + jffs2_complete_reservation(c); + return PTR_ERR(fn); + } + /* No data here. Only a metadata node, which will be + obsoleted by the first data write + */ + f->metadata = fn; + + /* Work out where to put the dirent node now. */ + writtenlen = PAD(writtenlen); + phys_ofs += writtenlen; + alloclen -= writtenlen; + up(&f->sem); + + if (alloclen < sizeof(*rd)+namelen) { + /* Not enough space left in this chunk. Get some more */ + jffs2_complete_reservation(c); + ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &phys_ofs, &alloclen, ALLOC_NORMAL); + + if (ret) { + /* Eep. */ + D1(printk(KERN_DEBUG "jffs2_reserve_space() for dirent failed\n")); + return ret; + } + } + + rd = jffs2_alloc_raw_dirent(); + if (!rd) { + /* Argh. Now we treat it like a normal delete */ + jffs2_complete_reservation(c); + return -ENOMEM; + } + + down(&dir_f->sem); + + rd->magic = JFFS2_MAGIC_BITMASK; + rd->nodetype = JFFS2_NODETYPE_DIRENT; + rd->totlen = sizeof(*rd) + namelen; + rd->hdr_crc = crc32(0, rd, sizeof(struct jffs2_unknown_node)-4); + + rd->pino = dir_f->inocache->ino; + rd->version = ++dir_f->highest_version; + rd->ino = ri->ino; + rd->mctime = ri->ctime; + rd->nsize = namelen; + rd->type = DT_REG; + rd->node_crc = crc32(0, rd, sizeof(*rd)-8); + rd->name_crc = crc32(0, name, namelen); + + fd = jffs2_write_dirent(c, dir_f, rd, name, namelen, phys_ofs, &writtenlen); + + jffs2_free_raw_dirent(rd); + + if (IS_ERR(fd)) { + /* dirent failed to write. Delete the inode normally + as if it were the final unlink() */ + jffs2_complete_reservation(c); + up(&dir_f->sem); + return PTR_ERR(fd); + } + + /* Link the fd into the inode's list, obsoleting an old + one if necessary. 
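
The compression step inside jffs2_write_inode_range() above follows a straightforward fallback pattern: try to compress into a scratch buffer, and if the allocation fails or the compressor reports no gain, write the raw bytes marked as uncompressed instead. Sketched in isolation with a made-up compressor interface (toy_compress() here is a stub, not the real jffs2_compress()):

    #include <stdint.h>
    #include <stdlib.h>

    #define TOY_COMPR_NONE 0   /* "stored uncompressed", like JFFS2_COMPR_NONE */

    /* Trivial stand-in compressor: always reports "no gain", so the
       fallback path is taken.  A real compressor would fill 'out' and
       shrink *outlen. */
    static int toy_compress(const unsigned char *in, unsigned char *out,
                            uint32_t *inlen, uint32_t *outlen)
    {
        (void)in; (void)out; (void)inlen; (void)outlen;
        return TOY_COMPR_NONE;
    }

    /* Decide what to actually write: the compressed copy, or on any
       failure the original buffer.  *type tells the caller which case it
       got and therefore whether the returned pointer must be freed. */
    static const unsigned char *pick_payload(const unsigned char *raw, uint32_t rawlen,
                                             uint32_t *outlen, int *type)
    {
        unsigned char *scratch = malloc(rawlen);
        uint32_t inlen = rawlen;

        *type = TOY_COMPR_NONE;
        *outlen = rawlen;

        if (scratch) {
            *type = toy_compress(raw, scratch, &inlen, outlen);
            if (*type != TOY_COMPR_NONE)
                return scratch;      /* caller frees after the write */
            free(scratch);
            *outlen = rawlen;
        }
        return raw;                  /* uncompressed fallback */
    }

The caller frees the scratch buffer only in the compressed case, mirroring the kfree(comprbuf) guarded by comprtype in the patch.
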
*/ + jffs2_add_fd_to_list(c, fd, &dir_f->dents); + + jffs2_complete_reservation(c); + up(&dir_f->sem); + + return 0; +} + + +int jffs2_do_unlink(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, const char *name, int namelen, struct jffs2_inode_info *dead_f) +{ + struct jffs2_raw_dirent *rd; + struct jffs2_full_dirent *fd; + uint32_t alloclen, phys_ofs; + int ret; + + rd = jffs2_alloc_raw_dirent(); + if (!rd) + return -ENOMEM; + + ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &phys_ofs, &alloclen, ALLOC_DELETION); + if (ret) { + jffs2_free_raw_dirent(rd); + return ret; + } + + down(&dir_f->sem); + + /* Build a deletion node */ + rd->magic = JFFS2_MAGIC_BITMASK; + rd->nodetype = JFFS2_NODETYPE_DIRENT; + rd->totlen = sizeof(*rd) + namelen; + rd->hdr_crc = crc32(0, rd, sizeof(struct jffs2_unknown_node)-4); + + rd->pino = dir_f->inocache->ino; + rd->version = ++dir_f->highest_version; + rd->ino = 0; + rd->mctime = CURRENT_TIME; + rd->nsize = namelen; + rd->type = DT_UNKNOWN; + rd->node_crc = crc32(0, rd, sizeof(*rd)-8); + rd->name_crc = crc32(0, name, namelen); + + fd = jffs2_write_dirent(c, dir_f, rd, name, namelen, phys_ofs, NULL); + + jffs2_free_raw_dirent(rd); + + if (IS_ERR(fd)) { + jffs2_complete_reservation(c); + up(&dir_f->sem); + return PTR_ERR(fd); + } + + /* File it. This will mark the old one obsolete. */ + jffs2_add_fd_to_list(c, fd, &dir_f->dents); + + jffs2_complete_reservation(c); + up(&dir_f->sem); + + if (dead_f) { /* Null if this was a rename not a real unlink */ + + down(&dead_f->sem); + + while (dead_f->dents) { + /* There can be only deleted ones */ + fd = dead_f->dents; + + dead_f->dents = fd->next; + + if (fd->ino) { + printk(KERN_WARNING "Deleting inode #%u with active dentry \"%s\"->ino #%u\n", + dead_f->inocache->ino, fd->name, fd->ino); + } else { + D1(printk(KERN_DEBUG "Removing deletion dirent for \"%s\" from dir ino #%u\n", fd->name, dead_f->inocache->ino)); + } + jffs2_mark_node_obsolete(c, fd->raw); + jffs2_free_full_dirent(fd); + } + + dead_f->inocache->nlink--; + /* NB: Caller must set inode nlink if appropriate */ + up(&dead_f->sem); + } + + return 0; +} + + +int jffs2_do_link (struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, uint32_t ino, uint8_t type, const char *name, int namelen) +{ + struct jffs2_raw_dirent *rd; + struct jffs2_full_dirent *fd; + uint32_t alloclen, phys_ofs; + int ret; + + rd = jffs2_alloc_raw_dirent(); + if (!rd) + return -ENOMEM; + + ret = jffs2_reserve_space(c, sizeof(*rd)+namelen, &phys_ofs, &alloclen, ALLOC_NORMAL); + if (ret) { + jffs2_free_raw_dirent(rd); + return ret; + } + + down(&dir_f->sem); + + /* Build a deletion node */ + rd->magic = JFFS2_MAGIC_BITMASK; + rd->nodetype = JFFS2_NODETYPE_DIRENT; + rd->totlen = sizeof(*rd) + namelen; + rd->hdr_crc = crc32(0, rd, sizeof(struct jffs2_unknown_node)-4); + + rd->pino = dir_f->inocache->ino; + rd->version = ++dir_f->highest_version; + rd->ino = ino; + rd->mctime = CURRENT_TIME; + rd->nsize = namelen; + + rd->type = type; + + rd->node_crc = crc32(0, rd, sizeof(*rd)-8); + rd->name_crc = crc32(0, name, namelen); + + fd = jffs2_write_dirent(c, dir_f, rd, name, namelen, phys_ofs, NULL); + + jffs2_free_raw_dirent(rd); + + if (IS_ERR(fd)) { + jffs2_complete_reservation(c); + up(&dir_f->sem); + return PTR_ERR(fd); + } + + /* File it. This will mark the old one obsolete. 
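
Because flash is never rewritten in place, jffs2_do_unlink() above deletes a name by appending a newer directory entry whose target inode number is 0; when entries for the same name are compared, the one with the higher version wins. A toy illustration of that rule, independent of the real node layout:

    #include <stdint.h>

    /* Toy directory entry: only the fields needed to show the rule. */
    struct toy_dirent {
        char     name[32];
        uint32_t ino;        /* 0 means "this name is deleted" */
        uint32_t version;    /* higher version supersedes lower */
    };

    /* Given two log entries for the same name, return the one currently
       in effect (NULL if the winner is a deletion entry). */
    static const struct toy_dirent *effective(const struct toy_dirent *a,
                                              const struct toy_dirent *b)
    {
        const struct toy_dirent *win = (a->version > b->version) ? a : b;
        return win->ino ? win : NULL;
    }

So logging {name, ino = 0, version = N+1} hides an earlier {name, ino, version = N} without touching it, which is the effect jffs2_add_fd_to_list() achieves when it marks the older entry obsolete.
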
*/ + jffs2_add_fd_to_list(c, fd, &dir_f->dents); + + jffs2_complete_reservation(c); + up(&dir_f->sem); + + return 0; } diff -Nru a/fs/jffs2/writev.c b/fs/jffs2/writev.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/fs/jffs2/writev.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,74 @@ +/* + * JFFS2 -- Journalling Flash File System, Version 2. + * + * Copyright (C) 2001, 2002 Red Hat, Inc. + * + * Created by David Woodhouse + * + * The original JFFS, from which the design for JFFS2 was derived, + * was designed and implemented by Axis Communications AB. + * + * The contents of this file are subject to the Red Hat eCos Public + * License Version 1.1 (the "Licence"); you may not use this file + * except in compliance with the Licence. You may obtain a copy of + * the Licence at http://www.redhat.com/ + * + * Software distributed under the Licence is distributed on an "AS IS" + * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. + * See the Licence for the specific language governing rights and + * limitations under the Licence. + * + * The Original Code is JFFS2 - Journalling Flash File System, version 2 + * + * Alternatively, the contents of this file may be used under the + * terms of the GNU General Public License version 2 (the "GPL"), in + * which case the provisions of the GPL are applicable instead of the + * above. If you wish to allow the use of your version of this file + * only under the terms of the GPL and not to allow others to use your + * version of this file under the RHEPL, indicate your decision by + * deleting the provisions above and replace them with the notice and + * other provisions required by the GPL. If you do not delete the + * provisions above, a recipient may use your version of this file + * under either the RHEPL or the GPL. + * + * $Id: writev.c,v 1.1 2002/03/08 11:27:59 dwmw2 Exp $ + * + */ + +#include +#include +#include "nodelist.h" + +/* This ought to be in core MTD code. All registered MTD devices + without writev should have this put in place. 
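
The fallback defined just below emulates a vectored write by issuing one ordinary write per iovec, advancing the target offset each time and stopping at the first error or short write, while the wrapper prefers the driver's native writev when one exists. The same shape in a stand-alone form, written over an assumed write callback instead of a struct mtd_info:

    #include <stddef.h>
    #include <sys/uio.h>   /* struct iovec */

    typedef int (*write_fn)(void *ctx, long long to, size_t len,
                            size_t *retlen, const void *buf);

    /* Emulate writev with repeated single writes: advance the offset by
       each vector's length, stop on the first error or short write, and
       report the total written so far. */
    static int fake_writev(write_fn write, void *ctx, const struct iovec *vecs,
                           unsigned long count, long long to, size_t *retlen)
    {
        size_t total = 0, thislen = 0;
        unsigned long i;
        int ret = 0;

        for (i = 0; i < count; i++) {
            ret = write(ctx, to, vecs[i].iov_len, &thislen, vecs[i].iov_base);
            total += thislen;
            if (ret || thislen != vecs[i].iov_len)
                break;
            to += vecs[i].iov_len;
        }
        if (retlen)
            *retlen = total;
        return ret;
    }
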
Bug the MTD + maintainer */ +static inline int mtd_fake_writev(struct mtd_info *mtd, const struct iovec *vecs, + unsigned long count, loff_t to, size_t *retlen) +{ + unsigned long i; + size_t totlen = 0, thislen; + int ret = 0; + + for (i=0; iwrite(mtd, to, vecs[i].iov_len, &thislen, vecs[i].iov_base); + totlen += thislen; + if (ret || thislen != vecs[i].iov_len) + break; + to += vecs[i].iov_len; + } + if (retlen) + *retlen = totlen; + return ret; +} + +int jffs2_flash_direct_writev(struct jffs2_sb_info *c, const struct iovec *vecs, + unsigned long count, loff_t to, size_t *retlen) +{ + if (c->mtd->writev) + return c->mtd->writev(c->mtd, vecs, count, to, retlen); + else + return mtd_fake_writev(c->mtd, vecs, count, to, retlen); +} + diff -Nru a/fs/jfs/super.c b/fs/jfs/super.c --- a/fs/jfs/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/jfs/super.c Tue Mar 12 13:58:15 2002 @@ -364,6 +364,7 @@ owner: THIS_MODULE, name: "jfs", get_sb: jfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/lockd/svc4proc.c b/fs/lockd/svc4proc.c --- a/fs/lockd/svc4proc.c Tue Mar 12 13:58:15 2002 +++ b/fs/lockd/svc4proc.c Tue Mar 12 13:58:15 2002 @@ -447,11 +447,13 @@ if (nlmsvc_ops != NULL) { struct svc_client *clnt; saddr.sin_addr.s_addr = argp->addr; + nlmsvc_ops->exp_readlock(); if ((clnt = nlmsvc_ops->exp_getclient(&saddr)) != NULL && (host = nlm_lookup_host(clnt, &saddr, 0, 0)) != NULL) { nlmsvc_free_host_resources(host); } nlm_release_host(host); + nlmsvc_ops->exp_unlock(); } return rpc_success; diff -Nru a/fs/lockd/svcproc.c b/fs/lockd/svcproc.c --- a/fs/lockd/svcproc.c Tue Mar 12 13:58:15 2002 +++ b/fs/lockd/svcproc.c Tue Mar 12 13:58:15 2002 @@ -475,11 +475,13 @@ if (nlmsvc_ops != NULL) { struct svc_client *clnt; saddr.sin_addr.s_addr = argp->addr; + nlmsvc_ops->exp_readlock(); if ((clnt = nlmsvc_ops->exp_getclient(&saddr)) != NULL && (host = nlm_lookup_host(clnt, &saddr, 0, 0)) != NULL) { nlmsvc_free_host_resources(host); } nlm_release_host(host); + nlmsvc_ops->exp_unlock(); } return rpc_success; diff -Nru a/fs/minix/bitmap.c b/fs/minix/bitmap.c --- a/fs/minix/bitmap.c Tue Mar 12 13:58:15 2002 +++ b/fs/minix/bitmap.c Tue Mar 12 13:58:15 2002 @@ -52,6 +52,7 @@ void minix_free_block(struct inode * inode, int block) { struct super_block * sb = inode->i_sb; + struct minix_sb_info * sbi = minix_sb(sb); struct buffer_head * bh; unsigned int bit,zone; @@ -59,19 +60,19 @@ printk("trying to free block on nonexistent device\n"); return; } - if (block < sb->u.minix_sb.s_firstdatazone || - block >= sb->u.minix_sb.s_nzones) { + if (block < sbi->s_firstdatazone || + block >= sbi->s_nzones) { printk("trying to free block not in datazone\n"); return; } - zone = block - sb->u.minix_sb.s_firstdatazone + 1; + zone = block - sbi->s_firstdatazone + 1; bit = zone & 8191; zone >>= 13; - if (zone >= sb->u.minix_sb.s_zmap_blocks) { + if (zone >= sbi->s_zmap_blocks) { printk("minix_free_block: nonexistent bitmap buffer\n"); return; } - bh = sb->u.minix_sb.s_zmap[zone]; + bh = sbi->s_zmap[zone]; if (!minix_test_and_clear_bit(bit,bh->b_data)) printk("free_block (%s:%d): bit already cleared\n", sb->s_id, block); @@ -82,6 +83,7 @@ int minix_new_block(struct inode * inode) { struct super_block * sb = inode->i_sb; + struct minix_sb_info * sbi = minix_sb(sb); struct buffer_head * bh; int i,j; @@ -92,8 +94,8 @@ repeat: j = 8192; bh = NULL; - for (i = 0; i < sb->u.minix_sb.s_zmap_blocks; i++) { - bh = sb->u.minix_sb.s_zmap[i]; + for (i = 0; i < sbi->s_zmap_blocks; i++) { + bh = sbi->s_zmap[i]; if ((j = 
minix_find_first_zero_bit(bh->b_data, 8192)) < 8192) break; } @@ -104,25 +106,26 @@ goto repeat; } mark_buffer_dirty(bh); - j += i*8192 + sb->u.minix_sb.s_firstdatazone-1; - if (j < sb->u.minix_sb.s_firstdatazone || - j >= sb->u.minix_sb.s_nzones) + j += i*8192 + sbi->s_firstdatazone-1; + if (j < sbi->s_firstdatazone || + j >= sbi->s_nzones) return 0; return j; } unsigned long minix_count_free_blocks(struct super_block *sb) { - return (count_free(sb->u.minix_sb.s_zmap, sb->u.minix_sb.s_zmap_blocks, - sb->u.minix_sb.s_nzones - sb->u.minix_sb.s_firstdatazone + 1) - << sb->u.minix_sb.s_log_zone_size); + struct minix_sb_info *sbi = minix_sb(sb); + return (count_free(sbi->s_zmap, sbi->s_zmap_blocks, + sbi->s_nzones - sbi->s_firstdatazone + 1) + << sbi->s_log_zone_size); } struct minix_inode * minix_V1_raw_inode(struct super_block *sb, ino_t ino, struct buffer_head **bh) { int block; - struct minix_sb_info *sbi = &sb->u.minix_sb; + struct minix_sb_info *sbi = minix_sb(sb); struct minix_inode *p; if (!ino || ino > sbi->s_ninodes) { @@ -146,7 +149,7 @@ minix_V2_raw_inode(struct super_block *sb, ino_t ino, struct buffer_head **bh) { int block; - struct minix_sb_info *sbi = &sb->u.minix_sb; + struct minix_sb_info *sbi = minix_sb(sb); struct minix2_inode *p; *bh = NULL; @@ -198,17 +201,17 @@ struct buffer_head * bh; unsigned long ino; - if (inode->i_ino < 1 || inode->i_ino > inode->i_sb->u.minix_sb.s_ninodes) { + if (inode->i_ino < 1 || inode->i_ino > minix_sb(inode->i_sb)->s_ninodes) { printk("free_inode: inode 0 or nonexistent inode\n"); return; } ino = inode->i_ino; - if ((ino >> 13) >= inode->i_sb->u.minix_sb.s_imap_blocks) { + if ((ino >> 13) >= minix_sb(inode->i_sb)->s_imap_blocks) { printk("free_inode: nonexistent imap in superblock\n"); return; } - bh = inode->i_sb->u.minix_sb.s_imap[ino >> 13]; + bh = minix_sb(inode->i_sb)->s_imap[ino >> 13]; minix_clear_inode(inode); clear_inode(inode); if (!minix_test_and_clear_bit(ino & 8191, bh->b_data)) @@ -233,8 +236,8 @@ bh = NULL; *error = -ENOSPC; lock_super(sb); - for (i = 0; i < sb->u.minix_sb.s_imap_blocks; i++) { - bh = inode->i_sb->u.minix_sb.s_imap[i]; + for (i = 0; i < minix_sb(sb)->s_imap_blocks; i++) { + bh = minix_sb(inode->i_sb)->s_imap[i]; if ((j = minix_find_first_zero_bit(bh->b_data, 8192)) < 8192) break; } @@ -251,7 +254,7 @@ } mark_buffer_dirty(bh); j += i*8192; - if (!j || j > inode->i_sb->u.minix_sb.s_ninodes) { + if (!j || j > minix_sb(inode->i_sb)->s_ninodes) { iput(inode); unlock_super(sb); return NULL; @@ -272,6 +275,6 @@ unsigned long minix_count_free_inodes(struct super_block *sb) { - return count_free(sb->u.minix_sb.s_imap, sb->u.minix_sb.s_imap_blocks, - sb->u.minix_sb.s_ninodes + 1); + return count_free(minix_sb(sb)->s_imap, minix_sb(sb)->s_imap_blocks, + minix_sb(sb)->s_ninodes + 1); } diff -Nru a/fs/minix/dir.c b/fs/minix/dir.c --- a/fs/minix/dir.c Tue Mar 12 13:58:15 2002 +++ b/fs/minix/dir.c Tue Mar 12 13:58:15 2002 @@ -77,7 +77,7 @@ unsigned offset = pos & ~PAGE_CACHE_MASK; unsigned long n = pos >> PAGE_CACHE_SHIFT; unsigned long npages = dir_pages(inode); - struct minix_sb_info *sbi = &sb->u.minix_sb; + struct minix_sb_info *sbi = minix_sb(sb); unsigned chunk_size = sbi->s_dirsize; pos = (pos + chunk_size-1) & ~(chunk_size-1); @@ -140,7 +140,7 @@ int namelen = dentry->d_name.len; struct inode * dir = dentry->d_parent->d_inode; struct super_block * sb = dir->i_sb; - struct minix_sb_info * sbi = &sb->u.minix_sb; + struct minix_sb_info * sbi = minix_sb(sb); unsigned long n; unsigned long npages = dir_pages(dir); struct 
page *page = NULL; @@ -178,7 +178,7 @@ const char * name = dentry->d_name.name; int namelen = dentry->d_name.len; struct super_block * sb = dir->i_sb; - struct minix_sb_info * sbi = &sb->u.minix_sb; + struct minix_sb_info * sbi = minix_sb(sb); struct page *page = NULL; struct minix_dir_entry * de; unsigned long npages = dir_pages(dir); @@ -236,7 +236,7 @@ struct inode *inode = (struct inode*)mapping->host; char *kaddr = (char*)page_address(page); unsigned from = (char*)de - kaddr; - unsigned to = from + inode->i_sb->u.minix_sb.s_dirsize; + unsigned to = from + minix_sb(inode->i_sb)->s_dirsize; int err; lock_page(page); @@ -256,7 +256,7 @@ { struct address_space *mapping = inode->i_mapping; struct page *page = grab_cache_page(mapping, 0); - struct minix_sb_info * sbi = &inode->i_sb->u.minix_sb; + struct minix_sb_info * sbi = minix_sb(inode->i_sb); struct minix_dir_entry * de; char *base; int err; @@ -291,7 +291,7 @@ { struct page *page = NULL; unsigned long i, npages = dir_pages(inode); - struct minix_sb_info *sbi = &inode->i_sb->u.minix_sb; + struct minix_sb_info *sbi = minix_sb(inode->i_sb); for (i = 0; i < npages; i++) { char *kaddr; @@ -334,7 +334,7 @@ struct inode *inode) { struct inode *dir = (struct inode*)page->mapping->host; - struct minix_sb_info *sbi = &dir->i_sb->u.minix_sb; + struct minix_sb_info *sbi = minix_sb(dir->i_sb); unsigned from = (char *)de-(char*)page_address(page); unsigned to = from + sbi->s_dirsize; int err; @@ -354,7 +354,7 @@ struct minix_dir_entry * minix_dotdot (struct inode *dir, struct page **p) { struct page *page = dir_get_page(dir, 0); - struct minix_sb_info *sbi = &dir->i_sb->u.minix_sb; + struct minix_sb_info *sbi = minix_sb(dir->i_sb); struct minix_dir_entry *de = NULL; if (!IS_ERR(page)) { diff -Nru a/fs/minix/inode.c b/fs/minix/inode.c --- a/fs/minix/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/minix/inode.c Tue Mar 12 13:58:15 2002 @@ -36,7 +36,7 @@ static void minix_commit_super(struct super_block * sb) { - mark_buffer_dirty(sb->u.minix_sb.s_sbh); + mark_buffer_dirty(minix_sb(sb)->s_sbh); sb->s_dirt = 0; } @@ -45,7 +45,7 @@ struct minix_super_block * ms; if (!(sb->s_flags & MS_RDONLY)) { - ms = sb->u.minix_sb.s_ms; + ms = minix_sb(sb)->s_ms; if (ms->s_state & MINIX_VALID_FS) ms->s_state &= ~MINIX_VALID_FS; @@ -58,17 +58,20 @@ static void minix_put_super(struct super_block *sb) { int i; + struct minix_sb_info *sbi = minix_sb(sb); if (!(sb->s_flags & MS_RDONLY)) { - sb->u.minix_sb.s_ms->s_state = sb->u.minix_sb.s_mount_state; - mark_buffer_dirty(sb->u.minix_sb.s_sbh); + sbi->s_ms->s_state = sbi->s_mount_state; + mark_buffer_dirty(sbi->s_sbh); } - for (i = 0; i < sb->u.minix_sb.s_imap_blocks; i++) - brelse(sb->u.minix_sb.s_imap[i]); - for (i = 0; i < sb->u.minix_sb.s_zmap_blocks; i++) - brelse(sb->u.minix_sb.s_zmap[i]); - brelse (sb->u.minix_sb.s_sbh); - kfree(sb->u.minix_sb.s_imap); + for (i = 0; i < sbi->s_imap_blocks; i++) + brelse(sbi->s_imap[i]); + for (i = 0; i < sbi->s_zmap_blocks; i++) + brelse(sbi->s_zmap[i]); + brelse (sbi->s_sbh); + kfree(sbi->s_imap); + sb->u.generic_sbp = NULL; + kfree(sbi); return; } @@ -129,32 +132,33 @@ static int minix_remount (struct super_block * sb, int * flags, char * data) { + struct minix_sb_info * sbi = minix_sb(sb); struct minix_super_block * ms; - ms = sb->u.minix_sb.s_ms; + ms = sbi->s_ms; if ((*flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY)) return 0; if (*flags & MS_RDONLY) { if (ms->s_state & MINIX_VALID_FS || - !(sb->u.minix_sb.s_mount_state & MINIX_VALID_FS)) + !(sbi->s_mount_state & MINIX_VALID_FS)) 
return 0; /* Mounting a rw partition read-only. */ - ms->s_state = sb->u.minix_sb.s_mount_state; - mark_buffer_dirty(sb->u.minix_sb.s_sbh); + ms->s_state = sbi->s_mount_state; + mark_buffer_dirty(sbi->s_sbh); sb->s_dirt = 1; minix_commit_super(sb); } else { /* Mount a partition which is read-only, read-write. */ - sb->u.minix_sb.s_mount_state = ms->s_state; + sbi->s_mount_state = ms->s_state; ms->s_state &= ~MINIX_VALID_FS; - mark_buffer_dirty(sb->u.minix_sb.s_sbh); + mark_buffer_dirty(sbi->s_sbh); sb->s_dirt = 1; - if (!(sb->u.minix_sb.s_mount_state & MINIX_VALID_FS)) + if (!(sbi->s_mount_state & MINIX_VALID_FS)) printk ("MINIX-fs warning: remounting unchecked fs, " "running fsck is recommended.\n"); - else if ((sb->u.minix_sb.s_mount_state & MINIX_ERROR_FS)) + else if ((sbi->s_mount_state & MINIX_ERROR_FS)) printk ("MINIX-fs warning: remounting fs with errors, " "running fsck is recommended.\n"); } @@ -168,7 +172,12 @@ struct minix_super_block *ms; int i, block; struct inode *root_inode; - struct minix_sb_info *sbi = &s->u.minix_sb; + struct minix_sb_info *sbi; + + sbi = kmalloc(sizeof(struct minix_sb_info), GFP_KERNEL); + if (!sbi) + return -ENOMEM; + s->u.generic_sbp = sbi; /* N.B. These should be compile-time tests. Unfortunately that is impossible. */ @@ -311,19 +320,22 @@ out_bad_sb: printk("MINIX-fs: unable to read superblock\n"); out: + s->u.generic_sbp = NULL; + kfree(sbi); return -EINVAL; } static int minix_statfs(struct super_block *sb, struct statfs *buf) { + struct minix_sb_info *sbi = minix_sb(sb); buf->f_type = sb->s_magic; buf->f_bsize = sb->s_blocksize; - buf->f_blocks = (sb->u.minix_sb.s_nzones - sb->u.minix_sb.s_firstdatazone) << sb->u.minix_sb.s_log_zone_size; + buf->f_blocks = (sbi->s_nzones - sbi->s_firstdatazone) << sbi->s_log_zone_size; buf->f_bfree = minix_count_free_blocks(sb); buf->f_bavail = buf->f_bfree; - buf->f_files = sb->u.minix_sb.s_ninodes; + buf->f_files = sbi->s_ninodes; buf->f_ffree = minix_count_free_inodes(sb); - buf->f_namelen = sb->u.minix_sb.s_namelen; + buf->f_namelen = sbi->s_namelen; return 0; } @@ -567,6 +579,7 @@ owner: THIS_MODULE, name: "minix", get_sb: minix_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/minix/itree_v1.c b/fs/minix/itree_v1.c --- a/fs/minix/itree_v1.c Tue Mar 12 13:58:15 2002 +++ b/fs/minix/itree_v1.c Tue Mar 12 13:58:15 2002 @@ -28,7 +28,7 @@ if (block < 0) { printk("minix_bmap: block<0"); - } else if (block >= (inode->i_sb->u.minix_sb.s_max_size/BLOCK_SIZE)) { + } else if (block >= (minix_sb(inode->i_sb)->s_max_size/BLOCK_SIZE)) { printk("minix_bmap: block>big"); } else if (block < 7) { offsets[n++] = block; diff -Nru a/fs/minix/itree_v2.c b/fs/minix/itree_v2.c --- a/fs/minix/itree_v2.c Tue Mar 12 13:58:15 2002 +++ b/fs/minix/itree_v2.c Tue Mar 12 13:58:15 2002 @@ -28,7 +28,7 @@ if (block < 0) { printk("minix_bmap: block<0"); - } else if (block >= (inode->i_sb->u.minix_sb.s_max_size/BLOCK_SIZE)) { + } else if (block >= (minix_sb(inode->i_sb)->s_max_size/BLOCK_SIZE)) { printk("minix_bmap: block>big"); } else if (block < 7) { offsets[n++] = block; diff -Nru a/fs/minix/namei.c b/fs/minix/namei.c --- a/fs/minix/namei.c Tue Mar 12 13:58:15 2002 +++ b/fs/minix/namei.c Tue Mar 12 13:58:15 2002 @@ -38,7 +38,7 @@ int i; const unsigned char *name; - i = dentry->d_inode->i_sb->u.minix_sb.s_namelen; + i = minix_sb(dentry->d_inode->i_sb)->s_namelen; if (i >= qstr->len) return 0; /* Truncate the name in place, avoids having to define a compare @@ -63,7 +63,7 @@ dentry->d_op = 
dir->i_sb->s_root->d_op; - if (dentry->d_name.len > dir->i_sb->u.minix_sb.s_namelen) + if (dentry->d_name.len > minix_sb(dir->i_sb)->s_namelen) return ERR_PTR(-ENAMETOOLONG); ino = minix_inode_by_name(dentry); @@ -131,7 +131,7 @@ { struct inode *inode = old_dentry->d_inode; - if (inode->i_nlink >= inode->i_sb->u.minix_sb.s_link_max) + if (inode->i_nlink >= minix_sb(inode->i_sb)->s_link_max) return -EMLINK; inode->i_ctime = CURRENT_TIME; @@ -145,7 +145,7 @@ struct inode * inode; int err = -EMLINK; - if (dir->i_nlink >= dir->i_sb->u.minix_sb.s_link_max) + if (dir->i_nlink >= minix_sb(dir->i_sb)->s_link_max) goto out; inc_count(dir); @@ -221,7 +221,7 @@ static int minix_rename(struct inode * old_dir, struct dentry *old_dentry, struct inode * new_dir, struct dentry *new_dentry) { - struct minix_sb_info * info = &old_dir->i_sb->u.minix_sb; + struct minix_sb_info * info = minix_sb(old_dir->i_sb); struct inode * old_inode = old_dentry->d_inode; struct inode * new_inode = new_dentry->d_inode; struct page * dir_page = NULL; diff -Nru a/fs/msdos/msdosfs_syms.c b/fs/msdos/msdosfs_syms.c --- a/fs/msdos/msdosfs_syms.c Tue Mar 12 13:58:14 2002 +++ b/fs/msdos/msdosfs_syms.c Tue Mar 12 13:58:14 2002 @@ -35,6 +35,7 @@ owner: THIS_MODULE, name: "msdos", get_sb: msdos_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/namei.c b/fs/namei.c --- a/fs/namei.c Tue Mar 12 13:58:15 2002 +++ b/fs/namei.c Tue Mar 12 13:58:15 2002 @@ -686,7 +686,7 @@ if (!emul) goto set_it; err = path_lookup(emul, LOOKUP_FOLLOW|LOOKUP_DIRECTORY|LOOKUP_NOALT, &nd); - if (err) { + if (!err) { mnt = nd.mnt; dentry = nd.dentry; } diff -Nru a/fs/namespace.c b/fs/namespace.c --- a/fs/namespace.c Tue Mar 12 13:58:14 2002 +++ b/fs/namespace.c Tue Mar 12 13:58:14 2002 @@ -23,7 +23,6 @@ struct vfsmount *do_kern_mount(const char *type, int flags, char *name, void *data); int do_remount_sb(struct super_block *sb, int flags, void * data); -void kill_super(struct super_block *sb); int __init init_rootfs(void); static struct list_head *mount_hashtable; @@ -152,7 +151,7 @@ struct super_block *sb = mnt->mnt_sb; dput(mnt->mnt_root); free_vfsmnt(mnt); - kill_super(sb); + deactivate_super(sb); } /* iterator */ diff -Nru a/fs/ncpfs/inode.c b/fs/ncpfs/inode.c --- a/fs/ncpfs/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/ncpfs/inode.c Tue Mar 12 13:58:15 2002 @@ -744,6 +744,7 @@ owner: THIS_MODULE, name: "ncpfs", get_sb: ncp_get_sb, + kill_sb: kill_anon_super, }; static int __init init_ncp_fs(void) diff -Nru a/fs/nfs/inode.c b/fs/nfs/inode.c --- a/fs/nfs/inode.c Tue Mar 12 13:58:16 2002 +++ b/fs/nfs/inode.c Tue Mar 12 13:58:16 2002 @@ -1155,6 +1155,7 @@ owner: THIS_MODULE, name: "nfs", get_sb: nfs_get_sb, + kill_sb: kill_anon_super, fs_flags: FS_ODD_RENAME, }; diff -Nru a/fs/nfsd/export.c b/fs/nfsd/export.c --- a/fs/nfsd/export.c Tue Mar 12 13:58:15 2002 +++ b/fs/nfsd/export.c Tue Mar 12 13:58:15 2002 @@ -52,6 +52,7 @@ ((((a)>>24) ^ ((a)>>16) ^ ((a)>>8) ^(a)) & CLIENT_HASHMASK) /* XXX: is this adequate for 32bit kdev_t ? 
*/ #define EXPORT_HASH(dev) (minor(dev) & (NFSCLNT_EXPMAX - 1)) +#define EXPORT_FSID_HASH(fsid) ((fsid) & (NFSCLNT_EXPMAX - 1)) struct svc_clnthash { struct svc_clnthash * h_next; @@ -82,6 +83,27 @@ return NULL; } +/* + * Find the client's export entry matching fsid + */ +svc_export * +exp_get_fsid(svc_client *clp, int fsid) +{ + struct list_head *head, *p; + + + if (!clp) + return NULL; + + head = &clp->cl_expfsid[EXPORT_FSID_HASH(fsid)]; + list_for_each(p, head) { + svc_export *exp = list_entry(p, svc_export, ex_fsid_hash); + if (exp->ex_fsid == fsid) + return exp; + } + return NULL; +} + svc_export * exp_get_by_name(svc_client *clp, struct vfsmount *mnt, struct dentry *dentry) { @@ -192,6 +214,25 @@ up_write(&hash_sem); } +static void exp_fsid_unhash(struct svc_export *exp) +{ + + if ((exp->ex_flags & NFSEXP_FSID) == 0) + return; + + list_del_init(&exp->ex_fsid_hash); +} + +static void exp_fsid_hash(struct svc_client *clp, struct svc_export *exp) +{ + struct list_head *head; + + if ((exp->ex_flags & NFSEXP_FSID) == 0) + return; + head = clp->cl_expfsid + EXPORT_FSID_HASH(exp->ex_fsid); + list_add(&exp->ex_fsid_hash, head); +} + /* * Export a file system. */ @@ -199,7 +240,8 @@ exp_export(struct nfsctl_export *nxp) { svc_client *clp; - svc_export *exp, *parent; + svc_export *exp = NULL, *parent; + svc_export *fsid_exp; struct nameidata nd; struct inode *inode = NULL; int err; @@ -215,8 +257,6 @@ dprintk("exp_export called for %s:%s (%x/%ld fl %x).\n", nxp->ex_client, nxp->ex_path, nxp->ex_dev, (long) nxp->ex_ino, nxp->ex_flags); - dev = to_kdev_t(nxp->ex_dev); - ino = nxp->ex_ino; /* Try to lock the export table for update */ exp_writelock(); @@ -225,31 +265,35 @@ if (!(clp = exp_getclientbyname(nxp->ex_client))) goto out_unlock; - /* - * If there's already an export for this file, assume this - * is just a flag update. - */ - if ((exp = exp_get(clp, dev, ino)) != NULL) { - exp->ex_flags = nxp->ex_flags; - exp->ex_anon_uid = nxp->ex_anon_uid; - exp->ex_anon_gid = nxp->ex_anon_gid; - err = 0; - goto out_unlock; - } /* Look up the dentry */ err = path_lookup(nxp->ex_path, 0, &nd); if (err) goto out_unlock; - inode = nd.dentry->d_inode; + dev = inode->i_dev; + ino = inode->i_ino; err = -EINVAL; - if (!kdev_same(inode->i_dev, dev) || inode->i_ino != nxp->ex_ino) { - printk(KERN_DEBUG "exp_export: i_dev = %02x:%02x, dev = %02x:%02x\n", - major(inode->i_dev), minor(inode->i_dev), - major(dev), minor(dev)); - /* I'm just being paranoid... */ - goto finish; + + exp = exp_get(clp, dev, ino); + + /* must make sure there wont be an ex_fsid clash */ + if ((nxp->ex_flags & NFSEXP_FSID) && + (fsid_exp = exp_get_fsid(clp, nxp->ex_dev)) && + fsid_exp != exp) + goto out_unlock; + + if (exp != NULL) { + /* just a flags/id/fsid update */ + + exp_fsid_unhash(exp); + exp->ex_flags = nxp->ex_flags; + exp->ex_anon_uid = nxp->ex_anon_uid; + exp->ex_anon_gid = nxp->ex_anon_gid; + exp->ex_fsid = nxp->ex_dev; + exp_fsid_hash(clp, exp); + err = 0; + goto out_unlock; } /* We currently export only dirs and regular files. 
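
The new fsid index above is a conventional chained hash: EXPORT_FSID_HASH() masks the fsid down to a bucket, exp_fsid_hash()/exp_fsid_unhash() maintain the per-client chains (only for exports carrying NFSEXP_FSID), and exp_get_fsid() walks a single bucket looking for an exact match. A compact stand-alone version of the same structure, with an invented bucket count and singly linked chains in place of list_head:

    #include <stddef.h>
    #include <stdint.h>

    #define TOY_EXPMAX 16                        /* invented; power of two */
    #define TOY_FSID_HASH(fsid) ((fsid) & (TOY_EXPMAX - 1))

    struct toy_export {
        uint32_t           fsid;
        struct toy_export *next;                 /* chain within one bucket */
    };

    struct toy_client {
        struct toy_export *expfsid[TOY_EXPMAX];  /* one chain head per bucket */
    };

    /* Insert at the head of the bucket chosen by the fsid hash. */
    static void toy_fsid_hash(struct toy_client *clp, struct toy_export *exp)
    {
        struct toy_export **head = &clp->expfsid[TOY_FSID_HASH(exp->fsid)];
        exp->next = *head;
        *head = exp;
    }

    /* Walk the one bucket that could hold this fsid; NULL if not exported. */
    static struct toy_export *toy_get_fsid(struct toy_client *clp, uint32_t fsid)
    {
        struct toy_export *exp;

        for (exp = clp->expfsid[TOY_FSID_HASH(fsid)]; exp; exp = exp->next)
            if (exp->fsid == fsid)
                return exp;
        return NULL;
    }

The clash check in exp_export() then amounts to: if a different export already answers the fsid lookup for this client, refuse the new one.
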
@@ -292,6 +336,8 @@ exp->ex_ino = ino; exp->ex_anon_uid = nxp->ex_anon_uid; exp->ex_anon_gid = nxp->ex_anon_gid; + exp->ex_fsid = nxp->ex_dev; + /* Update parent pointers of all exports */ if (parent) @@ -300,6 +346,8 @@ list_add(&exp->ex_hash, clp->cl_export + EXPORT_HASH(dev)); list_add_tail(&exp->ex_list, &clp->cl_list); + exp_fsid_hash(clp, exp); + err = 0; /* Unlock hashtable */ @@ -325,6 +373,9 @@ struct vfsmount *mnt; struct inode *inode; + list_del(&unexp->ex_list); + list_del(&unexp->ex_hash); + exp_fsid_unhash(unexp); /* Update parent pointers. */ exp_change_parents(unexp->ex_client, unexp, unexp->ex_parent); dentry = unexp->ex_dentry; @@ -340,8 +391,6 @@ /* * Revoke all exports for a given client. - * This may look very awkward, but we have to do it this way in order - * to avoid race conditions (aka mind the parent pointer). */ static void exp_unexport_all(svc_client *clp) @@ -352,8 +401,6 @@ while (!list_empty(p)) { svc_export *exp = list_entry(p->next, svc_export, ex_list); - list_del(&exp->ex_list); - list_del(&exp->ex_hash); exp_do_unexport(exp); } } @@ -379,8 +426,6 @@ kdev_t ex_dev = to_kdev_t(nxp->ex_dev); svc_export *exp = exp_get(clp, ex_dev, nxp->ex_ino); if (exp) { - list_del(&exp->ex_hash); - list_del(&exp->ex_list); exp_do_unexport(exp); err = 0; } @@ -455,13 +500,19 @@ for (hp = head; (tmp = *hp) != NULL; hp = &(tmp->h_next)) { if (tmp->h_addr.s_addr == addr) { +#if 0 +/* If we really want to do this, we need a spin +lock to protect against multiple access (as we only +have an exp_readlock) and need to protect +the code in e_show() that walks this list too. +*/ /* Move client to the front */ if (head != hp) { *hp = tmp->h_next; tmp->h_next = *head; *head = tmp; } - +#endif return tmp->h_client; } } @@ -568,7 +619,7 @@ { 0, {"", ""}} }; -static void exp_flags(struct seq_file *m, int flag) +static void exp_flags(struct seq_file *m, int flag, int fsid) { int first = 0; struct flags *flg; @@ -578,6 +629,8 @@ if (*flg->name[state]) seq_printf(m, "%s%s", first++?",":"", flg->name[state]); } + if (flag & NFSEXP_FSID) + seq_printf(m, "%sfsid=%d", first++?",":"", fsid); } static inline void mangle(struct seq_file *m, const char *s) @@ -603,7 +656,7 @@ seq_putc(m, '\t'); mangle(m, clp->cl_ident); seq_putc(m, '('); - exp_flags(m, exp->ex_flags); + exp_flags(m, exp->ex_flags, exp->ex_fsid); seq_puts(m, ") # "); for (j = 0; j < clp->cl_naddr; j++) { struct svc_clnthash **hp, **head, *tmp; @@ -673,8 +726,10 @@ if (!(clp = kmalloc(sizeof(*clp), GFP_KERNEL))) goto out_unlock; memset(clp, 0, sizeof(*clp)); - for (i = 0; i < NFSCLNT_EXPMAX; i++) + for (i = 0; i < NFSCLNT_EXPMAX; i++) { INIT_LIST_HEAD(&clp->cl_export[i]); + INIT_LIST_HEAD(&clp->cl_expfsid[i]); + } INIT_LIST_HEAD(&clp->cl_list); dprintk("created client %s (%p)\n", ncp->cl_ident, clp); diff -Nru a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c --- a/fs/nfsd/nfsfh.c Tue Mar 12 13:58:15 2002 +++ b/fs/nfsd/nfsfh.c Tue Mar 12 13:58:15 2002 @@ -547,11 +547,13 @@ dprintk("nfsd: fh_verify(%s)\n", SVCFH_fmt(fhp)); if (!fhp->fh_dentry) { - kdev_t xdev; - ino_t xino; + kdev_t xdev = NODEV; + ino_t xino = 0; __u32 *datap=NULL; int data_left = fh->fh_size/4; int nfsdev; + int fsid = 0; + error = nfserr_stale; if (rqstp->rq_vers == 3) error = nfserr_badhandle; @@ -571,6 +573,10 @@ xdev = mk_kdev(nfsdev>>16, nfsdev&0xFFFF); xino = *datap++; break; + case 1: + if ((data_left-=1)<0) goto out; + fsid = *datap++; + break; default: goto out; } @@ -586,7 +592,10 @@ * Look up the export entry. 
*/ error = nfserr_stale; - exp = exp_get(rqstp->rq_client, xdev, xino); + if (fh->fh_version == 1 && fh->fh_fsid_type == 1) + exp = exp_get_fsid(rqstp->rq_client, fsid); + else + exp = exp_get(rqstp->rq_client, xdev, xino); if (!exp) { /* export entry revoked */ @@ -838,12 +847,20 @@ } else { fhp->fh_handle.fh_version = 1; fhp->fh_handle.fh_auth_type = 0; - fhp->fh_handle.fh_fsid_type = 0; datap = fhp->fh_handle.fh_auth+0; - /* fsid_type 0 == 2byte major, 2byte minor, 4byte inode */ - *datap++ = htonl((major(exp->ex_dev)<<16)| minor(exp->ex_dev)); - *datap++ = ino_t_to_u32(exp->ex_ino); - fhp->fh_handle.fh_size = 3*4; + if ((exp->ex_flags & NFSEXP_FSID) && + (!ref_fh || ref_fh->fh_handle.fh_fsid_type == 1)) { + fhp->fh_handle.fh_fsid_type = 1; + /* fsid_type 1 == 4 bytes filesystem id */ + *datap++ = exp->ex_fsid; + fhp->fh_handle.fh_size = 2*4; + } else { + fhp->fh_handle.fh_fsid_type = 0; + /* fsid_type 0 == 2byte major, 2byte minor, 4byte inode */ + *datap++ = htonl((major(exp->ex_dev)<<16)| minor(exp->ex_dev)); + *datap++ = ino_t_to_u32(exp->ex_ino); + fhp->fh_handle.fh_size = 3*4; + } if (inode) { int size = fhp->fh_maxsize/4 - 3; fhp->fh_handle.fh_fileid_type = diff -Nru a/fs/ntfs/fs.c b/fs/ntfs/fs.c --- a/fs/ntfs/fs.c Tue Mar 12 13:58:15 2002 +++ b/fs/ntfs/fs.c Tue Mar 12 13:58:15 2002 @@ -1168,6 +1168,7 @@ owner: THIS_MODULE, name: "ntfs", get_sb: ntfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/openpromfs/inode.c b/fs/openpromfs/inode.c --- a/fs/openpromfs/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/openpromfs/inode.c Tue Mar 12 13:58:15 2002 @@ -1043,6 +1043,7 @@ owner: THIS_MODULE, name: "openpromfs", get_sb: openprom_get_sb, + kill_sb: kill_anon_super, }; static int __init init_openprom_fs(void) diff -Nru a/fs/pipe.c b/fs/pipe.c --- a/fs/pipe.c Tue Mar 12 13:58:14 2002 +++ b/fs/pipe.c Tue Mar 12 13:58:14 2002 @@ -635,6 +635,7 @@ static struct file_system_type pipe_fs_type = { name: "pipefs", get_sb: pipefs_get_sb, + kill_sb: kill_anon_super, fs_flags: FS_NOMOUNT, }; diff -Nru a/fs/proc/root.c b/fs/proc/root.c --- a/fs/proc/root.c Tue Mar 12 13:58:15 2002 +++ b/fs/proc/root.c Tue Mar 12 13:58:15 2002 @@ -33,6 +33,7 @@ static struct file_system_type proc_fs_type = { name: "proc", get_sb: proc_get_sb, + kill_sb: kill_anon_super, }; extern int __init proc_init_inodecache(void); diff -Nru a/fs/qnx4/inode.c b/fs/qnx4/inode.c --- a/fs/qnx4/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/qnx4/inode.c Tue Mar 12 13:58:15 2002 @@ -549,6 +549,7 @@ owner: THIS_MODULE, name: "qnx4", get_sb: qnx4_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/ramfs/inode.c b/fs/ramfs/inode.c --- a/fs/ramfs/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/ramfs/inode.c Tue Mar 12 13:58:15 2002 @@ -338,12 +338,13 @@ static struct file_system_type ramfs_fs_type = { name: "ramfs", get_sb: ramfs_get_sb, - fs_flags: FS_LITTER, + kill_sb: kill_litter_super, }; static struct file_system_type rootfs_fs_type = { name: "rootfs", get_sb: ramfs_get_sb, - fs_flags: FS_NOMOUNT|FS_LITTER, + kill_sb: kill_litter_super, + fs_flags: FS_NOMOUNT, }; static int __init init_ramfs_fs(void) diff -Nru a/fs/reiserfs/super.c b/fs/reiserfs/super.c --- a/fs/reiserfs/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/reiserfs/super.c Tue Mar 12 13:58:15 2002 @@ -1193,6 +1193,7 @@ owner: THIS_MODULE, name: "reiserfs", get_sb: reiserfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/romfs/inode.c b/fs/romfs/inode.c --- a/fs/romfs/inode.c Tue Mar 12 13:58:15 
2002 +++ b/fs/romfs/inode.c Tue Mar 12 13:58:15 2002 @@ -540,6 +540,7 @@ owner: THIS_MODULE, name: "romfs", get_sb: romfs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/smbfs/inode.c b/fs/smbfs/inode.c --- a/fs/smbfs/inode.c Tue Mar 12 13:58:15 2002 +++ b/fs/smbfs/inode.c Tue Mar 12 13:58:15 2002 @@ -719,6 +719,7 @@ owner: THIS_MODULE, name: "smbfs", get_sb: smb_get_sb, + kill_sb: kill_anon_super, }; static int __init init_smb_fs(void) diff -Nru a/fs/super.c b/fs/super.c --- a/fs/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/super.c Tue Mar 12 13:58:15 2002 @@ -297,22 +297,6 @@ /* Superblock refcounting */ /** - * deactivate_super - turn an active reference into temporary - * @s: superblock to deactivate - * - * Turns an active reference into temporary one. Returns 0 if there are - * other active references, 1 if we had deactivated the last one. - */ -static inline int deactivate_super(struct super_block *s) -{ - if (!atomic_dec_and_lock(&s->s_active, &sb_lock)) - return 0; - s->s_count -= S_BIAS-1; - spin_unlock(&sb_lock); - return 1; -} - -/** * put_super - drop a temporary reference to superblock * @s: superblock in question * @@ -328,6 +312,28 @@ } /** + * deactivate_super - drop an active reference to superblock + * @s: superblock to deactivate + * + * Drops an active reference to superblock, acquiring a temporary one if + * there are no active references left. In that case we lock superblock, + * tell fs driver to shut it down and drop the temporary reference we + * had just acquired. + */ +void deactivate_super(struct super_block *s) +{ + struct file_system_type *fs = s->s_type; + if (atomic_dec_and_lock(&s->s_active, &sb_lock)) { + s->s_count -= S_BIAS-1; + spin_unlock(&sb_lock); + down_write(&s->s_umount); + fs->kill_sb(s); + put_filesystem(fs); + put_super(s); + } +} + +/** * grab_super - acquire an active reference * @s - reference we are trying to make active * @@ -376,19 +382,16 @@ get_filesystem(type); } -static void put_anon_dev(kdev_t dev); - /** * remove_super - makes superblock unreachable * @s: superblock in question * - * Removes superblock from the lists, unlocks it and drop the reference - * @s should have no active references by that time and after - * remove_super() it's essentially in rundown mode - all remaining - * references are temporary, no new reference of any sort are going - * to appear and all holders of temporary ones will eventually drop them. - * At that point superblock itself will be destroyed; all its contents - * is already gone. + * Removes superblock from the lists, and unlocks it. @s should have + * no active references by that time and after remove_super() it's + * essentially in rundown mode - all remaining references are temporary, + * no new references of any sort are going to appear and all holders + * of temporary ones will eventually drop them. At that point superblock + * itself will be destroyed; all its contents are already gone.
*/ static void remove_super(struct super_block *s) { @@ -397,7 +400,6 @@ list_del(&s->s_instances); spin_unlock(&sb_lock); up_write(&s->s_umount); - put_super(s); } static void generic_shutdown_super(struct super_block *sb) @@ -434,33 +436,37 @@ remove_super(sb); } -static void shutdown_super(struct super_block *sb) -{ - struct file_system_type *fs = sb->s_type; - kdev_t dev = sb->s_dev; - struct block_device *bdev = sb->s_bdev; - - /* Need to clean after the sucker */ - if (fs->fs_flags & FS_LITTER && sb->s_root) - d_genocide(sb->s_root); - generic_shutdown_super(sb); - if (bdev) { - bd_release(bdev); - blkdev_put(bdev, BDEV_FS); - } else - put_anon_dev(dev); -} - -void kill_super(struct super_block *sb) +struct super_block *sget(struct file_system_type *type, + int (*test)(struct super_block *,void *), + int (*set)(struct super_block *,void *), + void *data) { - struct file_system_type *fs = sb->s_type; + struct super_block *s = alloc_super(); + struct list_head *p; + int err; - if (!deactivate_super(sb)) - return; + if (!s) + return ERR_PTR(-ENOMEM); - down_write(&sb->s_umount); - shutdown_super(sb); - put_filesystem(fs); +retry: + spin_lock(&sb_lock); + if (test) list_for_each(p, &type->fs_supers) { + struct super_block *old; + old = list_entry(p, struct super_block, s_instances); + if (!test(old, data)) + continue; + if (!grab_super(old)) + goto retry; + destroy_super(s); + return old; + } + err = set(s, data); + if (err) { + destroy_super(s); + return ERR_PTR(err); + } + insert_super(s, type); + return s; } struct vfsmount *alloc_vfsmnt(char *name); @@ -615,72 +621,53 @@ static unsigned long unnamed_dev_in_use[Max_anon/(8*sizeof(unsigned long))]; static spinlock_t unnamed_dev_lock = SPIN_LOCK_UNLOCKED;/* protects the above */ -/** - * put_anon_dev - release anonymous device number. - * @dev: device in question - */ -static void put_anon_dev(kdev_t dev) +int set_anon_super(struct super_block *s, void *data) { + int dev; spin_lock(&unnamed_dev_lock); - clear_bit(minor(dev), unnamed_dev_in_use); + dev = find_first_zero_bit(unnamed_dev_in_use, Max_anon); + if (dev == Max_anon) { + spin_unlock(&unnamed_dev_lock); + return -EMFILE; + } + set_bit(dev, unnamed_dev_in_use); spin_unlock(&unnamed_dev_lock); + s->s_dev = mk_kdev(0, dev); + return 0; } -/** - * get_anon_super - allocate a superblock for non-device fs - * @type: filesystem type - * @compare: check if existing superblock is what we want - * @data: argument for @compare. - * - * get_anon_super is a helper for non-blockdevice filesystems. - * It either finds and returns one of the superblocks of given type - * (if it can find one that would satisfy caller) or creates a new - * one. In the either case we return an active reference to superblock - * with ->s_umount locked. If superblock is new it gets a new - * anonymous device allocated for it and is inserted into lists - - * other initialization is left to caller. - * - * Rather than duplicating all that logics every time when - * we want something that doesn't fit "nodev" and "single" we pull - * the relevant code into common helper and let ->get_sb() call - * it. 
- */ struct super_block *get_anon_super(struct file_system_type *type, int (*compare)(struct super_block *,void *), void *data) { - struct super_block *s = alloc_super(); - int dev; - struct list_head *p; - - if (!s) - return ERR_PTR(-ENOMEM); - + return sget(type, compare, set_anon_super, data); +} + +void kill_anon_super(struct super_block *sb) +{ + int slot = minor(sb->s_dev); + generic_shutdown_super(sb); spin_lock(&unnamed_dev_lock); - dev = find_first_zero_bit(unnamed_dev_in_use, Max_anon); - if (dev == Max_anon) { - spin_unlock(&unnamed_dev_lock); - destroy_super(s); - return ERR_PTR(-EMFILE); - } - set_bit(dev, unnamed_dev_in_use); + clear_bit(slot, unnamed_dev_in_use); spin_unlock(&unnamed_dev_lock); +} -retry: - spin_lock(&sb_lock); - if (compare) list_for_each(p, &type->fs_supers) { - struct super_block *old; - old = list_entry(p, struct super_block, s_instances); - if (!compare(old, data)) - continue; - if (!grab_super(old)) - goto retry; - destroy_super(s); - return old; - } +void kill_litter_super(struct super_block *sb) +{ + if (sb->s_root) + d_genocide(sb->s_root); + kill_anon_super(sb); +} - s->s_dev = mk_kdev(0, dev); - insert_super(s, type); - return s; +static int set_bdev_super(struct super_block *s, void *data) +{ + s->s_bdev = data; + s->s_dev = to_kdev_t(s->s_bdev->bd_dev); + return 0; +} + +static int test_bdev_super(struct super_block *s, void *data) +{ + return (void *)s->s_bdev == data; } struct super_block *get_sb_bdev(struct file_system_type *fs_type, @@ -693,7 +680,6 @@ devfs_handle_t de; struct super_block * s; struct nameidata nd; - struct list_head *p; kdev_t dev; int error = 0; mode_t mode = FMODE_READ; /* we always need it ;-) */ @@ -711,7 +697,9 @@ error = -EACCES; if (nd.mnt->mnt_flags & MNT_NODEV) goto out; - bd_acquire(inode); + error = bd_acquire(inode); + if (error) + goto out; bdev = inode->i_bdev; de = devfs_get_handle_from_inode (inode); bdops = devfs_get_ops (de); /* Increments module use count */ @@ -732,50 +720,29 @@ if (error) goto out1; - error = -ENOMEM; - s = alloc_super(); - if (!s) - goto out2; - - error = -EBUSY; -restart: - spin_lock(&sb_lock); - - list_for_each(p, &fs_type->fs_supers) { - struct super_block *old = sb_entry(p); - if (old->s_bdev != bdev) - continue; - if (!grab_super(old)) - goto restart; - destroy_super(s); - if ((flags ^ old->s_flags) & MS_RDONLY) { - up_write(&old->s_umount); - kill_super(old); - old = ERR_PTR(-EBUSY); + s = sget(fs_type, test_bdev_super, set_bdev_super, bdev); + if (s->s_root) { + if ((flags ^ s->s_flags) & MS_RDONLY) { + up_write(&s->s_umount); + deactivate_super(s); + s = ERR_PTR(-EBUSY); } bd_release(bdev); blkdev_put(bdev, BDEV_FS); - path_release(&nd); - return old; + } else { + s->s_flags = flags; + strncpy(s->s_id, bdevname(dev), sizeof(s->s_id)); + error = fill_super(s, data, flags & MS_VERBOSE ? 1 : 0); + if (error) { + up_write(&s->s_umount); + deactivate_super(s); + s = ERR_PTR(error); + } else + s->s_flags |= MS_ACTIVE; } - s->s_bdev = bdev; - s->s_dev = dev; - insert_super(s, fs_type); - s->s_flags = flags; - strncpy(s->s_id, bdevname(dev), sizeof(s->s_id)); - error = fill_super(s, data, flags & MS_VERBOSE ? 
1 : 0); - if (error) - goto failed; - s->s_flags |= MS_ACTIVE; path_release(&nd); return s; -failed: - up_write(&s->s_umount); - kill_super(s); - goto out; -out2: - bd_release(bdev); out1: blkdev_put(bdev, BDEV_FS); out: @@ -783,6 +750,14 @@ return ERR_PTR(error); } +void kill_block_super(struct super_block *sb) +{ + struct block_device *bdev = sb->s_bdev; + generic_shutdown_super(sb); + bd_release(bdev); + blkdev_put(bdev, BDEV_FS); +} + struct super_block *get_sb_nodev(struct file_system_type *fs_type, int flags, void *data, int (*fill_super)(struct super_block *, void *, int)) @@ -798,7 +773,7 @@ error = fill_super(s, data, flags & MS_VERBOSE ? 1 : 0); if (error) { up_write(&s->s_umount); - kill_super(s); + deactivate_super(s); return ERR_PTR(error); } s->s_flags |= MS_ACTIVE; @@ -824,7 +799,7 @@ error = fill_super(s, data, flags & MS_VERBOSE ? 1 : 0); if (error) { up_write(&s->s_umount); - kill_super(s); + deactivate_super(s); return ERR_PTR(error); } s->s_flags |= MS_ACTIVE; diff -Nru a/fs/sysv/super.c b/fs/sysv/super.c --- a/fs/sysv/super.c Tue Mar 12 13:58:15 2002 +++ b/fs/sysv/super.c Tue Mar 12 13:58:15 2002 @@ -490,6 +490,7 @@ owner: THIS_MODULE, name: "sysv", get_sb: sysv_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; @@ -497,6 +498,7 @@ owner: THIS_MODULE, name: "v7", get_sb: v7_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/udf/super.c b/fs/udf/super.c --- a/fs/udf/super.c Tue Mar 12 13:58:14 2002 +++ b/fs/udf/super.c Tue Mar 12 13:58:14 2002 @@ -106,6 +106,7 @@ owner: THIS_MODULE, name: "udf", get_sb: udf_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/ufs/super.c b/fs/ufs/super.c --- a/fs/ufs/super.c Tue Mar 12 13:58:14 2002 +++ b/fs/ufs/super.c Tue Mar 12 13:58:14 2002 @@ -1018,6 +1018,7 @@ owner: THIS_MODULE, name: "ufs", get_sb: ufs_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/fs/vfat/vfatfs_syms.c b/fs/vfat/vfatfs_syms.c --- a/fs/vfat/vfatfs_syms.c Tue Mar 12 13:58:15 2002 +++ b/fs/vfat/vfatfs_syms.c Tue Mar 12 13:58:15 2002 @@ -21,6 +21,7 @@ owner: THIS_MODULE, name: "vfat", get_sb: vfat_get_sb, + kill_sb: kill_block_super, fs_flags: FS_REQUIRES_DEV, }; diff -Nru a/include/asm-arm/arch-ebsa110/system.h b/include/asm-arm/arch-ebsa110/system.h --- a/include/asm-arm/arch-ebsa110/system.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-arm/arch-ebsa110/system.h Tue Mar 12 13:58:14 2002 @@ -17,29 +17,34 @@ * will stop our MCLK signal (which provides the clock for the glue * logic, and therefore the timer interrupt). * - * Instead, we spin, waiting for either hlt_counter or need_resched() - * to be set. If we have been spinning for 2cs, then we drop the - * core clock down to the memory clock. + * Instead, we spin, polling the IRQ_STAT register for the occurrence + * of any interrupt with core clock down to the memory clock. */ static void arch_idle(void) { - unsigned long start_idle; + const char *irq_stat = (char *)0xff000000; + long flags; - start_idle = jiffies; + if (!hlt_counter) + return; do { - if (need_resched() || hlt_counter) - goto slow_out; - } while (time_before(jiffies, start_idle + HZ/50)); - - cpu_do_idle(IDLE_CLOCK_SLOW); - - while (!need_resched() && !hlt_counter) { - /* do nothing slowly */ - } - - cpu_do_idle(IDLE_CLOCK_FAST); -slow_out: + /* disable interrupts */ + cli(); + /* check need_resched here to avoid races */ + if (need_resched()) { + sti(); + return; + } + /* disable clock switching */ + asm volatile ("mcr%? 
p15, 0, ip, c15, c2, 2"); + /* wait for an interrupt to occur */ + while (!*irq_stat); + /* enable clock switching */ + asm volatile ("mcr%? p15, 0, ip, c15, c1, 2"); + /* allow the interrupt to happen */ + sti(); + } while (!need_resched()); } #define arch_reset(mode) cpu_reset(0x80000000) diff -Nru a/include/asm-arm/cpu-multi32.h b/include/asm-arm/cpu-multi32.h --- a/include/asm-arm/cpu-multi32.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-arm/cpu-multi32.h Tue Mar 12 13:58:14 2002 @@ -107,18 +107,12 @@ */ void (*set_pte)(pte_t *ptep, pte_t pte); } pgtable; - - struct { /* other */ - void (*clear_user_page)(void *page, unsigned long u_addr); - void (*copy_user_page)(void *to, void *from, unsigned long u_addr); - } misc; } processor; extern const struct processor arm6_processor_functions; extern const struct processor arm7_processor_functions; extern const struct processor sa110_processor_functions; -#define cpu_data_abort(pc) processor._data_abort(pc) #define cpu_check_bugs() processor._check_bugs() #define cpu_proc_init() processor._proc_init() #define cpu_proc_fin() processor._proc_fin() @@ -140,9 +134,6 @@ #define cpu_set_pgd(pgd) processor.pgtable.set_pgd(pgd) #define cpu_set_pmd(pmdp, pmd) processor.pgtable.set_pmd(pmdp, pmd) #define cpu_set_pte(ptep, pte) processor.pgtable.set_pte(ptep, pte) - -#define cpu_copy_user_page(to,from,uaddr) processor.misc.copy_user_page(to,from,uaddr) -#define cpu_clear_user_page(page,uaddr) processor.misc.clear_user_page(page,uaddr) #define cpu_switch_mm(pgd,tsk) cpu_set_pgd(__virt_to_phys((unsigned long)(pgd))) diff -Nru a/include/asm-arm/cpu-single.h b/include/asm-arm/cpu-single.h --- a/include/asm-arm/cpu-single.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-arm/cpu-single.h Tue Mar 12 13:58:14 2002 @@ -22,7 +22,6 @@ * function pointers for this lot. Otherwise, we can optimise the * table away. */ -#define cpu_data_abort __cpu_fn(CPU_ABRT,_abort) #define cpu_check_bugs __cpu_fn(CPU_NAME,_check_bugs) #define cpu_proc_init __cpu_fn(CPU_NAME,_proc_init) #define cpu_proc_fin __cpu_fn(CPU_NAME,_proc_fin) @@ -40,8 +39,6 @@ #define cpu_set_pgd __cpu_fn(CPU_NAME,_set_pgd) #define cpu_set_pmd __cpu_fn(CPU_NAME,_set_pmd) #define cpu_set_pte __cpu_fn(CPU_NAME,_set_pte) -#define cpu_copy_user_page __cpu_fn(MMU_ARCH,_copy_user_page) -#define cpu_clear_user_page __cpu_fn(MMU_ARCH,_clear_user_page) #ifndef __ASSEMBLY__ @@ -73,9 +70,6 @@ extern void cpu_set_pgd(unsigned long pgd_phys); extern void cpu_set_pmd(pmd_t *pmdp, pmd_t pmd); extern void cpu_set_pte(pte_t *ptep, pte_t pte); - -extern void cpu_copy_user_page(void *to, void *from, unsigned long u_addr); -extern void cpu_clear_user_page(void *page, unsigned long u_addr); extern volatile void cpu_reset(unsigned long addr); diff -Nru a/include/asm-arm/glue.h b/include/asm-arm/glue.h --- a/include/asm-arm/glue.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/glue.h Tue Mar 12 13:58:15 2002 @@ -24,14 +24,21 @@ #endif #define __glue(name,fn) ____glue(name,fn) -/* - * Select MMU TLB handling. 
- */ + /* - * ARMv3 MMU + * MMU TLB Model + * ============= + * + * We have the following to choose from: + * v3 - ARMv3 + * v4 - ARMv4 without write buffer + * v4wb - ARMv4 with write buffer without I TLB flush entry instruction + * v4wbi - ARMv4 with write buffer with I TLB flush entry instruction */ #undef _TLB +#undef MULTI_TLB + #if defined(CONFIG_CPU_ARM610) || defined(CONFIG_CPU_ARM710) # ifdef _TLB # define MULTI_TLB 1 @@ -40,9 +47,6 @@ # endif #endif -/* - * ARMv4 MMU without write buffer - */ #if defined(CONFIG_CPU_ARM720T) # ifdef _TLB # define MULTI_TLB 1 @@ -51,9 +55,6 @@ # endif #endif -/* - * ARMv4 MMU with write buffer, with invalidate I TLB entry instruction - */ #if defined(CONFIG_CPU_ARM920T) || defined(CONFIG_CPU_ARM922T) || \ defined(CONFIG_CPU_ARM926T) || defined(CONFIG_CPU_ARM1020) || \ defined(CONFIG_CPU_XSCALE) @@ -64,15 +65,142 @@ # endif #endif -/* - * ARMv4 MMU with write buffer, without invalidate I TLB entry instruction - */ #if defined(CONFIG_CPU_SA110) || defined(CONFIG_CPU_SA1100) # ifdef _TLB # define MULTI_TLB 1 # else # define _TLB v4wb # endif +#endif + +#ifndef _TLB +#error Unknown TLB model +#endif + + + +/* + * Data Abort Model + * ================ + * + * We have the following to choose from: + * arm6 - ARM6 style + * arm7 - ARM7 style + * v4_early - ARMv4 without Thumb early abort handler + * v4t_late - ARMv4 with Thumb late abort handler + * v4t_early - ARMv4 with Thumb early abort handler + * v5ej_early - ARMv5 with Thumb and Java early abort handler + */ +#undef CPU_ABORT_HANDLER +#undef MULTI_ABORT + +#if defined(CONFIG_CPU_ARM610) +# ifdef CPU_ABORT_HANDLER +# define MULTI_ABORT 1 +# else +# define CPU_ABORT_HANDLER cpu_arm6_data_abort +# endif +#endif + +#if defined(CONFIG_CPU_ARM710) +# ifdef CPU_ABORT_HANDLER +# define MULTI_ABORT 1 +# else +# define CPU_ABORT_HANDLER cpu_arm7_data_abort +# endif +#endif + +#if defined(CONFIG_CPU_ARM720T) +# ifdef CPU_ABORT_HANDLER +# define MULTI_ABORT 1 +# else +# define CPU_ABORT_HANDLER v4t_late_abort +# endif +#endif + +#if defined(CONFIG_CPU_SA110) || defined(CONFIG_CPU_SA1100) +# ifdef CPU_ABORT_HANDLER +# define MULTI_ABORT 1 +# else +# define CPU_ABORT_HANDLER v4_early_abort +# endif +#endif + +#if defined(CONFIG_CPU_ARM920T) || defined(CONFIG_CPU_ARM922T) || \ + defined(CONFIG_CPU_ARM1020) || defined(CONFIG_CPU_XSCALE) +# ifdef CPU_ABORT_HANDLER +# define MULTI_ABORT 1 +# else +# define CPU_ABORT_HANDLER v4t_early_abort +# endif +#endif + +#if defined(CONFIG_CPU_ARM926T) +# ifdef CPU_ABORT_HANDLER +# define MULTI_ABORT 1 +# else +# define CPU_ABORT_HANDLER v5ej_early_abort +# endif +#endif + +#ifndef CPU_ABORT_HANDLER +#error Unknown data abort handler type +#endif + + +/* + * User Space Model + * ================ + * + * This section selects the correct set of functions for dealing with + * page-based copying and clearing for user space for the particular + * processor(s) we're building for. 
+ * + * We have the following to choose from: + * v3 - ARMv3 + * v4 - ARMv4 without minicache + * v4_mc - ARMv4 with minicache + * v5te_mc - ARMv5TE with minicache + */ +#undef _USER +#undef MULTI_USER + +#if defined(CONFIG_CPU_ARM610) || defined(CONFIG_CPU_ARM710) +# ifdef _USER +# define MULTI_USER 1 +# else +# define _USER v3 +# endif +#endif + +#if defined(CONFIG_CPU_ARM720T) || defined(CONFIG_CPU_ARM920T) || \ + defined(CONFIG_CPU_ARM922T) || defined(CONFIG_CPU_ARM926T) || \ + defined(CONFIG_CPU_SA110) || defined(CONFIG_CPU_ARM1020) +# ifdef _USER +# define MULTI_USER 1 +# else +# define _USER v4 +# endif +#endif + +#if defined(CONFIG_CPU_SA1100) +# ifdef _USER +# define MULTI_USER 1 +# else +# define _USER v4_mc +# endif +#endif + +#if defined(CONFIG_CPU_XSCALE) +# ifdef _USER +# define MULTI_USER 1 +# else +# define _USER v5te_mc +# endif +#endif + +#ifndef _USER +#error Unknown user operations model #endif #endif diff -Nru a/include/asm-arm/mach/pci.h b/include/asm-arm/mach/pci.h --- a/include/asm-arm/mach/pci.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/mach/pci.h Tue Mar 12 13:58:15 2002 @@ -19,12 +19,6 @@ /* Setup bus resources */ void (*setup_resources)(struct resource **); - /* - * This is the offset of PCI memory base registers - * to physical memory. - */ - unsigned long mem_offset; - /* IRQ swizzle */ u8 (*swizzle)(struct pci_dev *dev, u8 *pin); diff -Nru a/include/asm-arm/page.h b/include/asm-arm/page.h --- a/include/asm-arm/page.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/page.h Tue Mar 12 13:58:15 2002 @@ -1,50 +1,68 @@ #ifndef _ASMARM_PAGE_H #define _ASMARM_PAGE_H -#include - -#define PAGE_SIZE (1UL << PAGE_SHIFT) -#define PAGE_MASK (~(PAGE_SIZE-1)) +#include #ifdef __KERNEL__ #ifndef __ASSEMBLY__ -#define STRICT_MM_TYPECHECKS +#include -#define clear_page(page) memzero((void *)(page), PAGE_SIZE) -extern void copy_page(void *to, void *from); +struct cpu_user_fns { + void (*cpu_clear_user_page)(void *p, unsigned long user); + void (*cpu_copy_user_page)(void *to, const void *from, + unsigned long user); +}; + +#ifdef MULTI_USER +extern struct cpu_user_fns cpu_user; + +#define __cpu_clear_user_page cpu_user.cpu_clear_user_page +#define __cpu_copy_user_page cpu_user.cpu_copy_user_page + +#else + +#define __cpu_clear_user_page __glue(_USER,_clear_user_page) +#define __cpu_copy_user_page __glue(_USER,_copy_user_page) + +extern void __cpu_clear_user_page(void *p, unsigned long user); +extern void __cpu_copy_user_page(void *to, const void *from, + unsigned long user); +#endif #define clear_user_page(addr,vaddr) \ do { \ preempt_disable(); \ - cpu_clear_user_page(addr, vaddr); \ + __cpu_clear_user_page(addr, vaddr); \ preempt_enable(); \ } while (0) #define copy_user_page(to,from,vaddr) \ do { \ preempt_disable(); \ - cpu_copy_user_page(to, from, vaddr); \ + __cpu_copy_user_page(to, from, vaddr); \ preempt_enable(); \ } while (0) +#define clear_page(page) memzero((void *)(page), PAGE_SIZE) +extern void copy_page(void *to, void *from); + +#undef STRICT_MM_TYPECHECKS + #ifdef STRICT_MM_TYPECHECKS /* * These are used to make use of C type-checking.. 
*/ typedef struct { unsigned long pte; } pte_t; typedef struct { unsigned long pmd; } pmd_t; -typedef struct { unsigned long pgd; } pgd_t; typedef struct { unsigned long pgprot; } pgprot_t; #define pte_val(x) ((x).pte) #define pmd_val(x) ((x).pmd) -#define pgd_val(x) ((x).pgd) #define pgprot_val(x) ((x).pgprot) #define __pte(x) ((pte_t) { (x) } ) #define __pmd(x) ((pmd_t) { (x) } ) -#define __pgd(x) ((pgd_t) { (x) } ) #define __pgprot(x) ((pgprot_t) { (x) } ) #else @@ -53,25 +71,29 @@ */ typedef unsigned long pte_t; typedef unsigned long pmd_t; -typedef unsigned long pgd_t; typedef unsigned long pgprot_t; #define pte_val(x) (x) #define pmd_val(x) (x) -#define pgd_val(x) (x) #define pgprot_val(x) (x) #define __pte(x) (x) #define __pmd(x) (x) -#define __pgd(x) (x) #define __pgprot(x) (x) -#endif +#endif /* STRICT_MM_TYPECHECKS */ #endif /* !__ASSEMBLY__ */ +#endif /* __KERNEL__ */ + +#include + +#define PAGE_SIZE (1UL << PAGE_SHIFT) +#define PAGE_MASK (~(PAGE_SIZE-1)) /* to align the pointer to the (next) page boundary */ #define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK) +#ifdef __KERNEL__ #ifndef __ASSEMBLY__ #ifdef CONFIG_DEBUG_BUGVERBOSE @@ -105,7 +127,6 @@ #endif /* !__ASSEMBLY__ */ -#include #include #define __pa(x) __virt_to_phys((unsigned long)(x)) @@ -120,6 +141,6 @@ #define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_EXEC | \ VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) -#endif +#endif /* __KERNEL__ */ #endif diff -Nru a/include/asm-arm/pgtable.h b/include/asm-arm/pgtable.h --- a/include/asm-arm/pgtable.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/pgtable.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * linux/include/asm-arm/pgtable.h * - * Copyright (C) 2000-2001 Russell King + * Copyright (C) 2000-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -19,7 +19,11 @@ * PGDIR_SHIFT determines what a third-level page table entry can map */ #define PMD_SHIFT 20 +#ifdef CONFIG_CPU_32 +#define PGDIR_SHIFT 21 +#else #define PGDIR_SHIFT 20 +#endif #define LIBRARY_TEXT_START 0x0c000000 @@ -93,7 +97,6 @@ #define pmd_none(pmd) (!pmd_val(pmd)) #define pmd_present(pmd) (pmd_val(pmd)) -#define pmd_clear(pmdp) set_pmd(pmdp, __pmd(0)) /* * Permanent address of a page. We never have highmem, so this is trivial. @@ -106,18 +109,10 @@ */ static inline pte_t mk_pte_phys(unsigned long physpage, pgprot_t pgprot) { - pte_t pte; - pte_val(pte) = physpage | pgprot_val(pgprot); - return pte; + return __pte(physpage | pgprot_val(pgprot)); } -#define mk_pte(page,pgprot) \ -({ \ - pte_t __pte; \ - pte_val(__pte) = __pa(page_address(page)) + \ - pgprot_val(pgprot); \ - __pte; \ -}) +#define mk_pte(page,pgprot) mk_pte_phys(__pa(page_address(page)), pgprot) /* * The "pgd_xxx()" functions here are trivial for a folded two-level @@ -127,7 +122,7 @@ #define pgd_none(pgd) (0) #define pgd_bad(pgd) (0) #define pgd_present(pgd) (1) -#define pgd_clear(pgdp) +#define pgd_clear(pgdp) do { } while (0) #define page_pte_prot(page,prot) mk_pte(page, prot) #define page_pte(page) mk_pte(page, __pgprot(0)) @@ -147,15 +142,6 @@ /* Find an entry in the third-level page table.. 
*/ #define __pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) -#define pmd_page(dir) ((struct page *)__pmd_page(dir)) - -#define __pte_offset(dir, addr) ((pte_t *)__pmd_page(*(dir)) + __pte_index(addr)) -#define pte_offset_kernel __pte_offset -#define pte_offset_map __pte_offset -#define pte_offset_map_nested __pte_offset -#define pte_unmap(pte) do { } while (0) -#define pte_unmap_nested(pte) do { } while (0) - #include static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) @@ -181,8 +167,6 @@ #define kern_addr_valid(addr) (1) #include - -extern void pgtable_cache_init(void); /* * remap a physical address `phys' of size `size' with page protection `prot' diff -Nru a/include/asm-arm/proc-armo/page.h b/include/asm-arm/proc-armo/page.h --- a/include/asm-arm/proc-armo/page.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/proc-armo/page.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * linux/include/asm-arm/proc-armo/page.h * - * Copyright (C) 1995, 1996 Russell King + * Copyright (C) 1995-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -20,5 +20,21 @@ #endif #define EXEC_PAGESIZE 32768 + +#ifndef __ASSEMBLY__ +#ifdef STRICT_MM_TYPECHECKS + +typedef struct { unsigned long pgd; } pgd_t; + +#define pgd_val(x) ((x).pgd) + +#else + +typedef unsigned long pgd_t; + +#define pgd_val(x) (x) + +#endif +#endif /* __ASSEMBLY__ */ #endif /* __ASM_PROC_PAGE_H */ diff -Nru a/include/asm-arm/proc-armo/pgalloc.h b/include/asm-arm/proc-armo/pgalloc.h --- a/include/asm-arm/proc-armo/pgalloc.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/proc-armo/pgalloc.h Tue Mar 12 13:58:15 2002 @@ -1,32 +1,22 @@ /* * linux/include/asm-arm/proc-armo/pgalloc.h * - * Copyright (C) 2001 Russell King + * Copyright (C) 2001-2002 Russell King * * Page table allocation/freeing primitives for 26-bit ARM processors. */ -/* unfortunately, this includes linux/mm.h and the rest of the universe. */ #include extern kmem_cache_t *pte_cache; -/* - * Allocate one PTE table. - * - * Note that we keep the processor copy of the PTE entries separate - * from the Linux copy. The processor copies are offset by -PTRS_PER_PTE - * words from the Linux copy. - */ -static inline pte_t *pte_alloc_one(struct mm_struct *mm, unsigned long address) +static inline pte_t * +pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr) { return kmem_cache_alloc(pte_cache, GFP_KERNEL); } -/* - * Free one PTE table. - */ -static inline void pte_free_slow(pte_t *pte) +static inline void pte_free_kernel(pte_t *pte) { if (pte) kmem_cache_free(pte_cache, pte); @@ -39,9 +29,16 @@ * If 'mm' is the init tasks mm, then we are doing a vmalloc, and we * need to set stuff up correctly for it. */ -#define pmd_populate(mm,pmdp,pte) \ - do { \ - set_pmd(pmdp, __mk_pmd(pte, _PAGE_TABLE)); \ - } while (0) - +static inline void +pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep) +{ + set_pmd(pmdp, __mk_pmd(ptep, _PAGE_TABLE)); +} +/* + * We use the old 2.5.5-rmk1 hack for this. + * This is not truly correct, but should be functional.
+ */ +#define pte_alloc_one(mm,addr) ((struct page *)pte_alloc_one_kernel(mm,addr)) +#define pte_free(pte) pte_free_kernel((pte_t *)pte) +#define pmd_populate(mm,pmdp,ptep) pmd_populate_kernel(mm,pmdp,(pte_t *)ptep) diff -Nru a/include/asm-arm/proc-armo/pgtable.h b/include/asm-arm/proc-armo/pgtable.h --- a/include/asm-arm/proc-armo/pgtable.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/proc-armo/pgtable.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * linux/include/asm-arm/proc-armo/pgtable.h * - * Copyright (C) 1995-2001 Russell King + * Copyright (C) 1995-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -32,6 +32,7 @@ #define pmd_bad(pmd) ((pmd_val(pmd) & 0xfc000002)) #define set_pmd(pmdp,pmd) ((*(pmdp)) = (pmd)) +#define pmd_clear(pmdp) set_pmd(pmdp, __pmd(0)) static inline pmd_t __mk_pmd(pte_t *ptep, unsigned long prot) { @@ -48,6 +49,12 @@ return __phys_to_virt(pmd_val(pmd) & ~_PAGE_TABLE); } +#define pte_offset_kernel(dir,addr) (pmd_page_kernel(*(dir)) + __pte_index(addr)) +#define pte_offset_map(dir,addr) (pmd_page_kernel(*(dir)) + __pte_index(addr)) +#define pte_offset_map_nested(dir,addr) (pmd_page_kernel(*(dir)) + __pte_index(addr)) +#define pte_unmap(pte) do { } while (0) +#define pte_unmap_nested(pte) do { } while (0) + #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval)) #define _PAGE_PRESENT 0x01 @@ -89,11 +96,11 @@ static inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) &= ~_PAGE_CLEAN; return pte; } static inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) &= ~_PAGE_OLD; return pte; } -#define pte_alloc_kernel pte_alloc - /* * We don't store cache state bits in the page table here. */ #define pgprot_noncached(prot) (prot) + +extern void pgtable_cache_init(void); #endif /* __ASM_PROC_PGTABLE_H */ diff -Nru a/include/asm-arm/proc-armv/cache.h b/include/asm-arm/proc-armv/cache.h --- a/include/asm-arm/proc-armv/cache.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/proc-armv/cache.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * linux/include/asm-arm/proc-armv/cache.h * - * Copyright (C) 1999-2001 Russell King + * Copyright (C) 1999-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -134,7 +134,8 @@ #define clean_dcache_range(_s,_e) cpu_dcache_clean_range((_s),(_e)) #define flush_dcache_range(_s,_e) cpu_cache_clean_invalidate_range((_s),(_e),0) -#define mapping_mapped(map) ((map)->i_mmap || (map)->i_mmap_shared) +#define mapping_mapped(map) (!list_empty(&(map)->i_mmap) || \ + !list_empty(&(map)->i_mmap_shared)) /* * flush_dcache_page is used when the kernel has written to the page @@ -204,7 +205,7 @@ * TLB Management * ============== * - * The arch/arm/mm/tlb-*.S files implement this methods. + * The arch/arm/mm/tlb-*.S files implement these methods. 
* * The TLB specific code is expected to perform whatever tests it * needs to determine if it should invalidate the TLB for each diff -Nru a/include/asm-arm/proc-armv/page.h b/include/asm-arm/proc-armv/page.h --- a/include/asm-arm/proc-armv/page.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/proc-armv/page.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * linux/include/asm-arm/proc-armv/page.h * - * Copyright (C) 1995, 1996 Russell King + * Copyright (C) 1995-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -14,5 +14,24 @@ #define PAGE_SHIFT 12 #define EXEC_PAGESIZE 4096 + +#ifndef __ASSEMBLY__ +#ifdef STRICT_MM_TYPECHECKS + +typedef struct { + unsigned long pgd0; + unsigned long pgd1; +} pgd_t; + +#define pgd_val(x) ((x).pgd0) + +#else + +typedef unsigned long pgd_t[2]; + +#define pgd_val(x) ((x)[0]) + +#endif +#endif /* __ASSEMBLY__ */ #endif /* __ASM_PROC_PAGE_H */ diff -Nru a/include/asm-arm/proc-armv/pgalloc.h b/include/asm-arm/proc-armv/pgalloc.h --- a/include/asm-arm/proc-armv/pgalloc.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-arm/proc-armv/pgalloc.h Tue Mar 12 13:58:14 2002 @@ -1,43 +1,72 @@ /* * linux/include/asm-arm/proc-armv/pgalloc.h * - * Copyright (C) 2001 Russell King + * Copyright (C) 2001-2002 Russell King * * Page table allocation/freeing primitives for 32-bit ARM processors. */ - -/* unfortunately, this includes linux/mm.h and the rest of the universe. */ -#include - -extern kmem_cache_t *pte_cache; +#include "pgtable.h" /* * Allocate one PTE table. * - * Note that we keep the processor copy of the PTE entries separate - * from the Linux copy. The processor copies are offset by -PTRS_PER_PTE - * words from the Linux copy. + * This actually allocates two hardware PTE tables, but we wrap this up + * into one table thus: + * + * +------------+ + * | h/w pt 0 | + * +------------+ + * | h/w pt 1 | + * +------------+ + * | Linux pt 0 | + * +------------+ + * | Linux pt 1 | + * +------------+ */ static inline pte_t * pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr) { + int count = 0; pte_t *pte; - pte = kmem_cache_alloc(pte_cache, GFP_KERNEL); - if (pte) + do { + pte = (pte_t *)__get_free_page(GFP_KERNEL); + if (!pte) { + current->state = TASK_UNINTERRUPTIBLE; + schedule_timeout(HZ); + } + } while (!pte && (count++ < 10)); + + if (pte) { + clear_page(pte); + clean_dcache_area(pte, sizeof(pte_t) * PTRS_PER_PTE); pte += PTRS_PER_PTE; + } + return pte; } static inline struct page * pte_alloc_one(struct mm_struct *mm, unsigned long addr) { - pte_t *pte; + struct page *pte; + int count = 0; - pte = kmem_cache_alloc(pte_cache, GFP_KERNEL); - if (pte) - pte += PTRS_PER_PTE; - return (struct page *)pte; + do { + pte = alloc_pages(GFP_KERNEL, 0); + if (!pte) { + current->state = TASK_UNINTERRUPTIBLE; + schedule_timeout(HZ); + } + } while (!pte && (count++ < 10)); + + if (pte) { + void *page = page_address(pte); + clear_page(page); + clean_dcache_area(page, sizeof(pte_t) * PTRS_PER_PTE); + } + + return pte; } /* @@ -47,34 +76,49 @@ { if (pte) { pte -= PTRS_PER_PTE; - kmem_cache_free(pte_cache, pte); + free_page((unsigned long)pte); } } static inline void pte_free(struct page *pte) { - pte_t *_pte = (pte_t *)pte; - if (pte) { - _pte -= PTRS_PER_PTE; - kmem_cache_free(pte_cache, _pte); - } + __free_page(pte); } /* * Populate the pmdp entry with a pointer to the pte. This pmd is part * of the mm address space. 
* - * If 'mm' is the init tasks mm, then we are doing a vmalloc, and we - * need to set stuff up correctly for it. + * Ensure that we always set both PMD entries. */ -#define pmd_populate_kernel(mm,pmdp,pte) \ - do { \ - BUG_ON(mm != &init_mm); \ - set_pmd(pmdp, __mk_pmd(pte, _PAGE_KERNEL_TABLE));\ - } while (0) - -#define pmd_populate(mm,pmdp,pte) \ - do { \ - BUG_ON(mm == &init_mm); \ - set_pmd(pmdp, __mk_pmd(pte, _PAGE_USER_TABLE)); \ - } while (0) +static inline void +pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep) +{ + unsigned long pte_ptr = (unsigned long)ptep; + pmd_t pmd; + + BUG_ON(mm != &init_mm); + + /* + * The pmd must be loaded with the physical + * address of the PTE table + */ + pte_ptr -= PTRS_PER_PTE * sizeof(void *); + pmd_val(pmd) = __pa(pte_ptr) | _PAGE_KERNEL_TABLE; + set_pmd(pmdp, pmd); + pmd_val(pmd) += 256 * sizeof(pte_t); + set_pmd(pmdp + 1, pmd); +} + +static inline void +pmd_populate(struct mm_struct *mm, pmd_t *pmdp, struct page *ptep) +{ + pmd_t pmd; + + BUG_ON(mm == &init_mm); + + pmd_val(pmd) = __pa(page_address(ptep)) | _PAGE_USER_TABLE; + set_pmd(pmdp, pmd); + pmd_val(pmd) += 256 * sizeof(pte_t); + set_pmd(pmdp + 1, pmd); +} diff -Nru a/include/asm-arm/proc-armv/pgtable.h b/include/asm-arm/proc-armv/pgtable.h --- a/include/asm-arm/proc-armv/pgtable.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-arm/proc-armv/pgtable.h Tue Mar 12 13:58:14 2002 @@ -1,7 +1,7 @@ /* * linux/include/asm-arm/proc-armv/pgtable.h * - * Copyright (C) 1995-2001 Russell King + * Copyright (C) 1995-2002 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -16,12 +16,17 @@ #define __ASM_PROC_PGTABLE_H /* - * entries per page directory level: they are two-level, so - * we don't really have any PMD directory. + * We pull a couple of tricks here: + * 1. We wrap the PMD into the PGD. + * 2. We lie about the size of the PTE and PGD. + * Even though we have 256 PTE entries and 4096 PGD entries, we tell + * Linux that we actually have 512 PTE entries and 2048 PGD entries. + * Each "Linux" PGD entry is made up of two hardware PGD entries, and + * each PTE table is actually two hardware PTE tables. */ -#define PTRS_PER_PTE 256 +#define PTRS_PER_PTE 512 #define PTRS_PER_PMD 1 -#define PTRS_PER_PGD 4096 +#define PTRS_PER_PGD 2048 /* * Hardware page table definitions. 
@@ -109,33 +114,30 @@ #define pmd_bad(pmd) (pmd_val(pmd) & 2) #define set_pmd(pmdp,pmd) cpu_set_pmd(pmdp, pmd) -static inline pmd_t __mk_pmd(pte_t *ptep, unsigned long prot) +static inline void pmd_clear(pmd_t *pmdp) { - unsigned long pte_ptr = (unsigned long)ptep; - pmd_t pmd; - - pte_ptr -= PTRS_PER_PTE * sizeof(void *); - - /* - * The pmd must be loaded with the physical - * address of the PTE table - */ - pmd_val(pmd) = __virt_to_phys(pte_ptr) | prot; - - return pmd; + set_pmd(pmdp, __pmd(0)); + set_pmd(pmdp + 1, __pmd(0)); } -static inline unsigned long __pmd_page(pmd_t pmd) +static inline pte_t *pmd_page_kernel(pmd_t pmd) { unsigned long ptr; ptr = pmd_val(pmd) & ~(PTRS_PER_PTE * sizeof(void *) - 1); - ptr += PTRS_PER_PTE * sizeof(void *); - return __phys_to_virt(ptr); + return __va(ptr); } +#define pmd_page(pmd) virt_to_page(__va(pmd_val(pmd))) + +#define pte_offset_kernel(dir,addr) (pmd_page_kernel(*(dir)) + __pte_index(addr)) +#define pte_offset_map(dir,addr) (pmd_page_kernel(*(dir)) + __pte_index(addr)) +#define pte_offset_map_nested(dir,addr) (pmd_page_kernel(*(dir)) + __pte_index(addr)) +#define pte_unmap(pte) do { } while (0) +#define pte_unmap_nested(pte) do { } while (0) + #define set_pte(ptep, pte) cpu_set_pte(ptep,pte) /* @@ -182,6 +184,8 @@ * Mark the prot value as uncacheable and unbufferable. */ #define pgprot_noncached(prot) __pgprot(pgprot_val(prot) & ~(L_PTE_CACHEABLE | L_PTE_BUFFERABLE)) + +#define pgtable_cache_init() do { } while (0) #endif /* __ASSEMBLY__ */ diff -Nru a/include/asm-arm/proc-fns.h b/include/asm-arm/proc-fns.h --- a/include/asm-arm/proc-fns.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/proc-fns.h Tue Mar 12 13:58:15 2002 @@ -28,8 +28,6 @@ /* * CPU_NAME - the prefix for CPU related functions - * CPU_ABRT - the prefix for the CPU abort decoding function - * MMU_ARCH - the prefix for copy_user_page/clear_user_page */ #ifdef CONFIG_CPU_32 @@ -40,8 +38,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_arm6 -# define CPU_ABRT cpu_arm6 -# define MMU_ARCH armv3 # endif # endif # ifdef CONFIG_CPU_ARM710 @@ -50,8 +46,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_arm7 -# define CPU_ABRT cpu_arm7 -# define MMU_ARCH armv3 # endif # endif # ifdef CONFIG_CPU_ARM720T @@ -60,8 +54,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_arm720 -# define CPU_ABRT armv4t_late -# define MMU_ARCH armv4 # endif # endif # ifdef CONFIG_CPU_ARM920T @@ -70,8 +62,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_arm920 -# define CPU_ABRT armv4t_early -# define MMU_ARCH armv4 # endif # endif # ifdef CONFIG_CPU_ARM922T @@ -80,8 +70,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_arm922 -# define CPU_ABRT armv4t_early -# define MMU_ARCH armv4 # endif # endif # ifdef CONFIG_CPU_ARM926T @@ -90,8 +78,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_arm926 -# define CPU_ABRT armv5ej_early -# define MMU_ARCH armv4 # endif # endif # ifdef CONFIG_CPU_SA110 @@ -100,8 +86,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_sa110 -# define CPU_ABRT armv4_early -# define MMU_ARCH armv4 # endif # endif # ifdef CONFIG_CPU_SA1100 @@ -110,8 +94,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_sa1100 -# define CPU_ABRT armv4_early -# define MMU_ARCH armv4_mc # endif # endif # ifdef CONFIG_CPU_ARM1020 @@ -120,8 +102,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_arm1020 -# define CPU_ABRT armv4t_early -# define MMU_ARCH armv4 # endif # endif # ifdef CONFIG_CPU_XSCALE @@ -130,8 +110,6 @@ # define MULTI_CPU # else # define CPU_NAME cpu_xscale -# define CPU_ABRT 
armv4t_early -# define MMU_ARCH armv5te # endif # endif #endif diff -Nru a/include/asm-arm/procinfo.h b/include/asm-arm/procinfo.h --- a/include/asm-arm/procinfo.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-arm/procinfo.h Tue Mar 12 13:58:15 2002 @@ -15,6 +15,7 @@ #include struct cpu_tlb_fns; +struct cpu_user_fns; struct processor; struct proc_info_item { @@ -32,21 +33,22 @@ * arch/arm/mm/proc-*.S and arch/arm/kernel/head-armv.S */ struct proc_info_list { - unsigned int cpu_val; - unsigned int cpu_mask; - unsigned long __cpu_mmu_flags; /* used by head-armv.S */ - unsigned long __cpu_flush; /* used by head-armv.S */ - const char *arch_name; - const char *elf_name; - unsigned int elf_hwcap; - struct proc_info_item *info; - struct processor *proc; - struct cpu_tlb_fns *tlb; + unsigned int cpu_val; + unsigned int cpu_mask; + unsigned long __cpu_mmu_flags; /* used by head-armv.S */ + unsigned long __cpu_flush; /* used by head-armv.S */ + const char *arch_name; + const char *elf_name; + unsigned int elf_hwcap; + struct proc_info_item *info; + struct processor *proc; + struct cpu_tlb_fns *tlb; + struct cpu_user_fns *user; }; #endif /* __ASSEMBLY__ */ -#define PROC_INFO_SZ 40 +#define PROC_INFO_SZ 44 #define HWCAP_SWP 1 #define HWCAP_HALF 2 diff -Nru a/include/asm-i386/mman.h b/include/asm-i386/mman.h --- a/include/asm-i386/mman.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-i386/mman.h Tue Mar 12 13:58:15 2002 @@ -4,6 +4,7 @@ #define PROT_READ 0x1 /* page can be read */ #define PROT_WRITE 0x2 /* page can be written */ #define PROT_EXEC 0x4 /* page can be executed */ +#define PROT_SEM 0x8 /* page may be used for atomic ops */ #define PROT_NONE 0x0 /* page can not be accessed */ #define MAP_SHARED 0x01 /* Share changes */ diff -Nru a/include/asm-i386/pgalloc.h b/include/asm-i386/pgalloc.h --- a/include/asm-i386/pgalloc.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-i386/pgalloc.h Tue Mar 12 13:58:14 2002 @@ -223,4 +223,6 @@ /* i386 does not keep any page table caches in TLB */ } +#define check_pgt_cache() do { } while (0) + #endif /* _I386_PGALLOC_H */ diff -Nru a/include/asm-i386/unistd.h b/include/asm-i386/unistd.h --- a/include/asm-i386/unistd.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-i386/unistd.h Tue Mar 12 13:58:15 2002 @@ -244,6 +244,7 @@ #define __NR_fremovexattr 237 #define __NR_tkill 238 #define __NR_sendfile64 239 +#define __NR_futex 240 /* user-visible error numbers are in the range -1 - -124: see */ diff -Nru a/include/asm-ia64/a.out.h b/include/asm-ia64/a.out.h --- a/include/asm-ia64/a.out.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/a.out.h Tue Mar 12 13:58:15 2002 @@ -7,8 +7,8 @@ * probably would be better to clean up binfmt_elf.c so it does not * necessarily depend on there being a.out support. * - * Copyright (C) 1998-2000 Hewlett-Packard Co - * Copyright (C) 1998-2000 David Mosberger-Tang + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include @@ -31,7 +31,7 @@ #ifdef __KERNEL__ # include -# define STACK_TOP (0x8000000000000000UL + (1UL << (4*PAGE_SHIFT - 12)) - PAGE_SIZE) +# define STACK_TOP (0x6000000000000000UL + (1UL << (4*PAGE_SHIFT - 12)) - PAGE_SIZE) # define IA64_RBS_BOT (STACK_TOP - 0x80000000L + PAGE_SIZE) /* bottom of reg. 
backing store */ #endif diff -Nru a/include/asm-ia64/acpi-ext.h b/include/asm-ia64/acpi-ext.h --- a/include/asm-ia64/acpi-ext.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/acpi-ext.h Tue Mar 12 13:58:14 2002 @@ -13,7 +13,9 @@ * ACPI 2.0 specification */ +#include #include +#include #pragma pack(1) #define ACPI_RSDP_SIG "RSD PTR " /* Trailing space required */ @@ -24,7 +26,7 @@ char oem_id[6]; u8 revision; u32 rsdt; - u32 lenth; + u32 length; struct acpi_xsdt *xsdt; u8 ext_checksum; u8 reserved[3]; @@ -96,7 +98,7 @@ struct acpi_rsdt *rsdt; } acpi_rsdp_t; -typedef struct { +typedef struct acpi_rsdt { acpi_desc_table_hdr_t header; u8 reserved[4]; unsigned long entry_ptrs[1]; /* Not really . . . */ @@ -151,15 +153,15 @@ #define MADT_PCAT_COMPAT (1<<0) /* acpi 2.0 MADT structure types */ -#define ACPI20_ENTRY_LOCAL_APIC 0 -#define ACPI20_ENTRY_IO_APIC 1 -#define ACPI20_ENTRY_INT_SRC_OVERRIDE 2 -#define ACPI20_ENTRY_NMI_SOURCE 3 -#define ACPI20_ENTRY_LOCAL_APIC_NMI 4 -#define ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE 5 -#define ACPI20_ENTRY_IO_SAPIC 6 -#define ACPI20_ENTRY_LOCAL_SAPIC 7 -#define ACPI20_ENTRY_PLATFORM_INT_SOURCE 8 +#define ACPI20_ENTRY_LOCAL_APIC 0 +#define ACPI20_ENTRY_IO_APIC 1 +#define ACPI20_ENTRY_INT_SRC_OVERRIDE 2 +#define ACPI20_ENTRY_NMI_SOURCE 3 +#define ACPI20_ENTRY_LOCAL_APIC_NMI 4 +#define ACPI20_ENTRY_LOCAL_APIC_ADDR_OVERRIDE 5 +#define ACPI20_ENTRY_IO_SAPIC 6 +#define ACPI20_ENTRY_LOCAL_SAPIC 7 +#define ACPI20_ENTRY_PLATFORM_INT_SOURCE 8 typedef struct acpi20_entry_lsapic { u8 type; @@ -190,16 +192,132 @@ } acpi20_entry_platform_src_t; /* constants for interrupt routing API for device drivers */ -#define ACPI20_ENTRY_PIS_PMI 1 -#define ACPI20_ENTRY_PIS_INIT 2 -#define ACPI20_ENTRY_PIS_CPEI 3 -#define ACPI_MAX_PLATFORM_IRQS 4 +#define ACPI20_ENTRY_PIS_PMI 1 +#define ACPI20_ENTRY_PIS_INIT 2 +#define ACPI20_ENTRY_PIS_CPEI 3 +#define ACPI_MAX_PLATFORM_IRQS 4 + +#define ACPI_SPCRT_SIG "SPCR" +#define ACPI_SPCRT_SIG_LEN 4 + +#define ACPI_DBGPT_SIG "DBGP" +#define ACPI_DBGPT_SIG_LEN 4 extern int acpi20_parse(acpi20_rsdp_t *); +extern int acpi20_early_parse(acpi20_rsdp_t *); extern int acpi_parse(acpi_rsdp_t *); extern const char *acpi_get_sysname (void); extern int acpi_request_vector(u32 int_type); - extern void (*acpi_idle) (void); /* power-management idle function, if any */ +#ifdef CONFIG_NUMA +extern cnodeid_t paddr_to_nid(unsigned long paddr); +#endif + +/* + * ACPI 2.0 SRAT Table + * http://www.microsoft.com/HWDEV/design/SRAT.htm + */ + +typedef struct acpi_srat { + acpi_desc_table_hdr_t header; + u32 table_revision; + u64 reserved; +} acpi_srat_t; + +typedef struct srat_cpu_affinity { + u8 type; + u8 length; + u8 proximity_domain; + u8 apic_id; + u32 flags; + u8 local_sapic_eid; + u8 reserved[7]; +} srat_cpu_affinity_t; + +typedef struct srat_memory_affinity { + u8 type; + u8 length; + u8 proximity_domain; + u8 reserved[5]; + u32 base_addr_lo; + u32 base_addr_hi; + u32 length_lo; + u32 length_hi; + u32 memory_type; + u32 flags; + u64 reserved2; +} srat_memory_affinity_t; + +/* ACPI 2.0 SRAT structure */ +#define ACPI_SRAT_SIG "SRAT" +#define ACPI_SRAT_SIG_LEN 4 +#define ACPI_SRAT_REVISION 1 + +#define SRAT_CPU_STRUCTURE 0 +#define SRAT_MEMORY_STRUCTURE 1 + +/* Only 1 flag for cpu affinity structure! 
*/ +#define SRAT_CPU_FLAGS_ENABLED 0x00000001 + +#define SRAT_MEMORY_FLAGS_ENABLED 0x00000001 +#define SRAT_MEMORY_FLAGS_HOTREMOVABLE 0x00000002 + +/* ACPI 2.0 address range types */ +#define ACPI_ADDRESS_RANGE_MEMORY 1 +#define ACPI_ADDRESS_RANGE_RESERVED 2 +#define ACPI_ADDRESS_RANGE_ACPI 3 +#define ACPI_ADDRESS_RANGE_NVS 4 + +#define NODE_ARRAY_INDEX(x) ((x) / 8) /* 8 bits/char */ +#define NODE_ARRAY_OFFSET(x) ((x) % 8) /* 8 bits/char */ +#define MAX_PXM_DOMAINS (256) + +#ifdef CONFIG_DISCONTIGMEM +/* + * List of node memory chunks. Filled when parsing SRAT table to + * obtain information about memory nodes. +*/ + +struct node_memory_chunk_s { + unsigned long start_paddr; + unsigned long size; + int pxm; // proximity domain of node + int nid; // which cnode contains this chunk? + int bank; // which mem bank on this node +}; + +extern struct node_memory_chunk_s node_memory_chunk[PLAT_MAXCLUMPS]; // temporary? + +struct node_cpuid_s { + u16 phys_id; /* id << 8 | eid */ + int pxm; // proximity domain of cpu + int nid; +}; +extern struct node_cpuid_s node_cpuid[NR_CPUS]; + +extern int pxm_to_nid_map[MAX_PXM_DOMAINS]; /* _PXM to logical node ID map */ +extern int nid_to_pxm_map[PLAT_MAX_COMPACT_NODES]; /* logical node ID to _PXM map */ +extern int numnodes; /* total number of nodes in system */ +extern int num_memory_chunks; /* total number of memory chunks */ + +/* + * ACPI 2.0 SLIT Table + * http://devresource.hp.com/devresource/Docs/TechPapers/IA64/slit.pdf + */ + +typedef struct acpi_slit { + acpi_desc_table_hdr_t header; + u64 localities; + u8 entries[1]; /* dummy, real size = locality^2 */ +} acpi_slit_t; + +extern u8 acpi20_slit[PLAT_MAX_COMPACT_NODES * PLAT_MAX_COMPACT_NODES]; + +#define ACPI_SLIT_SIG "SLIT" +#define ACPI_SLIT_SIG_LEN 4 +#define ACPI_SLIT_REVISION 1 +#define ACPI_SLIT_LOCAL 10 +#endif /* CONFIG_DISCONTIGMEM */ + #pragma pack() #endif /* _ASM_IA64_ACPI_EXT_H */ diff -Nru a/include/asm-ia64/asmmacro.h b/include/asm-ia64/asmmacro.h --- a/include/asm-ia64/asmmacro.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/asmmacro.h Tue Mar 12 13:58:15 2002 @@ -3,7 +3,7 @@ /* * Copyright (C) 2000-2001 Hewlett-Packard Co - * Copyright (C) 2000-2001 David Mosberger-Tang + * David Mosberger-Tang */ #define ENTRY(name) \ diff -Nru a/include/asm-ia64/bitops.h b/include/asm-ia64/bitops.h --- a/include/asm-ia64/bitops.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/bitops.h Tue Mar 12 13:58:16 2002 @@ -2,8 +2,11 @@ #define _ASM_IA64_BITOPS_H /* - * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang + * + * 02/06/02 find_next_bit() and find_first_bit() added from Erich Focht's ia64 O(1) + * scheduler patch */ #include @@ -57,10 +60,10 @@ } /* - * clear_bit() doesn't provide any barrier for the compiler. + * clear_bit() has "acquire" semantics. 
*/ #define smp_mb__before_clear_bit() smp_mb() -#define smp_mb__after_clear_bit() smp_mb() +#define smp_mb__after_clear_bit() do { /* skip */; } while (0) /** * clear_bit - Clears a bit in memory @@ -89,6 +92,17 @@ } /** + * __clear_bit - Clears a bit in memory (non-atomic version) + */ +static __inline__ void +__clear_bit (int nr, volatile void *addr) +{ + volatile __u32 *p = (__u32 *) addr + (nr >> 5);; + __u32 m = 1 << (nr & 31); + *p &= ~m; +} + +/** * change_bit - Toggle a bit in memory * @nr: Bit to clear * @addr: Address to start counting from @@ -264,12 +278,11 @@ } /** - * ffz - find the first zero bit in a memory region - * @x: The address to start the search at + * ffz - find the first zero bit in a long word + * @x: The long word to find the bit in * - * Returns the bit-number (0..63) of the first (least significant) zero bit, not - * the number of the byte containing a bit. Undefined if no zero exists, so - * code should check against ~0UL first... + * Returns the bit-number (0..63) of the first (least significant) zero bit. Undefined if + * no zero exists, so code should check against ~0UL first... */ static inline unsigned long ffz (unsigned long x) @@ -280,6 +293,21 @@ return result; } +/** + * __ffs - find first bit in word. + * @x: The word to search + * + * Undefined if no bit exists, so code should check against 0 first. + */ +static __inline__ unsigned long +__ffs (unsigned long x) +{ + unsigned long result; + + __asm__ ("popcnt %0=%1" : "=r" (result) : "r" ((x - 1) & ~x)); + return result; +} + #ifdef __KERNEL__ /* @@ -357,6 +385,8 @@ tmp = *p; found_first: tmp |= ~0UL << size; + if (tmp == ~0UL) /* any bits zero? */ + return result + size; /* nope */ found_middle: return result + ffz(tmp); } @@ -366,8 +396,53 @@ */ #define find_first_zero_bit(addr, size) find_next_zero_bit((addr), (size), 0) +/* + * Find next bit in a bitmap reasonably efficiently.. + */ +static inline int +find_next_bit (void *addr, unsigned long size, unsigned long offset) +{ + unsigned long *p = ((unsigned long *) addr) + (offset >> 6); + unsigned long result = offset & ~63UL; + unsigned long tmp; + + if (offset >= size) + return size; + size -= result; + offset &= 63UL; + if (offset) { + tmp = *(p++); + tmp &= ~0UL << offset; + if (size < 64) + goto found_first; + if (tmp) + goto found_middle; + size -= 64; + result += 64; + } + while (size & ~63UL) { + if ((tmp = *(p++))) + goto found_middle; + result += 64; + size -= 64; + } + if (!size) + return result; + tmp = *p; + found_first: + tmp &= ~0UL >> (64-size); + if (tmp == 0UL) /* Are any bits set? */ + return result + size; /* Nope. 
*/ + found_middle: + return result + __ffs(tmp); +} + +#define find_first_bit(addr, size) find_next_bit((addr), (size), 0) + #ifdef __KERNEL__ +#define __clear_bit(nr, addr) clear_bit(nr, addr) + #define ext2_set_bit test_and_set_bit #define ext2_clear_bit test_and_clear_bit #define ext2_test_bit test_bit @@ -380,6 +455,16 @@ #define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr) #define minix_test_bit(nr,addr) test_bit(nr,addr) #define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size) + +static inline int +sched_find_first_bit (unsigned long *b) +{ + if (unlikely(b[0])) + return __ffs(b[0]); + if (unlikely(b[1])) + return 64 + __ffs(b[1]); + return __ffs(b[2]) + 128; +} #endif /* __KERNEL__ */ diff -Nru a/include/asm-ia64/checksum.h b/include/asm-ia64/checksum.h --- a/include/asm-ia64/checksum.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/checksum.h Tue Mar 12 13:58:14 2002 @@ -89,11 +89,4 @@ return ~sum; } -#define _HAVE_ARCH_IPV6_CSUM -extern unsigned short int csum_ipv6_magic (struct in6_addr *saddr, - struct in6_addr *daddr, - __u16 len, - unsigned short proto, - unsigned int sum); - #endif /* _ASM_IA64_CHECKSUM_H */ diff -Nru a/include/asm-ia64/current.h b/include/asm-ia64/current.h --- a/include/asm-ia64/current.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/current.h Tue Mar 12 13:58:15 2002 @@ -3,7 +3,7 @@ /* * Copyright (C) 1998-2000 Hewlett-Packard Co - * Copyright (C) 1998-2000 David Mosberger-Tang + * David Mosberger-Tang */ /* In kernel mode, thread pointer (r13) is used to point to the diff -Nru a/include/asm-ia64/elf.h b/include/asm-ia64/elf.h --- a/include/asm-ia64/elf.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/elf.h Tue Mar 12 13:58:15 2002 @@ -4,8 +4,8 @@ /* * ELF archtecture specific definitions. * - * Copyright (C) 1998, 1999 Hewlett-Packard Co - * Copyright (C) 1998, 1999 David Mosberger-Tang + * Copyright (C) 1998, 1999, 2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include @@ -25,7 +25,10 @@ #define USE_ELF_CORE_DUMP -/* always align to 64KB to allow for future page sizes of up to 64KB: */ +/* Least-significant four bits of ELF header's e_flags are OS-specific. The bits are + interpreted as follows by Linux: */ +#define EF_IA_64_LINUX_EXECUTABLE_STACK 0x1 /* is stack (& heap) executable by default? 
*/ + #define ELF_EXEC_PAGESIZE PAGE_SIZE /* @@ -82,7 +85,9 @@ #define ELF_PLATFORM 0 #ifdef __KERNEL__ -#define SET_PERSONALITY(ex, ibcs2) set_personality((ibcs2)?PER_SVR4:PER_LINUX) +struct elf64_hdr; +extern void ia64_set_personality (struct elf64_hdr *elf_ex, int ibcs2_interpreter); +#define SET_PERSONALITY(ex, ibcs2) ia64_set_personality(&(ex), ibcs2) #endif #endif /* _ASM_IA64_ELF_H */ diff -Nru a/include/asm-ia64/hardirq.h b/include/asm-ia64/hardirq.h --- a/include/asm-ia64/hardirq.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/hardirq.h Tue Mar 12 13:58:15 2002 @@ -25,8 +25,8 @@ #define local_softirq_pending() (local_cpu_data->softirq_pending) #define local_ksoftirqd_task() (local_cpu_data->ksoftirqd) -#define local_irq_count() (local_cpu_data->irq_stat.f.irq_count) -#define local_bh_count() (local_cpu_data->irq_stat.f.bh_count) +#define really_local_irq_count() (local_cpu_data->irq_stat.f.irq_count) /* XXX fix me */ +#define really_local_bh_count() (local_cpu_data->irq_stat.f.bh_count) /* XXX fix me */ #define local_syscall_count() /* unused on IA-64 */ #define local_nmi_count() 0 @@ -38,11 +38,11 @@ #define in_irq() (local_cpu_data->irq_stat.f.irq_count != 0) #ifndef CONFIG_SMP -# define local_hardirq_trylock() (local_irq_count() == 0) +# define local_hardirq_trylock() (really_local_irq_count() == 0) # define local_hardirq_endlock() do { } while (0) -# define local_irq_enter(irq) (local_irq_count()++) -# define local_irq_exit(irq) (local_irq_count()--) +# define local_irq_enter(irq) (really_local_irq_count()++) +# define local_irq_exit(irq) (really_local_irq_count()--) # define synchronize_irq() barrier() #else @@ -70,6 +70,7 @@ /* if we didn't own the irq lock, just ignore.. */ if (global_irq_holder == cpu) { global_irq_holder = NO_PROC_ID; + smp_mb__before_clear_bit(); /* need barrier before releasing lock... */ clear_bit(0,&global_irq_lock); } } @@ -77,7 +78,7 @@ static inline void local_irq_enter (int irq) { - local_irq_count()++; + really_local_irq_count()++; while (test_bit(0,&global_irq_lock)) { /* nothing */; @@ -87,13 +88,13 @@ static inline void local_irq_exit (int irq) { - local_irq_count()--; + really_local_irq_count()--; } static inline int local_hardirq_trylock (void) { - return !local_irq_count() && !test_bit(0,&global_irq_lock); + return !really_local_irq_count() && !test_bit(0,&global_irq_lock); } #define local_hardirq_endlock() do { } while (0) diff -Nru a/include/asm-ia64/ia32.h b/include/asm-ia64/ia32.h --- a/include/asm-ia64/ia32.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/ia32.h Tue Mar 12 13:58:15 2002 @@ -5,7 +5,7 @@ #ifdef CONFIG_IA32_SUPPORT -#include +#include /* * 32 bit structures for IA32 support. @@ -474,6 +474,8 @@ unsigned int seg_not_present:1; unsigned int useable:1; }; + +struct linux_binprm; extern void ia32_gdt_init (void); extern int ia32_setup_frame1 (int sig, struct k_sigaction *ka, siginfo_t *info, diff -Nru a/include/asm-ia64/io.h b/include/asm-ia64/io.h --- a/include/asm-ia64/io.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/io.h Tue Mar 12 13:58:14 2002 @@ -69,6 +69,22 @@ */ #define __ia64_mf_a() __asm__ __volatile__ ("mf.a" ::: "memory") +/** + * __ia64_mmiob - I/O space memory barrier + * + * Acts as a memory mapped I/O barrier for platforms that queue writes to + * I/O space. This ensures that subsequent writes to I/O space arrive after + * all previous writes. For most ia64 platforms, this is a simple + * 'mf.a' instruction, so the address is ignored. 
For other platforms, + the address may be required to ensure proper ordering of writes to I/O space + * since a 'dummy' read might be necessary to barrier the write operation. + */ +static inline void +__ia64_mmiob (void) +{ + __ia64_mf_a(); +} + static inline const unsigned long __ia64_get_io_port_base (void) { @@ -271,6 +287,7 @@ #define __outb platform_outb #define __outw platform_outw #define __outl platform_outl +#define __mmiob platform_mmiob #define inb __inb #define inw __inw @@ -284,6 +301,7 @@ #define outsb __outsb #define outsw __outsw #define outsl __outsl +#define mmiob __mmiob /* * The address passed to these functions are ioremap()ped already. @@ -408,5 +426,11 @@ #define memset_io(addr,c,len) \ __ia64_memset_c_io((unsigned long)(addr),0x0101010101010101UL*(u8)(c),(len)) + +#define dma_cache_inv(_start,_size) do { } while (0) +#define dma_cache_wback(_start,_size) do { } while (0) +#define dma_cache_wback_inv(_start,_size) do { } while (0) + # endif /* __KERNEL__ */ + #endif /* _ASM_IA64_IO_H */ diff -Nru a/include/asm-ia64/irq.h b/include/asm-ia64/irq.h --- a/include/asm-ia64/irq.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/irq.h Tue Mar 12 13:58:15 2002 @@ -2,9 +2,9 @@ #define _ASM_IA64_IRQ_H /* - * Copyright (C) 1999-2000 Hewlett-Packard Co - * Copyright (C) 1998-2000 David Mosberger-Tang - * Copyright (C) 1998 Stephane Eranian + * Copyright (C) 1999-2000, 2002 Hewlett-Packard Co + * David Mosberger-Tang + * Stephane Eranian * * 11/24/98 S.Eranian updated TIMER_IRQ and irq_cannonicalize * 01/20/99 S.Eranian added keyboard interrupt @@ -27,5 +27,6 @@ extern void disable_irq (unsigned int); extern void disable_irq_nosync (unsigned int); extern void enable_irq (unsigned int); +extern void set_irq_affinity_info (int irq, int dest, int redir); #endif /* _ASM_IA64_IRQ_H */ diff -Nru a/include/asm-ia64/machvec.h b/include/asm-ia64/machvec.h --- a/include/asm-ia64/machvec.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/machvec.h Tue Mar 12 13:58:14 2002 @@ -20,6 +20,7 @@ struct irq_desc; typedef void ia64_mv_setup_t (char **); +typedef void ia64_mv_cpu_init_t(void); typedef void ia64_mv_irq_init_t (void); typedef void ia64_mv_pci_fixup_t (int); typedef unsigned long ia64_mv_map_nr_t (unsigned long); @@ -59,6 +60,7 @@ typedef void ia64_mv_outb_t (unsigned char, unsigned long); typedef void ia64_mv_outw_t (unsigned short, unsigned long); typedef void ia64_mv_outl_t (unsigned int, unsigned long); +typedef void ia64_mv_mmiob_t (void); extern void machvec_noop (void); @@ -77,6 +79,7 @@ # else # define platform_name ia64_mv.name # define platform_setup ia64_mv.setup +# define platform_cpu_init ia64_mv.cpu_init # define platform_irq_init ia64_mv.irq_init # define platform_map_nr ia64_mv.map_nr # define platform_mca_init ia64_mv.mca_init @@ -105,11 +108,13 @@ # define platform_outb ia64_mv.outb # define platform_outw ia64_mv.outw # define platform_outl ia64_mv.outl +# define platform_mmiob ia64_mv.mmiob # endif struct ia64_machine_vector { const char *name; ia64_mv_setup_t *setup; + ia64_mv_cpu_init_t *cpu_init; ia64_mv_irq_init_t *irq_init; ia64_mv_pci_fixup_t *pci_fixup; ia64_mv_map_nr_t *map_nr; @@ -137,6 +142,7 @@ ia64_mv_outb_t *outb; ia64_mv_outw_t *outw; ia64_mv_outl_t *outl; + ia64_mv_mmiob_t *mmiob; }; #define MACHVEC_INIT(name) \ @@ -170,7 +176,8 @@ platform_inl, \ platform_outb, \ platform_outw, \ - platform_outl \ + platform_outl, \ + platform_mmiob \ } extern struct ia64_machine_vector ia64_mv; @@ -201,6 +208,9 @@ #ifndef platform_setup # define platform_setup 
((ia64_mv_setup_t *) machvec_noop) #endif +#ifndef platform_cpu_init +# define platform_cpu_init ((ia64_mv_cpu_init_t *) machvec_noop) +#endif #ifndef platform_irq_init # define platform_irq_init ((ia64_mv_irq_init_t *) machvec_noop) #endif @@ -281,6 +291,9 @@ #endif #ifndef platform_outl # define platform_outl __ia64_outl +#endif +#ifndef platform_mmiob +# define platform_mmiob __ia64_mmiob #endif #endif /* _ASM_IA64_MACHVEC_H */ diff -Nru a/include/asm-ia64/machvec_sn1.h b/include/asm-ia64/machvec_sn1.h --- a/include/asm-ia64/machvec_sn1.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/machvec_sn1.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,40 @@ +/* + * Copyright (c) 2002 Silicon Graphics, Inc. All Rights Reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. + * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + #ifndef _ASM_IA64_MACHVEC_SN1_h #define _ASM_IA64_MACHVEC_SN1_h extern ia64_mv_setup_t sn1_setup; +extern ia64_mv_cpu_init_t sn_cpu_init; extern ia64_mv_irq_init_t sn1_irq_init; extern ia64_mv_map_nr_t sn1_map_nr; extern ia64_mv_send_ipi_t sn1_send_IPI; @@ -13,6 +46,7 @@ extern ia64_mv_outb_t sn1_outb; extern ia64_mv_outw_t sn1_outw; extern ia64_mv_outl_t sn1_outl; +extern ia64_mv_mmiob_t sn_mmiob; extern ia64_mv_pci_alloc_consistent sn1_pci_alloc_consistent; extern ia64_mv_pci_free_consistent sn1_pci_free_consistent; extern ia64_mv_pci_map_single sn1_pci_map_single; @@ -32,6 +66,7 @@ */ #define platform_name "sn1" #define platform_setup sn1_setup +#define platform_cpu_init sn_cpu_init #define platform_irq_init sn1_irq_init #define platform_map_nr sn1_map_nr #define platform_send_ipi sn1_send_IPI @@ -43,6 +78,7 @@ #define platform_outb sn1_outb #define platform_outw sn1_outw #define platform_outl sn1_outl +#define platform_mmiob sn_mmiob #define platform_pci_dma_init machvec_noop #define platform_pci_alloc_consistent sn1_pci_alloc_consistent #define platform_pci_free_consistent sn1_pci_free_consistent diff -Nru a/include/asm-ia64/machvec_sn2.h b/include/asm-ia64/machvec_sn2.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/machvec_sn2.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,103 @@ +/* + * Copyright (c) 2002 Silicon Graphics, Inc. All Rights Reserved. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. + * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#ifndef _ASM_IA64_MACHVEC_SN2_H +#define _ASM_IA64_MACHVEC_SN2_H + +extern ia64_mv_setup_t sn1_setup; +extern ia64_mv_cpu_init_t sn_cpu_init; +extern ia64_mv_irq_init_t sn1_irq_init; +extern ia64_mv_map_nr_t sn2_map_nr; +extern ia64_mv_send_ipi_t sn2_send_IPI; +extern ia64_mv_global_tlb_purge_t sn2_global_tlb_purge; +extern ia64_mv_irq_desc sn1_irq_desc; +extern ia64_mv_irq_to_vector sn1_irq_to_vector; +extern ia64_mv_local_vector_to_irq sn1_local_vector_to_irq; +extern ia64_mv_valid_irq sn1_valid_irq; +extern ia64_mv_pci_fixup_t sn1_pci_fixup; +#ifdef Colin /* We are using the same Generic IA64 calls defined in io.h */ +extern ia64_mv_inb_t sn1_inb; +extern ia64_mv_inw_t sn1_inw; +extern ia64_mv_inl_t sn1_inl; +extern ia64_mv_outb_t sn1_outb; +extern ia64_mv_outw_t sn1_outw; +extern ia64_mv_outl_t sn1_outl; +#endif +extern ia64_mv_pci_alloc_consistent sn1_pci_alloc_consistent; +extern ia64_mv_pci_free_consistent sn1_pci_free_consistent; +extern ia64_mv_pci_map_single sn1_pci_map_single; +extern ia64_mv_pci_unmap_single sn1_pci_unmap_single; +extern ia64_mv_pci_map_sg sn1_pci_map_sg; +extern ia64_mv_pci_unmap_sg sn1_pci_unmap_sg; +extern ia64_mv_pci_dma_sync_single sn1_pci_dma_sync_single; +extern ia64_mv_pci_dma_sync_sg sn1_pci_dma_sync_sg; +extern ia64_mv_pci_dma_address sn1_dma_address; + +/* + * This stuff has dual use! + * + * For a generic kernel, the macros are used to initialize the + * platform's machvec structure. When compiling a non-generic kernel, + * the macros are used directly. 
+ */ +#define platform_name "sn2" +#define platform_setup sn1_setup +#define platform_cpu_init sn_cpu_init +#define platform_irq_init sn1_irq_init +#define platform_map_nr sn2_map_nr +#define platform_send_ipi sn2_send_IPI +#define platform_global_tlb_purge sn2_global_tlb_purge +#define platform_pci_fixup sn1_pci_fixup +#ifdef Colin /* We are using the same Generic IA64 calls defined in io.h */ +#define platform_inb sn1_inb +#define platform_inw sn1_inw +#define platform_inl sn1_inl +#define platform_outb sn1_outb +#define platform_outw sn1_outw +#define platform_outl sn1_outl +#endif +#define platform_irq_desc sn1_irq_desc +#define platform_irq_to_vector sn1_irq_to_vector +#define platform_local_vector_to_irq sn1_local_vector_to_irq +#define platform_valid_irq sn1_valid_irq +#define platform_pci_dma_init machvec_noop +#define platform_pci_alloc_consistent sn1_pci_alloc_consistent +#define platform_pci_free_consistent sn1_pci_free_consistent +#define platform_pci_map_single sn1_pci_map_single +#define platform_pci_unmap_single sn1_pci_unmap_single +#define platform_pci_map_sg sn1_pci_map_sg +#define platform_pci_unmap_sg sn1_pci_unmap_sg +#define platform_pci_dma_sync_single sn1_pci_dma_sync_single +#define platform_pci_dma_sync_sg sn1_pci_dma_sync_sg +#define platform_pci_dma_address sn1_dma_address + +#endif /* _ASM_IA64_MACHVEC_SN2_H */ diff -Nru a/include/asm-ia64/mca.h b/include/asm-ia64/mca.h --- a/include/asm-ia64/mca.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/mca.h Tue Mar 12 13:58:15 2002 @@ -7,9 +7,6 @@ * Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com) */ -/* XXX use this temporary define for MP systems trying to INIT */ -#undef SAL_MPINIT_WORKAROUND - #ifndef _ASM_IA64_MCA_H #define _ASM_IA64_MCA_H @@ -101,12 +98,19 @@ IA64_MCA_HALT = -3 /* System to be halted by SAL */ }; +enum { + IA64_MCA_SAME_CONTEXT = 0x0, /* SAL to return to same context */ + IA64_MCA_NEW_CONTEXT = -1 /* SAL to return to new context */ +}; + typedef struct ia64_mca_os_to_sal_state_s { u64 imots_os_status; /* OS status to SAL as to what happened * with the MCA handling. */ u64 imots_sal_gp; /* GP of the SAL - physical */ - u64 imots_new_min_state; /* Pointer to structure containing + u64 imots_context; /* 0 if return to same context + 1 if return to new context */ + u64 *imots_new_min_state; /* Pointer to structure containing * new values of registers in the min state * save area. 
*/ @@ -127,12 +131,19 @@ extern void ia64_mca_wakeup_int_handler(int,void *,struct pt_regs *); extern void ia64_mca_cmc_int_handler(int,void *,struct pt_regs *); extern void ia64_mca_cpe_int_handler(int,void *,struct pt_regs *); -extern void ia64_log_print(int,prfunc_t); +extern int ia64_log_print(int,prfunc_t); extern void ia64_mca_cmc_vector_setup(void); extern void ia64_mca_check_errors( void ); extern u64 ia64_log_get(int, prfunc_t); #define PLATFORM_CALL(fn, args) printk("Platform call TBD\n") + +#define platform_mem_dev_err_print ia64_log_prt_oem_data +#define platform_pci_bus_err_print ia64_log_prt_oem_data +#define platform_pci_comp_err_print ia64_log_prt_oem_data +#define platform_plat_specific_err_print ia64_log_prt_oem_data +#define platform_host_ctlr_err_print ia64_log_prt_oem_data +#define platform_plat_bus_err_print ia64_log_prt_oem_data #undef MCA_TEST diff -Nru a/include/asm-ia64/mca_asm.h b/include/asm-ia64/mca_asm.h --- a/include/asm-ia64/mca_asm.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/mca_asm.h Tue Mar 12 13:58:16 2002 @@ -6,6 +6,8 @@ * Copyright (C) Srinivasa Thirumalachar * Copyright (C) 2000 Hewlett-Packard Co. * Copyright (C) 2000 David Mosberger-Tang + * Copyright (C) 2002 Intel Corp. + * Copyright (C) 2002 Jenna Hall */ #ifndef _ASM_IA64_MCA_ASM_H #define _ASM_IA64_MCA_ASM_H @@ -24,7 +26,7 @@ * 1. Lop off bits 61 thru 63 in the virtual address */ #define INST_VA_TO_PA(addr) \ - dep addr = 0, addr, 61, 3; + dep addr = 0, addr, 61, 3 /* * This macro converts a data virtual address to a physical address * Right now for simulation purposes the virtual addresses are @@ -32,7 +34,7 @@ * 1. Lop off bits 61 thru 63 in the virtual address */ #define DATA_VA_TO_PA(addr) \ - dep addr = 0, addr, 61, 3; + dep addr = 0, addr, 61, 3 /* * This macro converts a data physical address to a virtual address * Right now for simulation purposes the virtual addresses are @@ -41,7 +43,7 @@ */ #define DATA_PA_TO_VA(addr,temp) \ mov temp = 0x7 ;; \ - dep addr = temp, addr, 61, 3;; + dep addr = temp, addr, 61, 3 /* * This macro jumps to the instruction at the given virtual address @@ -112,8 +114,8 @@ ;; \ mov cr.iip = temp2; \ mov cr.ifs = r0; \ - DATA_VA_TO_PA(sp) \ - DATA_VA_TO_PA(gp) \ + DATA_VA_TO_PA(sp); \ + DATA_VA_TO_PA(gp); \ ;; \ srlz.i; \ ;; \ @@ -130,8 +132,7 @@ * translations turned on. * 1. Get the old saved psr * - * 2. Clear the interrupt enable and interrupt state collection bits - * in the current psr. + * 2. Clear the interrupt state collection bit in the current psr. * * 3. Set the instruction translation bit back in the old psr * Note we have to do this since we are right now saving only the @@ -140,9 +141,11 @@ * * 4. Set ipsr to this old_psr with "it" bit set and "bn" = 1. * - * 5. Set iip to the virtual address of the next instruction bundle. + * 5. Reset the current thread pointer (r13). * - * 6. Do an rfi to move ipsr to psr and iip to ip. + * 6. Set iip to the virtual address of the next instruction bundle. + * + * 7. Do an rfi to move ipsr to psr and iip to ip. 
*/ #define VIRTUAL_MODE_ENTER(temp1, temp2, start_addr, old_psr) \ @@ -156,6 +159,10 @@ mov ar.rsc = 0; \ ;; \ srlz.d; \ + mov r13 = ar.k6; \ + ;; \ + DATA_PA_TO_VA(r13,temp1); \ + ;; \ mov temp2 = ar.bspstore; \ ;; \ DATA_PA_TO_VA(temp2,temp1); \ @@ -170,8 +177,6 @@ ;; \ mov temp2 = 1; \ ;; \ - dep temp1 = temp2, temp1, PSR_I, 1; \ - ;; \ dep temp1 = temp2, temp1, PSR_IC, 1; \ ;; \ dep temp1 = temp2, temp1, PSR_IT, 1; \ @@ -195,7 +200,7 @@ nop 1; \ nop 2; \ nop 1; \ - rfi; \ + rfi \ ;; /* diff -Nru a/include/asm-ia64/mman.h b/include/asm-ia64/mman.h --- a/include/asm-ia64/mman.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/mman.h Tue Mar 12 13:58:15 2002 @@ -2,8 +2,8 @@ #define _ASM_IA64_MMAN_H /* - * Copyright (C) 1998-2000 Hewlett-Packard Co - * Copyright (C) 1998-2000 David Mosberger-Tang + * Copyright (C) 1998-2000, 2002 Hewlett-Packard Co + * David Mosberger-Tang */ #define PROT_READ 0x1 /* page can be read */ @@ -23,8 +23,6 @@ #define MAP_EXECUTABLE 0x1000 /* mark it as an executable */ #define MAP_LOCKED 0x2000 /* pages are locked */ #define MAP_NORESERVE 0x4000 /* don't check for reservations */ -#define MAP_WRITECOMBINED 0x10000 /* write-combine the area */ -#define MAP_NONCACHED 0x20000 /* don't cache the memory */ #define MS_ASYNC 1 /* sync memory asynchronously */ #define MS_INVALIDATE 2 /* invalidate the caches */ diff -Nru a/include/asm-ia64/offsets.h b/include/asm-ia64/offsets.h --- a/include/asm-ia64/offsets.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/offsets.h Tue Mar 12 13:58:15 2002 @@ -7,8 +7,9 @@ * */ #define PT_PTRACED_BIT 0 -#define PT_TRACESYS_BIT 1 -#define IA64_TASK_SIZE 3408 /* 0xd50 */ +#define PT_SYSCALLTRACE_BIT 1 +#define IA64_TASK_SIZE 3936 /* 0xf60 */ +#define IA64_THREAD_INFO_SIZE 24 /* 0x18 */ #define IA64_PT_REGS_SIZE 400 /* 0x190 */ #define IA64_SWITCH_STACK_SIZE 560 /* 0x230 */ #define IA64_SIGINFO_SIZE 128 /* 0x80 */ @@ -16,15 +17,12 @@ #define SIGFRAME_SIZE 2816 /* 0xb00 */ #define UNW_FRAME_INFO_SIZE 448 /* 0x1c0 */ -#define IA64_TASK_PTRACE_OFFSET 48 /* 0x30 */ -#define IA64_TASK_SIGPENDING_OFFSET 16 /* 0x10 */ -#define IA64_TASK_NEED_RESCHED_OFFSET 40 /* 0x28 */ -#define IA64_TASK_PROCESSOR_OFFSET 100 /* 0x64 */ -#define IA64_TASK_THREAD_OFFSET 976 /* 0x3d0 */ -#define IA64_TASK_THREAD_KSP_OFFSET 976 /* 0x3d0 */ -#define IA64_TASK_PFM_MUST_BLOCK_OFFSET 1600 /* 0x640 */ -#define IA64_TASK_PID_OFFSET 220 /* 0xdc */ -#define IA64_TASK_MM_OFFSET 88 /* 0x58 */ +#define IA64_TASK_PTRACE_OFFSET 32 /* 0x20 */ +#define IA64_TASK_THREAD_OFFSET 1472 /* 0x5c0 */ +#define IA64_TASK_THREAD_KSP_OFFSET 1480 /* 0x5c8 */ +#define IA64_TASK_PFM_OVFL_BLOCK_RESET_OFFSET 2096 /* 0x830 */ +#define IA64_TASK_PID_OFFSET 212 /* 0xd4 */ +#define IA64_TASK_MM_OFFSET 136 /* 0x88 */ #define IA64_PT_REGS_CR_IPSR_OFFSET 0 /* 0x0 */ #define IA64_PT_REGS_CR_IIP_OFFSET 8 /* 0x8 */ #define IA64_PT_REGS_CR_IFS_OFFSET 16 /* 0x10 */ diff -Nru a/include/asm-ia64/page.h b/include/asm-ia64/page.h --- a/include/asm-ia64/page.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/page.h Tue Mar 12 13:58:14 2002 @@ -3,8 +3,8 @@ /* * Pagetable related stuff. * - * Copyright (C) 1998, 1999 Hewlett-Packard Co - * Copyright (C) 1998, 1999 David Mosberger-Tang + * Copyright (C) 1998, 1999, 2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include @@ -39,6 +39,22 @@ extern void clear_page (void *page); extern void copy_page (void *to, void *from); + +/* + * clear_user_page() and copy_user_page() can't be inline functions because + * flush_dcache_page() can't be defined until later... 
+ */ +#define clear_user_page(addr, vaddr, page) \ +do { \ + clear_page(addr); \ + flush_dcache_page(page); \ +} while (0) + +#define copy_user_page(to, from, vaddr, page) \ +do { \ + copy_page((to), (from)); \ + flush_dcache_page(page); \ +} while (0) /* * Note: the MAP_NR_*() macro can't use __pa() because MAP_NR_*(X) MUST diff -Nru a/include/asm-ia64/pal.h b/include/asm-ia64/pal.h --- a/include/asm-ia64/pal.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/pal.h Tue Mar 12 13:58:14 2002 @@ -88,10 +88,10 @@ typedef s64 pal_status_t; #define PAL_STATUS_SUCCESS 0 /* No error */ -#define PAL_STATUS_UNIMPLEMENTED -1 /* Unimplemented procedure */ -#define PAL_STATUS_EINVAL -2 /* Invalid argument */ -#define PAL_STATUS_ERROR -3 /* Error */ -#define PAL_STATUS_CACHE_INIT_FAIL -4 /* Could not initialize the +#define PAL_STATUS_UNIMPLEMENTED (-1) /* Unimplemented procedure */ +#define PAL_STATUS_EINVAL (-2) /* Invalid argument */ +#define PAL_STATUS_ERROR (-3) /* Error */ +#define PAL_STATUS_CACHE_INIT_FAIL (-4) /* Could not initialize the * specified level and type of * cache without sideeffects * and "restrict" was 1 diff -Nru a/include/asm-ia64/pci.h b/include/asm-ia64/pci.h --- a/include/asm-ia64/pci.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/pci.h Tue Mar 12 13:58:16 2002 @@ -1,10 +1,11 @@ #ifndef _ASM_IA64_PCI_H #define _ASM_IA64_PCI_H +#include #include +#include #include #include -#include #include #include @@ -21,6 +22,13 @@ struct pci_dev; +/* + * The PCI address space does equal the physical memory address space. + * The networking and block device layers use this boolean for bounce + * buffer decisions. + */ +#define PCI_DMA_BUS_IS_PHYS (1) + static inline void pcibios_set_master (struct pci_dev *dev) { @@ -79,7 +87,7 @@ /* The ia64 platform always supports 64-bit addressing. 
*/ #define pci_dac_dma_supported(pci_dev, mask) (1) -#define pci_dac_page_to_dma(dev,pg,off,dir) ((dma64_addr_t) page_to_bus(pg) + (off)) +#define pci_dac_page_to_dma(dev,pg,off,dir) ((dma_addr_t) page_to_bus(pg) + (off)) #define pci_dac_dma_to_page(dev,dma_addr) (virt_to_page(bus_to_virt(dma_addr))) #define pci_dac_dma_to_offset(dev,dma_addr) ((dma_addr) & ~PAGE_MASK) #define pci_dac_dma_sync_single(dev,dma_addr,len,dir) do { /* nothing */ } while (0) diff -Nru a/include/asm-ia64/perfmon.h b/include/asm-ia64/perfmon.h --- a/include/asm-ia64/perfmon.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/perfmon.h Tue Mar 12 13:58:15 2002 @@ -1,55 +1,177 @@ /* - * Copyright (C) 2001 Hewlett-Packard Co - * Copyright (C) 2001 Stephane Eranian + * Copyright (C) 2001-2002 Hewlett-Packard Co + * Stephane Eranian */ #ifndef _ASM_IA64_PERFMON_H #define _ASM_IA64_PERFMON_H -#include +/* + * perfmon commands supported on all CPU models + */ +#define PFM_WRITE_PMCS 0x01 +#define PFM_WRITE_PMDS 0x02 +#define PFM_READ_PMDS 0x03 +#define PFM_STOP 0x04 +#define PFM_START 0x05 +#define PFM_ENABLE 0x06 +#define PFM_DISABLE 0x07 +#define PFM_CREATE_CONTEXT 0x08 +#define PFM_DESTROY_CONTEXT 0x09 +#define PFM_RESTART 0x0a +#define PFM_PROTECT_CONTEXT 0x0b +#define PFM_GET_FEATURES 0x0c +#define PFM_DEBUG 0x0d +#define PFM_UNPROTECT_CONTEXT 0x0e + + +/* + * CPU model specific commands (may not be supported on all models) + */ +#define PFM_WRITE_IBRS 0x20 +#define PFM_WRITE_DBRS 0x21 + +/* + * context flags + */ +#define PFM_FL_INHERIT_NONE 0x00 /* never inherit a context across fork (default) */ +#define PFM_FL_INHERIT_ONCE 0x01 /* clone pfm_context only once across fork() */ +#define PFM_FL_INHERIT_ALL 0x02 /* always clone pfm_context across fork() */ +#define PFM_FL_NOTIFY_BLOCK 0x04 /* block task on user level notifications */ +#define PFM_FL_SYSTEM_WIDE 0x08 /* create a system wide context */ + +/* + * PMC flags + */ +#define PFM_REGFL_OVFL_NOTIFY 0x1 /* send notification on overflow */ + +/* + * PMD/PMC/IBR/DBR return flags (ignored on input) + * + * Those flags are used on output and must be checked in case EAGAIN is returned + * by any of the calls using a pfarg_reg_t or pfarg_dbreg_t structure. 
+ */ +#define PFM_REG_RETFL_NOTAVAIL (1U<<31) /* set if register is implemented but not available */ +#define PFM_REG_RETFL_EINVAL (1U<<30) /* set if register entry is invalid */ +#define PFM_REG_RETFL_MASK (PFM_REG_RETFL_NOTAVAIL|PFM_REG_RETFL_EINVAL) + +#define PFM_REG_HAS_ERROR(flag) (((flag) & PFM_REG_RETFL_MASK) != 0) /* * Request structure used to define a context */ typedef struct { - unsigned long smpl_entries; /* how many entries in sampling buffer */ - unsigned long smpl_regs; /* which pmds to record on overflow */ - void *smpl_vaddr; /* returns address of BTB buffer */ + unsigned long ctx_smpl_entries; /* how many entries in sampling buffer */ + unsigned long ctx_smpl_regs[4]; /* which pmds to record on overflow */ - pid_t notify_pid; /* which process to notify on overflow */ - int notify_sig; /* XXX: not used anymore */ + pid_t ctx_notify_pid; /* which process to notify on overflow */ + int ctx_flags; /* noblock/block, inherit flags */ + void *ctx_smpl_vaddr; /* returns address of BTB buffer */ - int flags; /* NOBLOCK/BLOCK/ INHERIT flags (will replace API flags) */ -} pfreq_context_t; + unsigned long ctx_cpu_mask; /* on which CPU to enable perfmon (systemwide) */ + + unsigned long reserved[8]; /* for future use */ +} pfarg_context_t; /* * Request structure used to write/read a PMC or PMD */ typedef struct { - unsigned long reg_num; /* which register */ + unsigned int reg_num; /* which register */ + unsigned int reg_flags; /* PMC: notify/don't notify. PMD/PMC: return flags */ unsigned long reg_value; /* configuration (PMC) or initial value (PMD) */ - unsigned long reg_smpl_reset; /* reset of sampling buffer overflow (large) */ - unsigned long reg_ovfl_reset; /* reset on counter overflow (small) */ - int reg_flags; /* (PMD): notify/don't notify */ -} pfreq_reg_t; + + unsigned long reg_long_reset; /* reset after sampling buffer overflow (large) */ + unsigned long reg_short_reset;/* reset after counter overflow (small) */ + + unsigned long reg_reset_pmds[4]; /* which other counters to reset on overflow */ + + unsigned long reserved[16]; /* for future use */ +} pfarg_reg_t; + +typedef struct { + unsigned int dbreg_num; /* which register */ + unsigned int dbreg_flags; /* dbregs return flags */ + unsigned long dbreg_value; /* configuration (PMC) or initial value (PMD) */ + unsigned long reserved[6]; +} pfarg_dbreg_t; + +typedef struct { + unsigned int ft_version; /* perfmon: major [16-31], minor [0-15] */ + unsigned int ft_smpl_version;/* sampling format: major [16-31], minor [0-15] */ + unsigned long reserved[4]; /* for future use */ +} pfarg_features_t; /* - * main request structure passed by user - */ -typedef union { - pfreq_context_t pfr_ctx; /* request to configure a context */ - pfreq_reg_t pfr_reg; /* request to configure a PMD/PMC */ -} perfmon_req_t; + * This header is at the beginning of the sampling buffer returned to the user. + * It is exported as Read-Only at this point. It is directly followed by the + * first record. + */ +typedef struct { + unsigned int hdr_version; /* contains perfmon version (smpl format diffs) */ + unsigned int reserved; + unsigned long hdr_entry_size; /* size of one entry in bytes */ + unsigned long hdr_count; /* how many valid entries */ + unsigned long hdr_pmds[4]; /* which pmds are recorded */ +} perfmon_smpl_hdr_t; + +/* + * Define the version numbers for both perfmon as a whole and the sampling buffer format. 
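+ * As a worked example: with the 1.0 values defined just below, PFM_VERSION evaluates to
+ * 0x00010000, so PFM_VERSION_MAJOR(0x00010000) yields 1 and PFM_VERSION_MINOR(0x00010000)
+ * yields 0.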
+ */ +#define PFM_VERSION_MAJ 1U +#define PFM_VERSION_MIN 0U +#define PFM_VERSION (((PFM_VERSION_MAJ&0xffff)<<16)|(PFM_VERSION_MIN & 0xffff)) + +#define PFM_SMPL_VERSION_MAJ 1U +#define PFM_SMPL_VERSION_MIN 0U +#define PFM_SMPL_VERSION (((PFM_SMPL_VERSION_MAJ&0xffff)<<16)|(PFM_SMPL_VERSION_MIN & 0xffff)) + + +#define PFM_VERSION_MAJOR(x) (((x)>>16) & 0xffff) +#define PFM_VERSION_MINOR(x) ((x) & 0xffff) + +/* + * Entry header in the sampling buffer. + * The header is directly followed by the PMDs saved in increasing index + * order: PMD4, PMD5, .... How many PMDs are present is determined by the + * user program during context creation. + * + * XXX: in this version of the entry, only up to 64 registers can be recorded. + * This should be enough for quite some time. Always check sampling format + * before parsing entries! + * + * In the case where multiple counters have overflowed at the same time, the + * rate field indicates the initial value of the first PMD, based on the index. + * For instance, if PMD2 and PMD5 have overflowed for this entry, the rate field + * will show the initial value of PMD2. + */ +typedef struct { + int pid; /* identification of process */ + int cpu; /* which cpu was used */ + unsigned long rate; /* initial value of overflowed counter */ + unsigned long stamp; /* timestamp */ + unsigned long ip; /* where the overflow interrupt happened */ + unsigned long regs; /* bitmask of which registers overflowed */ + unsigned long period; /* sampling period used by overflowed counter (smallest pmd index) */ +} perfmon_smpl_entry_t; + +extern int perfmonctl(pid_t pid, int cmd, void *arg, int narg); #ifdef __KERNEL__ extern void pfm_save_regs (struct task_struct *); extern void pfm_load_regs (struct task_struct *); -extern int pfm_inherit (struct task_struct *, struct pt_regs *); +extern int pfm_inherit (struct task_struct *, struct pt_regs *); extern void pfm_context_exit (struct task_struct *); extern void pfm_flush_regs (struct task_struct *); extern void pfm_cleanup_notifiers (struct task_struct *); +extern void pfm_cleanup_owners (struct task_struct *); +extern int pfm_use_debug_registers(struct task_struct *); +extern int pfm_release_debug_registers(struct task_struct *); +extern int pfm_cleanup_smpl_buf(struct task_struct *); +extern void pfm_syst_wide_update_task(struct task_struct *, int); +extern void pfm_ovfl_block_reset (void); #endif /* __KERNEL__ */ diff -Nru a/include/asm-ia64/pgalloc.h b/include/asm-ia64/pgalloc.h --- a/include/asm-ia64/pgalloc.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/pgalloc.h Tue Mar 12 13:58:14 2002 @@ -30,7 +30,6 @@ */ #define pgd_quicklist (local_cpu_data->pgd_quick) #define pmd_quicklist (local_cpu_data->pmd_quick) -#define pte_quicklist (local_cpu_data->pte_quick) #define pgtable_cache_size (local_cpu_data->pgtable_cache_sz) static inline pgd_t* @@ -108,27 +107,29 @@ } static inline void -pmd_populate (struct mm_struct *mm, pmd_t *pmd_entry, pte_t *pte) +pmd_populate (struct mm_struct *mm, pmd_t *pmd_entry, struct page *pte) +{ + pmd_val(*pmd_entry) = page_to_phys(pte); +} + +static inline void +pmd_populate_kernel (struct mm_struct *mm, pmd_t *pmd_entry, pte_t *pte) { pmd_val(*pmd_entry) = __pa(pte); } -static inline pte_t* -pte_alloc_one_fast (struct mm_struct *mm, unsigned long addr) +static inline struct page * +pte_alloc_one (struct mm_struct *mm, unsigned long addr) { - unsigned long *ret = (unsigned long *)pte_quicklist; + struct page *pte = alloc_pages(GFP_KERNEL, 0); - if (__builtin_expect(ret != NULL, 1)) { - 
pte_quicklist = (unsigned long *)(*ret); - ret[0] = 0; - --pgtable_cache_size; - } - return (pte_t *)ret; + if (__builtin_expect(pte != NULL, 1)) + clear_page(page_address(pte)); + return pte; } - -static inline pte_t* -pte_alloc_one (struct mm_struct *mm, unsigned long addr) +static inline pte_t * +pte_alloc_one_kernel (struct mm_struct *mm, unsigned long addr) { pte_t *pte = (pte_t *) __get_free_page(GFP_KERNEL); @@ -138,16 +139,45 @@ } static inline void -pte_free (pte_t *pte) +pte_free (struct page *pte) { - *(unsigned long *)pte = (unsigned long) pte_quicklist; - pte_quicklist = (unsigned long *) pte; - ++pgtable_cache_size; + __free_page(pte); +} + +static inline void +pte_free_kernel (pte_t *pte) +{ + free_page((unsigned long) pte); } extern int do_check_pgt_cache (int, int); /* + * IA-64 doesn't have any external MMU info: the page tables contain all the necessary + * information. However, we use this macro to take care of any (delayed) i-cache flushing + * that may be necessary. + */ +static inline void +update_mmu_cache (struct vm_area_struct *vma, unsigned long vaddr, pte_t pte) +{ + unsigned long addr; + struct page *page; + + if (!pte_exec(pte)) + return; /* not an executable page... */ + + page = pte_page(pte); + /* don't use VADDR: it may not be mapped on this CPU (or may have just been flushed): */ + addr = (unsigned long) page_address(page); + + if (test_bit(PG_arch_1, &page->flags)) + return; /* i-cache is already coherent with d-cache */ + + flush_icache_range(addr, addr + PAGE_SIZE); + set_bit(PG_arch_1, &page->flags); /* mark page as clean */ +} + +/* * Now for some TLB flushing routines. This is the kind of stuff that * can be very expensive, so try to avoid them whenever possible. */ @@ -210,65 +240,6 @@ printk("flush_tlb_pgtables: can't flush across regions!!\n"); vma.vm_mm = mm; flush_tlb_range(&vma, ia64_thash(start), ia64_thash(end)); -} - -/* - * Now for some cache flushing routines. This is the kind of stuff - * that can be very expensive, so try to avoid them whenever possible. - */ - -/* Caches aren't brain-dead on the IA-64. */ -#define flush_cache_all() do { } while (0) -#define flush_cache_mm(mm) do { } while (0) -#define flush_cache_range(vma, start, end) do { } while (0) -#define flush_cache_page(vma, vmaddr) do { } while (0) -#define flush_page_to_ram(page) do { } while (0) - -extern void flush_icache_range (unsigned long start, unsigned long end); - -static inline void -flush_dcache_page (struct page *page) -{ - clear_bit(PG_arch_1, &page->flags); -} - -static inline void -clear_user_page (void *addr, unsigned long vaddr, struct page *page) -{ - clear_page(addr); - flush_dcache_page(page); -} - -static inline void -copy_user_page (void *to, void *from, unsigned long vaddr, struct page *page) -{ - copy_page(to, from); - flush_dcache_page(page); -} - -/* - * IA-64 doesn't have any external MMU info: the page tables contain all the necessary - * information. However, we use this macro to take care of any (delayed) i-cache flushing - * that may be necessary. - */ -static inline void -update_mmu_cache (struct vm_area_struct *vma, unsigned long vaddr, pte_t pte) -{ - unsigned long addr; - struct page *page; - - if (!pte_exec(pte)) - return; /* not an executable page... 
*/ - - page = pte_page(pte); - /* don't use VADDR: it may not be mapped on this CPU (or may have just been flushed): */ - addr = (unsigned long) page_address(page); - - if (test_bit(PG_arch_1, &page->flags)) - return; /* i-cache is already coherent with d-cache */ - - flush_icache_range(addr, addr + PAGE_SIZE); - set_bit(PG_arch_1, &page->flags); /* mark page as clean */ } #endif /* _ASM_IA64_PGALLOC_H */ diff -Nru a/include/asm-ia64/pgtable.h b/include/asm-ia64/pgtable.h --- a/include/asm-ia64/pgtable.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/pgtable.h Tue Mar 12 13:58:15 2002 @@ -8,7 +8,7 @@ * This hopefully works with any (fixed) IA-64 page-size, as defined * in (currently 8192). * - * Copyright (C) 1998-2001 Hewlett-Packard Co + * Copyright (C) 1998-2002 Hewlett-Packard Co * David Mosberger-Tang */ @@ -108,19 +108,15 @@ /* * All the normal masks have the "page accessed" bits on, as any time * they are used, the page is accessed. They are cleared only by the - * page-out routines. On the other hand, we do NOT turn on the - * execute bit on pages that are mapped writable. For those pages, we - * turn on the X bit only when the program attempts to actually - * execute code in such a page (it's a "lazy execute bit", if you - * will). This lets reduce the amount of i-cache flushing we have to - * do for data pages such as stack and heap pages. + * page-out routines. */ #define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_A) #define PAGE_SHARED __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RW) #define PAGE_READONLY __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_R) -#define PAGE_COPY __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_R) +#define PAGE_COPY __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX) #define PAGE_GATE __pgprot(__ACCESS_BITS | _PAGE_PL_0 | _PAGE_AR_X_RX) #define PAGE_KERNEL __pgprot(__DIRTY_BITS | _PAGE_PL_0 | _PAGE_AR_RWX) +#define PAGE_KERNELRX __pgprot(__ACCESS_BITS | _PAGE_PL_0 | _PAGE_AR_RX) # ifndef __ASSEMBLY__ @@ -152,8 +148,8 @@ #define __S011 PAGE_SHARED #define __S100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX) #define __S101 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX) -#define __S110 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RW) -#define __S111 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RW) +#define __S110 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX) +#define __S111 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX) #define pgd_ERROR(e) printk("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e)) #define pmd_ERROR(e) printk("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e)) @@ -161,8 +157,7 @@ /* - * Some definitions to translate between mem_map, PTEs, and page - * addresses: + * Some definitions to translate between mem_map, PTEs, and page addresses: */ @@ -173,6 +168,7 @@ return (addr & (local_cpu_data->unimpl_pa_mask)) == 0; } +#ifndef CONFIG_DISCONTIGMEM /* * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel * memory. For the return value to be meaningful, ADDR must be >= @@ -188,6 +184,8 @@ */ #define kern_addr_valid(addr) (1) +#endif + /* * Now come the defines and routines to manage and access the three-level * page table. @@ -211,12 +209,12 @@ * Conversion functions: convert a page and protection to a page entry, * and a page entry and page directory to the page they refer to. 
*/ -#define mk_pte(page,pgprot) \ -({ \ - pte_t __pte; \ - \ - pte_val(__pte) = ((page - mem_map) << PAGE_SHIFT) | pgprot_val(pgprot); \ - __pte; \ +#define mk_pte(page,pgprot) \ +({ \ + pte_t __pte; \ + \ + pte_val(__pte) = (page_to_phys(page)) | pgprot_val(pgprot); \ + __pte; \ }) /* This takes a physical page address that is used by the remapping functions */ @@ -232,14 +230,17 @@ #define pte_none(pte) (!pte_val(pte)) #define pte_present(pte) (pte_val(pte) & (_PAGE_P | _PAGE_PROTNONE)) #define pte_clear(pte) (pte_val(*(pte)) = 0UL) +#ifndef CONFIG_DISCONTIGMEM /* pte_page() returns the "struct page *" corresponding to the PTE: */ -#define pte_page(pte) (mem_map + (unsigned long) ((pte_val(pte) & _PFN_MASK) >> PAGE_SHIFT)) +#define pte_page(pte) virt_to_page(((pte_val(pte) & _PFN_MASK) + PAGE_OFFSET)) +#endif #define pmd_none(pmd) (!pmd_val(pmd)) #define pmd_bad(pmd) (!ia64_phys_addr_valid(pmd_val(pmd))) #define pmd_present(pmd) (pmd_val(pmd) != 0UL) #define pmd_clear(pmdp) (pmd_val(*(pmdp)) = 0UL) -#define pmd_page(pmd) ((unsigned long) __va(pmd_val(pmd) & _PFN_MASK)) +#define pmd_page_kernel(pmd) ((unsigned long) __va(pmd_val(pmd) & _PFN_MASK)) +#define pmd_page(pmd) virt_to_page((pmd_val(pmd) + PAGE_OFFSET)) #define pgd_none(pgd) (!pgd_val(pgd)) #define pgd_bad(pgd) (!ia64_phys_addr_valid(pgd_val(pgd))) @@ -339,9 +340,16 @@ #define pmd_offset(dir,addr) \ ((pmd_t *) pgd_page(*(dir)) + (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))) -/* Find an entry in the third-level page table.. */ -#define pte_offset(dir,addr) \ - ((pte_t *) pmd_page(*(dir)) + (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))) +/* + * Find an entry in the third-level page table. This looks more complicated than it + * should be because some platforms place page tables in high memory. + */ +#define __pte_offset(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) +#define pte_offset_kernel(dir,addr) ((pte_t *) pmd_page_kernel(*(dir)) + __pte_offset(addr)) +#define pte_offset_map(dir,addr) pte_offset_kernel(dir, addr) +#define pte_offset_map_nested(dir,addr) pte_offset_map(dir, addr) +#define pte_unmap(pte) do { } while (0) +#define pte_unmap_nested(pte) do { } while (0) /* atomic versions of the some PTE manipulations: */ @@ -418,22 +426,6 @@ return pte_val(a) == pte_val(b); } -/* - * Macros to check the type of access that triggered a page fault. - */ - -static inline int -is_write_access (int access_type) -{ - return (access_type & 0x2); -} - -static inline int -is_exec_access (int access_type) -{ - return (access_type & 0x4); -} - extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; extern void paging_init (void); @@ -447,6 +439,33 @@ #define PageSkip(page) (0) #define io_remap_page_range remap_page_range /* XXX is this right? */ + + +/* + * Now for some cache flushing routines. This is the kind of stuff that can be very + * expensive, so try to avoid them whenever possible. + */ + +/* Caches aren't brain-dead on the IA-64. 
*/ +#define flush_cache_all() do { } while (0) +#define flush_cache_mm(mm) do { } while (0) +#define flush_cache_range(vma, start, end) do { } while (0) +#define flush_cache_page(vma, vmaddr) do { } while (0) +#define flush_page_to_ram(page) do { } while (0) +#define flush_icache_page(vma,page) do { } while (0) + +#define flush_dcache_page(page) \ +do { \ + clear_bit(PG_arch_1, &page->flags); \ +} while (0) + +extern void flush_icache_range (unsigned long start, unsigned long end); + +#define flush_icache_user_range(vma, page, user_addr, len) \ +do { \ + unsigned long _addr = page_address(page) + ((user_addr) & ~PAGE_MASK); \ + flush_icache_range(_addr, _addr + (len)); \ +} while (0) /* * ZERO_PAGE is a global shared page that is always zero: used diff -Nru a/include/asm-ia64/processor.h b/include/asm-ia64/processor.h --- a/include/asm-ia64/processor.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/processor.h Tue Mar 12 13:58:15 2002 @@ -2,9 +2,9 @@ #define _ASM_IA64_PROCESSOR_H /* - * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang - * Copyright (C) 1998-2001 Stephane Eranian + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang + * Stephane Eranian * Copyright (C) 1999 Asit Mallick * Copyright (C) 1999 Don Dugger * @@ -27,7 +27,6 @@ */ #define IA64_NUM_PMC_REGS 32 #define IA64_NUM_PMD_REGS 32 -#define IA64_NUM_PMD_COUNTERS 4 #define DEFAULT_MAP_BASE 0x2000000000000000 #define DEFAULT_TASK_SIZE 0xa000000000000000 @@ -170,6 +169,7 @@ #define IA64_THREAD_KRBS_SYNCED (__IA64_UL(1) << 5) /* krbs synced with process vm? */ #define IA64_THREAD_FPEMU_NOPRINT (__IA64_UL(1) << 6) /* don't log any fpswa faults */ #define IA64_THREAD_FPEMU_SIGFPE (__IA64_UL(1) << 7) /* send a SIGFPE for fpswa faults */ +#define IA64_THREAD_XSTACK (__IA64_UL(1) << 8) /* stack executable by default? */ #define IA64_THREAD_UAC_SHIFT 3 #define IA64_THREAD_UAC_MASK (IA64_THREAD_UAC_NOPRINT | IA64_THREAD_UAC_SIGBUS) @@ -187,6 +187,7 @@ #ifndef __ASSEMBLY__ #include +#include #include #include @@ -253,7 +254,6 @@ __u64 itm_next; /* interval timer mask value to use for next clock tick */ __u64 *pgd_quick; __u64 *pmd_quick; - __u64 *pte_quick; __u64 pgtable_cache_sz; /* CPUID-derived information: */ __u64 ppn; @@ -275,15 +275,23 @@ __u32 ptce_stride[2]; struct task_struct *ksoftirqd; /* kernel softirq daemon for this CPU */ #ifdef CONFIG_SMP + int cpu; __u64 loops_per_jiffy; __u64 ipi_count; __u64 prof_counter; __u64 prof_multiplier; - __u64 ipi_operation; + __u32 pfm_syst_wide; + __u32 pfm_dcr_pp; + /* this is written to by *other* CPUs: */ + __u64 ipi_operation ____cacheline_aligned; #endif #ifdef CONFIG_NUMA + void *node_directory; + int numa_node_id; struct cpuinfo_ia64 *cpu_data[NR_CPUS]; #endif + /* Platform specific word. MUST BE LAST IN STRUCT */ + __u64 platform_specific; } __attribute__ ((aligned (PAGE_SIZE))) ; /* @@ -303,7 +311,8 @@ * the array. 
*/ #ifdef CONFIG_NUMA -# define cpu_data(cpu) local_cpu_data->cpu_data_ptrs[cpu] +# define cpu_data(cpu) local_cpu_data->cpu_data[cpu] +# define numa_node_id() (local_cpu_data->numa_node_id) #else extern struct cpuinfo_ia64 _cpu_data[NR_CPUS]; # define cpu_data(cpu) (&_cpu_data[cpu]) @@ -343,8 +352,8 @@ struct siginfo; struct thread_struct { + __u64 flags; /* various thread flags (see IA64_THREAD_*) */ __u64 ksp; /* kernel stack pointer */ - unsigned long flags; /* various flags */ __u64 map_base; /* base address for get_unmapped_area() */ __u64 task_size; /* limit for task size */ struct siginfo *siginfo; /* current siginfo struct for ptrace() */ @@ -366,10 +375,12 @@ #ifdef CONFIG_PERFMON __u64 pmc[IA64_NUM_PMC_REGS]; __u64 pmd[IA64_NUM_PMD_REGS]; - unsigned long pfm_must_block; /* non-zero if we need to block on overflow */ + unsigned long pfm_ovfl_block_reset;/* non-zero if we need to block or reset regs on ovfl */ void *pfm_context; /* pointer to detailed PMU context */ - atomic_t pfm_notifiers_check; /* indicate if release_thread much check tasklist */ -# define INIT_THREAD_PM {0, }, {0, }, 0, 0, {0}, + atomic_t pfm_notifiers_check; /* when >0, will cleanup ctx_notify_task in tasklist */ + atomic_t pfm_owners_check; /* when >0, will cleanup ctx_owner in tasklist */ + void *pfm_smpl_buf_list; /* list of sampling buffers to vfree */ +# define INIT_THREAD_PM {0, }, {0, }, 0, NULL, {0}, {0}, NULL, #else # define INIT_THREAD_PM #endif @@ -378,17 +389,17 @@ struct ia64_fpreg fph[96]; /* saved/loaded on demand */ }; -#define INIT_THREAD { \ - 0, /* ksp */ \ - 0, /* flags */ \ - DEFAULT_MAP_BASE, /* map_base */ \ - DEFAULT_TASK_SIZE, /* task_size */ \ - 0, /* siginfo */ \ - INIT_THREAD_IA32 \ - INIT_THREAD_PM \ - {0, }, /* dbr */ \ - {0, }, /* ibr */ \ - {{{{0}}}, } /* fph */ \ +#define INIT_THREAD { \ + flags: 0, \ + ksp: 0, \ + map_base: DEFAULT_MAP_BASE, \ + task_size: DEFAULT_TASK_SIZE, \ + siginfo: 0, \ + INIT_THREAD_IA32 \ + INIT_THREAD_PM \ + dbr: {0, }, \ + ibr: {0, }, \ + fph: {{{{0}}}, } \ } #define start_thread(regs,new_ip,new_sp) do { \ @@ -398,6 +409,7 @@ ia64_psr(regs)->cpl = 3; /* set user mode */ \ ia64_psr(regs)->ri = 0; /* clear return slot number */ \ ia64_psr(regs)->is = 0; /* IA-64 instruction set */ \ + ia64_psr(regs)->sp = 1; /* enforce secure perfmon */ \ regs->cr_iip = new_ip; \ regs->ar_rsc = 0xf; /* eager mode, privilege level 3 */ \ regs->ar_rnat = 0; \ @@ -542,11 +554,6 @@ extern void ia32_load_state (struct task_struct *task); #endif -#ifdef CONFIG_PERFMON -extern void ia64_save_pm_regs (struct task_struct *task); -extern void ia64_load_pm_regs (struct task_struct *task); -#endif - #define ia64_fph_enable() asm volatile (";; rsm psr.dfh;; srlz.d;;" ::: "memory"); #define ia64_fph_disable() asm volatile (";; ssm psr.dfh;; srlz.d;;" ::: "memory"); @@ -808,15 +815,12 @@ * Note that the only way T can block is through a call to schedule() -> switch_to(). */ static inline unsigned long -thread_saved_pc (struct thread_struct *t) +thread_saved_pc (struct task_struct *t) { struct unw_frame_info info; unsigned long ip; - /* XXX ouch: Linus, please pass the task pointer to thread_saved_pc() instead! 
*/ - struct task_struct *p = (void *) ((unsigned long) t - IA64_TASK_THREAD_OFFSET); - - unw_init_from_blocked_task(&info, p); + unw_init_from_blocked_task(&info, t); if (unw_unwind(&info) < 0) return 0; unw_get_ip(&info, &ip); @@ -828,16 +832,6 @@ */ #define current_text_addr() \ ({ void *_pc; asm volatile ("mov %0=ip" : "=r" (_pc)); _pc; }) - -#define THREAD_SIZE IA64_STK_OFFSET -/* NOTE: The task struct and the stacks are allocated together. */ -#define alloc_task_struct() \ - ((struct task_struct *) __get_free_pages(GFP_KERNEL, IA64_TASK_STRUCT_LOG_NUM_PAGES)) -#define free_task_struct(p) free_pages((unsigned long)(p), IA64_TASK_STRUCT_LOG_NUM_PAGES) -#define get_task_struct(tsk) atomic_inc(&virt_to_page(tsk)->count) - -#define init_task (init_task_union.task) -#define init_stack (init_task_union.stack) /* * Set the correctable machine check vector register diff -Nru a/include/asm-ia64/ptrace.h b/include/asm-ia64/ptrace.h --- a/include/asm-ia64/ptrace.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/ptrace.h Tue Mar 12 13:58:15 2002 @@ -2,9 +2,9 @@ #define _ASM_IA64_PTRACE_H /* - * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang - * Copyright (C) 1998, 1999 Stephane Eranian + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang + * Stephane Eranian * * 12/07/98 S. Eranian added pt_regs & switch_stack * 12/21/98 D. Mosberger updated to match latest code @@ -39,7 +39,9 @@ * | (growing upwards) | | * | | | * +----------------------+ | --- IA64_RBS_OFFSET - * | | | ^ + * | struct thread_info | | ^ + * +----------------------+ | | + * | | | | * | struct task_struct | | | * current -> | | | | * +----------------------+ ------- @@ -58,19 +60,19 @@ * (including register backing store and memory stack): */ #if defined(CONFIG_IA64_PAGE_SIZE_4KB) -# define IA64_TASK_STRUCT_LOG_NUM_PAGES 3 +# define KERNEL_STACK_SIZE_ORDER 3 #elif defined(CONFIG_IA64_PAGE_SIZE_8KB) -# define IA64_TASK_STRUCT_LOG_NUM_PAGES 2 +# define KERNEL_STACK_SIZE_ORDER 2 #elif defined(CONFIG_IA64_PAGE_SIZE_16KB) -# define IA64_TASK_STRUCT_LOG_NUM_PAGES 1 +# define KERNEL_STACK_SIZE_ORDER 1 #else -# define IA64_TASK_STRUCT_LOG_NUM_PAGES 0 +# define KERNEL_STACK_SIZE_ORDER 0 #endif -#define IA64_RBS_OFFSET ((IA64_TASK_SIZE + 15) & ~15) -#define IA64_STK_OFFSET ((1 << IA64_TASK_STRUCT_LOG_NUM_PAGES)*PAGE_SIZE) +#define IA64_RBS_OFFSET ((IA64_TASK_SIZE + IA64_THREAD_INFO_SIZE + 15) & ~15) +#define IA64_STK_OFFSET ((1 << KERNEL_STACK_SIZE_ORDER)*PAGE_SIZE) -#define INIT_TASK_SIZE IA64_STK_OFFSET +#define KERNEL_STACK_SIZE IA64_STK_OFFSET #ifndef __ASSEMBLY__ @@ -247,8 +249,34 @@ #endif /* !__KERNEL__ */ +/* pt_all_user_regs is used for PTRACE_GETREGS PTRACE_SETREGS */ +struct pt_all_user_regs { + unsigned long nat; + unsigned long cr_iip; + unsigned long cfm; + unsigned long cr_ipsr; + unsigned long pr; + + unsigned long gr[32]; + unsigned long br[8]; + unsigned long ar[128]; + struct ia64_fpreg fr[128]; +}; + #endif /* !__ASSEMBLY__ */ +/* indices to application-registers array in pt_all_user_regs */ +#define PT_AUR_RSC 16 +#define PT_AUR_BSP 17 +#define PT_AUR_BSPSTORE 18 +#define PT_AUR_RNAT 19 +#define PT_AUR_CCV 32 +#define PT_AUR_UNAT 36 +#define PT_AUR_FPSR 40 +#define PT_AUR_PFS 64 +#define PT_AUR_LC 65 +#define PT_AUR_EC 66 + /* * The numbers chosen here are somewhat arbitrary but absolutely MUST * not overlap with any of the number assigned in . 
@@ -256,5 +284,12 @@ #define PTRACE_SINGLEBLOCK 12 /* resume execution until next branch */ #define PTRACE_GETSIGINFO 13 /* get child's siginfo structure */ #define PTRACE_SETSIGINFO 14 /* set child's siginfo structure */ +#define PTRACE_GETREGS 18 /* get all registers (pt_all_user_regs) in one shot */ +#define PTRACE_SETREGS 19 /* set all registers (pt_all_user_regs) in one shot */ + +#define PTRACE_SETOPTIONS 21 + +/* options set using PTRACE_SETOPTIONS */ +#define PTRACE_O_TRACESYSGOOD 0x00000001 #endif /* _ASM_IA64_PTRACE_H */ diff -Nru a/include/asm-ia64/sal.h b/include/asm-ia64/sal.h --- a/include/asm-ia64/sal.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sal.h Tue Mar 12 13:58:15 2002 @@ -8,11 +8,14 @@ * Abstraction Layer". * * Copyright (C) 2001 Intel + * Copyright (C) 2002 Jenna Hall * Copyright (C) 2001 Fred Lewis * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang * Copyright (C) 1999 Srinivasa Prasad Thirumalachar * + * 02/01/04 J. Hall Updated Error Record Structures to conform to July 2001 + * revision of the SAL spec. * 01/01/03 fvlewis Updated Error Record Structures to conform with Nov. 2000 * revision of the SAL spec. * 99/09/29 davidm Updated for SAL 2.6. @@ -149,6 +152,7 @@ #define IA64_SAL_PLATFORM_FEATURE_BUS_LOCK (1 << 0) #define IA64_SAL_PLATFORM_FEATURE_IRQ_REDIR_HINT (1 << 1) #define IA64_SAL_PLATFORM_FEATURE_IPI_REDIR_HINT (1 << 2) +#define IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT (1 << 3) typedef struct ia64_sal_desc_platform_feature { u8 type; @@ -227,6 +231,10 @@ SAL_VECTOR_OS_BOOT_RENDEZ = 2 }; +/* Encodings for mca_opt parameter sent to SAL_MC_SET_PARAMS */ +#define SAL_MC_PARAM_RZ_ALWAYS 0x1 +#define SAL_MC_PARAM_BINIT_ESCALATE 0x10 + /* ** Definition of the SAL Error Log from the SAL spec */ @@ -515,12 +523,12 @@ { u16 vendor_id; u16 device_id; - u16 class_code; + u8 class_code[3]; u8 func_num; u8 dev_num; u8 bus_num; u8 seg_num; - u8 reserved[6]; + u8 reserved[5]; } comp_info; u32 num_mem_regs; u32 num_io_regs; @@ -775,5 +783,7 @@ *scratch_buf_size_needed = isrv.v1; return isrv.status; } + +extern unsigned long sal_platform_features; #endif /* _ASM_IA64_PAL_H */ diff -Nru a/include/asm-ia64/scatterlist.h b/include/asm-ia64/scatterlist.h --- a/include/asm-ia64/scatterlist.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/scatterlist.h Tue Mar 12 13:58:14 2002 @@ -2,7 +2,7 @@ #define _ASM_IA64_SCATTERLIST_H /* - * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co + * Copyright (C) 1998-1999, 2001-2002 Hewlett-Packard Co * David Mosberger-Tang */ @@ -12,7 +12,6 @@ /* These two are only valid if ADDRESS member of this struct is NULL. */ struct page *page; unsigned int offset; - unsigned int length; /* buffer length */ }; diff -Nru a/include/asm-ia64/sigcontext.h b/include/asm-ia64/sigcontext.h --- a/include/asm-ia64/sigcontext.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sigcontext.h Tue Mar 12 13:58:14 2002 @@ -56,7 +56,9 @@ unsigned long sc_rbs_base; /* NULL or new base of sighandler's rbs */ unsigned long sc_loadrs; /* see description above */ - unsigned long sc_rsvd[14]; /* reserved for future use */ + unsigned long sc_ar25; /* rsvd for scratch use */ + unsigned long sc_ar26; /* rsvd for scratch use */ + unsigned long sc_rsvd[12]; /* reserved for future use */ /* * The mask must come last so we can increase _NSIG_WORDS * without breaking binary compatibility. 
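A sketch of the PTRACE_SETOPTIONS / PTRACE_O_TRACESYSGOOD additions in the ptrace.h hunk above (illustrative only, not part of the patch): with the option set, syscall stops report WSTOPSIG(status) == (SIGTRAP | 0x80), so a tracer can tell them apart from SIGTRAPs raised by the tracee itself. The request and option names are taken from the defines above; everything else is an assumption.

	#include <sys/types.h>
	#include <sys/wait.h>
	#include <sys/ptrace.h>
	#include <signal.h>

	/* Illustrative only: resume a traced child from syscall stop to syscall stop. */
	static void trace_syscalls(pid_t child)
	{
		int status;

		ptrace(PTRACE_SETOPTIONS, child, 0, (void *) PTRACE_O_TRACESYSGOOD);
		for (;;) {
			ptrace(PTRACE_SYSCALL, child, 0, 0);
			if (waitpid(child, &status, 0) == -1 || WIFEXITED(status))
				break;
			if (WIFSTOPPED(status) && WSTOPSIG(status) == (SIGTRAP | 0x80))
				continue;	/* stopped at syscall entry or exit */
			/* any other stop (signal delivery, etc.) would be handled here */
		}
	}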
diff -Nru a/include/asm-ia64/siginfo.h b/include/asm-ia64/siginfo.h --- a/include/asm-ia64/siginfo.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/siginfo.h Tue Mar 12 13:58:15 2002 @@ -2,8 +2,8 @@ #define _ASM_IA64_SIGINFO_H /* - * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include @@ -57,7 +57,7 @@ struct { void *_addr; /* faulting insn/memory ref. */ int _imm; /* immediate value for "break" */ - int _pad0; + unsigned int _flags; /* see below */ unsigned long _isr; /* isr */ } _sigfault; @@ -70,7 +70,7 @@ struct { pid_t _pid; /* which child */ uid_t _uid; /* sender's uid */ - unsigned long _pfm_ovfl_counters; /* which PMU counter overflowed */ + unsigned long _pfm_ovfl_counters[4]; /* which PMU counter overflowed */ } _sigprof; } _sifields; } siginfo_t; @@ -88,12 +88,23 @@ #define si_ptr _sifields._rt._sigval.sival_ptr #define si_addr _sifields._sigfault._addr #define si_imm _sifields._sigfault._imm /* as per UNIX SysV ABI spec */ -#define si_isr _sifields._sigfault._isr /* valid if si_code==FPE_FLTxxx */ +#define si_flags _sifields._sigfault._flags +/* + * si_isr is valid for SIGILL, SIGFPE, SIGSEGV, SIGBUS, and SIGTRAP provided that + * si_code is non-zero and __ISR_VALID is set in si_flags. + */ +#define si_isr _sifields._sigfault._isr #define si_band _sifields._sigpoll._band #define si_fd _sifields._sigpoll._fd #define si_pfm_ovfl _sifields._sigprof._pfm_ovfl_counters /* + * Flag values for si_flags: + */ +#define __ISR_VALID_BIT 0 +#define __ISR_VALID (1 << __ISR_VALID_BIT) + +/* * si_code values * Positive values for kernel-generated signals. */ @@ -119,12 +130,12 @@ #define SI_USER 0 /* sent by kill, sigsend, raise */ #define SI_KERNEL 0x80 /* sent by the kernel from somewhere */ -#define SI_QUEUE -1 /* sent by sigqueue */ +#define SI_QUEUE (-1) /* sent by sigqueue */ #define SI_TIMER __SI_CODE(__SI_TIMER,-2) /* sent by timer expiration */ -#define SI_MESGQ -3 /* sent by real time mesq state change */ -#define SI_ASYNCIO -4 /* sent by AIO completion */ -#define SI_SIGIO -5 /* sent by queued SIGIO */ -#define SI_TKILL -6 /* sent by tkill system call */ +#define SI_MESGQ (-3) /* sent by real time mesq state change */ +#define SI_ASYNCIO (-4) /* sent by AIO completion */ +#define SI_SIGIO (-5) /* sent by queued SIGIO */ +#define SI_TKILL (-6) /* sent by tkill system call */ #define SI_FROMUSER(siptr) ((siptr)->si_code <= 0) #define SI_FROMKERNEL(siptr) ((siptr)->si_code > 0) diff -Nru a/include/asm-ia64/signal.h b/include/asm-ia64/signal.h --- a/include/asm-ia64/signal.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/signal.h Tue Mar 12 13:58:15 2002 @@ -115,6 +115,7 @@ #define SA_PROBE SA_ONESHOT #define SA_SAMPLE_RANDOM SA_RESTART #define SA_SHIRQ 0x04000000 +#define SA_PERCPU_IRQ 0x02000000 #endif /* __KERNEL__ */ diff -Nru a/include/asm-ia64/smp.h b/include/asm-ia64/smp.h --- a/include/asm-ia64/smp.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/smp.h Tue Mar 12 13:58:15 2002 @@ -3,7 +3,7 @@ * * Copyright (C) 1999 VA Linux Systems * Copyright (C) 1999 Walt Drummond - * Copyright (C) 2001 Hewlett-Packard Co + * Copyright (C) 2001-2002 Hewlett-Packard Co * David Mosberger-Tang */ #ifndef _ASM_IA64_SMP_H @@ -27,7 +27,7 @@ #define SMP_IRQ_REDIRECTION (1 << 0) #define SMP_IPI_REDIRECTION (1 << 1) -#define smp_processor_id() (current->processor) +#define smp_processor_id() (current_thread_info()->cpu) extern struct smp_boot_data { int cpu_count; @@ -110,17 
+110,13 @@ #define NO_PROC_ID 0xffffffff /* no processor magic marker */ -/* - * Extra overhead to move a task from one cpu to another (due to TLB and cache misses). - * Expressed in "negative nice value" units (larger number means higher priority/penalty). - */ -#define PROC_CHANGE_PENALTY 20 - extern void __init init_smp_config (void); extern void smp_do_timer (struct pt_regs *regs); extern int smp_call_function_single (int cpuid, void (*func) (void *info), void *info, int retry, int wait); +extern void smp_send_reschedule (int cpu); +extern void smp_send_reschedule_all (void); #endif /* CONFIG_SMP */ diff -Nru a/include/asm-ia64/smplock.h b/include/asm-ia64/smplock.h --- a/include/asm-ia64/smplock.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/smplock.h Tue Mar 12 13:58:14 2002 @@ -20,10 +20,11 @@ static __inline__ void release_kernel_lock(struct task_struct *task, int cpu) { - if (task->lock_depth >= 0) + if (unlikely(task->lock_depth >= 0)) { spin_unlock(&kernel_flag); - release_irqlock(cpu); - __sti(); + if (global_irq_holder == (cpu)) \ + BUG(); \ + } } /* @@ -32,7 +33,7 @@ static __inline__ void reacquire_kernel_lock(struct task_struct *task) { - if (task->lock_depth >= 0) + if (unlikely(task->lock_depth >= 0)) spin_lock(&kernel_flag); } diff -Nru a/include/asm-ia64/sn/addrs.h b/include/asm-ia64/sn/addrs.h --- a/include/asm-ia64/sn/addrs.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/addrs.h Tue Mar 12 13:58:15 2002 @@ -1,40 +1,42 @@ -/* $Id$ + +/* * * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 1999 Silicon Graphics, Inc. - * Copyright (C) 1999 by Ralf Baechle + * Copyright (c) 1992-1999,2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_ADDRS_H -#define _ASM_SN_ADDRS_H -#include -#if _LANGUAGE_C -#include -#endif /* _LANGUAGE_C */ - -#if !defined(CONFIG_IA64_SGI_SN1) && !defined(CONFIG_IA64_GENERIC) -#include -#include -#include -#endif /* CONFIG_IA64_SGI_SN1 */ +#ifndef _ASM_IA64_SN_ADDRS_H +#define _ASM_IA64_SN_ADDRS_H + +#include -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) +#if defined (CONFIG_IA64_SGI_SN1) #include -#endif +#elif defined (CONFIG_IA64_SGI_SN2) +#include +#else +#error <<>> +#endif /* !SN1 && !SN2 */ +#ifndef __ASSEMBLY__ +#include +#endif -#if _LANGUAGE_C +#ifndef __ASSEMBLY__ #define PS_UINT_CAST (__psunsigned_t) #define UINT64_CAST (uint64_t) - +#ifdef CONFIG_IA64_SGI_SN2 +#define HUBREG_CAST (volatile mmr_t *) +#else #define HUBREG_CAST (volatile hubreg_t *) +#endif -#elif _LANGUAGE_ASSEMBLY +#elif __ASSEMBLY__ #define PS_UINT_CAST #define UINT64_CAST @@ -43,18 +45,6 @@ #endif -#define NASID_GET_META(_n) ((_n) >> NASID_LOCAL_BITS) -#if defined CONFIG_SGI_IP35 || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#define NASID_GET_LOCAL(_n) ((_n) & 0x7f) -#endif -#define NASID_MAKE(_m, _l) (((_m) << NASID_LOCAL_BITS) | (_l)) - -#define NODE_ADDRSPACE_MASK (NODE_ADDRSPACE_SIZE - 1) -#define TO_NODE_ADDRSPACE(_pa) (UINT64_CAST (_pa) & NODE_ADDRSPACE_MASK) - -#define CHANGE_ADDR_NASID(_pa, _nasid) \ - ((UINT64_CAST (_pa) & ~NASID_MASK) | \ - (UINT64_CAST(_nasid) << NASID_SHFT)) /* @@ -62,7 +52,11 @@ * node's address space. 
*/ +#ifdef CONFIG_IA64_SGI_SN2 /* SN2 has an extra AS field between node offset and node id (nasid) */ +#define NODE_OFFSET(_n) (UINT64_CAST (_n) << NASID_SHFT) +#else #define NODE_OFFSET(_n) (UINT64_CAST (_n) << NODE_SIZE_BITS) +#endif #define NODE_CAC_BASE(_n) (CAC_BASE + NODE_OFFSET(_n)) #define NODE_HSPEC_BASE(_n) (HSPEC_BASE + NODE_OFFSET(_n)) @@ -118,11 +112,6 @@ /* * The following define the major position-independent aliases used * in SN. - * UALIAS -- 256MB in size, reads in the UALIAS result in - * uncached references to the memory of the reader's node. - * CPU_UALIAS -- 128kb in size, the bottom part of UALIAS is flipped - * depending on which CPU does the access to provide - * all CPUs with unique uncached memory at low addresses. * LBOOT -- 256MB in size, reads in the LBOOT area result in * uncached references to the local hub's boot prom and * other directory-bus connected devices. @@ -130,17 +119,7 @@ * references to the local hub's registers. */ -#define UALIAS_BASE HSPEC_BASE -#define UALIAS_SIZE 0x10000000 /* 256 Megabytes */ -#define CPU_UALIAS 0x20000 /* 128 Kilobytes */ -#define UALIAS_CPU_SIZE (CPU_UALIAS / CPUS_PER_NODE) -#define UALIAS_LIMIT (UALIAS_BASE + UALIAS_SIZE) - -/* - * The bottom of ualias space is flipped depending on whether you're - * processor 0 or 1 within a node. - */ -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) +#if defined CONFIG_IA64_SGI_SN1 #define LREG_BASE (HSPEC_BASE + 0x10000000) #define LREG_SIZE 0x8000000 /* 128 MB */ #define LREG_LIMIT (LREG_BASE + LREG_SIZE) @@ -151,7 +130,11 @@ #endif #define HUB_REGISTER_WIDGET 1 +#ifdef CONFIG_IA64_SGI_SN2 +#define IALIAS_BASE LOCAL_SWIN_BASE(HUB_REGISTER_WIDGET) +#else #define IALIAS_BASE NODE_SWIN_BASE(0, HUB_REGISTER_WIDGET) +#endif #define IALIAS_SIZE 0x800000 /* 8 Megabytes */ #define IS_IALIAS(_a) (((_a) >= IALIAS_BASE) && \ ((_a) < (IALIAS_BASE + IALIAS_SIZE))) @@ -160,7 +143,7 @@ * Macro for referring to Hub's RBOOT space */ -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) +#if defined CONFIG_IA64_SGI_SN1 #define NODE_LREG_BASE(_n) (NODE_HSPEC_BASE(_n) + 0x30000000) #define NODE_LREG_LIMIT(_n) (NODE_LREG_BASE(_n) + LREG_SIZE) @@ -172,168 +155,6 @@ #endif -/* - * Macros for referring the Hub's back door space - * - * These macros correctly process addresses in any node's space. - * WARNING: They won't work in assembler. - * - * BDDIR_ENTRY_LO returns the address of the low double-word of the dir - * entry corresponding to a physical (Cac or Uncac) address. - * BDDIR_ENTRY_HI returns the address of the high double-word of the entry. - * BDPRT_ENTRY returns the address of the double-word protection entry - * corresponding to the page containing the physical address. - * BDPRT_ENTRY_S Stores the value into the protection entry. - * BDPRT_ENTRY_L Load the value from the protection entry. - * BDECC_ENTRY returns the address of the ECC byte corresponding to a - * double-word at a specified physical address. - * BDECC_ENTRY_H returns the address of the two ECC bytes corresponding to a - * quad-word at a specified physical address. 
- */ -#define NODE_BDOOR_BASE(_n) (NODE_HSPEC_BASE(_n) + (NODE_ADDRSPACE_SIZE/2)) - -#define NODE_BDECC_BASE(_n) (NODE_BDOOR_BASE(_n)) -#define NODE_BDDIR_BASE(_n) (NODE_BDOOR_BASE(_n) + (NODE_ADDRSPACE_SIZE/4)) -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -/* - * Bedrock's directory entries are a single word: no low/high - */ - -#define BDDIR_ENTRY(_pa) (HSPEC_BASE + \ - NODE_ADDRSPACE_SIZE * 7 / 8 | \ - UINT64_CAST (_pa) & NASID_MASK | \ - UINT64_CAST (_pa) >> 3 & BDDIR_UPPER_MASK) - -#ifdef BRINGUP - /* minimize source changes by mapping *_LO() & *_HI() */ -#define BDDIR_ENTRY_LO(_pa) BDDIR_ENTRY(_pa) -#define BDDIR_ENTRY_HI(_pa) BDDIR_ENTRY(_pa) -#endif /* BRINGUP */ - -#define BDDIR_PAGE_MASK (BDDIR_UPPER_MASK & 0x7ffff << 11) -#define BDDIR_PAGE_BASE_MASK (UINT64_CAST 0xfffffffffffff800) - -#ifdef _LANGUAGE_C - -#define BDPRT_ENTRY_ADDR(_pa, _rgn) ((uint64_t *) ( (HSPEC_BASE + \ - NODE_ADDRSPACE_SIZE * 7 / 8 + 0x408) | \ - (UINT64_CAST (_pa) & NASID_MASK) | \ - (UINT64_CAST (_pa) >> 3 & BDDIR_PAGE_MASK) | \ - (UINT64_CAST (_pa) >> 3 & 0x3 << 4) | \ - ((_rgn) & 0x1e) << 5)) - -static __inline uint64_t BDPRT_ENTRY_L(paddr_t pa,uint32_t rgn) { - uint64_t word=*BDPRT_ENTRY_ADDR(pa,rgn); - - if(rgn&0x20) /*If the region is > 32, move it down*/ - word = word >> 32; - if(rgn&0x1) /*If the region is odd, get that part */ - word = word >> 16; - word = word & 0xffff; /*Get the 16 bits we are interested in*/ - - return word; -} - -static __inline void BDPRT_ENTRY_S(paddr_t pa,uint32_t rgn,uint64_t val) { - uint64_t *addr=(uint64_t *)BDPRT_ENTRY_ADDR(pa,rgn); - uint64_t word,mask; - - word=*addr; - mask=0; - if(rgn&0x1) { - mask|=0x0000ffff0000ffff; - val=val<<16; - } - else - mask|=0xffff0000ffff0000; - if(rgn&0x20) { - mask|=0x00000000ffffffff; - val=val<<32; - } - else - mask|=0xffffffff00000000; - word &= mask; - word |= val; - - *(addr++)=word; - addr++; - *(addr++)=word; - addr++; - *(addr++)=word; - addr++; - *addr=word; -} -#endif /*_LANGUAGE_C*/ - -#define BDCNT_ENTRY(_pa) (HSPEC_BASE + \ - NODE_ADDRSPACE_SIZE * 7 / 8 + 0x8 | \ - UINT64_CAST (_pa) & NASID_MASK | \ - UINT64_CAST (_pa) >> 3 & BDDIR_PAGE_MASK | \ - UINT64_CAST (_pa) >> 3 & 0x3 << 4) - - -#ifdef BRINGUP - /* little endian packing of ecc bytes requires a swizzle */ - /* this is problemmatic for memory_init_ecc */ -#endif /* BRINGUP */ -#define BDECC_ENTRY(_pa) (HSPEC_BASE + \ - NODE_ADDRSPACE_SIZE * 5 / 8 | \ - UINT64_CAST (_pa) & NASID_MASK | \ - UINT64_CAST (_pa) >> 3 & BDECC_UPPER_MASK \ - ^ 0x7ULL) - -#define BDECC_SCRUB(_pa) (HSPEC_BASE + \ - NODE_ADDRSPACE_SIZE / 2 | \ - UINT64_CAST (_pa) & NASID_MASK | \ - UINT64_CAST (_pa) >> 3 & BDECC_UPPER_MASK \ - ^ 0x7ULL) - - /* address for Halfword backdoor ecc access. Note that */ - /* ecc bytes are packed in little endian order */ -#define BDECC_ENTRY_H(_pa) (HSPEC_BASE + \ - NODE_ADDRSPACE_SIZE * 5 / 8 | \ - UINT64_CAST (_pa) & NASID_MASK | \ - UINT64_CAST (_pa) >> 3 & BDECC_UPPER_MASK \ - ^ 0x6ULL) - -/* - * Macro to convert a back door directory, protection, page counter, or ecc - * address into the raw physical address of the associated cache line - * or protection page. - */ - -#define BDDIR_TO_MEM(_ba) (UINT64_CAST (_ba) & NASID_MASK | \ - (UINT64_CAST (_ba) & BDDIR_UPPER_MASK) << 3) - -#ifdef BRINGUP -/* - * This can't be done since there are 4 entries per address so you'd end up - * mapping back to 4 different physical addrs. 
- */ - -#define BDPRT_TO_MEM(_ba) (UINT64_CAST (_ba) & NASID_MASK | \ - (UINT64_CAST (_ba) & BDDIR_PAGE_MASK) << 3 | \ - (UINT64_CAST (_ba) & 0x3 << 4) << 3) -#endif - -#define BDCNT_TO_MEM(_ba) (UINT64_CAST (_ba) & NASID_MASK | \ - (UINT64_CAST (_ba) & BDDIR_PAGE_MASK) << 3 | \ - (UINT64_CAST (_ba) & 0x3 << 4) << 3) - -#define BDECC_TO_MEM(_ba) (UINT64_CAST (_ba) & NASID_MASK | \ - ((UINT64_CAST (_ba) ^ 0x7ULL) \ - & BDECC_UPPER_MASK) << 3 ) - -#define BDECC_H_TO_MEM(_ba) (UINT64_CAST (_ba) & NASID_MASK | \ - ((UINT64_CAST (_ba) ^ 0x6ULL) \ - & BDECC_UPPER_MASK) << 3 ) - -#define BDADDR_IS_DIR(_ba) ((UINT64_CAST (_ba) & 0x8) == 0) -#define BDADDR_IS_PRT(_ba) ((UINT64_CAST (_ba) & 0x408) == 0x408) -#define BDADDR_IS_CNT(_ba) ((UINT64_CAST (_ba) & 0x8) == 0x8) - -#endif /* CONFIG_SGI_IP35 */ - /* * The following macros produce the correct base virtual address for @@ -344,6 +165,36 @@ * for _x. */ + +#ifdef CONFIG_IA64_SGI_SN2 +/* + * SN2 has II mmr's located inside small window space like SN0 & SN1, + * but has all other non-II mmr's located at the top of big window + * space, unlike SN0 & SN1. + */ +#define LOCAL_HUB_BASE(_x) (LOCAL_MMR_ADDR(_x) | (((~(_x)) & BWIN_TOP)>>8)) +#define REMOTE_HUB_BASE(_x) \ + (UNCACHED | GLOBAL_MMR_SPACE | \ + (((~(_x)) & BWIN_TOP)>>8) | \ + (((~(_x)) & BWIN_TOP)>>9) | (_x)) + +#define LOCAL_HUB(_x) (HUBREG_CAST LOCAL_HUB_BASE(_x)) +#define REMOTE_HUB(_n, _x) \ + (HUBREG_CAST (REMOTE_HUB_BASE(_x) | ((((long)(_n))<offset + \ - KLD_LAUNCH(nasid)->stride * (slice)) -#define LAUNCH_ADDR(nasid, slice) \ - TO_NODE_UNCAC((nasid), LAUNCH_OFFSET(nasid, slice)) -#define LAUNCH_SIZE(nasid) KLD_LAUNCH(nasid)->size - -#define NMI_OFFSET(nasid, slice) \ - (KLD_NMI(nasid)->offset + \ - KLD_NMI(nasid)->stride * (slice)) -#define NMI_ADDR(nasid, slice) \ - TO_NODE_UNCAC((nasid), NMI_OFFSET(nasid, slice)) -#define NMI_SIZE(nasid) KLD_NMI(nasid)->size - +#ifndef CONFIG_IA64_SGI_SN2 #define KLCONFIG_OFFSET(nasid) KLD_KLCONFIG(nasid)->offset +#else +#define KLCONFIG_OFFSET(nasid) \ + ia64_sn_get_klconfig_addr(nasid) +#endif /* CONFIG_IA64_SGI_SN2 */ + #define KLCONFIG_ADDR(nasid) \ - TO_NODE_UNCAC((nasid), KLCONFIG_OFFSET(nasid)) + TO_NODE_CAC((nasid), KLCONFIG_OFFSET(nasid)) #define KLCONFIG_SIZE(nasid) KLD_KLCONFIG(nasid)->size #define GDA_ADDR(nasid) KLD_GDA(nasid)->pointer #define GDA_SIZE(nasid) KLD_GDA(nasid)->size -#define SYMMON_STK_OFFSET(nasid, slice) \ - (KLD_SYMMON_STK(nasid)->offset + \ - KLD_SYMMON_STK(nasid)->stride * (slice)) -#define SYMMON_STK_STRIDE(nasid) KLD_SYMMON_STK(nasid)->stride - -#define SYMMON_STK_ADDR(nasid, slice) \ - TO_NODE_CAC((nasid), SYMMON_STK_OFFSET(nasid, slice)) - -#define SYMMON_STK_SIZE(nasid) KLD_SYMMON_STK(nasid)->stride - -#define SYMMON_STK_END(nasid) (SYMMON_STK_ADDR(nasid, 0) + KLD_SYMMON_STK(nasid)->size) - -/* loading symmon 4k below UNIX. the arcs loader needs the topaddr for a - * relocatable program - */ -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -/* update master.d/sn1_elspec.dbg, SN1/addrs.h/DEBUGUNIX_ADDR, and - * DBGLOADADDR in symmon's Makefile when changing this */ -#define UNIX_DEBUG_LOADADDR 0x310000 -#elif defined(SN0XXL) -#define UNIX_DEBUG_LOADADDR 0x360000 -#else -#define UNIX_DEBUG_LOADADDR 0x300000 -#endif -#define SYMMON_LOADADDR(nasid) \ - TO_NODE(nasid, PHYS_TO_K0(UNIX_DEBUG_LOADADDR - 0x1000)) - -#define FREEMEM_OFFSET(nasid) KLD_FREEMEM(nasid)->offset -#define FREEMEM_ADDR(nasid) SYMMON_STK_END(nasid) -/* - * XXX - * Fix this. 
FREEMEM_ADDR should be aware of if symmon is loaded. - * Also, it should take into account what prom thinks to be a safe - * address - PHYS_TO_K0(NODE_OFFSET(nasid) + FREEMEM_OFFSET(nasid)) - */ -#define FREEMEM_SIZE(nasid) KLD_FREEMEM(nasid)->size - -#define PI_ERROR_OFFSET(nasid) KLD_PI_ERROR(nasid)->offset -#define PI_ERROR_ADDR(nasid) \ - TO_NODE_UNCAC((nasid), PI_ERROR_OFFSET(nasid)) -#define PI_ERROR_SIZE(nasid) KLD_PI_ERROR(nasid)->size - #define NODE_OFFSET_TO_K0(_nasid, _off) \ (PAGE_OFFSET | NODE_OFFSET(_nasid) | (_off)) -#define K0_TO_NODE_OFFSET(_k0addr) \ - ((__psunsigned_t)(_k0addr) & NODE_ADDRSPACE_MASK) - -#define KERN_VARS_ADDR(nasid) KLD_KERN_VARS(nasid)->pointer -#define KERN_VARS_SIZE(nasid) KLD_KERN_VARS(nasid)->size - -#define KERN_XP_ADDR(nasid) KLD_KERN_XP(nasid)->pointer -#define KERN_XP_SIZE(nasid) KLD_KERN_XP(nasid)->size - -#define GPDA_ADDR(nasid) TO_NODE_CAC(nasid, GPDA_OFFSET) - -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ -#endif /* _ASM_SN_ADDRS_H */ +#endif /* _ASM_IA64_SN_ADDRS_H */ diff -Nru a/include/asm-ia64/sn/agent.h b/include/asm-ia64/sn/agent.h --- a/include/asm-ia64/sn/agent.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,47 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * This file has definitions for the hub and snac interfaces. - * - * Copyright (C) 1992 - 1997, 1999 Silcon Graphics, Inc. - * Copyright (C) 1999 Ralf Baechle (ralf@gnu.org) - */ -#ifndef _ASM_SGI_SN_AGENT_H -#define _ASM_SGI_SN_AGENT_H - -#include - -#include -#include -//#include - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include -#endif /* CONFIG_SGI_IP35 */ - -/* - * NIC register macros - */ - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#define HUB_NIC_ADDR(_cpuid) \ - REMOTE_HUB_ADDR(COMPACT_TO_NASID_NODEID(cputocnode(_cpuid)), \ - LB_MICROLAN_CTL) -#endif - -#define SET_HUB_NIC(_my_cpuid, _val) \ - (HUB_S(HUB_NIC_ADDR(_my_cpuid), (_val))) - -#define SET_MY_HUB_NIC(_v) \ - SET_HUB_NIC(cpuid(), (_v)) - -#define GET_HUB_NIC(_my_cpuid) \ - (HUB_L(HUB_NIC_ADDR(_my_cpuid))) - -#define GET_MY_HUB_NIC() \ - GET_HUB_NIC(cpuid()) - -#endif /* _ASM_SGI_SN_AGENT_H */ diff -Nru a/include/asm-ia64/sn/alenlist.h b/include/asm-ia64/sn/alenlist.h --- a/include/asm-ia64/sn/alenlist.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/alenlist.h Tue Mar 12 13:58:14 2002 @@ -4,11 +4,12 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_ALENLIST_H -#define _ASM_SN_ALENLIST_H +#ifndef _ASM_IA64_SN_ALENLIST_H +#define _ASM_IA64_SN_ALENLIST_H + +#include /* Definition of Address/Length List */ @@ -51,7 +52,7 @@ /* Return codes from alenlist routines. */ -#define ALENLIST_FAILURE -1 +#define ALENLIST_FAILURE (-1) #define ALENLIST_SUCCESS 0 @@ -201,4 +202,4 @@ } #endif -#endif /* _ASM_SN_ALENLIST_H */ +#endif /* _ASM_IA64_SN_ALENLIST_H */ diff -Nru a/include/asm-ia64/sn/arc/hinv.h b/include/asm-ia64/sn/arc/hinv.h --- a/include/asm-ia64/sn/arc/hinv.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/arc/hinv.h Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. 
See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) + * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved. */ diff -Nru a/include/asm-ia64/sn/arc/types.h b/include/asm-ia64/sn/arc/types.h --- a/include/asm-ia64/sn/arc/types.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/sn/arc/types.h Tue Mar 12 13:58:16 2002 @@ -4,7 +4,7 @@ * for more details. * * Copyright 1999 Ralf Baechle (ralf@gnu.org) - * Copyright 1999 Silicon Graphics, Inc. + * Copyright 1999,2001 Silicon Graphics, Inc. */ #ifndef _ASM_SN_ARC_TYPES_H #define _ASM_SN_ARC_TYPES_H diff -Nru a/include/asm-ia64/sn/arch.h b/include/asm-ia64/sn/arch.h --- a/include/asm-ia64/sn/arch.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/arch.h Tue Mar 12 13:58:15 2002 @@ -6,180 +6,50 @@ * * SGI specific setup. * - * Copyright (C) 1995 - 1997, 1999 Silcon Graphics, Inc. + * Copyright (C) 1995-1997,1999,2001-2002 Silicon Graphics, Inc. All rights reserved. * Copyright (C) 1999 Ralf Baechle (ralf@gnu.org) */ -#ifndef _ASM_SN_ARCH_H -#define _ASM_SN_ARCH_H +#ifndef _ASM_IA64_SN_ARCH_H +#define _ASM_IA64_SN_ARCH_H -#include #include - +#include +#include #include -#if defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_SGI_IP37) || defined(CONFIG_IA64_GENERIC) + +#if defined(CONFIG_IA64_SGI_SN1) #include +#elif defined(CONFIG_IA64_SGI_SN2) +#include #endif -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) +#if defined(CONFIG_IA64_SGI_SN1) +typedef u64 bdrkreg_t; +#elif defined(CONFIG_IA64_SGI_SN2) +typedef u64 shubreg_t; +#endif + typedef u64 hubreg_t; +typedef u64 mmr_t; typedef u64 nic_t; -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -typedef u64 bdrkreg_t; -#endif /* CONFIG_SGI_xxxxx */ -#endif /* _LANGUAGE_C || _LANGUAGE_C_PLUS_PLUS */ - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#define CPUS_PER_NODE 4 /* CPUs on a single hub */ -#define CPUS_PER_NODE_SHFT 2 /* Bits to shift in the node number */ -#define CPUS_PER_SUBNODE 2 /* CPUs on a single hub PI */ -#endif -#define CNODE_NUM_CPUS(_cnode) (NODEPDA(_cnode)->node_num_cpus) #define CNODE_TO_CPU_BASE(_cnode) (NODEPDA(_cnode)->node_first_cpu) -#define makespnum(_nasid, _slice) \ - (((_nasid) << CPUS_PER_NODE_SHFT) | (_slice)) - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) - -/* - * There are 2 very similar macros for dealing with "slices". Make sure - * you use the right one. - * Unfortunately, on all platforms except IP35 (currently), the 2 macros - * are interchangible. - * - * On IP35, there are 4 cpus per node. Each cpu is refered to by it's slice. - * The slices are numbered 0 thru 3. - * - * There are also 2 PI interfaces per node. Each PI interface supports 2 cpus. - * The term "local slice" specifies the cpu number relative to the PI. - * - * The cpus on the node are numbered: - * slice localslice - * 0 0 - * 1 1 - * 2 0 - * 3 1 - * - * cputoslice - returns a number 0..3 that is the slice of the specified cpu. - * cputolocalslice - returns a number 0..1 that identifies the local slice of - * the cpu within it's PI interface. - */ -#ifdef LATER - /* These are dummied up for now ..... 
*/ -#define cputocnode(cpu) \ - (pdaindr[(cpu)].p_nodeid) -#define cputonasid(cpu) \ - (pdaindr[(cpu)].p_nasid) -#define cputoslice(cpu) \ - (ASSERT(pdaindr[(cpu)].pda), (pdaindr[(cpu)].pda->p_slice)) -#define cputolocalslice(cpu) \ - (ASSERT(pdaindr[(cpu)].pda), (LOCALCPU(pdaindr[(cpu)].pda->p_slice))) -#define cputosubnode(cpu) \ - (ASSERT(pdaindr[(cpu)].pda), (SUBNODE(pdaindr[(cpu)].pda->p_slice))) -#else -#define cputocnode(cpu) 0 -#define cputonasid(cpu) 0 -#define cputoslice(cpu) 0 -#define cputolocalslice(cpu) 0 -#define cputosubnode(cpu) 0 -#endif /* LATER */ -#endif /* CONFIG_SGI_IP35 */ - -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) - -#define INVALID_NASID (nasid_t)-1 -#define INVALID_CNODEID (cnodeid_t)-1 -#define INVALID_PNODEID (pnodeid_t)-1 -#define INVALID_MODULE (moduleid_t)-1 -#define INVALID_PARTID (partid_t)-1 - -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -extern int get_slice(void); -extern cpuid_t get_cnode_cpu(cnodeid_t); -extern int get_cpu_slice(cpuid_t); -extern cpuid_t cnodetocpu(cnodeid_t); -// extern cpuid_t cnode_slice_to_cpuid(cnodeid_t, int); - -extern int cnode_exists(cnodeid_t cnode); -extern cnodeid_t cpuid_to_compact_node[MAXCPUS]; -#endif /* CONFIG_IP35 */ - -extern nasid_t get_nasid(void); -extern cnodeid_t get_cpu_cnode(int); -extern int get_cpu_slice(cpuid_t); - -/* - * NO ONE should access these arrays directly. The only reason we refer to - * them here is to avoid the procedure call that would be required in the - * macros below. (Really want private data members here :-) - */ -extern cnodeid_t nasid_to_compact_node[MAX_NASIDS]; -extern nasid_t compact_to_nasid_node[MAX_COMPACT_NODES]; - -/* - * These macros are used by various parts of the kernel to convert - * between the three different kinds of node numbering. At least some - * of them may change to procedure calls in the future, but the macros - * will continue to work. Don't use the arrays above directly. - */ - -#define NASID_TO_REGION(nnode) \ - ((nnode) >> \ - (is_fine_dirmode() ? NASID_TO_FINEREG_SHFT : NASID_TO_COARSEREG_SHFT)) - -#ifndef __ia64 -extern cnodeid_t nasid_to_compact_node[MAX_NASIDS]; -extern nasid_t compact_to_nasid_node[MAX_COMPACT_NODES]; -extern cnodeid_t cpuid_to_compact_node[MAXCPUS]; - -#if !defined(DEBUG) - -#define NASID_TO_COMPACT_NODEID(nnode) (nasid_to_compact_node[nnode]) -#define COMPACT_TO_NASID_NODEID(cnode) (compact_to_nasid_node[cnode]) -#define CPUID_TO_COMPACT_NODEID(cpu) (cpuid_to_compact_node[(cpu)]) -#else - -/* - * These functions can do type checking and fail if they need to return - * a bad nodeid, but they're not as fast so just use 'em for debug kernels. - */ -cnodeid_t nasid_to_compact_nodeid(nasid_t nasid); -nasid_t compact_to_nasid_nodeid(cnodeid_t cnode); - -#define NASID_TO_COMPACT_NODEID(nnode) nasid_to_compact_nodeid(nnode) -#define COMPACT_TO_NASID_NODEID(cnode) compact_to_nasid_nodeid(cnode) -#define CPUID_TO_COMPACT_NODEID(cpu) (cpuid_to_compact_node[(cpu)]) -#endif - -#else - -/* - * IA64 specific nasid and cnode ids. 
- */ #define NASID_TO_COMPACT_NODEID(nasid) (nasid_to_cnodeid(nasid)) #define COMPACT_TO_NASID_NODEID(cnode) (cnodeid_to_nasid(cnode)) -#define CPUID_TO_COMPACT_NODEID(cpu) (cpuid_to_cnodeid(cpu)) -#endif /* #ifndef __ia64 */ -extern int node_getlastslot(cnodeid_t); +#define INVALID_NASID ((nasid_t)-1) +#define INVALID_CNODEID ((cnodeid_t)-1) +#define INVALID_PNODEID ((pnodeid_t)-1) +#define INVALID_MODULE ((moduleid_t)-1) +#define INVALID_PARTID ((partid_t)-1) -#endif /* _LANGUAGE_C || _LANGUAGE_C_PLUS_PLUS */ +extern cpuid_t cnodetocpu(cnodeid_t); +void sn_flush_all_caches(long addr, long bytes); -#define SLOT_BITMASK (MAX_MEM_SLOTS - 1) -#define SLOT_SIZE (1LL< [2] # units unit number + * : : : + * [ ] 0 + */ + +#include + +#define ulong_t uint64_t + +struct map +{ + unsigned long m_size; /* number of units available */ + unsigned long m_addr; /* address of first available unit */ +}; + +#define mapstart(X) &X[2] /* start of map array */ + +#define mapsize(X) X[0].m_size /* number of empty slots */ + /* remaining in map array */ +#define maplock(X) (((spinlock_t *) X[1].m_size)) + +#define mapout(X) ((sv_t *) X[1].m_addr) + + +extern ulong_t atealloc(struct map *, size_t); +extern struct map *atemapalloc(ulong_t); +extern void atefree(struct map *, size_t, ulong_t); +extern void atemapfree(struct map *); + +#endif /* _ASM_IA64_SN_ATE_UTILS_H */ + diff -Nru a/include/asm-ia64/sn/bte.h b/include/asm-ia64/sn/bte.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/bte.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,88 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_BTE_H +#define _ASM_IA64_SN_BTE_H + +#ident "$Revision: $" + +#include +#include +#include + +#define L1_CACHE_MASK (L1_CACHE_BYTES - 1) /* Mask to retrieve + * the offset into this + * cache line.*/ + +/* BTE status register only supports 16 bits for length field */ +#define BTE_LEN_MASK ((1 << 16) - 1) + +/* + * Constants used in determining the best and worst case transfer + * times. To help explain the two, the following graph of transfer + * status vs time may help. + * + * active +------------------:-+ : + * status | : | : + * idle +__________________:_+======= + * 0 Time MaxT MinT + * + * Therefore, MaxT is the maximum thoeretical rate for transfering + * the request block (assuming ideal circumstances) + * + * MinT is the minimum theoretical rate for transferring the + * requested block (assuming maximum link distance and contention) + * + * The following defines are the inverse of the above. They are + * used for calculating the MaxT time and MinT time given the + * number of lines in the transfer. + */ +#define BTE_MAXT_LINES_PER_SECOND 800 +#define BTE_MINT_LINES_PER_SECOND 600 + + +/* Define hardware */ +#define BTES_PER_NODE 2 + +/* Define hardware modes */ +#define BTE_NOTIFY (IBCT_NOTIFY) +#define BTE_NORMAL BTE_NOTIFY +#define BTE_ZERO_FILL (BTE_NOTIFY | IBCT_ZFIL_MODE) + +/* Use a reserved bit to let the caller specify a wait for any BTE */ +#define BTE_WACQUIRE (0x4000) + +/* + * Structure defining a bte. An instance of this + * structure is created in the nodepda for each + * bte on that node (as defined by BTES_PER_NODE) + * This structure contains everything necessary + * to work with a BTE. 
+ */ +typedef struct bteinfo_s { + u64 volatile notify ____cacheline_aligned; + char *bte_base_addr ____cacheline_aligned; + spinlock_t spinlock; + u64 idealTransferTimeout; + u64 idealTransferTimeoutReached; + u64 mostRecentSrc; + u64 mostRecentDest; + u64 mostRecentLen; + u64 mostRecentMode; + u64 volatile *mostRecentNotification; +} bteinfo_t; + +/* Possible results from bte_copy and bte_unaligned_copy */ +typedef enum { + BTE_SUCCESS, /* 0 is success */ + BTEFAIL_NOTAVAIL, /* BTE not available */ + BTEFAIL_ERROR, /* Generic error */ + BTEFAIL_DIR /* Diretory error */ +} bte_result_t; + +#endif /* _ASM_IA64_SN_BTE_H */ diff -Nru a/include/asm-ia64/sn/bte_copy.h b/include/asm-ia64/sn/bte_copy.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/bte_copy.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,311 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_BTE_COPY_H +#define _ASM_IA64_SN_BTE_COPY_H + +#ident "$Revision: $" + +#include +#include +#include +#include + +/* + * BTE_LOCKING support - Undefining the following line will + * adapt the bte_copy code to support one bte per cpu in + * synchronous mode. Even if bte_copy is called with a + * notify address, the bte will spin and wait for the transfer + * to complete. By defining the following, spin_locks and + * busy checks are placed around the initiation of a BTE + * transfer and multiple bte's per cpu are supported. + */ +#define CONFIG_IA64_SGI_BTE_LOCKING 1 + +/* + * Some macros to simplify reading. + * + * Start with macros to locate the BTE control registers. + */ + +#define BTEREG_LNSTAT_ADDR (bte->bte_base_addr) +#define BTEREG_SOURCE_ADDR (bte->bte_base_addr + IIO_IBSA0 - IIO_IBLS0) +#define BTEREG_DEST_ADDR (bte->bte_base_addr + IIO_IBDA0 - IIO_IBLS0) +#define BTEREG_CTRL_ADDR (bte->bte_base_addr + IIO_IBCT0 - IIO_IBLS0) +#define BTEREG_NOTIF_ADDR (bte->bte_base_addr + IIO_IBNA0 - IIO_IBLS0) + +/* Some macros to force the IBCT0 value valid. */ + +#define BTE_VALID_MODES BTE_NOTIFY +#define BTE_VLD_MODE(x) (x & BTE_VALID_MODES) + +// #define DEBUG_BTE +// #define DEBUG_BTE_VERBOSE +// #define DEBUG_TIME_BTE + +#ifdef DEBUG_BTE +# define DPRINTK(x) printk x // Terse +# ifdef DEBUG_BTE_VERBOSE +# define DPRINTKV(x) printk x // Verbose +# else +# define DPRINTKV(x) +# endif +#else +# define DPRINTK(x) +# define DPRINTKV(x) +#endif + +#ifdef DEBUG_TIME_BTE +extern u64 BteSetupTime; +extern u64 BteTransferTime; +extern u64 BteTeardownTime; +extern u64 BteExecuteTime; +#endif + +/* + * bte_copy(src, dest, len, mode, notification) + * + * use the block transfer engine to move kernel + * memory from src to dest using the assigned mode. + * + * Paramaters: + * src - physical address of the transfer source. + * dest - physical address of the transfer destination. + * len - number of bytes to transfer from source to dest. + * mode - hardware defined. See reference information + * for IBCT0/1 in the SHUB Programmers Reference + * notification - kernel virtual address of the notification cache + * line. If NULL, the default is used and + * the bte_copy is synchronous. + * + * NOTE: This function requires src, dest, and len to + * be cache line aligned. 
+ */ +extern __inline__ bte_result_t +bte_copy(u64 src, u64 dest, u64 len, u64 mode, void *notification) +{ +#ifdef CONFIG_IA64_SGI_BTE_LOCKING + int bte_to_use; +#endif + +#ifdef DEBUG_TIME_BTE + u64 invokeTime = 0; + u64 completeTime = 0; + u64 xferStartTime = 0; + u64 xferCompleteTime = 0; +#endif + u64 transferSize; + bteinfo_t *bte; + +#ifdef DEBUG_TIME_BTE + invokeTime = ia64_get_itc(); +#endif + + DPRINTK(("bte_copy (0x%lx, 0x%lx, 0x%lx, 0x%lx, 0x%lx)\n", + src, dest, len, mode, notification)); + + if (len == 0) { + return (BTE_SUCCESS); + } + + ASSERT(!((len & L1_CACHE_MASK) || + (src & L1_CACHE_MASK) || (dest & L1_CACHE_MASK))); + + ASSERT(len < ((BTE_LEN_MASK + 1) << L1_CACHE_SHIFT)); + +#ifdef CONFIG_IA64_SGI_BTE_LOCKING + { + bte_to_use = 0; + + /* Attempt to lock one of the BTE interfaces */ + while ((*pda.cpubte[bte_to_use]-> + mostRecentNotification & IBLS_BUSY) + && + (!(spin_trylock + (&(pda.cpubte[bte_to_use]->spinlock)))) + && (bte_to_use < BTES_PER_NODE)) { + bte_to_use++; + } + + if ((bte_to_use >= BTES_PER_NODE) && + !(mode & BTE_WACQUIRE)) { + return (BTEFAIL_NOTAVAIL); + } + + /* Wait until a bte is available. */ + } + while (bte_to_use >= BTES_PER_NODE); + + bte = pda.cpubte[bte_to_use]; + DPRINTKV(("Got a lock on bte %d\n", bte_to_use)); +#else + /* Assuming one BTE per CPU. */ + bte = pda.cpubte[0]; +#endif + + /* + * The following are removed for optimization but is + * available in the event that the SHUB exhibits + * notification problems similar to the hub, bedrock et al. + * + * bte->mostRecentSrc = src; + * bte->mostRecentDest = dest; + * bte->mostRecentLen = len; + * bte->mostRecentMode = mode; + */ + if (notification == NULL) { + /* User does not want to be notified. */ + bte->mostRecentNotification = &bte->notify; + } else { + bte->mostRecentNotification = notification; + } + + /* Calculate the number of cache lines to transfer. */ + transferSize = ((len >> L1_CACHE_SHIFT) & BTE_LEN_MASK); + + DPRINTKV(("Calculated transfer size of %d cache lines\n", + transferSize)); + + /* Initialize the notification to a known value. 
*/ + *bte->mostRecentNotification = -1L; + + + DPRINTKV(("Before, status is 0x%lx and notify is 0x%lx\n", + HUB_L(BTEREG_LNSTAT_ADDR), + *bte->mostRecentNotification)); + + /* Set the status reg busy bit and transfer length */ + DPRINTKV(("IBLS - HUB_S(0x%lx, 0x%lx)\n", + BTEREG_LNSTAT_ADDR, IBLS_BUSY | transferSize)); + HUB_S(BTEREG_LNSTAT_ADDR, IBLS_BUSY | transferSize); + + + DPRINTKV(("After setting status, status is 0x%lx and notify is 0x%lx\n", HUB_L(BTEREG_LNSTAT_ADDR), *bte->mostRecentNotification)); + + /* Set the source and destination registers */ + DPRINTKV(("IBSA - HUB_S(0x%lx, 0x%lx)\n", BTEREG_SOURCE_ADDR, + src)); + HUB_S(BTEREG_SOURCE_ADDR, src); + DPRINTKV(("IBDA - HUB_S(0x%lx, 0x%lx)\n", BTEREG_DEST_ADDR, dest)); + HUB_S(BTEREG_DEST_ADDR, dest); + + + /* Set the notification register */ + DPRINTKV(("IBNA - HUB_S(0x%lx, 0x%lx)\n", BTEREG_NOTIF_ADDR, + __pa(bte->mostRecentNotification))); + HUB_S(BTEREG_NOTIF_ADDR, (__pa(bte->mostRecentNotification))); + + + DPRINTKV(("Set Notify, status is 0x%lx and notify is 0x%lx\n", + HUB_L(BTEREG_LNSTAT_ADDR), + *bte->mostRecentNotification)); + + /* Initiate the transfer */ + DPRINTKV(("IBCT - HUB_S(0x%lx, 0x%lx)\n", BTEREG_CTRL_ADDR, mode)); +#ifdef DEBUG_TIME_BTE + xferStartTime = ia64_get_itc(); +#endif + HUB_S(BTEREG_CTRL_ADDR, BTE_VLD_MODE(mode)); + + DPRINTKV(("Initiated, status is 0x%lx and notify is 0x%lx\n", + HUB_L(BTEREG_LNSTAT_ADDR), + *bte->mostRecentNotification)); + + // >>> Temporarily work around not getting a notification + // from medusa. + // *bte->mostRecentNotification = HUB_L(bte->bte_base_addr); + + if (notification == NULL) { + /* + * Calculate our timeout + * + * What are we doing here? We are trying to determine + * the fastest time the BTE could have transfered our + * block of data. By takine the clock frequency (ticks/sec) + * divided by the BTE MaxT Transfer Rate (lines/sec) + * times the transfer size (lines), we get a tick + * offset from current time that the transfer should + * complete. + * + * Why do this? We are watching for a notification + * failure from the BTE. This behaviour has been + * seen in the SN0 and SN1 hardware on rare circumstances + * and is expected in SN2. By checking at the + * ideal transfer timeout, we minimize our time + * delay from hardware completing our request and + * our detecting the failure. + */ + bte->idealTransferTimeout = jiffies + + (HZ / BTE_MAXT_LINES_PER_SECOND * transferSize); + + while ((IBLS_BUSY & bte->notify)) { + /* + * Notification Workaround: When the max + * theoretical time has elapsed, read the hub + * status register into the notification area. + * This fakes the shub performing the copy. + */ + if (jiffies > bte->idealTransferTimeout) { + bte->notify = HUB_L(bte->bte_base_addr); + bte->idealTransferTimeoutReached++; + bte->idealTransferTimeout = jiffies + + (HZ / BTE_MAXT_LINES_PER_SECOND * + (bte->notify & BTE_LEN_MASK)); + } + } +#ifdef DEBUG_TIME_BTE + xferCompleteTime = ia64_get_itc(); +#endif + if (bte->notify & IBLS_ERROR) { + /* >>> Need to do real error checking. 
*/ + transferSize = 0; + +#ifdef CONFIG_IA64_SGI_BTE_LOCKING + spin_unlock(&(bte->spinlock)); +#endif + return (BTEFAIL_ERROR); + } + + } +#ifdef CONFIG_IA64_SGI_BTE_LOCKING + spin_unlock(&(bte->spinlock)); +#endif +#ifdef DEBUG_TIME_BTE + completeTime = ia64_get_itc(); + + BteSetupTime = xferStartTime - invokeTime; + BteTransferTime = xferCompleteTime - xferStartTime; + BteTeardownTime = completeTime - xferCompleteTime; + BteExecuteTime = completeTime - invokeTime; +#endif + return (BTE_SUCCESS); +} + +/* + * Define the bte_unaligned_copy as an extern. + */ +extern bte_result_t bte_unaligned_copy(u64, u64, u64, u64, char *); + +/* + * The following is the prefered way of calling bte_unaligned_copy + * If the copy is fully cache line aligned, then bte_copy is + * used instead. Since bte_copy is inlined, this saves a call + * stack. NOTE: bte_copy is called synchronously and does block + * until the transfer is complete. In order to get the asynch + * version of bte_copy, you must perform this check yourself. + */ +#define BTE_UNALIGNED_COPY(src, dest, len, mode, bteBlock) \ + if ((len & L1_CACHE_MASK) || \ + (src & L1_CACHE_MASK) || \ + (dest & L1_CACHE_MASK)) { \ + bte_unaligned_copy (src, dest, len, mode, bteBlock); \ + } else { \ + bte_copy(src, dest, len, mode, NULL); \ + } + +#endif /* _ASM_IA64_SN_BTE_COPY_H */ diff -Nru a/include/asm-ia64/sn/cdl.h b/include/asm-ia64/sn/cdl.h --- a/include/asm-ia64/sn/cdl.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/cdl.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_CDL_H -#define _ASM_SN_CDL_H +#ifndef _ASM_IA64_SN_CDL_H +#define _ASM_IA64_SN_CDL_H #include @@ -193,4 +192,4 @@ void async_attach_signal_done(async_attach_t); void async_attach_waitall(async_attach_t); -#endif /* _ASM_SN_CDL_H */ +#endif /* _ASM_IA64_SN_CDL_H */ diff -Nru a/include/asm-ia64/sn/clksupport.h b/include/asm-ia64/sn/clksupport.h --- a/include/asm-ia64/sn/clksupport.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/clksupport.h Tue Mar 12 13:58:15 2002 @@ -4,61 +4,60 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. */ - -#ifndef _ASM_KSYS_CLKSUPPORT_H -#define _ASM_KSYS_CLKSUPPORT_H - -/* #include */ - -#if SN -#include -#include -typedef hubreg_t clkreg_t; -extern nasid_t master_nasid; - -#define GET_LOCAL_RTC (clkreg_t)LOCAL_HUB_L(PI_RT_COUNT) -#define DISABLE_TMO_INTR() if (cpuid_to_localslice(cpuid())) \ - REMOTE_HUB_PI_S(get_nasid(),\ - cputosubnode(cpuid()),\ - PI_RT_COMPARE_B, 0); \ - else \ - REMOTE_HUB_PI_S(get_nasid(),\ - cputosubnode(cpuid()),\ - PI_RT_COMPARE_A, 0); - -/* This is a hack; we really need to figure these values out dynamically */ -/* - * Since 800 ns works very well with various HUB frequencies, such as - * 360, 380, 390 and 400 MHZ, we use 800 ns rtc cycle time. - */ -#define NSEC_PER_CYCLE 800 -#define CYCLE_PER_SEC (NSEC_PER_SEC/NSEC_PER_CYCLE) /* - * Number of cycles per profiling intr + * This file contains definitions for accessing a platform supported high resolution + * clock. 
The clock is monitonically increasing and can be accessed from any node + * in the system. The clock is synchronized across nodes - all nodes see the + * same value. + * + * RTC_COUNTER_ADDR - contains the address of the counter + * + * GET_RTC_COUNTER() - macro to read the value of the clock + * + * RTC_CYCLES_PER_SEC - clock frequency in ticks per second + * */ -#define CLK_FCLOCK_FAST_FREQ 1250 -#define CLK_FCLOCK_SLOW_FREQ 0 -/* The is the address that the user will use to mmap the cycle counter */ -#define CLK_CYCLE_ADDRESS_FOR_USER LOCAL_HUB_ADDR(PI_RT_COUNT) - -#elif IP30 -#include -typedef heartreg_t clkreg_t; -#define NSEC_PER_CYCLE 80 -#define CYCLE_PER_SEC (NSEC_PER_SEC/NSEC_PER_CYCLE) -#define GET_LOCAL_RTC *((volatile clkreg_t *)PHYS_TO_COMPATK1(HEART_COUNT)) -#define DISABLE_TMO_INTR() -#define CLK_CYCLE_ADDRESS_FOR_USER PHYS_TO_K1(HEART_COUNT) -#define CLK_FCLOCK_SLOW_FREQ (CYCLE_PER_SEC / HZ) + +#ifndef _ASM_IA64_SN_CLKSUPPORT_H +#define _ASM_IA64_SN_CLKSUPPORT_H + +#include +#include +#include + +typedef long clkreg_t; +extern long sn_rtc_cycles_per_second; + + +#if defined(CONFIG_IA64_SGI_SN1) +#include +#include +/* clocks are not synchronized yet on SN1 - used node 0 (problem if no NASID 0) */ +#define RTC_COUNTER_ADDR ((clkreg_t*)REMOTE_HUB_ADDR(0, PI_RT_COUNTER)) +#define RTC_COMPARE_A_ADDR ((clkreg_t*)REMOTE_HUB_ADDR(0, PI_RT_COMPARE_A)) +#define RTC_COMPARE_B_ADDR ((clkreg_t*)REMOTE_HUB_ADDR(0, PI_RT_COMPARE_B)) +#define RTC_INT_PENDING_A_ADDR ((clkreg_t*)REMOTE_HUB_ADDR(0, PI_RT_INT_PEND_A)) +#define RTC_INT_PENDING_B_ADDR ((clkreg_t*)REMOTE_HUB_ADDR(0, PI_RT_INT_PEND_B)) +#define RTC_INT_ENABLED_A_ADDR ((clkreg_t*)REMOTE_HUB_ADDR(0, PI_RT_INT_EN_A)) +#define RTC_INT_ENABLED_B_ADDR ((clkreg_t*)REMOTE_HUB_ADDR(0, PI_RT_INT_EN_B)) +#else +#include +#define RTC_COUNTER_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC)) +#define RTC_COMPARE_A_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC)) +#define RTC_COMPARE_B_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC)) +#define RTC_INT_PENDING_A_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC)) +#define RTC_INT_PENDING_B_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC)) +#define RTC_INT_ENABLED_A_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC)) +#define RTC_INT_ENABLED_B_ADDR ((clkreg_t*)LOCAL_MMR_ADDR(SH_RTC)) #endif -/* Prototypes */ -extern void init_timebase(void); -extern void fastick_maint(struct eframe_s *); -extern int audioclock; -extern int prfclk_enabled_cnt; -#endif /* _ASM_KSYS_CLKSUPPORT_H */ + +#define GET_RTC_COUNTER() (*RTC_COUNTER_ADDR) +#define rtc_time() GET_RTC_COUNTER() + +#define RTC_CYCLES_PER_SEC sn_rtc_cycles_per_second + +#endif /* _ASM_IA64_SN_CLKSUPPORT_H */ diff -Nru a/include/asm-ia64/sn/dmamap.h b/include/asm-ia64/sn/dmamap.h --- a/include/asm-ia64/sn/dmamap.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/dmamap.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
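A sketch of the reworked clock interface above (illustrative only, not part of the patch): GET_RTC_COUNTER() reads the node-synchronized counter and RTC_CYCLES_PER_SEC gives its tick rate. The helper name and the rough integer conversion below are assumptions, not part of clksupport.h.

	/* Illustrative only: convert an RTC tick interval to nanoseconds. */
	static inline unsigned long rtc_interval_ns(clkreg_t start, clkreg_t end)
	{
		/* rough; loses precision when the tick rate does not divide 1e9 evenly */
		return (end - start) * (1000000000UL / RTC_CYCLES_PER_SEC);
	}

	/* usage:
	 *	clkreg_t t0 = GET_RTC_COUNTER();
	 *	... timed work ...
	 *	delta_ns = rtc_interval_ns(t0, GET_RTC_COUNTER());
	 */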
*/ -#ifndef _ASM_SN_DMAMAP_H -#define _ASM_SN_DMAMAP_H +#ifndef _ASM_IA64_SN_DMAMAP_H +#define _ASM_IA64_SN_DMAMAP_H #include @@ -70,7 +69,6 @@ extern int a24_mapsize; extern int a32_mapsize; -extern lock_t dmamaplock; extern sv_t dmamapout; #ifdef __cplusplus @@ -87,4 +85,4 @@ #define DMAMAP_FLAGS 0x7 -#endif /* _ASM_SN_DMAMAP_H */ +#endif /* _ASM_IA64_SN_DMAMAP_H */ diff -Nru a/include/asm-ia64/sn/driver.h b/include/asm-ia64/sn/driver.h --- a/include/asm-ia64/sn/driver.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/driver.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,13 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_DRIVER_H -#define _ASM_SN_DRIVER_H +#ifndef _ASM_IA64_SN_DRIVER_H +#define _ASM_IA64_SN_DRIVER_H + +#include +#include /* ** Interface for device driver handle management. @@ -18,133 +20,77 @@ */ typedef struct device_driver_s *device_driver_t; -#define DEVICE_DRIVER_NONE (device_driver_t)NULL /* == Driver thread priority support == */ typedef int ilvl_t; -/* default driver thread priority level */ -#define DRIVER_THREAD_PRI_DEFAULT (ilvl_t)230 -/* invalid driver thread priority level */ -#define DRIVER_THREAD_PRI_INVALID (ilvl_t)-1 - -/* Associate a thread priority with a driver */ -extern int device_driver_thread_pri_set(device_driver_t driver, - ilvl_t pri); - -/* Get the thread priority associated with the driver */ -extern ilvl_t device_driver_thread_pri_get(device_driver_t driver); - -/* Get the thread priority for a driver from the sysgen paramters */ -extern ilvl_t device_driver_sysgen_thread_pri_get(char *driver_prefix); - -/* Initialize device driver functions. */ -extern void device_driver_init(void); - - -/* Allocate a driver handle */ -extern device_driver_t device_driver_alloc(char *prefix); - - -/* Free a driver handle */ -extern void device_driver_free(device_driver_t driver); - - -/* Given a device driver prefix, return a handle to the driver. */ -extern device_driver_t device_driver_get(char *prefix); - -/* Given a device, return a handle to the driver. */ -extern device_driver_t device_driver_getbydev(devfs_handle_t device); -struct cdevsw; -struct bdevsw; +#ifdef __cplusplus +extern "C" { +#endif + +struct eframe_s; +struct piomap; +struct dmamap; + +typedef __psunsigned_t iobush_t; + +/* interrupt function */ +typedef void *intr_arg_t; +typedef void intr_func_f(intr_arg_t); +typedef intr_func_f *intr_func_t; + +#define INTR_ARG(n) ((intr_arg_t)(__psunsigned_t)(n)) + +/* system interrupt resource handle -- returned from intr_alloc */ +typedef struct intr_s *intr_t; +#define INTR_HANDLE_NONE ((intr_t)0) -/* Associate a driver with bdevsw/cdevsw pointers. */ -extern int -device_driver_devsw_put(device_driver_t driver, - struct bdevsw *my_bdevsw, - struct cdevsw *my_cdevsw); - - -/* Given a driver, return the corresponding bdevsw and cdevsw pointers. */ -extern void -device_driver_devsw_get( device_driver_t driver, - struct bdevsw **bdevswp, - struct cdevsw **cdevswp); - -/* Given a driver, return its name (prefix). */ -extern void device_driver_name_get(device_driver_t driver, char *buffer, int length); +/* + * restore interrupt level value, returned from intr_block_level + * for use with intr_unblock_level. + */ +typedef void *rlvl_t; /* - * A descriptor for every static device driver in the system. 
- * lboot creates a table of these and places in in master.c. - * device_driver_init runs through this table during initialization - * in order to "register" every static device driver. + * A basic, platform-independent description of I/O requirements for + * a device. This structure is usually formed by lboot based on information + * in configuration files. It contains information about PIO, DMA, and + * interrupt requirements for a specific instance of a device. + * + * The pio description is currently unused. + * + * The dma description describes bandwidth characteristics and bandwidth + * allocation requirements. (TBD) + * + * The Interrupt information describes the priority of interrupt, desired + * destination, policy (TBD), whether this is an error interrupt, etc. + * For now, interrupts are targeted to specific CPUs. */ -typedef struct static_device_driver_desc_s { - char *sdd_prefix; - struct bdevsw *sdd_bdevsw; - struct cdevsw *sdd_cdevsw; -} *static_device_driver_desc_t; - -extern struct static_device_driver_desc_s static_device_driver_table[]; -extern int static_devsw_count; +typedef struct device_desc_s { + /* pio description (currently none) */ -/*====== administration support ========== */ -/* structure of each entry in the table created by lboot for - * device / driver administration -*/ -typedef struct dev_admin_info_s { - char *dai_name; /* name of the device or driver - * prefix - */ - char *dai_param_name; /* device or driver parameter name */ - char *dai_param_val; /* value of the parameter */ -} dev_admin_info_t; - - -/* Update all the administrative hints associated with the device */ -extern void device_admin_info_update(devfs_handle_t dev_vhdl); - -/* Update all the administrative hints associated with the device driver */ -extern void device_driver_admin_info_update(device_driver_t driver); - -/* Get a particular administrative hint associated with a device */ -extern char *device_admin_info_get(devfs_handle_t dev_vhdl, - char *info_lbl); - -/* Associate a particular administrative hint for a device */ -extern int device_admin_info_set(devfs_handle_t dev_vhdl, - char *info_lbl, - char *info_val); - -/* Get a particular administrative hint associated with a device driver*/ -extern char *device_driver_admin_info_get(char *driver_prefix, - char *info_name); - -/* Associate a particular administrative hint for a device driver*/ -extern int device_driver_admin_info_set(char *driver_prefix, - char *driver_info_lbl, - char *driver_info_val); + /* dma description */ + /* TBD: allocated badwidth requirements */ -/* Initialize the extended device administrative hint table */ -extern void device_admin_table_init(void); - -/* Add a hint corresponding to a device to the extended device administrative - * hint table. - */ -extern void device_admin_table_update(char *dev_name, - char *param_name, - char *param_val); - -/* Initialize the extended device driver administrative hint table */ -extern void device_driver_admin_table_init(void); - -/* Add a hint corresponding to a device to the extended device driver - * administrative hint table. 
- */ -extern void device_driver_admin_table_update(char *drv_prefix, - char *param_name, - char *param_val); -#endif /* _ASM_SN_DRIVER_H */ + /* interrupt description */ + devfs_handle_t intr_target; /* Hardware locator string */ + int intr_policy; /* TBD */ + ilvl_t intr_swlevel; /* software level for blocking intr */ + char *intr_name; /* name of interrupt, if any */ + + int flags; +} *device_desc_t; + +/* flag values */ +#define D_INTR_ISERR 0x1 /* interrupt is for error handling */ +#define D_IS_ASSOC 0x2 /* descriptor is associated with a dev */ +#define D_INTR_NOTHREAD 0x4 /* Interrupt handler isn't threaded. */ + +#define INTR_SWLEVEL_NOTHREAD_DEFAULT 0 /* Default + * Interrupt level in case of + * non-threaded interrupt + * handlers + */ +#endif /* _ASM_IA64_SN_DRIVER_H */ diff -Nru a/include/asm-ia64/sn/eeprom.h b/include/asm-ia64/sn/eeprom.h --- a/include/asm-ia64/sn/eeprom.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/eeprom.h Tue Mar 12 13:58:15 2002 @@ -6,11 +6,10 @@ * * Public interface for reading Atmel EEPROMs via L1 system controllers * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_EEPROM_H -#define _ASM_SN_EEPROM_H +#ifndef _ASM_IA64_SN_EEPROM_H +#define _ASM_IA64_SN_EEPROM_H #include #include @@ -385,14 +384,8 @@ ( IO_BRICK, NASID_GET((r)), (v), 0 ) \ : nic_bridge_vertex_info((v), (r)) ) -#ifdef BRINGUP /* will we read mfg info from IOC3's that aren't - * part of IO7 cards, or aren't in I/O bricks? */ -#define IOC3_VERTEX_MFG_INFO(v, r, e) \ - eeprom_vertex_info_set( IO_IO7, NASID_GET((r)), (v), 0 ) -#endif /* BRINGUP */ - #define HUB_UID_GET(n,v,p) cbrick_uid_get((n),(p)) #define ROUTER_UID_GET(d,p) rbrick_uid_get(get_nasid(),(d),(p)) #define XBOW_UID_GET(n,p) iobrick_uid_get((n),(p)) -#endif /* _ASM_SN_EEPROM_H */ +#endif /* _ASM_IA64_SN_EEPROM_H */ diff -Nru a/include/asm-ia64/sn/fetchop.h b/include/asm-ia64/sn/fetchop.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/fetchop.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,40 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + + +#ifndef _ASM_IA64_SN_FETCHOP_H +#define _ASM_IA64_SN_FETCHOP_H + +#define FETCHOP_BASENAME "sgi_fetchop" +#define FETCHOP_FULLNAME "/dev/sgi_fetchop" + + + +#define FETCHOP_VAR_SIZE 64 /* 64 byte per fetchop variable */ + +#define FETCHOP_LOAD 0 +#define FETCHOP_INCREMENT 8 +#define FETCHOP_DECREMENT 16 +#define FETCHOP_CLEAR 24 + +#define FETCHOP_STORE 0 +#define FETCHOP_AND 24 +#define FETCHOP_OR 32 + +#define FETCHOP_CLEAR_CACHE 56 + +#define FETCHOP_LOAD_OP(addr, op) ( \ + *(long *)((char*) (addr) + (op))) + +#define FETCHOP_STORE_OP(addr, op, x) ( \ + *(long *)((char*) (addr) + (op)) = \ + (long) (x)) + +#endif /* _ASM_IA64_SN_FETCHOP_H */ + diff -Nru a/include/asm-ia64/sn/gda.h b/include/asm-ia64/sn/gda.h --- a/include/asm-ia64/sn/gda.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/gda.h Tue Mar 12 13:58:15 2002 @@ -6,16 +6,17 @@ * * Derived from IRIX . * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
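A sketch of the fetchop.h interface above (illustrative only, not part of the patch): each FETCHOP_* value is a byte offset into a 64-byte fetchop variable, and a load or store at that offset performs the named atomic operation. The pointer below is hypothetical (one FETCHOP_VAR_SIZE block mapped from /dev/sgi_fetchop), and whether the increment load returns the pre- or post-increment value is left to the hardware documentation.

	/* Illustrative only: exercise one mapped fetchop variable. */
	static long fetchop_bump(void *var)
	{
		FETCHOP_STORE_OP(var, FETCHOP_STORE, 0);	/* plain store of 0 */
		(void) FETCHOP_LOAD_OP(var, FETCHOP_INCREMENT);	/* atomic increment */
		return FETCHOP_LOAD_OP(var, FETCHOP_LOAD);	/* plain load back */
	}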
* * gda.h -- Contains the data structure for the global data area, * The GDA contains information communicated between the * PROM, SYMMON, and the kernel. */ -#ifndef _ASM_SN_GDA_H -#define _ASM_SN_GDA_H +#ifndef _ASM_IA64_SN_GDA_H +#define _ASM_IA64_SN_GDA_H #include +#include #define GDA_MAGIC 0x58464552 @@ -42,7 +43,7 @@ #define G_PARTIDOFF 40 #define G_TABLEOFF 128 -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ typedef struct gda { u32 g_magic; /* GDA magic number */ @@ -68,7 +69,7 @@ #define GDA ((gda_t*) GDA_ADDR(get_nasid())) -#endif /* __LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /* * Define: PART_GDA_VERSION * Purpose: Define the minimum version of the GDA required, lower @@ -105,4 +106,4 @@ #define PROMOP_BIST1 0x0800 /* keep track of which BIST ran */ #define PROMOP_BIST2 0x1000 /* keep track of which BIST ran */ -#endif /* _ASM_SN_GDA_H */ +#endif /* _ASM_IA64_SN_GDA_H */ diff -Nru a/include/asm-ia64/sn/hack.h b/include/asm-ia64/sn/hack.h --- a/include/asm-ia64/sn/hack.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/hack.h Tue Mar 12 13:58:15 2002 @@ -4,13 +4,12 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) + * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_HACK_H -#define _ASM_SN_HACK_H +#ifndef _ASM_IA64_SN_HACK_H +#define _ASM_IA64_SN_HACK_H #include #include /* for copy_??_user */ @@ -32,7 +31,6 @@ #include #define DELAY(a) -#define cpuid() 0 /************************************************ * Routines redefined to use linux equivalents. * @@ -59,14 +57,14 @@ #define spl7 splhi() #define splx(s) -extern void * kmem_alloc_node(register size_t, register int, cnodeid_t); -extern void * kmem_zalloc(size_t, int); -extern void * kmem_zalloc_node(register size_t, register int, cnodeid_t ); -extern void * kmem_zone_alloc(register zone_t *, int); -extern zone_t * kmem_zone_init(register int , char *); -extern void kmem_zone_free(register zone_t *, void *); +extern void * snia_kmem_alloc_node(register size_t, register int, cnodeid_t); +extern void * snia_kmem_zalloc(size_t, int); +extern void * snia_kmem_zalloc_node(register size_t, register int, cnodeid_t ); +extern void * snia_kmem_zone_alloc(register zone_t *, int); +extern zone_t * snia_kmem_zone_init(register int , char *); +extern void snia_kmem_zone_free(register zone_t *, void *); extern int is_specified(char *); extern int cap_able(uint64_t); extern int compare_and_swap_ptr(void **, void *, void *); -#endif /* _ASM_SN_HACK_H */ +#endif /* _ASM_IA64_SN_HACK_H */ diff -Nru a/include/asm-ia64/sn/hcl.h b/include/asm-ia64/sn/hcl.h --- a/include/asm-ia64/sn/hcl.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/hcl.h Tue Mar 12 13:58:15 2002 @@ -4,13 +4,15 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
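The hack.h hunk above renames the IRIX-style allocator shims with an snia_ prefix so they no longer collide with generic kernel names; call sites change mechanically (the ioerror_handling.h hunk further down shows a real instance). A hypothetical call site:

    /* Hypothetical call site; foo_soft_t is illustrative, while KM_NOSLEEP
     * and snia_kmem_zalloc() are the interfaces shown in this patch. */
    foo_soft_t *soft;

    soft = kmem_zalloc(sizeof(*soft), KM_NOSLEEP);        /* before */
    soft = snia_kmem_zalloc(sizeof(*soft), KM_NOSLEEP);   /* after  */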
*/ -#ifndef _ASM_SN_HCL_H -#define _ASM_SN_HCL_H +#ifndef _ASM_IA64_SN_HCL_H +#define _ASM_IA64_SN_HCL_H + +#include +#include +#include -extern spinlock_t hcl_spinlock; extern devfs_handle_t hcl_handle; /* HCL driver */ extern devfs_handle_t hwgraph_root; extern devfs_handle_t linux_busnum; @@ -93,7 +95,6 @@ extern devfs_handle_t hwgraph_char_device_get(devfs_handle_t); extern graph_error_t hwgraph_char_device_add(devfs_handle_t, char *, char *, devfs_handle_t *); extern int hwgraph_path_add(devfs_handle_t, char *, devfs_handle_t *); -extern struct file_operations * hwgraph_bdevsw_get(devfs_handle_t); extern int hwgraph_info_add_LBL(devfs_handle_t, char *, arbitrary_info_t); extern int hwgraph_info_get_LBL(devfs_handle_t, char *, arbitrary_info_t *); extern int hwgraph_info_replace_LBL(devfs_handle_t, char *, arbitrary_info_t, @@ -111,4 +112,4 @@ -#endif /* _ASM_SN_HCL_H */ +#endif /* _ASM_IA64_SN_HCL_H */ diff -Nru a/include/asm-ia64/sn/hcl_util.h b/include/asm-ia64/sn/hcl_util.h --- a/include/asm-ia64/sn/hcl_util.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/hcl_util.h Tue Mar 12 13:58:14 2002 @@ -4,12 +4,13 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_HCL_UTIL_H -#define _ASM_SN_HCL_UTIL_H +#ifndef _ASM_IA64_SN_HCL_UTIL_H +#define _ASM_IA64_SN_HCL_UTIL_H + +#include extern char * dev_to_name(devfs_handle_t, char *, uint); extern int device_master_set(devfs_handle_t, devfs_handle_t); @@ -17,8 +18,5 @@ extern cnodeid_t master_node_get(devfs_handle_t); extern cnodeid_t nodevertex_to_cnodeid(devfs_handle_t); extern void mark_nodevertex_as_node(devfs_handle_t, cnodeid_t); -extern void device_info_set(devfs_handle_t, void *); -extern void *device_info_get(devfs_handle_t); - -#endif _ASM_SN_HCL_UTIL_H +#endif /* _ASM_IA64_SN_HCL_UTIL_H */ diff -Nru a/include/asm-ia64/sn/hires_clock.h b/include/asm-ia64/sn/hires_clock.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/hires_clock.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,52 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001 Silicon Graphics, Inc. All rights reserved. + * + * SGI Hi Resolution Clock + * + * SGI SN platforms provide a high resolution clock that is + * synchronized across all nodes. The clock can be memory mapped + * and directly read from user space. + * + * Access to the clock is thru the following: + * (error checking not shown) + * + * (Note: should library routines be provided to encapsulate this??) + * + * int fd: + * volatile long *clk; + * + * fd = open (HIRES_FULLNAME, O_RDONLY); + * clk = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, fd, 0); + * clk += ioctl(fd, HIRES_IOCQGETOFFSET, 0); + * + * At this point, clk is a pointer to the high resolution clock. 
+ * + * The clock period can be obtained via: + * + * long picosec_per_tick; + * picosec_per_tick = ioctl(fd, HIRES_IOCQGETPICOSEC, 0); + */ + +#ifndef _ASM_IA64_SN_HIRES_CLOCK_H +#define _ASM_IA64_SN_HIRES_CLOCK_H + + +#define HIRES_BASENAME "sgi_hires_clock" +#define HIRES_FULLNAME "/dev/sgi_hires_clock" +#define HIRES_IOC_BASE 's' + + +/* Get page offset of hires timer */ +#define HIRES_IOCQGETOFFSET _IO( HIRES_IOC_BASE, 0 ) + +/* get clock period in picoseconds per tick */ +#define HIRES_IOCQGETPICOSEC _IO( HIRES_IOC_BASE, 1 ) + +/* get number of significant bits in clock counter */ +#define HIRES_IOCQGETCLOCKBITS _IO( HIRES_IOC_BASE, 2 ) + +#endif /* _ASM_IA64_SN_HIRES_CLOCK_H */ diff -Nru a/include/asm-ia64/sn/hubspc.h b/include/asm-ia64/sn/hubspc.h --- a/include/asm-ia64/sn/hubspc.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,25 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ -#ifndef _ASM_SN_HUBSPC_H -#define _ASM_SN_HUBSPC_H - -typedef enum { - HUBSPC_REFCOUNTERS, - HUBSPC_PROM -} hubspc_subdevice_t; - - -/* - * Reference Counters - */ - -extern int refcounters_attach(devfs_handle_t hub); - -#endif /* _ASM_SN_HUBSPC_H */ diff -Nru a/include/asm-ia64/sn/hwcntrs.h b/include/asm-ia64/sn/hwcntrs.h --- a/include/asm-ia64/sn/hwcntrs.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,97 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. 
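The hires_clock.h comment above sketches the access sequence with "error checking not shown"; the same sequence with the checks filled in is below. The pointer arithmetic (adding the HIRES_IOCQGETOFFSET result directly to a volatile long *) follows that comment verbatim, so the offset's units are taken on faith from it:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static volatile long *hires_clock_map(long *picosec_per_tick)
    {
            volatile long *clk;
            int fd;

            fd = open(HIRES_FULLNAME, O_RDONLY);
            if (fd < 0)
                    return NULL;
            clk = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
            if (clk == MAP_FAILED) {
                    close(fd);
                    return NULL;
            }
            clk += ioctl(fd, HIRES_IOCQGETOFFSET, 0);
            *picosec_per_tick = ioctl(fd, HIRES_IOCQGETPICOSEC, 0);
            return clk;     /* *clk now reads the node-synchronized clock */
    }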
- * Copyright (C) 2000 by Colin Ngam - */ -#ifndef _ASM_SN_HWCNTRS_H -#define _ASM_SN_HWCNTRS_H - - -typedef uint64_t refcnt_t; - -#define SN0_REFCNT_MAX_COUNTERS 64 - -typedef struct sn0_refcnt_set { - refcnt_t refcnt[SN0_REFCNT_MAX_COUNTERS]; - uint64_t flags; - uint64_t reserved[4]; -} sn0_refcnt_set_t; - -typedef struct sn0_refcnt_buf { - sn0_refcnt_set_t refcnt_set; - uint64_t paddr; - uint64_t page_size; - cnodeid_t cnodeid; /* cnodeid + pad[3] use 64 bits */ - uint16_t pad[3]; - uint64_t reserved[4]; -} sn0_refcnt_buf_t; - -typedef struct sn0_refcnt_args { - uint64_t vaddr; - uint64_t len; - sn0_refcnt_buf_t* buf; - uint64_t reserved[4]; -} sn0_refcnt_args_t; - -/* - * Info needed by the user level program - * to mmap the refcnt buffer - */ - -#define RCB_INFO_GET 1 -#define RCB_SLOT_GET 2 - -typedef struct rcb_info { - uint64_t rcb_len; /* total refcnt buffer len in bytes */ - - int rcb_sw_sets; /* number of sw counter sets in buffer */ - int rcb_sw_counters_per_set; /* sw counters per set -- numnodes */ - int rcb_sw_counter_size; /* sizeof(refcnt_t) -- size of sw cntr */ - - int rcb_base_pages; /* number of base pages in node */ - int rcb_base_page_size; /* sw base page size */ - uint64_t rcb_base_paddr; /* base physical address for this node */ - - int rcb_cnodeid; /* cnodeid for this node */ - int rcb_granularity; /* hw page size used for counter sets */ - uint rcb_hw_counter_max; /* max hwcounter count (width mask) */ - int rcb_diff_threshold; /* current node differential threshold */ - int rcb_abs_threshold; /* current node absolute threshold */ - int rcb_num_slots; /* physmem slots */ - - int rcb_reserved[512]; - -} rcb_info_t; - -typedef struct rcb_slot { - uint64_t base; - uint64_t size; -} rcb_slot_t; - -#if defined(__KERNEL__) -typedef struct sn0_refcnt_args_32 { - uint64_t vaddr; - uint64_t len; - app32_ptr_t buf; - uint64_t reserved[4]; -} sn0_refcnt_args_32_t; - -/* Defines and Macros */ -/* A set of reference counts are for 4k bytes of physical memory */ -#define NBPREFCNTP 0x1000 -#define BPREFCNTPSHIFT 12 -#define bytes_to_refcntpages(x) (((__psunsigned_t)(x)+(NBPREFCNTP-1))>>BPREFCNTPSHIFT) -#define refcntpage_offset(x) ((__psunsigned_t)(x)&((NBPP-1)&~(NBPREFCNTP-1))) -#define align_to_refcntpage(x) ((__psunsigned_t)(x)&(~(NBPREFCNTP-1))) - -extern void migr_refcnt_read(sn0_refcnt_buf_t*); -extern void migr_refcnt_read_extended(sn0_refcnt_buf_t*); -extern int migr_refcnt_enabled(void); - -#endif /* __KERNEL__ */ - -#endif /* _ASM_SN_HWCNTRS_H */ diff -Nru a/include/asm-ia64/sn/idle.h b/include/asm-ia64/sn/idle.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/idle.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,54 @@ +#ifndef _ASM_IA64_SN_IDLE_H +#define _ASM_IA64_SN_IDLE_H + +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include + +static __inline__ void +snidle(void) { + +#ifdef CONFIG_IA64_SGI_AUTOTEST + { + extern int autotest_enabled; + if (autotest_enabled) { + extern void llsc_main(int, long, long); + llsc_main(smp_processor_id(), 0xe000000000000000LL, 0xe000000001000000LL); + } + } +#endif + + if (pda.idle_flag == 0) { + /* + * Turn the activity LED off. 
+ */ + set_led_bits(0, LED_CPU_ACTIVITY); + } + +#ifdef CONFIG_IA64_SGI_SN_SIM + if (IS_RUNNING_ON_SIMULATOR()) + SIMULATOR_SLEEP(); +#endif + + pda.idle_flag = 1; +} + +static __inline__ void +snidleoff(void) { + /* + * Turn the activity LED on. + */ + set_led_bits(LED_CPU_ACTIVITY, LED_CPU_ACTIVITY); + + pda.idle_flag = 0; +} + +#endif /* _ASM_IA64_SN_IDLE_H */ diff -Nru a/include/asm-ia64/sn/ifconfig_net.h b/include/asm-ia64/sn/ifconfig_net.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/ifconfig_net.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,32 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_IFCONFIG_NET_H +#define _ASM_IA64_SN_IFCONFIG_NET_H + +#define NETCONFIG_FILE "/tmp/ifconfig_net" +#define POUND_CHAR '#' +#define MAX_LINE_LEN 128 +#define MAXPATHLEN 128 + +struct ifname_num { + long next_eth; + long next_fddi; + long next_hip; + long next_tr; + long next_fc; + long size; +}; + +struct ifname_MAC { + char name[16]; + unsigned char dev_addr[7]; + unsigned char addr_len; /* hardware address length */ +}; + +#endif /* _ASM_IA64_SN_IFCONFIG_NET_H */ diff -Nru a/include/asm-ia64/sn/intr.h b/include/asm-ia64/sn/intr.h --- a/include/asm-ia64/sn/intr.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/intr.h Tue Mar 12 13:58:15 2002 @@ -4,250 +4,17 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_INTR_H -#define _ASM_SN_INTR_H - -/* Subnode wildcard */ -#define SUBNODE_ANY -1 - -/* Number of interrupt levels associated with each interrupt register. */ -#define N_INTPEND_BITS 64 - -#define INT_PEND0_BASELVL 0 -#define INT_PEND1_BASELVL 64 - -#define N_INTPENDJUNK_BITS 8 -#define INTPENDJUNK_CLRBIT 0x80 +#ifndef _ASM_IA64_SN_INTR_H +#define _ASM_IA64_SN_INTR_H #include -#include - -#if LANGUAGE_C - -#define II_NAMELEN 24 - -/* - * Dispatch table entry - contains information needed to call an interrupt - * routine. - */ -typedef struct intr_vector_s { - intr_func_t iv_func; /* Interrupt handler function */ - intr_func_t iv_prefunc; /* Interrupt handler prologue func */ - void *iv_arg; /* Argument to pass to handler */ -#ifdef LATER - thd_int_t iv_tinfo; /* Thread info */ -#endif - cpuid_t iv_mustruncpu; /* Where we must run. */ -} intr_vector_t; - -/* Interrupt information table. */ -typedef struct intr_info_s { - xtalk_intr_setfunc_t ii_setfunc; /* Function to set the interrupt - * destination and level register. - * It returns 0 (success) or an - * error code. - */ - void *ii_cookie; /* arg passed to setfunc */ - devfs_handle_t ii_owner_dev; /* device that owns this intr */ - char ii_name[II_NAMELEN]; /* Name of this intr. */ - int ii_flags; /* informational flags */ -} intr_info_t; - -#define iv_tflags iv_tinfo.thd_flags -#define iv_isync iv_tinfo.thd_isync -#define iv_lat iv_tinfo.thd_latstats -#define iv_thread iv_tinfo.thd_ithread -#define iv_pri iv_tinfo.thd_pri - -#define THD_CREATED 0x00000001 /* - * We've created a thread for this - * interrupt. - */ - -/* - * Bits for ii_flags: - */ -#define II_UNRESERVE 0 -#define II_RESERVE 1 /* Interrupt reserved. 
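The snidle()/snidleoff() inlines added in idle.h above are meant to bracket the CPU's idle wait so the activity LED and pda.idle_flag track whether the processor is busy. A sketch of a caller; the actual hook point in the ia64 idle path is not part of this patch, so the surrounding function is hypothetical:

    /* Hypothetical hook in the platform idle loop; only snidle() and
     * snidleoff() come from idle.h above. */
    static void sn_cpu_idle_wait(void)
    {
            snidle();       /* activity LED off, pda.idle_flag set */
            /* ... wait here until the scheduler has work for this CPU ... */
            snidleoff();    /* activity LED back on, pda.idle_flag cleared */
    }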
*/ -#define II_INUSE 2 /* Interrupt connected */ -#define II_ERRORINT 4 /* INterrupt is an error condition */ -#define II_THREADED 8 /* Interrupt handler is threaded. */ - -/* - * Interrupt level wildcard - */ -#define INTRCONNECT_ANYBIT -1 - -/* - * This structure holds information needed both to call and to maintain - * interrupts. The two are in separate arrays for the locality benefits. - * Since there's only one set of vectors per hub chip (but more than one - * CPU, the lock to change the vector tables must be here rather than in - * the PDA. - */ - -typedef struct intr_vecblk_s { - intr_vector_t vectors[N_INTPEND_BITS]; /* information needed to - call an intr routine. */ - intr_info_t info[N_INTPEND_BITS]; /* information needed only - to maintain interrupts. */ - spinlock_t vector_lock; /* Lock for this and the - masks in the PDA. */ - splfunc_t vector_spl; /* vector_lock req'd spl */ - int vector_state; /* Initialized to zero. - Set to INTR_INITED - by hubintr_init. - */ - int vector_count; /* Number of vectors - * reserved. - */ - int cpu_count[CPUS_PER_SUBNODE]; /* How many interrupts are - * connected to each CPU - */ - int ithreads_enabled; /* Are interrupt threads - * initialized on this node. - * and block? - */ -} intr_vecblk_t; - -/* Possible values for vector_state: */ -#define VECTOR_UNINITED 0 -#define VECTOR_INITED 1 -#define VECTOR_SET 2 - -#define hub_intrvect0 private.p_intmasks.dispatch0->vectors -#define hub_intrvect1 private.p_intmasks.dispatch1->vectors -#define hub_intrinfo0 private.p_intmasks.dispatch0->info -#define hub_intrinfo1 private.p_intmasks.dispatch1->info - -/* - * Macros to manipulate the interrupt register on the calling hub chip. - */ - -#define LOCAL_HUB_SEND_INTR(_level) LOCAL_HUB_S(PI_INT_PEND_MOD, \ - (0x100|(_level))) -#define REMOTE_HUB_PI_SEND_INTR(_hub, _sn, _level) \ - REMOTE_HUB_PI_S((_hub), _sn, PI_INT_PEND_MOD, (0x100|(_level))) - -#define REMOTE_CPU_SEND_INTR(_cpuid, _level) \ - REMOTE_HUB_PI_S(cputonasid(_cpuid), \ - SUBNODE(cputoslice(_cpuid)), \ - PI_INT_PEND_MOD, (0x100|(_level))) - -/* - * When clearing the interrupt, make sure this clear does make it - * to the hub. Otherwise we could end up losing interrupts. - * We do an uncached load of the int_pend0 register to ensure this. - */ - -#define LOCAL_HUB_CLR_INTR(_level) \ - LOCAL_HUB_S(PI_INT_PEND_MOD, (_level)), \ - LOCAL_HUB_L(PI_INT_PEND0) -#define REMOTE_HUB_PI_CLR_INTR(_hub, _sn, _level) \ - REMOTE_HUB_PI_S((_hub), (_sn), PI_INT_PEND_MOD, (_level)), \ - REMOTE_HUB_PI_L((_hub), (_sn), PI_INT_PEND0) - -/* Special support for use by gfx driver only. Supports special gfx hub interrupt. */ -extern void install_gfxintr(cpuid_t cpu, ilvl_t swlevel, intr_func_t intr_func, void *intr_arg); - -void setrtvector(intr_func_t func); - -/* - * Interrupt blocking - */ -extern void intr_block_bit(cpuid_t cpu, int bit); -extern void intr_unblock_bit(cpuid_t cpu, int bit); - -#endif /* LANGUAGE_C */ - -/* - * Hard-coded interrupt levels: - */ - -/* - * L0 = SW1 - * L1 = SW2 - * L2 = INT_PEND0 - * L3 = INT_PEND1 - * L4 = RTC - * L5 = Profiling Timer - * L6 = Hub Errors - * L7 = Count/Compare (T5 counters) - */ - - -/* INT_PEND0 hard-coded bits. */ -#ifdef DEBUG_INTR_TSTAMP -/* hard coded interrupt level for interrupt latency test interrupt */ -#define CPU_INTRLAT_B 62 -#define CPU_INTRLAT_A 61 -#endif - -/* Hardcoded bits required by software. 
*/ -#define MSC_MESG_INTR 9 -#define CPU_ACTION_B 8 -#define CPU_ACTION_A 7 - -/* These are determined by hardware: */ -#define CC_PEND_B 6 -#define CC_PEND_A 5 -#define UART_INTR 4 -#define PG_MIG_INTR 3 -#define GFX_INTR_B 2 -#define GFX_INTR_A 1 -#define RESERVED_INTR 0 - -/* INT_PEND1 hard-coded bits: */ -#define MSC_PANIC_INTR 63 -#define NI_ERROR_INTR 62 -#define MD_COR_ERR_INTR 61 -#define COR_ERR_INTR_B 60 -#define COR_ERR_INTR_A 59 -#define CLK_ERR_INTR 58 - -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC -# define NACK_INT_B 57 -# define NACK_INT_A 56 -# define LB_ERROR 55 -# define XB_ERROR 54 -#else - << BOMB! >> Must define IP27 or IP35 or IP37 -#endif - -#define BRIDGE_ERROR_INTR 53 /* Setup by PROM to catch Bridge Errors */ - -#define IP27_INTR_0 52 /* Reserved for PROM use */ -#define IP27_INTR_1 51 /* (do not use in Kernel) */ -#define IP27_INTR_2 50 -#define IP27_INTR_3 49 -#define IP27_INTR_4 48 -#define IP27_INTR_5 47 -#define IP27_INTR_6 46 -#define IP27_INTR_7 45 - -#define TLB_INTR_B 44 /* used for tlb flush random */ -#define TLB_INTR_A 43 - -#define LLP_PFAIL_INTR_B 42 /* see ml/SN/SN0/sysctlr.c */ -#define LLP_PFAIL_INTR_A 41 - -#define NI_BRDCAST_ERR_B 40 -#define NI_BRDCAST_ERR_A 39 - -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC -# define IO_ERROR_INTR 38 /* set up by prom */ -# define DEBUG_INTR_B 37 /* used by symmon to stop all cpus */ -# define DEBUG_INTR_A 36 -#endif -#ifdef CONFIG_IA64_SGI_SN1 -// These aren't strictly accurate or complete. See the -// Synergy Spec. for details. -#define SGI_UART_IRQ (65) -#define SGI_HUB_ERROR_IRQ (182) +#if defined(CONFIG_IA64_SGI_SN1) +#include +#elif defined(CONFIG_IA64_SGI_SN2) +#include #endif -#endif /* _ASM_SN_INTR_H */ +#endif /* _ASM_IA64_SN_INTR_H */ diff -Nru a/include/asm-ia64/sn/intr_public.h b/include/asm-ia64/sn/intr_public.h --- a/include/asm-ia64/sn/intr_public.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/sn/intr_public.h Tue Mar 12 13:58:16 2002 @@ -4,56 +4,16 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_INTR_PUBLIC_H__ -#define _ASM_SN_INTR_PUBLIC_H__ +#ifndef _ASM_IA64_SN_INTR_PUBLIC_H +#define _ASM_IA64_SN_INTR_PUBLIC_H #include -/* REMEMBER: If you change these, the whole world needs to be recompiled. - * It would also require changing the hubspl.s code and SN0/intr.c - * Currently, the spl code has no support for multiple INTPEND1 masks. - */ - -#define N_INTPEND0_MASKS 1 -#define N_INTPEND1_MASKS 1 - -#define INTPEND0_MAXMASK (N_INTPEND0_MASKS - 1) -#define INTPEND1_MAXMASK (N_INTPEND1_MASKS - 1) - -#if _LANGUAGE_C -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include +#if defined(CONFIG_IA64_SGI_SN1) +#include +#elif defined(CONFIG_IA64_SGI_SN2) #endif -#include - -struct intr_vecblk_s; /* defined in asm/sn/intr.h */ - -/* - * The following are necessary to create the illusion of a CEL - * on the IP27 hub. We'll add more priority levels soon, but for - * now, any interrupt in a particular band effectively does an spl. - * These must be in the PDA since they're different for each processor. - * Users of this structure must hold the vector_lock in the appropriate vector - * block before modifying the mask arrays. 
There's only one vector block - * for each Hub so a lock in the PDA wouldn't be adequate. - */ -typedef struct hub_intmasks_s { - /* - * The masks are stored with the lowest-priority (most inclusive) - * in the lowest-numbered masks (i.e., 0, 1, 2...). - */ - /* INT_PEND0: */ - hubreg_t intpend0_masks[N_INTPEND0_MASKS]; - /* INT_PEND1: */ - hubreg_t intpend1_masks[N_INTPEND1_MASKS]; - /* INT_PEND0: */ - struct intr_vecblk_s *dispatch0; - /* INT_PEND1: */ - struct intr_vecblk_s *dispatch1; -} hub_intmasks_t; -#endif /* _LANGUAGE_C */ -#endif /* _ASM_SN_INTR_PUBLIC_H__ */ +#endif /* _ASM_IA64_SN_INTR_PUBLIC_H */ diff -Nru a/include/asm-ia64/sn/invent.h b/include/asm-ia64/sn/invent.h --- a/include/asm-ia64/sn/invent.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/invent.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,13 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_INVENT_H -#define _ASM_SN_INVENT_H +#ifndef _ASM_IA64_SN_INVENT_H +#define _ASM_IA64_SN_INVENT_H + +#include +#include /* * sys/sn/invent.h -- Kernel Hardware Inventory @@ -743,4 +745,4 @@ int); extern int device_controller_num_get( devfs_handle_t); #endif /* __KERNEL__ */ -#endif /* _ASM_SN_INVENT_H */ +#endif /* _ASM_IA64_SN_INVENT_H */ diff -Nru a/include/asm-ia64/sn/io.h b/include/asm-ia64/sn/io.h --- a/include/asm-ia64/sn/io.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/io.h Tue Mar 12 13:58:14 2002 @@ -1,21 +1,17 @@ - -/* $Id: io.h,v 1.2 2000/02/02 16:35:57 ralf Exp $ - * +/* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * * Copyright (C) 2000 Ralf Baechle - * Copyright (C) 2000 Silicon Graphics, Inc. + * Copyright (C) 2000-2001 Silicon Graphics, Inc. */ -#ifndef _ASM_SN_IO_H -#define _ASM_SN_IO_H +#ifndef _ASM_IA64_SN_IO_H +#define _ASM_IA64_SN_IO_H #include -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include -#endif +#include /* Because we only have PCI I/O ports. */ #define IIO_ITTE_BASE 0x400160 /* base of translation table entries */ @@ -51,17 +47,35 @@ #define IIO_ITTE_GET(nasid, bigwin) REMOTE_HUB_ADDR((nasid), IIO_ITTE(bigwin)) /* - * Macro which takes the widget number, and returns the + * Macro which takes the widget number, and returns the * IO PRB address of that widget. - * value _x is expected to be a widget number in the range + * value _x is expected to be a widget number in the range * 0, 8 - 0xF */ #define IIO_IOPRB(_x) (IIO_IOPRB_0 + ( ( (_x) < HUB_WIDGET_ID_MIN ? 
\ (_x) : \ (_x) - (HUB_WIDGET_ID_MIN-1)) << 3) ) -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) +#if defined(CONFIG_IA64_SGI_SN1) +#include #include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#elif defined(CONFIG_IA64_SGI_SN2) +#include +#include #endif -#endif /* _ASM_SN_IO_H */ +#endif /* _ASM_IA64_SN_IO_H */ diff -Nru a/include/asm-ia64/sn/iobus.h b/include/asm-ia64/sn/iobus.h --- a/include/asm-ia64/sn/iobus.h Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,185 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ -#ifndef _ASM_SN_IOBUS_H -#define _ASM_SN_IOBUS_H - -#ifdef __cplusplus -extern "C" { -#endif - -struct eframe_s; -struct piomap; -struct dmamap; - - -/* for ilvl_t interrupt level, for use with intr_block_level. Can't - * typedef twice without causing warnings, and some users of this header - * file do not already include driver.h, but expect ilvl_t to be defined, - * while others include both, leading to the warning ... - */ - -#include -#include - - -typedef __psunsigned_t iobush_t; - -#if __KERNEL__ -/* adapter handle */ -typedef devfs_handle_t adap_t; -#endif - - -/* interrupt function */ -typedef void *intr_arg_t; -typedef void intr_func_f(intr_arg_t); -typedef intr_func_f *intr_func_t; - -#define INTR_ARG(n) ((intr_arg_t)(__psunsigned_t)(n)) - -/* system interrupt resource handle -- returned from intr_alloc */ -typedef struct intr_s *intr_t; -#define INTR_HANDLE_NONE ((intr_t)0) - -/* - * restore interrupt level value, returned from intr_block_level - * for use with intr_unblock_level. - */ -typedef void *rlvl_t; - - -/* - * A basic, platform-independent description of I/O requirements for - * a device. This structure is usually formed by lboot based on information - * in configuration files. It contains information about PIO, DMA, and - * interrupt requirements for a specific instance of a device. - * - * The pio description is currently unused. - * - * The dma description describes bandwidth characteristics and bandwidth - * allocation requirements. (TBD) - * - * The Interrupt information describes the priority of interrupt, desired - * destination, policy (TBD), whether this is an error interrupt, etc. - * For now, interrupts are targeted to specific CPUs. - */ - -typedef struct device_desc_s { - /* pio description (currently none) */ - - /* dma description */ - /* TBD: allocated badwidth requirements */ - - /* interrupt description */ - devfs_handle_t intr_target; /* Hardware locator string */ - int intr_policy; /* TBD */ - ilvl_t intr_swlevel; /* software level for blocking intr */ - char *intr_name; /* name of interrupt, if any */ - - int flags; -} *device_desc_t; - -/* flag values */ -#define D_INTR_ISERR 0x1 /* interrupt is for error handling */ -#define D_IS_ASSOC 0x2 /* descriptor is associated with a dev */ -#define D_INTR_NOTHREAD 0x4 /* Interrupt handler isn't threaded. */ - -#define INTR_SWLEVEL_NOTHREAD_DEFAULT 0 /* Default - * Interrupt level in case of - * non-threaded interrupt - * handlers - */ -/* - * Drivers use these interfaces to manage device descriptors. 
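The IIO_IOPRB() macro just above packs the widget PRB registers: widget 0 keeps slot 0 and widgets HUB_WIDGET_ID_MIN through 0xF follow at an 8-byte stride (the << 3). Assuming HUB_WIDGET_ID_MIN is 8, which is what the "range 0, 8 - 0xF" comment implies, the mapping works out as:

    /* Assuming HUB_WIDGET_ID_MIN == 8 (per the "0, 8 - 0xF" comment): */
    IIO_IOPRB(0x0)   /* == IIO_IOPRB_0 + (0 << 3)   widget 0  -> slot 0 */
    IIO_IOPRB(0x8)   /* == IIO_IOPRB_0 + (1 << 3)   widget 8  -> slot 1 */
    IIO_IOPRB(0xF)   /* == IIO_IOPRB_0 + (8 << 3)   widget 15 -> slot 8 */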
- * - * To examine defaults: - * desc = device_desc_default_get(dev); - * device_desc_*_get(desc); - * - * To modify defaults: - * desc = device_desc_default_get(dev); - * device_desc_*_set(desc); - * - * To eliminate defaults: - * device_desc_default_set(dev, NULL); - * - * To override defaults: - * desc = device_desc_dup(dev); - * device_desc_*_set(desc,...); - * use device_desc in calls - * device_desc_free(desc); - * - * Software must not set or eliminate default device descriptors for a device while - * concurrently get'ing, dup'ing or using them. Default device descriptors can be - * changed only for a device that is quiescent. In general, device drivers have no - * need to permanently change defaults anyway -- they just override defaults, when - * necessary. - */ -extern device_desc_t device_desc_dup(devfs_handle_t dev); -extern void device_desc_free(device_desc_t device_desc); -extern device_desc_t device_desc_default_get(devfs_handle_t dev); -extern void device_desc_default_set(devfs_handle_t dev, device_desc_t device_desc); - -extern devfs_handle_t device_desc_intr_target_get(device_desc_t device_desc); -extern int device_desc_intr_policy_get(device_desc_t device_desc); -extern ilvl_t device_desc_intr_swlevel_get(device_desc_t device_desc); -extern char * device_desc_intr_name_get(device_desc_t device_desc); -extern int device_desc_flags_get(device_desc_t device_desc); - -extern void device_desc_intr_target_set(device_desc_t device_desc, devfs_handle_t target); -extern void device_desc_intr_policy_set(device_desc_t device_desc, int policy); -extern void device_desc_intr_swlevel_set(device_desc_t device_desc, ilvl_t swlevel); -extern void device_desc_intr_name_set(device_desc_t device_desc, char *name); -extern void device_desc_flags_set(device_desc_t device_desc, int flags); - - -/* IO state */ -#ifdef COMMENT -#define IO_STATE_EMPTY 0x01 /* non-existent */ -#define IO_STATE_INITIALIZING 0x02 /* being initialized */ -#define IO_STATE_ATTACHING 0x04 /* becoming active */ -#define IO_STATE_ACTIVE 0x08 /* active */ -#define IO_STATE_DETACHING 0x10 /* becoming inactive */ -#define IO_STATE_INACTIVE 0x20 /* not in use */ -#define IO_STATE_ERROR 0x40 /* problems */ -#define IO_STATE_BAD_HARDWARE 0x80 /* broken hardware */ -#endif - -struct edt; - - -/* return codes */ -#define RC_OK 0 -#define RC_ERROR 1 - -/* bus configuration management op code */ -#define IOBUS_CONFIG_ATTACH 0 /* vary on */ -#define IOBUS_CONFIG_DETACH 1 /* vary off */ -#define IOBUS_CONFIG_RECOVER 2 /* clear error then vary on */ - -/* get low-level PIO handle */ -extern int pio_geth(struct piomap*, int bus, int bus_id, int subtype, - iopaddr_t addr, int size); - -/* get low-level DMA handle */ -extern int dma_geth(struct dmamap*, int bus_type, int bus_id, int dma_type, - int npages, int page_size, int flags); - -#ifdef __cplusplus -} -#endif - -/* - * Macros for page number and page offsets, using ps as page size - */ -#define x_pnum(addr, ps) ((__psunsigned_t)(addr) / (__psunsigned_t)(ps)) -#define x_poff(addr, ps) ((__psunsigned_t)(addr) & ((__psunsigned_t)(ps) - 1)) - -#endif /* _ASM_SN_IOBUS_H */ diff -Nru a/include/asm-ia64/sn/ioc3.h b/include/asm-ia64/sn/ioc3.h --- a/include/asm-ia64/sn/ioc3.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/ioc3.h Tue Mar 12 13:58:15 2002 @@ -1,10 +1,44 @@ +/* + * Copyright (c) 2002 Silicon Graphics, Inc. All Rights Reserved. 
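With iobus.h deleted here, the device_desc_s definition survives only in driver.h (earlier in this patch), while the accessor wrappers above go away with the file. The patch does not show the surviving way to build a descriptor, so the direct field initialization below is an assumption; only the field and flag names come from the driver.h hunk, and "err_intr_dest" is a hypothetical devfs_handle_t:

    /* Sketch only: direct initialization assumed, err_intr_dest hypothetical. */
    struct device_desc_s desc = {
            .intr_target  = err_intr_dest,
            .intr_policy  = 0,                              /* TBD per the header */
            .intr_swlevel = INTR_SWLEVEL_NOTHREAD_DEFAULT,
            .intr_name    = "widget error",
            .flags        = D_INTR_ISERR | D_INTR_NOTHREAD,
    };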
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. + * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + /* $Id: ioc3.h,v 1.2 2000/11/16 19:49:17 pfg Exp $ * * Copyright (C) 1999 Ralf Baechle * This file is part of the Linux driver for the SGI IOC3. */ -#ifndef IOC3_H -#define IOC3_H +#ifndef _ASM_IA64_SN_IOC3_H +#define _ASM_IA64_SN_IOC3_H + +#include /* SUPERIO uart register map */ typedef volatile struct ioc3_uartregs { @@ -668,4 +702,4 @@ #define IOC3_VENDOR_ID_NUM 0x10A9 #define IOC3_DEVICE_ID_NUM 0x0003 -#endif /* IOC3_H */ +#endif /* _ASM_IA64_SN_IOC3_H */ diff -Nru a/include/asm-ia64/sn/ioerror.h b/include/asm-ia64/sn/ioerror.h --- a/include/asm-ia64/sn/ioerror.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/ioerror.h Tue Mar 12 13:58:15 2002 @@ -4,13 +4,15 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_IOERROR_H -#define _ASM_SN_IOERROR_H +#ifndef _ASM_IA64_SN_IOERROR_H +#define _ASM_IA64_SN_IOERROR_H -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) +#ifndef __ASSEMBLY__ + +#include +#include /* * Macros defining the various Errors to be handled as part of @@ -162,7 +164,6 @@ #define IOERROR_FIELDVALID(e,f) (((e)->ie_v.iev_b.ievb_ ## f) != 0) #define IOERROR_GETVALUE(e,f) (ASSERT(IOERROR_FIELDVALID(e,f)),((e)->ie_ ## f)) -#if IP27 || IP35 /* hub code likes to call the SysAD address "hubaddr" ... */ #define ie_hubaddr ie_sysioaddr #define ievb_hubaddr ievb_sysioaddr @@ -178,7 +179,6 @@ MODE_DEVREENABLE /* Reenable pass */ } ioerror_mode_t; -#endif /* C || C++ */ typedef int error_handler_f(void *, int, ioerror_mode_t, ioerror_t *); typedef void *error_handler_arg_t; @@ -193,4 +193,4 @@ #define IOERR_PRINTF(x) #endif /* ERROR_DEBUG */ -#endif /* _ASM_SN_IOERROR_H */ +#endif /* _ASM_IA64_SN_IOERROR_H */ diff -Nru a/include/asm-ia64/sn/ioerror_handling.h b/include/asm-ia64/sn/ioerror_handling.h --- a/include/asm-ia64/sn/ioerror_handling.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/ioerror_handling.h Tue Mar 12 13:58:15 2002 @@ -1,16 +1,17 @@ -/* $Id$ - * +/* * This file is subject to the terms and conditions of the GNU General Public * License. 
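The ioerror accessors kept above lean on token pasting: IOERROR_GETVALUE(e, f) first asserts the ievb_##f "valid" bit and only then reads ie_##f. Using the sysioaddr field, which is visible above through the ie_hubaddr/ievb_hubaddr aliases, the expansion is roughly:

    /* IOERROR_GETVALUE(e, sysioaddr) expands, via ## pasting, to: */
    (ASSERT((((e)->ie_v.iev_b.ievb_sysioaddr) != 0)), ((e)->ie_sysioaddr))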
See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_IOERROR_HANDLING_H -#define _ASM_SN_IOERROR_HANDLING_H +#ifndef _ASM_IA64_SN_IOERROR_HANDLING_H +#define _ASM_IA64_SN_IOERROR_HANDLING_H #include +#include +#include +#include #if __KERNEL__ @@ -264,7 +265,7 @@ * one. */ if (v_error_skip_env_get(v, error_env) != GRAPH_SUCCESS) { - error_env = kmem_zalloc(sizeof(label_t), KM_NOSLEEP); + error_env = snia_kmem_zalloc(sizeof(label_t), KM_NOSLEEP); /* Unable to allocate memory for jum buffer. This should * be a very rare occurrence. */ @@ -302,4 +303,4 @@ #endif #endif /* __KERNEL__ */ -#endif /* _ASM_SN_IOERROR_HANDLING_H */ +#endif /* _ASM_IA64_SN_IOERROR_HANDLING_H */ diff -Nru a/include/asm-ia64/sn/iograph.h b/include/asm-ia64/sn/iograph.h --- a/include/asm-ia64/sn/iograph.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/iograph.h Tue Mar 12 13:58:14 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_IOGRAPH_H -#define _ASM_SN_IOGRAPH_H +#ifndef _ASM_IA64_SN_IOGRAPH_H +#define _ASM_IA64_SN_IOGRAPH_H /* * During initialization, platform-dependent kernel code establishes some @@ -68,6 +67,7 @@ #define EDGE_LBL_HPC "hpc" #define EDGE_LBL_GFX "gfx" #define EDGE_LBL_HUB "hub" /* For SN0 */ +#define EDGE_LBL_SYNERGY "synergy" /* For SNIA only */ #define EDGE_LBL_IBUS "ibus" /* For EVEREST */ #define EDGE_LBL_INTERCONNECT "link" #define EDGE_LBL_IO "io" @@ -216,4 +216,4 @@ }; -#endif /* _ASM_SN_IOGRAPH_H */ +#endif /* _ASM_IA64_SN_IOGRAPH_H */ diff -Nru a/include/asm-ia64/sn/klclock.h b/include/asm-ia64/sn/klclock.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/klclock.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,60 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1996, 2001 Silicon Graphics, Inc. All rights reserved. 
+ * Copyright (C) 2001 by Ralf Baechle + */ +#ifndef _ASM_IA64_SN_KLCLOCK_H +#define _ASM_IA64_SN_KLCLOCK_H + +#include + +#define RTC_BASE_ADDR (unsigned char *)(nvram_base) + +/* Defines for the SGS-Thomson M48T35 clock */ +#define RTC_SGS_WRITE_ENABLE 0x80 +#define RTC_SGS_READ_PROTECT 0x40 +#define RTC_SGS_YEAR_ADDR (RTC_BASE_ADDR + 0x7fffL) +#define RTC_SGS_MONTH_ADDR (RTC_BASE_ADDR + 0x7ffeL) +#define RTC_SGS_DATE_ADDR (RTC_BASE_ADDR + 0x7ffdL) +#define RTC_SGS_DAY_ADDR (RTC_BASE_ADDR + 0x7ffcL) +#define RTC_SGS_HOUR_ADDR (RTC_BASE_ADDR + 0x7ffbL) +#define RTC_SGS_MIN_ADDR (RTC_BASE_ADDR + 0x7ffaL) +#define RTC_SGS_SEC_ADDR (RTC_BASE_ADDR + 0x7ff9L) +#define RTC_SGS_CONTROL_ADDR (RTC_BASE_ADDR + 0x7ff8L) + +/* Defines for the Dallas DS1386 */ +#define RTC_DAL_UPDATE_ENABLE 0x80 +#define RTC_DAL_UPDATE_DISABLE 0x00 +#define RTC_DAL_YEAR_ADDR (RTC_BASE_ADDR + 0xaL) +#define RTC_DAL_MONTH_ADDR (RTC_BASE_ADDR + 0x9L) +#define RTC_DAL_DATE_ADDR (RTC_BASE_ADDR + 0x8L) +#define RTC_DAL_DAY_ADDR (RTC_BASE_ADDR + 0x6L) +#define RTC_DAL_HOUR_ADDR (RTC_BASE_ADDR + 0x4L) +#define RTC_DAL_MIN_ADDR (RTC_BASE_ADDR + 0x2L) +#define RTC_DAL_SEC_ADDR (RTC_BASE_ADDR + 0x1L) +#define RTC_DAL_CONTROL_ADDR (RTC_BASE_ADDR + 0xbL) +#define RTC_DAL_USER_ADDR (RTC_BASE_ADDR + 0xeL) + +/* Defines for the Dallas DS1742 */ +#define RTC_DS1742_WRITE_ENABLE 0x80 +#define RTC_DS1742_READ_ENABLE 0x40 +#define RTC_DS1742_UPDATE_DISABLE 0x00 +#define RTC_DS1742_YEAR_ADDR (RTC_BASE_ADDR + 0x7ffL) +#define RTC_DS1742_MONTH_ADDR (RTC_BASE_ADDR + 0x7feL) +#define RTC_DS1742_DATE_ADDR (RTC_BASE_ADDR + 0x7fdL) +#define RTC_DS1742_DAY_ADDR (RTC_BASE_ADDR + 0x7fcL) +#define RTC_DS1742_HOUR_ADDR (RTC_BASE_ADDR + 0x7fbL) +#define RTC_DS1742_MIN_ADDR (RTC_BASE_ADDR + 0x7faL) +#define RTC_DS1742_SEC_ADDR (RTC_BASE_ADDR + 0x7f9L) +#define RTC_DS1742_CONTROL_ADDR (RTC_BASE_ADDR + 0x7f8L) +#define RTC_DS1742_USER_ADDR (RTC_BASE_ADDR + 0x0L) + +#define BCD_TO_INT(x) (((x>>4) * 10) + (x & 0xf)) +#define INT_TO_BCD(x) (((x / 10)<<4) + (x % 10)) + +#define YRREF 1970 + +#endif /* _ASM_IA64_SN_KLCLOCK_H */ diff -Nru a/include/asm-ia64/sn/klconfig.h b/include/asm-ia64/sn/klconfig.h --- a/include/asm-ia64/sn/klconfig.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/klconfig.h Tue Mar 12 13:58:15 2002 @@ -6,11 +6,11 @@ * * Derived from IRIX . * - * Copyright (C) 1992 - 1997, 1999 Silicon Graphics, Inc. + * Copyright (C) 1992-1997,1999,2001-2002 Silicon Graphics, Inc. All Rights Reserved. * Copyright (C) 1999 by Ralf Baechle */ -#ifndef _ASM_SN_KLCONFIG_H -#define _ASM_SN_KLCONFIG_H +#ifndef _ASM_IA64_SN_KLCONFIG_H +#define _ASM_IA64_SN_KLCONFIG_H #include @@ -38,20 +38,22 @@ #include #include #include -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) #include -#include +#include #include -#include -// #include -// #include #include #include #include #include #include -#endif /* CONFIG_SGI_IP35 ... */ +#ifdef CONFIG_IA64_SGI_SN1 +#include +#endif + +#ifdef CONFIG_IA64_SGI_SN2 +#include +#endif #define KLCFGINFO_MAGIC 0xbeedbabe @@ -59,19 +61,11 @@ #define MAX_MODULE_ID 255 #define SIZE_PAD 4096 /* 4k padding for structures */ -#if (defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC)) && defined(BRINGUP) /* MAX_SLOTS_PER_NODE??? 
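The BCD helpers added in klclock.h above convert between the packed-decimal form the RTC parts store and plain integers. A worked pair of values, plus an illustrative read of the DS1742 seconds register; nvram_base, which RTC_BASE_ADDR dereferences, is assumed to be set up elsewhere, and any chip-specific masking or locking is omitted:

    /* BCD_TO_INT(0x59) == (5 * 10) + 9 == 59;  INT_TO_BCD(59) == 0x59 */

    /* Illustrative only: convert the DS1742 seconds counter. */
    int secs = BCD_TO_INT(*RTC_DS1742_SEC_ADDR);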
*/ /* * 1 NODE brick, 3 Router bricks (1 local, 1 meta, 1 repeater), * 6 XIO Widgets, 1 Xbow, 1 gfx */ #define MAX_SLOTS_PER_NODE (1 + 3 + 6 + 1 + 1) -#else -/* - * 1 NODE brd, 2 Router brd (1 8p, 1 meta), 6 Widgets, - * 2 Midplanes assuming no pci card cages - */ -#define MAX_SLOTS_PER_NODE (1 + 2 + 6 + 2) -#endif /* XXX if each node is guranteed to have some memory */ @@ -349,7 +343,7 @@ #define KLCLASS(_x) ((_x) & KLCLASS_MASK) /* - * IP27 board types + * board types */ #define KLTYPE_MASK 0x0f @@ -357,11 +351,7 @@ #define KLTYPE_EMPTY 0x00 #define KLTYPE_WEIRDCPU (KLCLASS_CPU | 0x0) -#define KLTYPE_IP27 (KLCLASS_CPU | 0x1) /* 2 CPUs(R10K) per board */ -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#define KLTYPE_IP35 KLTYPE_IP27 -#define KLTYPE_IP37 KLTYPE_IP35 -#endif +#define KLTYPE_SNIA (KLCLASS_CPU | 0x1) #define KLTYPE_WEIRDIO (KLCLASS_IO | 0x0) #define KLTYPE_BASEIO (KLCLASS_IO | 0x1) /* IOC3, SuperIO, Bridge, SCSI */ @@ -949,7 +939,6 @@ extern int config_find_nic_router(nasid_t, nic_t, lboard_t **, klrou_t**); extern int config_find_nic_hub(nasid_t, nic_t, lboard_t **, klhub_t**); extern int config_find_xbow(nasid_t, lboard_t **, klxbow_t**); -extern klcpu_t *get_cpuinfo(cpuid_t cpu); extern int update_klcfg_cpuinfo(nasid_t, int); extern void board_to_path(lboard_t *brd, char *path); extern moduleid_t get_module_id(nasid_t nasid); @@ -963,4 +952,4 @@ extern nasid_t get_actual_nasid(lboard_t *brd) ; extern net_vec_t klcfg_discover_route(lboard_t *, lboard_t *, int); -#endif /* _ASM_SN_KLCONFIG_H */ +#endif /* _ASM_IA64_SN_KLCONFIG_H */ diff -Nru a/include/asm-ia64/sn/kldir.h b/include/asm-ia64/sn/kldir.h --- a/include/asm-ia64/sn/kldir.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/kldir.h Tue Mar 12 13:58:15 2002 @@ -1,18 +1,16 @@ -/* $Id$ - * +/* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * * Derived from IRIX , revision 1.21. * - * Copyright (C) 1992 - 1997, 1999 Silicon Graphics, Inc. + * Copyright (C) 1992-1997,1999,2001-2002 Silicon Graphics, Inc. All Rights Reserved. * Copyright (C) 1999 by Ralf Baechle */ -#ifndef _ASM_SN_KLDIR_H -#define _ASM_SN_KLDIR_H +#ifndef _ASM_IA64_SN_KLDIR_H +#define _ASM_IA64_SN_KLDIR_H -#include #include /* @@ -125,16 +123,16 @@ * 0x0 (0K) +-----------------------------------------+ */ -#ifdef LANGUAGE_ASSEMBLY +#ifdef __ASSEMBLY__ #define KLDIR_OFF_MAGIC 0x00 #define KLDIR_OFF_OFFSET 0x08 #define KLDIR_OFF_POINTER 0x10 #define KLDIR_OFF_SIZE 0x18 #define KLDIR_OFF_COUNT 0x20 #define KLDIR_OFF_STRIDE 0x28 -#endif /* LANGUAGE_ASSEMBLY */ +#endif /* __ASSEMBLY__ */ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ typedef struct kldir_ent_s { u64 magic; /* Indicates validity of entry */ off_t offset; /* Offset from start of node space */ @@ -146,19 +144,220 @@ /* NOTE: These 16 bytes are used in the Partition KLDIR entry to store partition info. Refer to klpart.h for this. */ } kldir_ent_t; -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #define KLDIR_ENT_SIZE 0x40 #define KLDIR_MAX_ENTRIES (0x400 / 0x40) + + /* - * The actual offsets of each memory area are machine-dependent + * The upper portion of the memory map applies during boot + * only and is overwritten by IRIX/SYMMON. The minimum memory bank + * size on IP35 is 64M, which provides a limit on the amount of space + * the PROM can assume it has available. 
+ * + * Most of the addresses below are defined as macros in this file, or + * in SN/addrs.h or SN/SN1/addrs.h. + * + * MEMORY MAP PER NODE + * + * 0x4000000 (64M) +-----------------------------------------+ + * | | + * | | + * | IO7 TEXT/DATA/BSS/stack | + * 0x3000000 (48M) +-----------------------------------------+ + * | Free | + * 0x2102000 (>33M) +-----------------------------------------+ + * | IP35 Topology (PCFG) + misc data | + * 0x2000000 (32M) +-----------------------------------------+ + * | IO7 BUFFERS FOR FLASH ENET IOC3 | + * 0x1F80000 (31.5M) +-----------------------------------------+ + * | Free | + * 0x1C00000 (28M) +-----------------------------------------+ + * | IP35 PROM TEXT/DATA/BSS/stack | + * 0x1A00000 (26M) +-----------------------------------------+ + * | Routing temp. space | + * 0x1800000 (24M) +-----------------------------------------+ + * | Diagnostics temp. space | + * 0x1500000 (21M) +-----------------------------------------+ + * | Free | + * 0x1400000 (20M) +-----------------------------------------+ + * | IO7 PROM temporary copy | + * 0x1300000 (19M) +-----------------------------------------+ + * | | + * | Free | + * | (UNIX DATA starts above 0x1000000) | + * | | + * +-----------------------------------------+ + * | UNIX DEBUG Version | + * 0x0310000 (3.1M) +-----------------------------------------+ + * | SYMMON, loaded just below UNIX | + * | (For UNIX Debug only) | + * | | + * | | + * 0x006C000 (432K) +-----------------------------------------+ + * | SYMMON STACK [NUM_CPU_PER_NODE] | + * | (For UNIX Debug only) | + * 0x004C000 (304K) +-----------------------------------------+ + * | | + * | | + * | UNIX NON-DEBUG Version | + * 0x0040000 (256K) +-----------------------------------------+ + * + * + * The lower portion of the memory map contains information that is + * permanent and is used by the IP35PROM, IO7PROM and IRIX. 
+ * + * 0x40000 (256K) +-----------------------------------------+ + * | | + * | KLCONFIG (64K) | + * | | + * 0x30000 (192K) +-----------------------------------------+ + * | | + * | PI Error Spools (64K) | + * | | + * 0x20000 (128K) +-----------------------------------------+ + * | | + * | Unused | + * | | + * 0x19000 (100K) +-----------------------------------------+ + * | Early cache Exception stack (CPU 3)| + * 0x18800 (98K) +-----------------------------------------+ + * | cache error eframe (CPU 3) | + * 0x18400 (97K) +-----------------------------------------+ + * | Exception Handlers (CPU 3) | + * 0x18000 (96K) +-----------------------------------------+ + * | | + * | Unused | + * | | + * 0x13c00 (79K) +-----------------------------------------+ + * | GPDA (8k) | + * 0x11c00 (71K) +-----------------------------------------+ + * | Early cache Exception stack (CPU 2)| + * 0x10800 (66k) +-----------------------------------------+ + * | cache error eframe (CPU 2) | + * 0x10400 (65K) +-----------------------------------------+ + * | Exception Handlers (CPU 2) | + * 0x10000 (64K) +-----------------------------------------+ + * | | + * | Unused | + * | | + * 0x0b400 (45K) +-----------------------------------------+ + * | GDA (1k) | + * 0x0b000 (44K) +-----------------------------------------+ + * | NMI Eframe areas (4) | + * 0x0a000 (40K) +-----------------------------------------+ + * | NMI Register save areas (4) | + * 0x09000 (36K) +-----------------------------------------+ + * | Early cache Exception stack (CPU 1)| + * 0x08800 (34K) +-----------------------------------------+ + * | cache error eframe (CPU 1) | + * 0x08400 (33K) +-----------------------------------------+ + * | Exception Handlers (CPU 1) | + * 0x08000 (32K) +-----------------------------------------+ + * | | + * | | + * | Unused | + * | | + * | | + * 0x04000 (16K) +-----------------------------------------+ + * | NMI Handler (Protected Page) | + * 0x03000 (12K) +-----------------------------------------+ + * | ARCS PVECTORS (master node only) | + * 0x02c00 (11K) +-----------------------------------------+ + * | ARCS TVECTORS (master node only) | + * 0x02800 (10K) +-----------------------------------------+ + * | LAUNCH [NUM_CPU] | + * 0x02400 (9K) +-----------------------------------------+ + * | Low memory directory (KLDIR) | + * 0x02000 (8K) +-----------------------------------------+ + * | ARCS SPB (1K) | + * 0x01000 (4K) +-----------------------------------------+ + * | Early cache Exception stack (CPU 0)| + * 0x00800 (2k) +-----------------------------------------+ + * | cache error eframe (CPU 0) | + * 0x00400 (1K) +-----------------------------------------+ + * | Exception Handlers (CPU 0) | + * 0x00000 (0K) +-----------------------------------------+ + */ + +/* + * NOTE: To change the kernel load address, you must update: + * - the appropriate elspec files in irix/kern/master.d + * - NODEBUGUNIX_ADDR in SN/SN1/addrs.h + * - IP27_FREEMEM_OFFSET below + * - KERNEL_START_OFFSET below (if supporting cells) */ -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#include -#else -#error "kldir.h is currently defined for IP27 and IP35 platforms only" -#endif -#endif /* _ASM_SN_KLDIR_H */ + +/* + * This is defined here because IP27_SYMMON_STK_SIZE must be at least what + * we define here. Since it's set up in the prom. We can't redefine it later + * and expect more space to be allocated. 
The way to find out the true size + * of the symmon stacks is to divide SYMMON_STK_SIZE by SYMMON_STK_STRIDE + * for a particular node. + */ +#define SYMMON_STACK_SIZE 0x8000 + +#if defined (PROM) || defined (SABLE) + +/* + * These defines are prom version dependent. No code other than the IP35 + * prom should attempt to use these values. + */ +#define IP27_LAUNCH_OFFSET 0x2400 +#define IP27_LAUNCH_SIZE 0x400 +#define IP27_LAUNCH_COUNT 4 +#define IP27_LAUNCH_STRIDE 0x100 /* could be as small as 0x80 */ + +#define IP27_KLCONFIG_OFFSET 0x30000 +#define IP27_KLCONFIG_SIZE 0x10000 +#define IP27_KLCONFIG_COUNT 1 +#define IP27_KLCONFIG_STRIDE 0 + +#define IP27_NMI_OFFSET 0x3000 +#define IP27_NMI_SIZE 0x100 +#define IP27_NMI_COUNT 4 +#define IP27_NMI_STRIDE 0x40 + +#define IP27_PI_ERROR_OFFSET 0x20000 +#define IP27_PI_ERROR_SIZE 0x10000 +#define IP27_PI_ERROR_COUNT 1 +#define IP27_PI_ERROR_STRIDE 0 + +#define IP27_SYMMON_STK_OFFSET 0x4c000 +#define IP27_SYMMON_STK_SIZE 0x20000 +#define IP27_SYMMON_STK_COUNT 4 +/* IP27_SYMMON_STK_STRIDE must be >= SYMMON_STACK_SIZE */ +#define IP27_SYMMON_STK_STRIDE 0x8000 + +#define IP27_FREEMEM_OFFSET 0x40000 +#define IP27_FREEMEM_SIZE (-1) +#define IP27_FREEMEM_COUNT 1 +#define IP27_FREEMEM_STRIDE 0 + +#endif /* PROM || SABLE*/ +/* + * There will be only one of these in a partition so the IO7 must set it up. + */ +#define IO6_GDA_OFFSET 0xb000 +#define IO6_GDA_SIZE 0x400 +#define IO6_GDA_COUNT 1 +#define IO6_GDA_STRIDE 0 + +/* + * save area of kernel nmi regs in the prom format + */ +#define IP27_NMI_KREGS_OFFSET 0x9000 +#define IP27_NMI_KREGS_CPU_SIZE 0x400 +/* + * save area of kernel nmi regs in eframe format + */ +#define IP27_NMI_EFRAME_OFFSET 0xa000 +#define IP27_NMI_EFRAME_SIZE 0x400 + +#define GPDA_OFFSET 0x11c00 + +#endif /* _ASM_IA64_SN_KLDIR_H */ diff -Nru a/include/asm-ia64/sn/ksys/elsc.h b/include/asm-ia64/sn/ksys/elsc.h --- a/include/asm-ia64/sn/ksys/elsc.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/ksys/elsc.h Tue Mar 12 13:58:15 2002 @@ -4,36 +4,16 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All Rights Reserved. 
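The kldir.h constants above are internally consistent and can be cross-checked from the values themselves; for example the symmon stack region divides evenly into the advertised number of per-CPU stacks, and the stride honours the SYMMON_STACK_SIZE floor defined just before it:

    /* 0x20000 / 0x8000 == 4 == IP27_SYMMON_STK_COUNT (one stack per CPU) */
    IP27_SYMMON_STK_SIZE / IP27_SYMMON_STK_STRIDE == IP27_SYMMON_STK_COUNT
    /* and IP27_SYMMON_STK_STRIDE (0x8000) >= SYMMON_STACK_SIZE (0x8000) */

    /* Presumably (not stated here) the per-CPU NMI register save area is
     * IP27_NMI_KREGS_OFFSET + cpu * IP27_NMI_KREGS_CPU_SIZE. */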
*/ #ifndef _ASM_SN_KSYS_ELSC_H #define _ASM_SN_KSYS_ELSC_H -#include - -#if defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) #include -#endif - -// #include -#define ELSC_I2C_ADDR 0x08 -#define ELSC_I2C_HUB0 0x09 -#define ELSC_I2C_HUB1 0x0a -#define ELSC_I2C_HUB2 0x0b -#define ELSC_I2C_HUB3 0x0c - -#define ELSC_PACKET_MAX 96 #define ELSC_ACP_MAX 86 /* 84+cr+lf */ #define ELSC_LINE_MAX (ELSC_ACP_MAX - 2) -/* - * ELSC character queue type for I/O - */ - -#define ELSC_QSIZE 128 /* Power of 2 is more efficient */ - typedef sc_cq_t elsc_cq_t; /* @@ -49,14 +29,11 @@ int elsc_msg_callback(elsc_t *e, void (*callback)(void *callback_data, char *msg), void *callback_data); -#ifdef LATER char *elsc_errmsg(int code); int elsc_nvram_write(elsc_t *e, int addr, char *buf, int len); int elsc_nvram_read(elsc_t *e, int addr, char *buf, int len); int elsc_nvram_magic(elsc_t *e); -#endif - int elsc_command(elsc_t *e, int only_if_message); int elsc_parse(elsc_t *e, char *p1, char *p2, char *p3); int elsc_ust_write(elsc_t *e, uchar_t c); @@ -69,10 +46,8 @@ */ int elsc_version(elsc_t *e, char *result); -#ifdef LATER int elsc_debug_set(elsc_t *e, u_char byte1, u_char byte2); int elsc_debug_get(elsc_t *e, u_char *byte1, u_char *byte2); -#endif int elsc_module_set(elsc_t *e, int module); int elsc_module_get(elsc_t *e); int elsc_partition_set(elsc_t *e, int partition); @@ -85,13 +60,10 @@ int elsc_cell_get(elsc_t *e); int elsc_bist_set(elsc_t *e, char bist_status); char elsc_bist_get(elsc_t *e); -int elsc_lock(elsc_t *e, - int retry_interval_usec, - int timeout_usec, u_char lock_val); +int elsc_lock(elsc_t *e, int retry_interval_usec, int timeout_usec, u_char lock_val); int elsc_unlock(elsc_t *e); int elsc_display_char(elsc_t *e, int led, int chr); int elsc_display_digit(elsc_t *e, int led, int num, int l_case); -#ifdef LATER int elsc_display_mesg(elsc_t *e, char *chr); /* 8-char input */ int elsc_password_set(elsc_t *e, char *password); /* 4-char input */ int elsc_password_get(elsc_t *e, char *password); /* 4-char output */ @@ -102,7 +74,6 @@ int elsc_system_reset(elsc_t *e); int elsc_dip_switches(elsc_t *e); int elsc_nic_get(elsc_t *e, uint64_t *nic, int verbose); -#endif int _elsc_hbt(elsc_t *e, int ival, int rdly); @@ -110,29 +81,8 @@ #define elsc_hbt_disable(e) _elsc_hbt(e, 0, 0) #define elsc_hbt_send(e) _elsc_hbt(e, 0, 1) -/* - * Routines for using the ELSC as a UART. There's a version of each - * routine that takes a pointer to an elsc_t, and another version that - * gets the pointer by calling a user-supplied global routine "get_elsc". - * The latter version is useful when the elsc is employed for stdio. 
- */ - -#define ELSCUART_FLASH 0x3c /* LED pattern */ - elsc_t *get_elsc(void); -int elscuart_probe(void); -void elscuart_init(void *); -int elscuart_poll(void); -int elscuart_readc(void); -int elscuart_getc(void); -int elscuart_putc(int); -int elscuart_puts(char *); -char *elscuart_gets(char *, int); -int elscuart_flush(void); - - - /* * Error codes * @@ -142,23 +92,23 @@ #define ELSC_ERROR_NONE 0 -#define ELSC_ERROR_CMD_SEND -100 /* Error sending command */ -#define ELSC_ERROR_CMD_CHECKSUM -101 /* Command checksum bad */ -#define ELSC_ERROR_CMD_UNKNOWN -102 /* Unknown command */ -#define ELSC_ERROR_CMD_ARGS -103 /* Invalid argument(s) */ -#define ELSC_ERROR_CMD_PERM -104 /* Permission denied */ -#define ELSC_ERROR_CMD_STATE -105 /* not allowed in this state*/ - -#define ELSC_ERROR_RESP_TIMEOUT -110 /* ELSC response timeout */ -#define ELSC_ERROR_RESP_CHECKSUM -111 /* Response checksum bad */ -#define ELSC_ERROR_RESP_FORMAT -112 /* Response format error */ -#define ELSC_ERROR_RESP_DIR -113 /* Response direction error */ - -#define ELSC_ERROR_MSG_LOST -120 /* Queue full; msg. lost */ -#define ELSC_ERROR_LOCK_TIMEOUT -121 /* ELSC response timeout */ -#define ELSC_ERROR_DATA_SEND -122 /* Error sending data */ -#define ELSC_ERROR_NIC -123 /* NIC processing error */ -#define ELSC_ERROR_NVMAGIC -124 /* Bad magic no. in NVRAM */ -#define ELSC_ERROR_MODULE -125 /* Moduleid processing err */ +#define ELSC_ERROR_CMD_SEND (-100) /* Error sending command */ +#define ELSC_ERROR_CMD_CHECKSUM (-101) /* Command checksum bad */ +#define ELSC_ERROR_CMD_UNKNOWN (-102) /* Unknown command */ +#define ELSC_ERROR_CMD_ARGS (-103) /* Invalid argument(s) */ +#define ELSC_ERROR_CMD_PERM (-104) /* Permission denied */ +#define ELSC_ERROR_CMD_STATE (-105) /* not allowed in this state*/ + +#define ELSC_ERROR_RESP_TIMEOUT (-110) /* ELSC response timeout */ +#define ELSC_ERROR_RESP_CHECKSUM (-111) /* Response checksum bad */ +#define ELSC_ERROR_RESP_FORMAT (-112) /* Response format error */ +#define ELSC_ERROR_RESP_DIR (-113) /* Response direction error */ + +#define ELSC_ERROR_MSG_LOST (-120) /* Queue full; msg. lost */ +#define ELSC_ERROR_LOCK_TIMEOUT (-121) /* ELSC response timeout */ +#define ELSC_ERROR_DATA_SEND (-122) /* Error sending data */ +#define ELSC_ERROR_NIC (-123) /* NIC processing error */ +#define ELSC_ERROR_NVMAGIC (-124) /* Bad magic no. in NVRAM */ +#define ELSC_ERROR_MODULE (-125) /* Moduleid processing err */ #endif /* _ASM_SN_KSYS_ELSC_H */ diff -Nru a/include/asm-ia64/sn/ksys/i2c.h b/include/asm-ia64/sn/ksys/i2c.h --- a/include/asm-ia64/sn/ksys/i2c.h Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,77 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. 
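With elsc_errmsg() now declared unconditionally above (it used to be hidden behind #ifdef LATER), callers can turn the negative ELSC_ERROR_* codes into text. A sketch; the surrounding function is hypothetical, while elsc_module_get() and elsc_errmsg() are the interfaces declared in this header:

    /* Hypothetical caller; printk/KERN_* come from <linux/kernel.h>. */
    static void report_module(elsc_t *e)
    {
            int mod = elsc_module_get(e);

            if (mod < 0)
                    printk(KERN_WARNING "ELSC: %s\n", elsc_errmsg(mod));
            else
                    printk(KERN_INFO "ELSC: module id %d\n", mod);
    }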
- * Copyright (C) 2000 by Colin Ngam - */ -#ifndef _ASM_SN_KSYS_I2C_H -#define _ASM_SN_KSYS_I2C_H - -#if _STANDALONE -# include "rtc.h" -#else -# define rtc_time() (GET_LOCAL_RTC * NSEC_PER_CYCLE / 1000) -# define rtc_sleep us_delay -# define rtc_time_t uint64_t -#endif - -typedef u_char i2c_addr_t; /* 7-bit address */ - -int i2c_init(nasid_t); - -int i2c_probe(nasid_t nasid, rtc_time_t timeout); - -int i2c_arb(nasid_t, rtc_time_t timeout, rtc_time_t *token_start); - -int i2c_master_xmit(nasid_t, - i2c_addr_t addr, - u_char *buf, - int len_max, - int *len_ptr, - rtc_time_t timeout, - int only_if_message); - -int i2c_master_recv(nasid_t, - i2c_addr_t addr, - u_char *buf, - int len_max, - int *len_ptr, - int emblen, - rtc_time_t timeout, - int only_if_message); - -int i2c_master_xmit_recv(nasid_t, - i2c_addr_t addr, - u_char *xbuf, - int xlen_max, - int *xlen_ptr, - u_char *rbuf, - int rlen_max, - int *rlen_ptr, - int emblen, - rtc_time_t timeout, - int only_if_message); - -char *i2c_errmsg(int code); - -/* - * Error codes - */ - -#define I2C_ERROR_NONE 0 -#define I2C_ERROR_INIT -1 /* Initialization error */ -#define I2C_ERROR_STATE -2 /* Unexpected chip state */ -#define I2C_ERROR_NAK -3 /* Addressed slave not responding */ -#define I2C_ERROR_TO_ARB -4 /* Timeout waiting for sysctlr arb */ -#define I2C_ERROR_TO_BUSY -5 /* Timeout waiting for busy bus */ -#define I2C_ERROR_TO_SENDA -6 /* Timeout sending address byte */ -#define I2C_ERROR_TO_SENDD -7 /* Timeout sending data byte */ -#define I2C_ERROR_TO_RECVA -8 /* Timeout receiving address byte */ -#define I2C_ERROR_TO_RECVD -9 /* Timeout receiving data byte */ -#define I2C_ERROR_NO_MESSAGE -10 /* No message was waiting */ -#define I2C_ERROR_NO_ELSC -11 /* ELSC is disabled for access */ - -#endif /* _ASM_SN_KSYS_I2C_H */ diff -Nru a/include/asm-ia64/sn/ksys/l1.h b/include/asm-ia64/sn/ksys/l1.h --- a/include/asm-ia64/sn/ksys/l1.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/ksys/l1.h Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997,2000-2002 Silicon Graphics, Inc. All Rights Reserved. */ #ifndef _ASM_SN_KSYS_L1_H @@ -13,7 +12,8 @@ #include #include -#include +#include +#include #define BRL1_QSIZE 128 /* power of 2 is more efficient */ #define BRL1_BUFSZ 264 /* needs to be large enough @@ -39,7 +39,7 @@ * This value can't be confused with a network vector because the least- * significant nibble of a network vector cannot be greater than 8. 
*/ -#define BRL1_LOCALUART ((net_vec_t)0xf) +#define BRL1_LOCALHUB_UART ((net_vec_t)0xf) /* L1<->Bedrock reserved subchannels */ @@ -71,7 +71,14 @@ struct l1sc_s; -typedef void (*brl1_notif_t)(struct l1sc_s *, int); +/* Saved off interrupt frame */ +typedef struct brl1_intr_frame { + int bf_irq; /* irq received */ + void *bf_dev_id; /* device information */ + struct pt_regs *bf_regs; /* register frame */ +} brl1_intr_frame_t; + +typedef void (*brl1_notif_t)(int, void *, struct pt_regs *, struct l1sc_s *, int); typedef int (*brl1_uartf_t)(struct l1sc_s *); /* structure for controlling a subchannel */ @@ -90,6 +97,7 @@ * continue */ brl1_notif_t rx_notify; /* notify higher layer that a packet has been * received */ + brl1_intr_frame_t irq_frame; /* saved off irq information */ } brl1_sch_t; /* br<->l1 protocol states */ @@ -101,7 +109,7 @@ #define BRL1_RESET 7 -#ifndef _LANGUAGE_ASSEMBLY +#ifndef __ASSEMBLY__ /* * l1sc_t structure-- tracks protocol state, open subchannels, etc. @@ -118,6 +126,8 @@ brl1_uartf_t putc_f; /* pointer to UART putc function */ brl1_uartf_t getc_f; /* pointer to UART getc function */ + spinlock_t send_lock; /* arbitrates send synchronization */ + spinlock_t recv_lock; /* arbitrates uart receive access */ spinlock_t subch_lock; /* arbitrates subchannel allocation */ cpuid_t intr_cpu; /* cpu that receives L1 interrupts */ @@ -327,15 +337,6 @@ void sc_init( l1sc_t *sc, nasid_t nasid, net_vec_t uart ); void sc_intr_enable( l1sc_t *sc ); -int _elscuart_putc( l1sc_t *sc, int c ); -int _elscuart_getc( l1sc_t *sc ); -int _elscuart_poll( l1sc_t *sc ); -int _elscuart_readc( l1sc_t *sc ); -int _elscuart_flush( l1sc_t *sc ); -int _elscuart_probe( l1sc_t *sc ); -void _elscuart_init( l1sc_t *sc ); -void elscuart_syscon_listen( l1sc_t *sc ); - int elsc_rack_bay_get(l1sc_t *e, uint *rack, uint *bay); int elsc_rack_bay_type_get(l1sc_t *e, uint *rack, uint *bay, uint *brick_type); @@ -357,5 +358,5 @@ int iobrick_sc_version( l1sc_t *sc, char *result ); -#endif /* !_LANGUAGE_ASSEMBLY */ +#endif /* !__ASSEMBLY__ */ #endif /* _ASM_SN_KSYS_L1_H */ diff -Nru a/include/asm-ia64/sn/labelcl.h b/include/asm-ia64/sn/labelcl.h --- a/include/asm-ia64/sn/labelcl.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/labelcl.h Tue Mar 12 13:58:15 2002 @@ -4,15 +4,16 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_LABELCL_H -#define _ASM_SN_LABELCL_H +#ifndef _ASM_IA64_SN_LABELCL_H +#define _ASM_IA64_SN_LABELCL_H + +#include #define LABELCL_MAGIC 0x4857434c /* 'HWLC' */ #define LABEL_LENGTH_MAX 256 /* Includes NULL char */ -#define INFO_DESC_PRIVATE -1 /* default */ +#define INFO_DESC_PRIVATE (-1) /* default */ #define INFO_DESC_EXPORT 0 /* export info itself */ /* @@ -90,4 +91,4 @@ extern int labelcl_info_get_IDX(struct devfs_entry *, int, arbitrary_info_t *); extern struct devfs_entry *device_info_connectpt_get(struct devfs_entry *); -#endif /* _ASM_SN_LABELCL_H */ +#endif /* _ASM_IA64_SN_LABELCL_H */ diff -Nru a/include/asm-ia64/sn/leds.h b/include/asm-ia64/sn/leds.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/leds.h Tue Mar 12 13:58:14 2002 @@ -0,0 +1,42 @@ +#ifndef _ASM_IA64_SN_LEDS_H +#define _ASM_IA64_SN_LEDS_H + +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. 
See the file "COPYING" in the main directory of this archive + * for more details. + * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include + +#ifdef CONFIG_IA64_SGI_SN1 +#define LED0 0xc0000b00100000c0LL /* ZZZ fixme */ +#define LED_CPU_SHIFT 3 +#else +#include +#define LED0 (LOCAL_MMR_ADDR(SH_REAL_JUNK_BUS_LED0)) +#define LED_CPU_SHIFT 16 +#endif + +#define LED_CPU_HEARTBEAT 0x01 +#define LED_CPU_ACTIVITY 0x02 +#define LED_MASK_AUTOTEST 0xfe + +/* + * Basic macros for flashing the LEDS on an SGI, SN1. + */ + +static __inline__ void +set_led_bits(u8 value, u8 mask) +{ + pda.led_state = (pda.led_state & ~mask) | (value & mask); + *pda.led_address = (long) pda.led_state; +} + +#endif /* _ASM_IA64_SN_LEDS_H */ + diff -Nru a/include/asm-ia64/sn/mca.h b/include/asm-ia64/sn/mca.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/mca.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,128 @@ +/* + * File: mca.h + * Purpose: Machine check handling specific to the SN platform defines + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it would be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + * + * Further, this software is distributed without any warranty that it is + * free of the rightful claim of any third person regarding infringement + * or the like. Any license provided herein, whether implied or + * otherwise, applies only to this software file. Patent licenses, if + * any, provided herein do not apply to combinations of this program with + * other software, or any other product whatsoever. + * + * You should have received a copy of the GNU General Public + * License along with this program; if not, write the Free Software + * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. 
+ * + * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy, + * Mountain View, CA 94043, or: + * + * http://www.sgi.com + * + * For further information regarding this notice, see: + * + * http://oss.sgi.com/projects/GenInfo/NoticeExplan + */ + +#include +#include +#include +#include + +#ifdef CONFIG_IA64_SGI_SN + +typedef u64 __uint64_t; + +typedef struct { + __uint64_t sh_event_occurred; + __uint64_t sh_first_error; + __uint64_t sh_event_overflow; + __uint64_t sh_pi_first_error; + __uint64_t sh_pi_error_summary; + __uint64_t sh_pi_error_overflow; + __uint64_t sh_pi_error_detail_1; + __uint64_t sh_pi_error_detail_2; + __uint64_t sh_pi_hw_time_stamp; + __uint64_t sh_pi_uncorrected_detail_1; + __uint64_t sh_pi_uncorrected_detail_2; + __uint64_t sh_pi_uncorrected_detail_3; + __uint64_t sh_pi_uncorrected_detail_4; + __uint64_t sh_pi_uncor_time_stamp; + __uint64_t sh_pi_corrected_detail_1; + __uint64_t sh_pi_corrected_detail_2; + __uint64_t sh_pi_corrected_detail_3; + __uint64_t sh_pi_corrected_detail_4; + __uint64_t sh_pi_cor_time_stamp; + __uint64_t sh_mem_error_summary; + __uint64_t sh_mem_error_overflow; + __uint64_t sh_misc_err_hdr_lower; + __uint64_t sh_misc_err_hdr_upper; + __uint64_t sh_dir_uc_err_hdr_lower; + __uint64_t sh_dir_uc_err_hdr_upper; + __uint64_t sh_dir_cor_err_hdr_lower; + __uint64_t sh_dir_cor_err_hdr_upper; + __uint64_t sh_mem_error_mask; + __uint64_t sh_md_uncor_time_stamp; + __uint64_t sh_md_cor_time_stamp; + __uint64_t sh_md_hw_time_stamp; + __uint64_t sh_xn_error_summary; + __uint64_t sh_xn_first_error; + __uint64_t sh_xn_error_overflow; + __uint64_t sh_xniilb_error_summary; + __uint64_t sh_xniilb_first_error; + __uint64_t sh_xniilb_error_overflow; + __uint64_t sh_xniilb_error_detail_1; + __uint64_t sh_xniilb_error_detail_2; + __uint64_t sh_xniilb_error_detail_3; + __uint64_t sh_xnpi_error_summary; + __uint64_t sh_xnpi_first_error; + __uint64_t sh_xnpi_error_overflow; + __uint64_t sh_xnpi_error_detail_1; + __uint64_t sh_xnmd_error_summary; + __uint64_t sh_xnmd_first_error; + __uint64_t sh_xnmd_error_overflow; + __uint64_t sh_xnmd_ecc_err_report; + __uint64_t sh_xnmd_error_detail_1; + __uint64_t sh_lb_error_summary; + __uint64_t sh_lb_first_error; + __uint64_t sh_lb_error_overflow; + __uint64_t sh_lb_error_detail_1; + __uint64_t sh_lb_error_detail_2; + __uint64_t sh_lb_error_detail_3; + __uint64_t sh_lb_error_detail_4; + __uint64_t sh_lb_error_detail_5; +} sal_log_shub_state_t; + +typedef struct { +sal_log_section_hdr_t header; + struct + { + __uint64_t err_status : 1, + guid : 1, + oem_data : 1, + reserved : 61; + } valid; + __uint64_t err_status; + efi_guid_t guid; + __uint64_t shub_nic; + sal_log_shub_state_t shub_state; +} sal_log_plat_info_t; + + +extern void sal_log_plat_print(int header_len, int sect_len, u8 *p_data, prfunc_t prfunc); + +#ifdef platform_plat_specific_err_print +#undef platform_plat_specific_err_print +#endif +#define platform_plat_specific_err_print sal_log_plat_print + +#endif /* CONFIG_IA64_SGI_SN */ diff -Nru a/include/asm-ia64/sn/mem_refcnt.h b/include/asm-ia64/sn/mem_refcnt.h --- a/include/asm-ia64/sn/mem_refcnt.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,26 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. 
- * Copyright (C) 2000 by Colin Ngam - */ -#ifndef _ASM_SN_MEM_REFCNT_H -#define _ASM_SN_MEM_REFCNT_H - -extern int mem_refcnt_attach(devfs_handle_t hub); -extern int mem_refcnt_open(devfs_handle_t *devp, mode_t oflag, int otyp, cred_t *crp); -extern int mem_refcnt_close(devfs_handle_t dev, int oflag, int otyp, cred_t *crp); -extern int mem_refcnt_mmap(devfs_handle_t dev, vhandl_t *vt, off_t off, size_t len, uint prot); -extern int mem_refcnt_unmap(devfs_handle_t dev, vhandl_t *vt); -extern int mem_refcnt_ioctl(devfs_handle_t dev, - int cmd, - void *arg, - int mode, - cred_t *cred_p, - int *rvalp); - - -#endif /* _ASM_SN_MEM_REFCNT_H */ diff -Nru a/include/asm-ia64/sn/mmtimer_private.h b/include/asm-ia64/sn/mmtimer_private.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/mmtimer_private.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,42 @@ +/* + * Intel Multimedia Timer device interface + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001-2002 Silicon Graphics, Inc. All rights reserved. + * + * Helper file for the SN implementation of mmtimers + * + * 11/01/01 - jbarnes - initial revision + */ + +#ifndef _SN_MMTIMER_PRIVATE_H + +#define RTC_BITS 55 /* 55 bits for this implementation */ +#define NUM_COMPARATORS 2 /* two comparison registers in SN1 */ + +/* + * Check for an interrupt and clear the pending bit if + * one is waiting. + */ +#define MMTIMER_INT_PENDING(x) (x ? *(RTC_INT_PENDING_B_ADDR) : *(RTC_INT_PENDING_A_ADDR)) + +/* + * Set interrupts on RTC 'x' to 'v' (true or false) + */ +#define MMTIMER_SET_INT(x,v) (x ? (*(RTC_INT_ENABLED_B_ADDR) = (unsigned long)(v)) : (*(RTC_INT_ENABLED_A_ADDR) = (unsigned long)(v))) + +#define MMTIMER_ENABLE_INT(x) MMTIMER_SET_INT(x, 1) +#define MMTIMER_DISABLE_INT(x) MMTIMER_SET_INT(x, 0) + +typedef struct mmtimer { + spinlock_t timer_lock; + unsigned long periodic; + int signo; + volatile unsigned long *compare; + struct task_struct *process; +} mmtimer_t; + +#endif /* _SN_LINUX_MMTIMER_PRIVATE_H */ diff -Nru a/include/asm-ia64/sn/mmzone.h b/include/asm-ia64/sn/mmzone.h --- a/include/asm-ia64/sn/mmzone.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,113 +0,0 @@ -/* - * Written by Kanoj Sarcar (kanoj@sgi.com) Jan 2000 - * Copyright, 2000, Silicon Graphics, sprasad@engr.sgi.com - */ -#ifndef _LINUX_ASM_SN_MMZONE_H -#define _LINUX_ASM_SN_MMZONE_H - -#include - -#include -#include - -/* - * Memory is conceptually divided into chunks. A chunk is either - * completely present, or else the kernel assumes it is completely - * absent. Each node consists of a number of contiguous chunks. - */ - -#define CHUNKMASK (~(CHUNKSZ - 1)) -#define CHUNKNUM(vaddr) (__pa(vaddr) >> CHUNKSHIFT) -#define PCHUNKNUM(paddr) ((paddr) >> CHUNKSHIFT) - -#define MAXCHUNKS (MAXNODES * MAX_CHUNKS_PER_NODE) - -extern int chunktonid[]; -#define CHUNKTONID(cnum) (chunktonid[cnum]) - -typedef struct plat_pglist_data { - pg_data_t gendata; /* try to keep this first. */ - unsigned long virtstart; - unsigned long size; -} plat_pg_data_t; - -extern plat_pg_data_t plat_node_data[]; - -extern int numa_debug(void); - -/* - * The foll two will move into linux/mmzone.h RSN. - */ -#define NODE_START(n) plat_node_data[(n)].virtstart -#define NODE_SIZE(n) plat_node_data[(n)].size - -#define KVADDR_TO_NID(kaddr) \ - ((CHUNKTONID(CHUNKNUM((kaddr))) != -1) ? 
(CHUNKTONID(CHUNKNUM((kaddr)))) : \ - (printk("DISCONTIGBUG: %s line %d addr 0x%lx", __FILE__, __LINE__, \ - (unsigned long)(kaddr)), numa_debug())) -#if 0 -#define KVADDR_TO_NID(kaddr) CHUNKTONID(CHUNKNUM((kaddr))) -#endif - -/* These 2 macros should never be used if KVADDR_TO_NID(kaddr) is -1 */ -/* - * Given a kaddr, ADDR_TO_MAPBASE finds the owning node of the memory - * and returns the mem_map of that node. - */ -#define ADDR_TO_MAPBASE(kaddr) \ - NODE_MEM_MAP(KVADDR_TO_NID((unsigned long)(kaddr))) - -/* - * Given a kaddr, LOCAL_BASE_ADDR finds the owning node of the memory - * and returns the kaddr corresponding to first physical page in the - * node's mem_map. - */ -#define LOCAL_BASE_ADDR(kaddr) NODE_START(KVADDR_TO_NID(kaddr)) - -#ifdef CONFIG_DISCONTIGMEM - -/* - * Return a pointer to the node data for node n. - * Assume that n is the compact node id. - */ -#define NODE_DATA(n) (&((plat_node_data + (n))->gendata)) - -/* - * NODE_MEM_MAP gives the kaddr for the mem_map of the node. - */ -#define NODE_MEM_MAP(nid) (NODE_DATA((nid))->node_mem_map) - -/* This macro should never be used if KVADDR_TO_NID(kaddr) is -1 */ -#define LOCAL_MAP_NR(kvaddr) \ - (((unsigned long)(kvaddr)-LOCAL_BASE_ADDR((kvaddr))) >> PAGE_SHIFT) -#define MAP_NR_SN1(kaddr) (LOCAL_MAP_NR((kaddr)) + \ - (((unsigned long)ADDR_TO_MAPBASE((kaddr)) - PAGE_OFFSET) / \ - sizeof(mem_map_t))) -#if 0 -#define MAP_NR_VALID(kaddr) (LOCAL_MAP_NR((kaddr)) + \ - (((unsigned long)ADDR_TO_MAPBASE((kaddr)) - PAGE_OFFSET) / \ - sizeof(mem_map_t))) -#define MAP_NR_SN1(kaddr) ((KVADDR_TO_NID(kaddr) == -1) ? (max_mapnr + 1) :\ - MAP_NR_VALID(kaddr)) -#endif - -/* FIXME */ -#define sn1_pte_pagenr(x) MAP_NR_SN1(PAGE_OFFSET + (unsigned long)((pte_val(x)&_PFN_MASK) & PAGE_MASK)) -#define pte_page(pte) (mem_map + sn1_pte_pagenr(pte)) -/* FIXME */ - -#define kern_addr_valid(addr) ((KVADDR_TO_NID((unsigned long)addr) >= \ - numnodes) ? 0 : (test_bit(LOCAL_MAP_NR((addr)), \ - NODE_DATA(KVADDR_TO_NID((unsigned long)addr))->valid_addr_bitmap))) - -#define virt_to_page(kaddr) (mem_map + MAP_NR_SN1(kaddr)) - -#else /* CONFIG_DISCONTIGMEM */ - -#define MAP_NR_SN1(addr) (((unsigned long) (addr) - PAGE_OFFSET) >> PAGE_SHIFT) - -#endif /* CONFIG_DISCONTIGMEM */ - -#define numa_node_id() cpuid_to_cnodeid(smp_processor_id()) - -#endif /* !_LINUX_ASM_SN_MMZONE_H */ diff -Nru a/include/asm-ia64/sn/mmzone_default.h b/include/asm-ia64/sn/mmzone_default.h --- a/include/asm-ia64/sn/mmzone_default.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,15 +0,0 @@ -/* - * Copyright, 2000, Silicon Graphics, sprasad@engr.sgi.com - */ - -#define MAXNODES 16 -#define MAXNASIDS 16 - -#define CHUNKSZ (8*1024*1024) -#define CHUNKSHIFT 23 /* 2 ^^ CHUNKSHIFT == CHUNKSZ */ - -#define CNODEID_TO_NASID(n) n -#define NASID_TO_CNODEID(n) n - -#define MAX_CHUNKS_PER_NODE 8 - diff -Nru a/include/asm-ia64/sn/mmzone_sn1.h b/include/asm-ia64/sn/mmzone_sn1.h --- a/include/asm-ia64/sn/mmzone_sn1.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,105 +0,0 @@ -#ifndef _ASM_IA64_MMZONE_SN1_H -#define _ASM_IA64_MMZONE_SN1_H - -#include - -/* - * Copyright, 2000, Silicon Graphics, sprasad@engr.sgi.com - */ -/* Maximum configuration supported by SNIA hardware. There are other - * restrictions that may limit us to a smaller max configuration. 
- */ -#define MAXNODES 128 -#define MAXNASIDS 128 - -#define CHUNKSZ (64*1024*1024) -#define CHUNKSHIFT 26 /* 2 ^^ CHUNKSHIFT == CHUNKSZ */ - -extern int cnodeid_map[] ; -extern int nasid_map[] ; - -#define CNODEID_TO_NASID(n) (cnodeid_map[(n)]) -#define NASID_TO_CNODEID(n) (nasid_map[(n)]) - -#define MAX_CHUNKS_PER_NODE 128 - - -/* - * These are a bunch of sn1 hw specific defines. For now, keep it - * in this file. If it gets too diverse we may want to create a - * mmhwdefs_sn1.h - */ - -/* - * Structure of the mem config of the node as a SN1 MI reg - * Medusa supports this reg config. - */ - -typedef struct node_memmap_s -{ - unsigned int b0 :1, /* 0 bank 0 present */ - b1 :1, /* 1 bank 1 present */ - r01 :2, /* 2-3 reserved */ - b01size :4, /* 4-7 Size of bank 0 and 1 */ - b2 :1, /* 8 bank 2 present */ - b3 :1, /* 9 bank 3 present */ - r23 :2, /* 10-11 reserved */ - b23size :4, /* 12-15 Size of bank 2 and 3 */ - b4 :1, /* 16 bank 4 present */ - b5 :1, /* 17 bank 5 present */ - r45 :2, /* 18-19 reserved */ - b45size :4, /* 20-23 Size of bank 4 and 5 */ - b6 :1, /* 24 bank 6 present */ - b7 :1, /* 25 bank 7 present */ - r67 :2, /* 26-27 reserved */ - b67size :4; /* 28-31 Size of bank 6 and 7 */ -} node_memmap_t ; - -#define GBSHIFT 30 -#define MBSHIFT 20 - -/* - * SN1 Arch defined values - */ -#define SN1_MAX_BANK_PER_NODE 8 -#define SN1_BANK_PER_NODE_SHIFT 3 /* derived from SN1_MAX_BANK_PER_NODE */ -#define SN1_NODE_ADDR_SHIFT (GBSHIFT+3) /* 8GB */ -#define SN1_BANK_ADDR_SHIFT (SN1_NODE_ADDR_SHIFT-SN1_BANK_PER_NODE_SHIFT) - -#define SN1_BANK_SIZE_SHIFT (MBSHIFT+6) /* 64 MB */ -#define SN1_MIN_BANK_SIZE_SHIFT SN1_BANK_SIZE_SHIFT - -/* - * BankSize nibble to bank size mapping - * - * 1 - 64 MB - * 2 - 128 MB - * 3 - 256 MB - * 4 - 512 MB - * 5 - 1024 MB (1GB) - */ - -/* fixme - this macro breaks for bsize 6-8 and 0 */ - -#ifdef CONFIG_IA64_SGI_SN1_SIM -/* Support the medusa hack for 8M/16M/32M nodes */ -#define BankSizeBytes(bsize) ((bsize<6) ? (1<<((bsize-1)+SN1_BANK_SIZE_SHIFT)) :\ - (1<<((bsize-9)+MBSHIFT))) -#else -#define BankSizeBytes(bsize) (1<<((bsize-1)+SN1_BANK_SIZE_SHIFT)) -#endif - -#define BankSizeToEFIPages(bsize) ((BankSizeBytes(bsize)) >> 12) - -#define GetPhysAddr(n,b) (((u64)n<> SN1_NODE_ADDR_SHIFT) - -#define GetBankId(paddr) \ - (((u64)(paddr) >> SN1_BANK_ADDR_SHIFT) & 7) - -#define SN1_MAX_BANK_SIZE ((u64)BankSizeBytes(5)) -#define SN1_BANK_SIZE_MASK (~(SN1_MAX_BANK_SIZE-1)) - -#endif /* _ASM_IA64_MMZONE_SN1_H */ diff -Nru a/include/asm-ia64/sn/module.h b/include/asm-ia64/sn/module.h --- a/include/asm-ia64/sn/module.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/module.h Tue Mar 12 13:58:15 2002 @@ -1,31 +1,24 @@ -/* $Id$ - * +/* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_MODULE_H -#define _ASM_SN_MODULE_H +#ifndef _ASM_IA64_SN_MODULE_H +#define _ASM_IA64_SN_MODULE_H #ifdef __cplusplus extern "C" { #endif -#include #include #include #include -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#ifdef BRINGUP /* max. number of modules? 
Should be about 300.*/ -#define MODULE_MAX 56 -#endif /* BRINGUP */ +#define MODULE_MAX 128 #define MODULE_MAX_NODES 1 -#endif /* CONFIG_SGI_IP35 */ #define MODULE_HIST_CNT 16 #define MAX_MODULE_LEN 16 @@ -39,8 +32,6 @@ #define MODULE_FORMAT_LONG 2 -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) - /* * Module id format * @@ -134,17 +125,6 @@ ((_m2)&(MODULE_RACK_MASK|MODULE_BPOS_MASK))) #define MODULE_MATCH(_m1, _m2) (MODULE_CMP((_m1),(_m2)) == 0) -#else - -/* - * Some code that uses this macro will not be conditionally compiled. - */ -#define MODULE_GET_BTCHAR(_m) ('?') -#define MODULE_CMP(_m1, _m2) ((_m1) - (_m2)) -#define MODULE_MATCH(_m1, _m2) (MODULE_CMP((_m1),(_m2)) == 0) - -#endif /* CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ - typedef struct module_s module_t; struct module_s { @@ -205,4 +185,4 @@ } #endif -#endif /* _ASM_SN_MODULE_H */ +#endif /* _ASM_IA64_SN_MODULE_H */ diff -Nru a/include/asm-ia64/sn/nag.h b/include/asm-ia64/sn/nag.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/nag.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,32 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001 Silicon Graphics, Inc. All rights reserved. +*/ + + +#ifndef _ASM_IA64_SN_NAG_H +#define _ASM_IA64_SN_NAG_H + + +#define NAG(mesg...) \ +do { \ + static unsigned int how_broken = 1; \ + static unsigned int threshold = 1; \ + if (how_broken == threshold) { \ + if (threshold < 10000) \ + threshold *= 10; \ + if (how_broken > 1) \ + printk(KERN_WARNING "%u times: ", how_broken); \ + else \ + printk(KERN_WARNING); \ + printk(mesg); \ + } \ + how_broken++; \ +} while (0) + + +#endif /* _ASM_IA64_SN_NAG_H */ diff -Nru a/include/asm-ia64/sn/nic.h b/include/asm-ia64/sn/nic.h --- a/include/asm-ia64/sn/nic.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/nic.h Tue Mar 12 13:58:15 2002 @@ -4,13 +4,14 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_NIC_H -#define _ASM_SN_NIC_H +#ifndef _ASM_IA64_SN_NIC_H +#define _ASM_IA64_SN_NIC_H #include +#include +#include #define MCR_DATA(x) ((int) ((x) & 1)) #define MCR_DONE(x) ((x) & 2) @@ -125,4 +126,4 @@ extern nic_vmce_t nic_vmc_add(char *, nic_vmc_func *); extern void nic_vmc_del(nic_vmce_t); -#endif /* _ASM_SN_NIC_H */ +#endif /* _ASM_IA64_SN_NIC_H */ diff -Nru a/include/asm-ia64/sn/nodemask.h b/include/asm-ia64/sn/nodemask.h --- a/include/asm-ia64/sn/nodemask.h Tue Mar 12 13:58:16 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,330 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. 
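The NAG() macro introduced in nag.h throttles repeated warnings: the message is printed on the 1st, 10th, 100th, 1000th and 10000th occurrence (with an occurrence count prefixed from the 10th onward) and suppressed otherwise. A hypothetical use, assuming <asm/sn/nag.h> and <linux/errno.h> are included; the driver name and command number below are invented for illustration:

	/* Hypothetical sketch: complain about an obsolete ioctl without
	 * flooding the console when a legacy tool calls it in a loop.
	 */
	static int example_check_ioctl(unsigned int cmd)
	{
		if (cmd == 0x4800) {	/* made-up legacy command number */
			NAG("example driver: ioctl 0x%x is obsolete\n", cmd);
			return -EINVAL;
		}
		return 0;
	}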
- * Copyright (C) 2000 by Colin Ngam - */ -#ifndef _ASM_SN_NODEMASK_H -#define _ASM_SN_NODEMASK_H - -#if defined(__KERNEL__) || defined(_KMEMUSER) - -#include - -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC -#include /* needed for MAX_COMPACT_NODES */ -#endif - -#define CNODEMASK_BOOTED_MASK boot_cnodemask -#define CNODEMASK_BIPW 64 - -#if !defined(SN0XXL) && !defined(CONFIG_SGI_IP35) && !defined(CONFIG_IA64_SGI_SN1) && !defined(CONFIG_IA64_GENERIC) - /* MAXCPUS 128p (64 nodes) or less */ - -#define CNODEMASK_SIZE 1 -typedef uint64_t cnodemask_t; - -#define CNODEMASK_WORD(p,w) (p) -#define CNODEMASK_SET_WORD(p,w,val) (p) = val -#define CNODEMASK_CLRALL(p) (p) = 0 -#define CNODEMASK_SETALL(p) (p) = ~((cnodemask_t)0) -#define CNODEMASK_IS_ZERO(p) ((p) == 0) -#define CNODEMASK_IS_NONZERO(p) ((p) != 0) -#define CNODEMASK_NOTEQ(p, q) ((p) != (q)) -#define CNODEMASK_EQ(p, q) ((p) == (q)) -#define CNODEMASK_LSB_ISONE(p) ((p) & 0x1ULL) - -#define CNODEMASK_ZERO() ((cnodemask_t)0) -#define CNODEMASK_CVTB(bit) (1ULL << (bit)) -#define CNODEMASK_SETB(p, bit) ((p) |= 1ULL << (bit)) -#define CNODEMASK_CLRB(p, bit) ((p) &= ~(1ULL << (bit))) -#define CNODEMASK_TSTB(p, bit) ((p) & (1ULL << (bit))) - -#define CNODEMASK_SETM(p, q) ((p) |= (q)) -#define CNODEMASK_CLRM(p, q) ((p) &= ~(q)) -#define CNODEMASK_ANDM(p, q) ((p) &= (q)) -#define CNODEMASK_TSTM(p, q) ((p) & (q)) - -#define CNODEMASK_CPYNOTM(p, q) ((p) = ~(q)) -#define CNODEMASK_CPY(p, q) ((p) = (q)) -#define CNODEMASK_ORNOTM(p, q) ((p) |= ~(q)) -#define CNODEMASK_SHIFTL(p) ((p) <<= 1) -#define CNODEMASK_SHIFTR(p) ((p) >>= 1) -#define CNODEMASK_SHIFTL_PTR(p) (*(p) <<= 1) -#define CNODEMASK_SHIFTR_PTR(p) (*(p) >>= 1) - -/* Atomically set or clear a particular bit */ -#define CNODEMASK_ATOMSET_BIT(p, bit) atomicSetUlong((cnodemask_t *)&(p), (1ULL<<(bit))) -#define CNODEMASK_ATOMCLR_BIT(p, bit) atomicClearUlong((cnodemask_t *)&(p), (1ULL<<(bit))) - -/* Atomically set or clear a collection of bits */ -#define CNODEMASK_ATOMSET(p, q) atomicSetUlong((cnodemask_t *)&(p), q) -#define CNODEMASK_ATOMCLR(p, q) atomicClearUlong((cnodemask_t *)&(p), q) - -/* Atomically set or clear a collection of bits, returning the old value */ -#define CNODEMASK_ATOMSET_MASK(__old, p, q) { \ - (__old) = atomicSetUlong((cnodemask_t *)&(p), q); \ -} -#define CNODEMASK_ATOMCLR_MASK(__old, p, q) { \ - (__old) = atomicClearUlong((cnodemask_t *)&(p),q); \ -} - -#define CNODEMASK_FROM_NUMNODES(n) ((~(cnodemask_t)0)>>(CNODEMASK_BIPW-(n))) - -#else /* SN0XXL || SN1 - MAXCPUS > 128 */ - -#define CNODEMASK_SIZE (MAX_COMPACT_NODES / CNODEMASK_BIPW) - -typedef struct { - uint64_t _bits[CNODEMASK_SIZE]; -} cnodemask_t; - -#define CNODEMASK_WORD(p,w) \ - ((w >= 0 && w < CNODEMASK_SIZE) ? 
(p)._bits[(w)] : 0) -#define CNODEMASK_SET_WORD(p,w,val) { \ - if (w >= 0 && w < CNODEMASK_SIZE) \ - (p)._bits[(w)] = val; \ -} - -#define CNODEMASK_CLRALL(p) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) \ - (p)._bits[i] = 0; \ -} - -#define CNODEMASK_SETALL(p) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) \ - (p)._bits[i] = ~(0); \ -} - -#define CNODEMASK_LSB_ISONE(p) ((p)._bits[0] & 0x1ULL) - - -#define CNODEMASK_SETM(p,q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) \ - (p)._bits[i] |= ((q)._bits[i]); \ -} - -#define CNODEMASK_CLRM(p,q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) \ - (p)._bits[i] &= ~((q)._bits[i]); \ -} - -#define CNODEMASK_ANDM(p,q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) \ - (p)._bits[i] &= ((q)._bits[i]); \ -} - -#define CNODEMASK_CPY(p, q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) \ - (p)._bits[i] = (q)._bits[i]; \ -} - -#define CNODEMASK_CPYNOTM(p,q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) \ - (p)._bits[i] = ~((q)._bits[i]); \ -} - -#define CNODEMASK_ORNOTM(p,q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) \ - (p)._bits[i] |= ~((q)._bits[i]); \ -} - -#define CNODEMASK_INDEX(bit) ((bit) >> 6) -#define CNODEMASK_SHFT(bit) ((bit) & 0x3f) - - -#define CNODEMASK_SETB(p, bit) \ - (p)._bits[CNODEMASK_INDEX(bit)] |= (1ULL << CNODEMASK_SHFT(bit)) - - -#define CNODEMASK_CLRB(p, bit) \ - (p)._bits[CNODEMASK_INDEX(bit)] &= ~(1ULL << CNODEMASK_SHFT(bit)) - - -#define CNODEMASK_TSTB(p, bit) \ - ((p)._bits[CNODEMASK_INDEX(bit)] & (1ULL << CNODEMASK_SHFT(bit))) - -/** Probably should add atomic update for entire cnodemask_t struct **/ - -/* Atomically set or clear a particular bit */ -#define CNODEMASK_ATOMSET_BIT(p, bit) \ - (atomicSetUlong((unsigned long *)&(p)._bits[CNODEMASK_INDEX(bit)], (1ULL << CNODEMASK_SHFT(bit)))); -#define CNODEMASK_ATOMCLR_BIT(__old, p, bit) \ - (atomicClearUlong((unsigned long *)&(p)._bits[CNODEMASK_INDEX(bit)], (1ULL << CNODEMASK_SHFT(bit)))); - -/* Atomically set or clear a collection of bits */ -#define CNODEMASK_ATOMSET(p, q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) { \ - atomicSetUlong((unsigned long *)&(p)._bits[i], (q)._bits[i]); \ - } \ -} -#define CNODEMASK_ATOMCLR(p, q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) { \ - atomicClearUlong((unsigned long *)&(p)._bits[i], (q)._bits[i]); \ - } \ -} - -/* Atomically set or clear a collection of bits, returning the old value */ -#define CNODEMASK_ATOMSET_MASK(__old, p, q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) { \ - (__old)._bits[i] = \ - atomicSetUlong((unsigned long *)&(p)._bits[i], (q)._bits[i]); \ - } \ -} -#define CNODEMASK_ATOMCLR_MASK(__old, p, q) { \ - int i; \ - \ - for (i = 0 ; i < CNODEMASK_SIZE ; i++) { \ - (__old)._bits[i] = \ - atomicClearUlong((unsigned long *)&(p)._bits[i], (q)._bits[i]); \ - } \ -} - -__inline static cnodemask_t CNODEMASK_CVTB(int bit) -{ - cnodemask_t __tmp; - CNODEMASK_CLRALL(__tmp); - CNODEMASK_SETB(__tmp,bit); - return(__tmp); -} - - -__inline static cnodemask_t CNODEMASK_ZERO(void) -{ - cnodemask_t __tmp; - CNODEMASK_CLRALL(__tmp); - return(__tmp); -} - -__inline static int CNODEMASK_IS_ZERO (cnodemask_t p) -{ - int i; - - for (i = 0 ; i < CNODEMASK_SIZE ; i++) - if (p._bits[i] != 0) - return 0; - return 1; -} - -__inline static int CNODEMASK_IS_NONZERO (cnodemask_t p) -{ - int i; - - for (i = 0 ; i < CNODEMASK_SIZE ; i++) - if (p._bits[i] != 0) - return 1; - return 0; -} - 
-__inline static int CNODEMASK_NOTEQ (cnodemask_t p, cnodemask_t q) -{ - int i; - - for (i = 0 ; i < CNODEMASK_SIZE ; i++) - if (p._bits[i] != q._bits[i]) - return 1; - return 0; -} - -__inline static int CNODEMASK_EQ (cnodemask_t p, cnodemask_t q) -{ - int i; - - for (i = 0 ; i < CNODEMASK_SIZE ; i++) - if (p._bits[i] != q._bits[i]) - return 0; - return 1; -} - - -__inline static int CNODEMASK_TSTM (cnodemask_t p, cnodemask_t q) -{ - int i; - - for (i = 0 ; i < CNODEMASK_SIZE ; i++) - if (p._bits[i] & q._bits[i]) - return 1; - return 0; -} - -__inline static void CNODEMASK_SHIFTL_PTR (cnodemask_t *p) -{ - int i; - uint64_t upper; - - /* - * shift words starting with the last word - * of the vector and work backward to the first - * word updating the low order bits with the - * high order bit of the prev word. - */ - for (i=(CNODEMASK_SIZE-1); i > 0; --i) { - upper = (p->_bits[i-1] & (1ULL<<(CNODEMASK_BIPW-1))) ? 1 : 0; - p->_bits[i] <<= 1; - p->_bits[i] |= upper; - } - p->_bits[i] <<= 1; -} - -__inline static void CNODEMASK_SHIFTR_PTR (cnodemask_t *p) -{ - int i; - uint64_t lower; - - /* - * shift words starting with the first word - * of the vector and work forward to the last - * word updating the high order bit with the - * low order bit of the next word. - */ - for (i=0; i < (CNODEMASK_SIZE-2); ++i) { - lower = (p->_bits[i+1] & (0x1)) ? 1 : 0; - p->_bits[i] >>= 1; - p->_bits[i] |= (lower<<((CNODEMASK_BIPW-1))); - } - p->_bits[i] >>= 1; -} - -__inline static cnodemask_t CNODEMASK_FROM_NUMNODES(int n) -{ - cnodemask_t __tmp; - int i; - CNODEMASK_CLRALL(__tmp); - for (i=0; i - -#include +#include #include #include -#include -/* #include */ -#ifdef LATER -typedef struct module_s module_t; /* Avoids sys/SN/module.h */ -#else +#if defined(CONFIG_IA64_SGI_SN1) +#include +#endif +#include #include +#include + +#if defined(CONFIG_IA64_SGI_SN1) +#include #endif -/* #include */ /* * NUMA Node-Specific Data structures are defined in this file. @@ -37,26 +33,16 @@ /* * Subnode PDA structures. Each node needs a few data structures that * correspond to the PIs on the HUB chip that supports the node. - * - * WARNING!!!! 6.5.x compatibility requirements prevent us from - * changing or reordering fields in the following structure for IP27. - * It is essential that the data mappings not change for IP27 platforms. - * It is OK to add fields that are IP35 specific if they are under #ifdef IP35. */ +#if defined(CONFIG_IA64_SGI_SN1) struct subnodepda_s { intr_vecblk_t intr_dispatch0; intr_vecblk_t intr_dispatch1; - uint64_t next_prof_timeout; - int prof_count; }; - typedef struct subnodepda_s subnode_pda_t; -struct ptpool_s; - -#if defined(CONFIG_IA64_SGI_SYNERGY_PERF) struct synergy_perf_s; #endif @@ -65,8 +51,6 @@ * Node-specific data structure. * * One of these structures is allocated on each node of a NUMA system. - * Non-NUMA systems are considered to be systems with one node, and - * hence there will be one of this structure for the entire system. * * This structure provides a convenient way of keeping together * all per-node data structures. @@ -74,119 +58,13 @@ -#ifdef LATER -/* - * The following structure is contained in the nodepda & contains - * a lock & queue-head for sanon pages that belong to the node. - * See the anon manager for more details. 
- */ -typedef struct { - lock_t sal_lock; - plist_t sal_listhead; -} sanon_list_head_t; -#endif struct nodepda_s { -#ifdef NUMA_BASE - - /* - * Pointer to this node's copy of Nodepdaindr - */ - struct nodepda_s **pernode_pdaindr; - - /* - * Data used for migration control - */ - struct migr_control_data_s *mcd; - - /* - * Data used for replication control - */ - struct repl_control_data_s *rcd; - - /* - * Numa statistics - */ - struct numa_stats_s *numa_stats; - - /* - * Load distribution - */ - uint memfit_assign; - - /* - * New extended memory reference counters - */ - void *migr_refcnt_counterbase; - void *migr_refcnt_counterbuffer; - size_t migr_refcnt_cbsize; - int migr_refcnt_numsets; - - /* - * mem_tick quiescing lock - */ - uint mem_tick_lock; - - /* - * Migration candidate set - * by migration prologue intr handler - */ - uint64_t migr_candidate; - - /* - * Each node gets its own syswait counter to remove contention - * on the global one. - */ -#ifdef LATER - struct syswait syswait; -#endif - -#endif /* NUMA_BASE */ - /* - * Node-specific Zone structures. - */ -#ifdef LATER - zoneset_element_t node_zones; - pg_data_t node_pg_data; /* VM page data structures */ - plist_t error_discard_plist; -#endif - uint error_discard_count; - uint error_page_count; - uint error_cleaned_count; - spinlock_t error_discard_lock; - /* Information needed for SN Hub chip interrupt handling. */ - subnode_pda_t snpda[NUM_SUBNODES]; - /* Distributed kernel support */ -#ifdef LATER - kern_vars_t kern_vars; -#endif - /* Vector operation support */ - /* Change this to a sleep lock? */ - spinlock_t vector_lock; - /* State of the vector unit for this node */ - char vector_unit_busy; cpuid_t node_first_cpu; /* Starting cpu number for node */ - ushort node_num_cpus; /* Number of cpus present */ - - /* node utlbmiss info */ - spinlock_t node_utlbswitchlock; - volatile cpumask_t node_utlbmiss_flush; - volatile signed char node_need_utlbmiss_patch; - volatile char node_utlbmiss_patched; - nodepda_router_info_t *npda_rip_first; - nodepda_router_info_t **npda_rip_last; - int dependent_routers; - -#if defined(CONFIG_IA64_SGI_SYNERGY_PERF) - int synergy_perf_enabled; - int synergy_perf_freq; - spinlock_t synergy_perf_lock; - uint64_t synergy_inactive_intervals; - uint64_t synergy_active_intervals; - struct synergy_perf_s *synergy_perf_data; - struct synergy_perf_s *synergy_perf_first; /* reporting consistency .. */ -#endif /* CONFIG_IA64_SGI_SYNERGY_PERF */ + /* WARNING: no guarantee that */ + /* the second cpu on a node is */ + /* node_first_cpu+1. 
*/ devfs_handle_t xbow_vhdl; nasid_t xbow_peer; /* NASID of our peer hub on xbow */ @@ -194,84 +72,67 @@ slotid_t slotdesc; moduleid_t module_id; /* Module ID (redundant local copy) */ module_t *module; /* Pointer to containing module */ - int hub_chip_rev; /* Rev of my Hub chip */ - char nasid_mask[NASID_MASK_BYTES]; - /* Need a copy of the nasid mask - * on every node */ xwidgetnum_t basew_id; devfs_handle_t basew_xc; - spinlock_t fprom_lock; - char ni_error_print; /* For printing ni error state - * only once during system panic - */ -#ifdef LATER - md_perf_monitor_t node_md_perfmon; - hubstat_t hubstats; int hubticks; - sbe_info_t *sbe_info; /* ECC single-bit error statistics */ -#endif /* LATER */ - int huberror_ticks; - - router_queue_t *visited_router_q; - router_queue_t *bfs_router_q; - /* Used for router traversal */ -#if defined (CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) - router_map_ent_t router_map[MAX_RTR_BREADTH]; -#endif - int num_routers; /* Total routers in the system */ + int num_routers; /* XXX not setup! Total routers in the system */ - char membank_flavor; - /* Indicates what sort of memory - * banks are present on this node - */ char *hwg_node_name; /* hwgraph node name */ - - struct widget_info_t *widget_info; /* Node as xtalk widget */ devfs_handle_t node_vertex; /* Hwgraph vertex for this node */ void *pdinfo; /* Platform-dependent per-node info */ - uint64_t *dump_stack; /* Dump stack during nmi handling */ - int dump_count; /* To allow only one cpu-per-node */ -#ifdef LATER - io_perf_monitor_t node_io_perfmon; -#endif - /* - * Each node gets its own pdcount counter to remove contention - * on the global one. - */ - - int pdcount; /* count of pdinserted pages */ -#ifdef NUMA_BASE - void *cached_global_pool; /* pointer to cached vmpool */ -#endif /* NUMA_BASE */ + nodepda_router_info_t *npda_rip_first; + nodepda_router_info_t **npda_rip_last; -#ifdef LATER - sanon_list_head_t sanon_list_head; /* head for sanon pages */ -#endif -#ifdef NUMA_BASE - struct ptpool_s *ptpool; /* ptpool for this node */ -#endif /* NUMA_BASE */ /* * The BTEs on this node are shared by the local cpus */ -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) -#ifdef LATER - bteinfo_t *node_bte_info[BTES_PER_NODE]; -#endif -#endif + bteinfo_t node_bte_info[BTES_PER_NODE]; + +#if defined(CONFIG_IA64_SGI_SN1) + subnode_pda_t snpda[NUM_SUBNODES]; + /* + * New extended memory reference counters + */ + void *migr_refcnt_counterbase; + void *migr_refcnt_counterbuffer; + size_t migr_refcnt_cbsize; + int migr_refcnt_numsets; + hubstat_t hubstats; + int synergy_perf_enabled; + int synergy_perf_freq; + spinlock_t synergy_perf_lock; + uint64_t synergy_inactive_intervals; + uint64_t synergy_active_intervals; + struct synergy_perf_s *synergy_perf_data; + struct synergy_perf_s *synergy_perf_first; /* reporting consistency .. */ +#endif /* CONFIG_IA64_SGI_SN1 */ + + /* + * Array of pointers to the nodepdas for each node. + */ + struct nodepda_s *pernode_pdaindr[MAX_COMPACT_NODES]; + }; typedef struct nodepda_s nodepda_t; +#ifdef CONFIG_IA64_SGI_SN2 +struct irqpda_s { + int num_irq_used; + char irq_flags[NR_IRQS]; +}; + +typedef struct irqpda_s irqpda_t; + +#endif /* CONFIG_IA64_SGI_SN2 */ + -#define NODE_MODULEID(_node) (NODEPDA(_node)->module_id) -#define NODE_SLOTID(_node) (NODEPDA(_node)->slotdesc) -#ifdef NUMA_BASE /* * Access Functions for node PDA. 
* Since there is one nodepda for each node, we need a convenient mechanism @@ -279,180 +140,49 @@ * The next set of definitions provides this. * Routines are expected to use * - * nodepda -> to access PDA for the node on which code is running - * subnodepda -> to access subnode PDA for the node on which code is running + * nodepda -> to access node PDA for the node on which code is running + * subnodepda -> to access subnode PDA for the subnode on which code is running * - * NODEPDA(x) -> to access node PDA for cnodeid 'x' - * SUBNODEPDA(x,s) -> to access subnode PDA for cnodeid/slice 'x' - */ - -#ifdef LATER -#define nodepda private.p_nodepda /* Ptr to this node's PDA */ -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC -#define subnodepda private.p_subnodepda /* Ptr to this node's subnode PDA */ -#endif - -#else -/* - * Until we have a shared node local area defined, do it this way .. - * like in Caliase space. See above. - */ -extern nodepda_t *nodepda; -extern subnode_pda_t *subnodepda; -#endif - -/* - * Nodepdaindr[] - * This is a private data structure for use only in early initialization. - * All users of nodepda should use the macro NODEPDA(nodenum) to get - * the suitable nodepda structure. - * This macro has the advantage of not requiring #ifdefs for NUMA and - * non-NUMA code. - */ -extern nodepda_t *Nodepdaindr[]; -/* - * NODEPDA_GLOBAL(x) macro should ONLY be used during early initialization. - * Once meminit is complete, NODEPDA(x) is ready to use. - * During early init, the system fills up Nodepdaindr. By the time we - * are in meminit(), all nodepdas are initialized, and hence - * we can fill up the node_pdaindr array in each nodepda structure. + * NODEPDA(cnode) -> to access node PDA for cnodeid + * SUBNODEPDA(cnode,sn) -> to access subnode PDA for cnodeid/subnode */ -#define NODEPDA_GLOBAL(x) Nodepdaindr[x] -/* - * Returns a pointer to a given node's nodepda. - */ -#define NODEPDA(x) (nodepda->pernode_pdaindr[x]) +#define nodepda pda.p_nodepda /* Ptr to this node's PDA */ +#define NODEPDA(cnode) (nodepda->pernode_pdaindr[cnode]) -/* - * Returns a pointer to a given node/slice's subnodepda. - * SUBNODEPDA(cnode, subnode) - uses cnode as first arg - * SNPDA(npda, subnode) - uses pointer to nodepda as first arg - */ -#define SUBNODEPDA(x,sn) (&nodepda->pernode_pdaindr[x]->snpda[sn]) +#if defined(CONFIG_IA64_SGI_SN1) +#define subnodepda pda.p_subnodepda /* Ptr to this node's subnode PDA */ +#define SUBNODEPDA(cnode,sn) (&(NODEPDA(cnode)->snpda[sn])) #define SNPDA(npda,sn) (&(npda)->snpda[sn]) +#endif -#define NODEPDA_ERROR_FOOTPRINT(node, cpu) \ - (&(NODEPDA(node)->error_stamp[cpu])) -#define NODEPDA_MDP_MON(node) (&(NODEPDA(node)->node_md_perfmon)) -#define NODEPDA_IOP_MON(node) (&(NODEPDA(node)->node_io_perfmon)) /* * Macros to access data structures inside nodepda */ -#if NUMA_MIGR_CONTROL -#define NODEPDA_MCD(node) (NODEPDA(node)->mcd) -#endif /* NUMA_MIGR_CONTROL */ - -#if NUMA_REPL_CONTROL -#define NODEPDA_RCD(node) (NODEPDA(node)->rcd) -#endif /* NUMA_REPL_CONTROL */ - -#if (NUMA_MIGR_CONTROL || NUMA_REPL_CONTROL) -#define NODEPDA_LRS(node) (NODEPDA(node)->lrs) -#endif /* (NUMA_MIGR_CONTROL || NUMA_REPL_CONTROL) */ +#define NODE_MODULEID(cnode) (NODEPDA(cnode)->module_id) +#define NODE_SLOTID(cnode) (NODEPDA(cnode)->slotdesc) -/* - * Exported functions - */ -extern nodepda_t *nodepda_alloc(void); -#else /* !NUMA_BASE */ /* - * For a single-node system we will just have one global nodepda pointer - * allocated at startup. 
The global nodepda will point to this nodepda - * structure. + * Quickly convert a compact node ID into a hwgraph vertex */ -extern nodepda_t *Nodepdaindr; +#define cnodeid_to_vertex(cnodeid) (NODEPDA(cnodeid)->node_vertex) -/* - * On non-NUMA systems, NODEPDA_GLOBAL and NODEPDA macros collapse to - * be the same. - */ -#define NODEPDA_GLOBAL(x) Nodepdaindr /* - * Returns a pointer to a given node's nodepda. + * Check if given a compact node id the corresponding node has all the + * cpus disabled. */ -#define NODEPDA(x) Nodepdaindr +#define is_headless_node(cnode) ((cnode == CNODEID_NONE) || \ + (node_data(cnode)->active_cpu_count == 0)) /* - * nodepda can also be defined as private.p_nodepda. - * But on non-NUMA systems, there is only one nodepda, and there is - * no reason to go through the PDA to access this pointer. - * Hence nodepda aliases to the global nodepda directly. - * - * Routines should use nodepda to access the local node's PDA. - */ -#define nodepda (Nodepdaindr) - -#endif /* NUMA_BASE */ - -/* Quickly convert a compact node ID into a hwgraph vertex */ -#define cnodeid_to_vertex(cnodeid) (NODEPDA(cnodeid)->node_vertex) - - -/* Check if given a compact node id the corresponding node has all the - * cpus disabled. - */ -#define is_headless_node(_cnode) ((_cnode == CNODEID_NONE) || \ - (CNODE_NUM_CPUS(_cnode) == 0)) -/* Check if given a node vertex handle the corresponding node has all the + * Check if given a node vertex handle the corresponding node has all the * cpus disabled. */ #define is_headless_node_vertex(_nodevhdl) \ is_headless_node(nodevertex_to_cnodeid(_nodevhdl)) -#ifdef __cplusplus -} -#endif - -#ifdef NUMA_BASE -/* - * To remove contention on the global syswait counter each node will have - * its own. Each clock tick the clock cpu will re-calculate the global - * syswait counter by summing from each of the nodes. The other cpus will - * continue to read the global one during their clock ticks. This does - * present a problem when a thread increments the count on one node and wakes - * up on a different node and decrements it there. Eventually the count could - * overflow if this happens continually for a long period. To prevent this - * second_thread() periodically preserves the current syswait state and - * resets the counters. - */ -#define ADD_SYSWAIT(_field) atomicAddInt(&nodepda->syswait._field, 1) -#define SUB_SYSWAIT(_field) atomicAddInt(&nodepda->syswait._field, -1) -#else -#define ADD_SYSWAIT(_field) \ -{ \ - ASSERT(syswait._field >= 0); \ - atomicAddInt(&syswait._field, 1); \ -} -#define SUB_SYSWAIT(_field) \ -{ \ - ASSERT(syswait._field > 0); \ - atomicAddInt(&syswait._field, -1); \ -} -#endif /* NUMA_BASE */ - -#ifdef NUMA_BASE -/* - * Another global variable to remove contention from: pdcount. - * See above comments for SYSWAIT. - */ -#define ADD_PDCOUNT(_n) \ -{ \ - atomicAddInt(&nodepda->pdcount, _n); \ - if (_n > 0 && !pdflag) \ - pdflag = 1; \ -} -#else -#define ADD_PDCOUNT(_n) \ -{ \ - ASSERT(&pdcount >= 0); \ - atomicAddInt(&pdcount, _n); \ - if (_n > 0 && !pdflag) \ - pdflag = 1; \ -} -#endif /* NUMA_BASE */ -#endif /* _ASM_SN_NODEPDA_H */ +#endif /* _ASM_IA64_SN_NODEPDA_H */ diff -Nru a/include/asm-ia64/sn/pci/bridge.h b/include/asm-ia64/sn/pci/bridge.h --- a/include/asm-ia64/sn/pci/bridge.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/pci/bridge.h Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. 
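With the slimmed-down nodepda above, per-node state is reached via NODEPDA(cnode), and nodes whose CPUs are all disabled can be skipped with is_headless_node(). A rough, hypothetical sketch only (the function name is invented; assumes <asm/sn/nodepda.h> is included and cnode is a valid compact node id):

	/* Hypothetical sketch: log basic facts about one node. */
	static void example_log_node(cnodeid_t cnode)
	{
		if (is_headless_node(cnode))
			return;		/* no active CPUs on this node */

		/* Per the comment in nodepda_s: the node's second CPU is not
		 * guaranteed to be node_first_cpu + 1.
		 */
		printk(KERN_INFO "node %d: first cpu %d, module id %d\n",
		       (int)cnode, (int)NODEPDA(cnode)->node_first_cpu,
		       (int)NODE_MODULEID(cnode));
	}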
* - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #ifndef _ASM_SN_PCI_BRIDGE_H #define _ASM_SN_PCI_BRIDGE_H @@ -53,7 +52,7 @@ * Bridge address map */ -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) +#ifndef __ASSEMBLY__ #ifdef __cplusplus extern "C" { @@ -373,7 +372,7 @@ ds:2, /* Data size */ gbr:1, /* GBR enable */ vbpm:1, /* VBPM message */ - error:1, /* Error occurred */ + error:1, /* Error occurred */ barr:1, /* Barrier op */ rsvd:8; } berr_st; @@ -638,7 +637,7 @@ #define berr_field berr_un.berr_st -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /* * The values of these macros can and should be crosschecked @@ -903,10 +902,10 @@ #define BRIDGE_DEVIO_2MB 0x00200000 /* Device IO Offset (0..1) */ #define BRIDGE_DEVIO_1MB 0x00100000 /* Device IO Offset (2..7) */ -#if LANGUAGE_C +#ifndef __ASSEMBLY__ #define BRIDGE_DEVIO(x) ((x)<=1 ? BRIDGE_DEVIO0+(x)*BRIDGE_DEVIO_2MB : BRIDGE_DEVIO2+((x)-2)*BRIDGE_DEVIO_1MB) -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #define BRIDGE_EXTERNAL_FLASH 0x00C00000 /* External Flash PROMS */ @@ -971,6 +970,10 @@ #define BRIDGE_CTRL_CLR_RLLP_CNT (0x1 << 11) #define BRIDGE_CTRL_CLR_TLLP_CNT (0x1 << 10) #define BRIDGE_CTRL_SYS_END (0x1 << 9) +#define BRIDGE_CTRL_BUS_SPEED(n) ((n) << 4) +#define BRIDGE_CTRL_BUS_SPEED_MASK (BRIDGE_CTRL_BUS_SPEED(0x3)) +#define BRIDGE_CTRL_BUS_SPEED_33 0x00 +#define BRIDGE_CTRL_BUS_SPEED_66 0x10 #define BRIDGE_CTRL_MAX_TRANS(n) ((n) << 4) #define BRIDGE_CTRL_MAX_TRANS_MASK (BRIDGE_CTRL_MAX_TRANS(0x1f)) #define BRIDGE_CTRL_WIDGET_ID(n) ((n) << 0) @@ -1296,14 +1299,14 @@ #define PCI32_MAPPED_BASE BRIDGE_DMA_MAPPED_BASE #define PCI32_DIRECT_BASE BRIDGE_DMA_DIRECT_BASE -#if LANGUAGE_C +#ifndef __ASSEMBLY__ #define IS_PCI32_LOCAL(x) ((uint64_t)(x) < PCI32_MAPPED_BASE) #define IS_PCI32_MAPPED(x) ((uint64_t)(x) < PCI32_DIRECT_BASE && \ (uint64_t)(x) >= PCI32_MAPPED_BASE) #define IS_PCI32_DIRECT(x) ((uint64_t)(x) >= PCI32_MAPPED_BASE) #define IS_PCI64(x) ((uint64_t)(x) >= PCI64_BASE) -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /* * The GIO address space. @@ -1318,13 +1321,13 @@ #define GIO_MAPPED_BASE BRIDGE_DMA_MAPPED_BASE #define GIO_DIRECT_BASE BRIDGE_DMA_DIRECT_BASE -#if LANGUAGE_C +#ifndef __ASSEMBLY__ #define IS_GIO_LOCAL(x) ((uint64_t)(x) < GIO_MAPPED_BASE) #define IS_GIO_MAPPED(x) ((uint64_t)(x) < GIO_DIRECT_BASE && \ (uint64_t)(x) >= GIO_MAPPED_BASE) #define IS_GIO_DIRECT(x) ((uint64_t)(x) >= GIO_MAPPED_BASE) -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /* PCI to xtalk mapping */ @@ -1347,7 +1350,7 @@ #define PCI64_ATTR_RMF_MASK 0x00ff000000000000 #define PCI64_ATTR_RMF_SHFT 48 -#if LANGUAGE_C +#ifndef __ASSEMBLY__ /* Address translation entry for mapped pci32 accesses */ typedef union ate_u { uint64_t ent; @@ -1375,7 +1378,7 @@ uint64_t valid:1; } field; } ate_t; -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #define ATE_V (1 << 0) #define ATE_CO (1 << 1) @@ -1401,7 +1404,7 @@ #define is_xbridge(bridge) \ (XWIDGET_PART_NUM(bridge->b_wid_id) == XBRIDGE_WIDGET_PART_NUM) -#if LANGUAGE_C +#ifndef __ASSEMBLY__ /* ======================================================================== */ diff -Nru a/include/asm-ia64/sn/pci/pci_bus_cvlink.h b/include/asm-ia64/sn/pci/pci_bus_cvlink.h --- a/include/asm-ia64/sn/pci/pci_bus_cvlink.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/pci/pci_bus_cvlink.h Tue Mar 12 13:58:15 2002 @@ -4,12 +4,36 @@ * License. 
See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #ifndef _ASM_SN_PCI_CVLINK_H #define _ASM_SN_PCI_CVLINK_H +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#define MAX_PCI_XWIDGET 256 +#define MAX_ATE_MAPS 1024 + #define SET_PCIA64(dev) \ (((struct sn1_device_sysdata *)((dev)->sysdata))->isa64) = 1 #define IS_PCIA64(dev) (((dev)->dma_mask == 0xffffffffffffffffUL) || \ @@ -17,6 +41,12 @@ #define IS_PCI32G(dev) ((dev)->dma_mask >= 0xffffffff) #define IS_PCI32L(dev) ((dev)->dma_mask < 0xffffffff) +#define PCIDEV_VERTEX(pci_dev) \ + (((struct sn1_device_sysdata *)((pci_dev)->sysdata))->vhdl) + +#define PCIBUS_VERTEX(pci_bus) \ + (((struct sn1_widget_sysdata *)((pci_bus)->sysdata))->vhdl) + struct sn1_widget_sysdata { devfs_handle_t vhdl; }; @@ -24,6 +54,8 @@ struct sn1_device_sysdata { devfs_handle_t vhdl; int isa64; + volatile unsigned int *dma_buf_sync; + volatile unsigned int *xbow_buf_sync; }; struct sn1_dma_maps_s{ diff -Nru a/include/asm-ia64/sn/pci/pci_defs.h b/include/asm-ia64/sn/pci/pci_defs.h --- a/include/asm-ia64/sn/pci/pci_defs.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/pci/pci_defs.h Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ #ifndef _ASM_SN_PCI_PCI_DEFS_H #define _ASM_SN_PCI_PCI_DEFS_H diff -Nru a/include/asm-ia64/sn/pci/pciba.h b/include/asm-ia64/sn/pci/pciba.h --- a/include/asm-ia64/sn/pci/pciba.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/pci/pciba.h Tue Mar 12 13:58:14 2002 @@ -1,24 +1,33 @@ -/* $Id$ +/* + * This file is subject to the terms and conditions of the GNU General + * Public License. See the file "COPYING" in the main directory of + * this archive for more details. * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. + * Copyright (C) 1997, 2001 Silicon Graphics, Inc. All rights reserved. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam */ + #ifndef _ASM_SN_PCI_PCIBA_H #define _ASM_SN_PCI_PCIBA_H -/* - * These are all the HACKS from ioccom.h .. - */ -#define IOCPARM_MASK 0xff /* parameters must be < 256 bytes */ -#define IOC_VOID 0x20000000 /* no parameters */ +#include +#include +#include + +/* for application compatibility with IRIX (why do I bother?) */ + +#ifndef __KERNEL__ +typedef u_int8_t uint8_t; +typedef u_int16_t uint16_t; +typedef u_int32_t uint32_t; +#endif + +#define PCI_CFG_VENDOR_ID PCI_VENDOR_ID +#define PCI_CFG_COMMAND PCI_COMMAND +#define PCI_CFG_REV_ID PCI_REVISION_ID +#define PCI_CFG_HEADER_TYPE PCI_HEADER_TYPE +#define PCI_CFG_BASE_ADDR(n) PCI_BASE_ADDRESS_##n -/* - * The above needs to be modified and follow LINUX ... 
- */ /* /hw/.../pci/[slot]/config accepts ioctls to read * and write specific registers as follows: @@ -69,18 +78,11 @@ /* PCIIOCGETBASE(n): arg is ptr to a 32-bit int, * which will get the value of the BASE register. */ + +/* FIXME chadt: this doesn't tell me whether or not this will work + with non-constant 'n.' */ #define PCIIOCGETBASE(n) PCIIOCCFGRD(uint32_t,PCI_CFG_BASE_ADDR(n)) -/* /hw/.../pci/[slot]/intr accepts an ioctl to - * set up user level interrupt handling as follows: - * - * "n" is a bitmap of which of the four PCI interrupt - * lines are of interest, using PCIIO_INTR_LINE_[ABCD]. - */ -#define PCIIOCSETULI(n) _IOWR(1,n,struct uliargs) -#if _KERNEL -#define PCIIOCSETULI32(n) _IOWR(1,n,struct uliargs32) -#endif /* /hw/.../pci/[slot]/dma accepts ioctls to allocate * and free physical memory for use in user-triggered @@ -93,11 +95,20 @@ * both the size of the request and the flag values * to be used in setting up the DMA. * + +FIXME chadt: gonna have to revisit this: what flags would an IRIXer like to + have available? + * Any flags normally useful in pciio_dmamap - * or pciio_dmatrans function calls can6 be used here. - */ + * or pciio_dmatrans function calls can6 be used here. */ #define PCIIOCDMAALLOC_REQUEST_PACK(flags,size) \ ((((uint64_t)(flags))<<32)| \ (((uint64_t)(size))&0xFFFFFFFF)) + + +#ifdef __KERNEL__ +extern int pciba_init(void); +#endif + #endif /* _ASM_SN_PCI_PCIBA_H */ diff -Nru a/include/asm-ia64/sn/pci/pcibr.h b/include/asm-ia64/sn/pci/pcibr.h --- a/include/asm-ia64/sn/pci/pcibr.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/pci/pcibr.h Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
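pciba.h keeps the single-word PCIIOCDMAALLOC request format, with the DMA flags in the upper 32 bits and the requested size in the lower 32 bits. As a sketch of that layout only (the EXAMPLE_* unpack helpers below are hypothetical and merely mirror PCIIOCDMAALLOC_REQUEST_PACK):

	/* Hypothetical inverse of PCIIOCDMAALLOC_REQUEST_PACK(flags, size). */
	#define EXAMPLE_DMAALLOC_FLAGS(arg)	((uint32_t)((arg) >> 32))
	#define EXAMPLE_DMAALLOC_SIZE(arg)	((uint32_t)((arg) & 0xFFFFFFFF))

	static uint64_t example_pack_dma_request(uint32_t flags, uint32_t size)
	{
		/* flags end up in bits 63..32, size in bits 31..0, so the
		 * EXAMPLE_* macros above recover both halves unchanged.
		 */
		return PCIIOCDMAALLOC_REQUEST_PACK(flags, size);
	}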
*/ #ifndef _ASM_SN_PCI_PCIBR_H #define _ASM_SN_PCI_PCIBR_H @@ -13,7 +12,7 @@ #if defined(__KERNEL__) #include -#include +#include #include #include @@ -31,7 +30,7 @@ #define PCIBR_INTR_BLOCKED 0x40000000 #define PCIBR_INTR_BUSY 0x80000000 -#if LANGUAGE_C +#ifndef __ASSEMBLY__ /* ===================================================================== * opaque types used by pcibr's xtalk bus provider @@ -183,10 +182,7 @@ extern void pcibr_intr_free(pcibr_intr_t intr); -extern int pcibr_intr_connect(pcibr_intr_t intr, - intr_func_t intr_func, - intr_arg_t intr_arg, - void *thread); +extern int pcibr_intr_connect(pcibr_intr_t intr); extern void pcibr_intr_disconnect(pcibr_intr_t intr); @@ -349,7 +345,7 @@ extern int pcibr_asic_rev(devfs_handle_t); -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #endif /* #if defined(__KERNEL__) */ /* * Some useful ioctls into the pcibr driver @@ -390,10 +386,34 @@ /* * Structures for requesting PCI bridge information and receiving a response */ -typedef struct pcibr_slot_info_req_s *pcibr_slot_info_req_t; +typedef struct pcibr_slot_req_s *pcibr_slot_req_t; +typedef struct pcibr_slot_up_resp_s *pcibr_slot_up_resp_t; +typedef struct pcibr_slot_down_resp_s *pcibr_slot_down_resp_t; typedef struct pcibr_slot_info_resp_s *pcibr_slot_info_resp_t; typedef struct pcibr_slot_func_info_resp_s *pcibr_slot_func_info_resp_t; +#define L1_QSIZE 128 /* our L1 message buffer size */ +struct pcibr_slot_req_s { + int req_slot; + union { + pcibr_slot_up_resp_t up; + pcibr_slot_down_resp_t down; + pcibr_slot_info_resp_t query; + void *any; + } req_respp; + int req_size; +}; + +struct pcibr_slot_up_resp_s { + int resp_sub_errno; + char resp_l1_msg[L1_QSIZE + 1]; +}; + +struct pcibr_slot_down_resp_s { + int resp_sub_errno; + char resp_l1_msg[L1_QSIZE + 1]; +}; + struct pcibr_slot_info_req_s { int req_slot; pcibr_slot_info_resp_t req_respp; @@ -454,7 +474,40 @@ int resp_f_att_det_error; } resp_func[8]; - }; + + +/* + * PCI specific errors, interpreted by pciconfig command + */ + +/* EPERM 1 */ +#define PCI_SLOT_ALREADY_UP 2 /* slot already up */ +#define PCI_SLOT_ALREADY_DOWN 3 /* slot already down */ +#define PCI_IS_SYS_CRITICAL 4 /* slot is system critical */ +/* EIO 5 */ +/* ENXIO 6 */ +#define PCI_L1_ERR 7 /* L1 console command error */ +#define PCI_NOT_A_BRIDGE 8 /* device is not a bridge */ +#define PCI_SLOT_IN_SHOEHORN 9 /* slot is in a shorhorn */ +#define PCI_NOT_A_SLOT 10 /* slot is invalid */ +#define PCI_RESP_AREA_TOO_SMALL 11 /* slot is invalid */ +/* ENOMEM 12 */ +#define PCI_NO_DRIVER 13 /* no driver for device */ +/* EFAULT 14 */ +#define PCI_EMPTY_33MHZ 15 /* empty 33 MHz bus */ +/* EBUSY 16 */ +#define PCI_SLOT_RESET_ERR 17 /* slot reset error */ +#define PCI_SLOT_INFO_INIT_ERR 18 /* slot info init error */ +/* ENODEV 19 */ +#define PCI_SLOT_ADDR_INIT_ERR 20 /* slot addr space init error */ +#define PCI_SLOT_DEV_INIT_ERR 21 /* slot device init error */ +/* EINVAL 22 */ +#define PCI_SLOT_GUEST_INIT_ERR 23 /* slot guest info init error */ +#define PCI_SLOT_RRB_ALLOC_ERR 24 /* slot initial rrb alloc error */ +#define PCI_SLOT_DRV_ATTACH_ERR 25 /* driver attach error */ +#define PCI_SLOT_DRV_DETACH_ERR 26 /* driver detach error */ +/* ERANGE 34 */ +/* EUNATCH 42 */ #endif /* _ASM_SN_PCI_PCIBR_H */ diff -Nru a/include/asm-ia64/sn/pci/pcibr_private.h b/include/asm-ia64/sn/pci/pcibr_private.h --- a/include/asm-ia64/sn/pci/pcibr_private.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/pci/pcibr_private.h Tue Mar 12 13:58:14 2002 @@ -4,8 +4,7 @@ * License. 
See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #ifndef _ASM_SN_PCI_PCIBR_PRIVATE_H #define _ASM_SN_PCI_PCIBR_PRIVATE_H @@ -16,6 +15,7 @@ * should ever peek into this file. */ +#include #include #include @@ -100,9 +100,6 @@ #define bi_flags bi_pi.pi_flags /* PCIBR_INTR flags */ #define bi_dev bi_pi.pi_dev /* associated pci card */ #define bi_lines bi_pi.pi_lines /* which PCI interrupt line(s) */ -#define bi_func bi_pi.pi_func /* handler function (when connected) */ -#define bi_arg bi_pi.pi_arg /* handler parameter (when connected) */ -#define bi_tinfo bi_pi.pi_tinfo /* Thread info (when connected) */ #define bi_mustruncpu bi_pi.pi_mustruncpu /* Where we must run. */ #define bi_irq bi_pi.pi_irq /* IRQ assigned. */ #define bi_cpu bi_pi.pi_cpu /* cpu assigned. */ @@ -173,14 +170,17 @@ */ struct pcibr_soft_s { - devfs_handle_t bs_conn; /* xtalk connection point */ - devfs_handle_t bs_vhdl; /* vertex owned by pcibr */ + devfs_handle_t bs_conn; /* xtalk connection point */ + devfs_handle_t bs_vhdl; /* vertex owned by pcibr */ int bs_int_enable; /* Mask of enabled intrs */ - bridge_t *bs_base; /* PIO pointer to Bridge chip */ - char *bs_name; /* hw graph name */ - xwidgetnum_t bs_xid; /* Bridge's xtalk ID number */ - devfs_handle_t bs_master; /* xtalk master vertex */ - xwidgetnum_t bs_mxid; /* master's xtalk ID number */ + bridge_t *bs_base; /* PIO pointer to Bridge chip */ + char *bs_name; /* hw graph name */ + xwidgetnum_t bs_xid; /* Bridge's xtalk ID number */ + devfs_handle_t bs_master; /* xtalk master vertex */ + xwidgetnum_t bs_mxid; /* master's xtalk ID number */ + pciio_slot_t bs_first_slot; /* first existing slot */ + pciio_slot_t bs_last_slot; /* last existing slot */ + iopaddr_t bs_dir_xbase; /* xtalk address for 32-bit PCI direct map */ xwidgetnum_t bs_dir_xport; /* xtalk port for 32-bit PCI direct map */ @@ -190,7 +190,7 @@ short bs_int_ate_size; /* number of internal ates */ short bs_xbridge; /* if 1 then xbridge */ - int bs_rev_num; /* revision number of Bridge */ + int bs_rev_num; /* revision number of Bridge */ unsigned bs_dma_flags; /* revision-implied DMA flags */ @@ -253,6 +253,7 @@ struct { pciio_space_t bssd_space; iopaddr_t bssd_base; + int bssd_ref_cnt; } bss_devio; /* Shadow value for Device(x) register, @@ -312,7 +313,9 @@ int bs_rrb_fixed; int bs_rrb_avail[2]; int bs_rrb_res[8]; - int bs_rrb_valid[16]; + int bs_rrb_res_dflt[8]; + int bs_rrb_valid[16]; + int bs_rrb_valid_dflt[16]; struct { /* Each Bridge interrupt bit has a single XIO @@ -433,5 +436,42 @@ #define pcibr_soft_get(v) ((pcibr_soft_t)hwgraph_fastinfo_get((v))) #define pcibr_soft_set(v,i) (hwgraph_fastinfo_set((v), (arbitrary_info_t)(i))) + +/* Use io spin locks. This ensures that all the PIO writes from a particular + * CPU to a particular IO device are synched before the start of the next + * set of PIO operations to the same device. 
+ */ +#define pcibr_lock(pcibr_soft) io_splock(&pcibr_soft->bs_lock) +#define pcibr_unlock(pcibr_soft,s) io_spunlock(&pcibr_soft->bs_lock,s) + +/* + * mem alloc/free macros + */ +#define NEWAf(ptr,n,f) (ptr = snia_kmem_zalloc((n)*sizeof (*(ptr)), (f&PCIIO_NOSLEEP)?KM_NOSLEEP:KM_SLEEP)) +#define NEWA(ptr,n) (ptr = snia_kmem_zalloc((n)*sizeof (*(ptr)), KM_SLEEP)) +#define DELA(ptr,n) (kfree(ptr)) + +#define NEWf(ptr,f) NEWAf(ptr,1,f) +#define NEW(ptr) NEWA(ptr,1) +#define DEL(ptr) DELA(ptr,1) + +typedef volatile unsigned *cfg_p; +typedef volatile bridgereg_t *reg_p; + +#define PCIBR_RRB_SLOT_VIRTUAL 8 +#define PCIBR_VALID_SLOT(s) (s < 8) +#define PCIBR_D64_BASE_UNSET (0xFFFFFFFFFFFFFFFF) +#define PCIBR_D32_BASE_UNSET (0xFFFFFFFF) +#define INFO_LBL_PCIBR_ASIC_REV "_pcibr_asic_rev" + +#define PCIBR_SOFT_LIST 1 +#if PCIBR_SOFT_LIST +typedef struct pcibr_list_s *pcibr_list_p; +struct pcibr_list_s { + pcibr_list_p bl_next; + pcibr_soft_t bl_soft; + devfs_handle_t bl_vhdl; +}; +#endif /* PCIBR_SOFT_LIST */ #endif /* _ASM_SN_PCI_PCIBR_PRIVATE_H */ diff -Nru a/include/asm-ia64/sn/pci/pciio.h b/include/asm-ia64/sn/pci/pciio.h --- a/include/asm-ia64/sn/pci/pciio.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/pci/pciio.h Tue Mar 12 13:58:14 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #ifndef _ASM_SN_PCI_PCIIO_H #define _ASM_SN_PCI_PCIIO_H @@ -15,25 +14,22 @@ */ #include -#include +#include +#include -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) +#ifndef __ASSEMBLY__ #include #include -#ifdef __cplusplus -extern "C" { -#endif - typedef int pciio_vendor_id_t; -#define PCIIO_VENDOR_ID_NONE -1 +#define PCIIO_VENDOR_ID_NONE (-1) typedef int pciio_device_id_t; -#define PCIIO_DEVICE_ID_NONE -1 +#define PCIIO_DEVICE_ID_NONE (-1) typedef uint8_t pciio_bus_t; /* PCI bus number (0..255) */ typedef uint8_t pciio_slot_t; /* PCI slot number (0..31, 255) */ @@ -387,10 +383,7 @@ pciio_intr_free_f (pciio_intr_t intr_hdl); typedef int -pciio_intr_connect_f (pciio_intr_t intr_hdl, /* pciio intr resource handle */ - intr_func_t intr_func, /* pciio intr handler */ - intr_arg_t intr_arg, /* arg to intr handler */ - void *thread); /* intr thread to use */ +pciio_intr_connect_f (pciio_intr_t intr_hdl); /* pciio intr resource handle */ typedef void pciio_intr_disconnect_f (pciio_intr_t intr_hdl); @@ -729,8 +722,5 @@ extern int pciio_error_handler(devfs_handle_t, int, ioerror_mode_t, ioerror_t *); extern int pciio_dma_enabled(devfs_handle_t); -#ifdef __cplusplus -}; -#endif #endif /* C or C++ */ #endif /* _ASM_SN_PCI_PCIIO_H */ diff -Nru a/include/asm-ia64/sn/pci/pciio_private.h b/include/asm-ia64/sn/pci/pciio_private.h --- a/include/asm-ia64/sn/pci/pciio_private.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/pci/pciio_private.h Tue Mar 12 13:58:14 2002 @@ -4,12 +4,13 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #ifndef _ASM_SN_PCI_PCIIO_PRIVATE_H #define _ASM_SN_PCI_PCIIO_PRIVATE_H +#include + /* * pciio_private.h -- private definitions for pciio * PCI drivers should NOT include this file. 
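/*
 * Illustrative sketch, not part of the patch above: the usual pairing
 * of the helpers defined in pcibr_private.h -- pcibr_lock()/
 * pcibr_unlock() around PIO to one bridge, and NEW()/DEL() for a
 * zeroed, pointer-sized allocation.  pcibr_list_s and pcibr_soft_t come
 * from this header; the function names, and the assumption that
 * pcibr_lock() returns an interrupt level to be saved in "s", are mine.
 */
static pcibr_list_p pcibr_list_entry_alloc(void)
{
	pcibr_list_p entry;

	NEW(entry);		/* snia_kmem_zalloc(sizeof(*entry), KM_SLEEP) */
	return entry;		/* NULL if the allocation failed */
}

static void pcibr_list_entry_free(pcibr_list_p entry)
{
	if (entry)
		DEL(entry);	/* plain kfree() underneath */
}

static void pcibr_touch_hw(pcibr_soft_t pcibr_soft)
{
	unsigned long s;

	s = pcibr_lock(pcibr_soft);	/* serialize PIO to this bridge */
	/* ... PIO reads/writes through pcibr_soft->bs_base ... */
	pcibr_unlock(pcibr_soft, s);
}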
@@ -48,11 +49,6 @@ devfs_handle_t pi_dev; /* associated pci card */ device_desc_t pi_dev_desc; /* override device descriptor */ pciio_intr_line_t pi_lines; /* which interrupt line(s) */ - intr_func_t pi_func; /* handler function (when connected) */ - intr_arg_t pi_arg; /* handler parameter (when connected) */ -#ifdef LATER - thd_int_t pi_tinfo; /* Thread info (when connected) */ -#endif cpuid_t pi_mustruncpu; /* Where we must run. */ int pi_irq; /* IRQ assigned */ int pi_cpu; /* cpu assigned */ @@ -84,6 +80,8 @@ pciio_space_t w_space; iopaddr_t w_base; size_t w_size; + int w_devio_index; /* DevIO[] register used to + access this window */ } c_window[6]; unsigned c_rbase; /* EXPANSION ROM base addr */ diff -Nru a/include/asm-ia64/sn/pda.h b/include/asm-ia64/sn/pda.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/pda.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,80 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ +#ifndef _ASM_IA64_SN_PDA_H +#define _ASM_IA64_SN_PDA_H + +#include +#include +#include +#include +#include +#include + + +/* + * CPU-specific data structure. + * + * One of these structures is allocated for each cpu of a NUMA system. + * + * This structure provides a convenient way of keeping together + * all SN per-cpu data structures. + */ + + + +typedef struct pda_s { + + /* Having a pointer in the begining of PDA tends to increase + * the chance of having this pointer in cache. (Yes something + * else gets pushed out). Doing this reduces the number of memory + * access to all nodepda variables to be one + */ + struct nodepda_s *p_nodepda; /* Pointer to Per node PDA */ + struct subnodepda_s *p_subnodepda; /* Pointer to CPU subnode PDA */ + + /* + * Support for blinking SN LEDs + */ + long *led_address; + u8 led_state; + char hb_state; /* supports blinking heartbeat leds */ + unsigned int hb_count; + + unsigned int idle_flag; + +#ifdef CONFIG_IA64_SGI_SN2 + struct irqpda_s *p_irqpda; /* Pointer to CPU irq data */ +#endif + volatile unsigned long *bedrock_rev_id; + volatile unsigned long *pio_write_status_addr; + + bteinfo_t *cpubte[BTES_PER_NODE]; +} pda_t; + + +#define CACHE_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1)) + +/* + * PDA + * Per-cpu private data area for each cpu. The PDA is located immediately after + * the IA64 cpu_data area. A full page is allocated for the cp_data area for each + * cpu but only a small amout of the page is actually used. We put the SNIA PDA + * in the same page as the cpu_data area. Note that there is a check in the setup + * code to verify that we dont overflow the page. + * + * Seems like we should should cache-line align the pda so that any changes in the + * size of the cpu_data area dont change cache layout. Should we align to 32, 64, 128 + * or 512 boundary. Each has merits. For now, pick 128 but should be revisited later. + */ +#define CPU_DATA_END CACHE_ALIGN((long)&(((struct cpuinfo_ia64*)0)->platform_specific)) +#define PDAADDR (PERCPU_ADDR+CPU_DATA_END) + +#define pda (*((pda_t *) PDAADDR)) + + +#endif /* _ASM_IA64_SN_PDA_H */ diff -Nru a/include/asm-ia64/sn/pio.h b/include/asm-ia64/sn/pio.h --- a/include/asm-ia64/sn/pio.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/pio.h Tue Mar 12 13:58:14 2002 @@ -4,15 +4,14 @@ * License. 
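/*
 * Illustrative sketch, not part of the patch above: what the pda.h
 * additions give a caller.  CACHE_ALIGN() rounds a byte offset up to
 * the next cache line; with SMP_CACHE_BYTES assumed to be 128 (the
 * size the comment above settles on), CACHE_ALIGN(128) == 128 and
 * CACHE_ALIGN(130) == 256.  The "pda" macro then makes the per-cpu
 * area read like a plain global:
 */
static void mark_cpu_idle(unsigned int idle)
{
	pda.idle_flag = idle;	/* expands to (*(pda_t *)PDAADDR).idle_flag */
}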
See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_PIO_H -#define _ASM_SN_PIO_H +#ifndef _ASM_IA64_SN_PIO_H +#define _ASM_IA64_SN_PIO_H #include #include -#include +#include /* * pioaddr_t - The kernel virtual address that a PIO can be done upon. @@ -143,7 +142,7 @@ #define LAN_RAM 2 #define LAN_IO 3 -#define PIOREG_NULL -1 +#define PIOREG_NULL (-1) /* standard flags values for pio_map routines, * including {xtalk,pciio}_piomap calls. @@ -156,4 +155,4 @@ #define PIOMAP_FLAGS 0x7 -#endif /* _ASM_SN_PIO_H */ +#endif /* _ASM_IA64_SN_PIO_H */ diff -Nru a/include/asm-ia64/sn/pio_flush.h b/include/asm-ia64/sn/pio_flush.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/pio_flush.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,65 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + + +#include + +#ifndef _ASM_IA64_PIO_FLUSH_H +#define _ASM_IA64_PIO_FLUSH_H + +/* + * This macro flushes all outstanding PIOs performed by this cpu to the + * intended destination SHUB. This in essence ensures that all PIO's + * issues by this cpu has landed at it's destination. + * + * This macro expects the caller: + * 1. The thread is locked. + * 2. All prior PIO operations has been fenced. + * + */ + +#if defined (CONFIG_IA64_SGI_SN) + +#include + +#if defined (CONFIG_IA64_SGI_SN2) + +#define PIO_FLUSH() \ + { \ + while ( !((volatile unsigned long) (*pda.pio_write_status_addr)) & 0x8000000000000000) { \ + udelay(5); \ + } \ + __ia64_mf_a(); \ + } + +#elif defined (CONFIG_IA64_SGI_SN1) + +/* + * For SN1 we need to first read any local Bedrock's MMR and then poll on the + * Synergy MMR. + */ +#define PIO_FLUSH() \ + { \ + (volatile unsigned long) (*pda.bedrock_rev_id); \ + while (!(volatile unsigned long) (*pda.pio_write_status_addr)) { \ + udelay(5); \ + } \ + __ia64_mf_a(); \ + } +#endif +#else +/* + * For all ARCHITECTURE type, this is a NOOP. + */ + +#define PIO_FLUSH() + +#endif + +#endif /* _ASM_IA64_PIO_FLUSH_H */ diff -Nru a/include/asm-ia64/sn/prio.h b/include/asm-ia64/sn/prio.h --- a/include/asm-ia64/sn/prio.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/prio.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,12 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_PRIO_H -#define _ASM_SN_PRIO_H +#ifndef _ASM_IA64_SN_PRIO_H +#define _ASM_IA64_SN_PRIO_H + +#include /* * Priority I/O function prototypes and macro definitions @@ -33,6 +34,6 @@ /* Error returns */ #define PRIO_SUCCESS 0 -#define PRIO_FAIL -1 +#define PRIO_FAIL (-1) -#endif /* _ASM_SN_PRIO_H */ +#endif /* _ASM_IA64_SN_PRIO_H */ diff -Nru a/include/asm-ia64/sn/router.h b/include/asm-ia64/sn/router.h --- a/include/asm-ia64/sn/router.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/router.h Tue Mar 12 13:58:15 2002 @@ -1,19 +1,665 @@ + /* $Id$ * * This file is subject to the terms and conditions of the GNU General Public * License. 
See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_ROUTER_H +#define _ASM_IA64_SN_ROUTER_H + +/* + * Router Register definitions + * + * Macro argument _L always stands for a link number (1 to 8, inclusive). + */ + +#ifndef __ASSEMBLY__ + +#include +#include +#include +#include + +typedef uint64_t router_reg_t; + +#define MAX_ROUTERS 64 + +#define MAX_ROUTER_PATH 80 + +#define ROUTER_REG_CAST (volatile router_reg_t *) +#define PS_UINT_CAST (__psunsigned_t) +#define UINT64_CAST (uint64_t) +typedef signed char port_no_t; /* Type for router port number */ + +#else + +#define ROUTERREG_CAST +#define PS_UINT_CAST +#define UINT64_CAST + +#endif /* __ASSEMBLY__ */ + +#define MAX_ROUTER_PORTS (8) /* Max. number of ports on a router */ + +#define ALL_PORTS ((1 << MAX_ROUTER_PORTS) - 1) /* for 0 based references */ + +#define PORT_INVALID (-1) /* Invalid port number */ + +#define IS_META(_rp) ((_rp)->flags & PCFG_ROUTER_META) + +#define IS_REPEATER(_rp)((_rp)->flags & PCFG_ROUTER_REPEATER) + +/* + * RR_TURN makes a given number of clockwise turns (0 to 7) from an inport + * port to generate an output port. + * + * RR_DISTANCE returns the number of turns necessary (0 to 7) to go from + * an input port (_L1 = 1 to 8) to an output port ( _L2 = 1 to 8). + * + * These are written to work on unsigned data. */ -#ifndef _ASM_SN_ROUTER_H -#define _ASM_SN_ROUTER_H -#include +#define RR_TURN(_L, count) ((_L) + (count) > MAX_ROUTER_PORTS ? \ + (_L) + (count) - MAX_ROUTER_PORTS : \ + (_L) + (count)) + +#define RR_DISTANCE(_LS, _LD) ((_LD) >= (_LS) ? \ + (_LD) - (_LS) : \ + (_LD) + MAX_ROUTER_PORTS - (_LS)) + +/* Router register addresses */ + +#define RR_STATUS_REV_ID 0x00000 /* Status register and Revision ID */ +#define RR_PORT_RESET 0x00008 /* Multiple port reset */ +#define RR_PROT_CONF 0x00010 /* Inter-partition protection conf. */ +#define RR_GLOBAL_PORT_DEF 0x00018 /* Global Port definitions */ +#define RR_GLOBAL_PARMS0 0x00020 /* Parameters shared by all 8 ports */ +#define RR_GLOBAL_PARMS1 0x00028 /* Parameters shared by all 8 ports */ +#define RR_DIAG_PARMS 0x00030 /* Parameters for diag. 
testing */ +#define RR_DEBUG_ADDR 0x00038 /* Debug address select - debug port*/ +#define RR_LB_TO_L2 0x00040 /* Local Block to L2 cntrl intf reg */ +#define RR_L2_TO_LB 0x00048 /* L2 cntrl intf to Local Block reg */ +#define RR_JBUS_CONTROL 0x00050 /* read/write timing for JBUS intf */ + +#define RR_SCRATCH_REG0 0x00100 /* Scratch 0 is 64 bits */ +#define RR_SCRATCH_REG1 0x00108 /* Scratch 1 is 64 bits */ +#define RR_SCRATCH_REG2 0x00110 /* Scratch 2 is 64 bits */ +#define RR_SCRATCH_REG3 0x00118 /* Scratch 3 is 1 bit */ +#define RR_SCRATCH_REG4 0x00120 /* Scratch 4 is 1 bit */ + +#define RR_JBUS0(_D) (((_D) & 0x7) << 3 | 0x00200) /* JBUS0 addresses */ +#define RR_JBUS1(_D) (((_D) & 0x7) << 3 | 0x00240) /* JBUS1 addresses */ + +#define RR_SCRATCH_REG0_WZ 0x00500 /* Scratch 0 is 64 bits */ +#define RR_SCRATCH_REG1_WZ 0x00508 /* Scratch 1 is 64 bits */ +#define RR_SCRATCH_REG2_WZ 0x00510 /* Scratch 2 is 64 bits */ +#define RR_SCRATCH_REG3_SZ 0x00518 /* Scratch 3 is 1 bit */ +#define RR_SCRATCH_REG4_SZ 0x00520 /* Scratch 4 is 1 bit */ + +#define RR_VECTOR_HW_BAR(context) (0x08000 | (context)<<3) /* barrier config registers */ +/* Port-specific registers (_L is the link number from 1 to 8) */ + +#define RR_PORT_PARMS(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0000) /* LLP parameters */ +#define RR_STATUS_ERROR(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0008) /* Port-related errs */ +#define RR_CHANNEL_TEST(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0010) /* Port LLP chan test */ +#define RR_RESET_MASK(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0018) /* Remote reset mask */ +#define RR_HISTOGRAM0(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0020) /* Port usage histgrm */ +#define RR_HISTOGRAM1(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0028) /* Port usage histgrm */ +#define RR_HISTOGRAM0_WC(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0030) /* Port usage histgrm */ +#define RR_HISTOGRAM1_WC(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0038) /* Port usage histgrm */ +#define RR_ERROR_CLEAR(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0088) /* Read/clear errors */ +#define RR_GLOBAL_TABLE0(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0100) /* starting address of global table for this port */ +#define RR_GLOBAL_TABLE(_L, _x) (RR_GLOBAL_TABLE0(_L) + ((_x) << 3)) +#define RR_LOCAL_TABLE0(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0200) /* starting address of local table for this port */ +#define RR_LOCAL_TABLE(_L, _x) (RR_LOCAL_TABLE0(_L) + ((_x) << 3)) + +#define RR_META_ENTRIES 16 + +#define RR_LOCAL_ENTRIES 128 + +/* + * RR_STATUS_REV_ID mask and shift definitions + */ + +#define RSRI_INPORT_SHFT 52 +#define RSRI_INPORT_MASK (UINT64_CAST 0xf << 52) +#define RSRI_LINKWORKING_BIT(_L) (35 + 2 * (_L)) +#define RSRI_LINKWORKING(_L) (UINT64_CAST 1 << (35 + 2 * (_L))) +#define RSRI_LINKRESETFAIL(_L) (UINT64_CAST 1 << (34 + 2 * (_L))) +#define RSRI_LSTAT_SHFT(_L) (34 + 2 * (_L)) +#define RSRI_LSTAT_MASK(_L) (UINT64_CAST 0x3 << 34 + 2 * (_L)) +#define RSRI_LOCALSBERROR (UINT64_CAST 1 << 35) +#define RSRI_LOCALSTUCK (UINT64_CAST 1 << 34) +#define RSRI_LOCALBADVEC (UINT64_CAST 1 << 33) +#define RSRI_LOCALTAILERR (UINT64_CAST 1 << 32) +#define RSRI_LOCAL_SHFT 32 +#define RSRI_LOCAL_MASK (UINT64_CAST 0xf << 32) +#define RSRI_CHIPREV_SHFT 28 +#define RSRI_CHIPREV_MASK (UINT64_CAST 0xf << 28) +#define RSRI_CHIPID_SHFT 12 +#define RSRI_CHIPID_MASK (UINT64_CAST 0xffff << 12) +#define RSRI_MFGID_SHFT 1 +#define 
RSRI_MFGID_MASK (UINT64_CAST 0x7ff << 1) + +#define RSRI_LSTAT_WENTDOWN 0 +#define RSRI_LSTAT_RESETFAIL 1 +#define RSRI_LSTAT_LINKUP 2 +#define RSRI_LSTAT_NOTUSED 3 + +/* + * RR_PORT_RESET mask definitions + */ + +#define RPRESET_WARM (UINT64_CAST 1 << 9) +#define RPRESET_LINK(_L) (UINT64_CAST 1 << (_L)) +#define RPRESET_LOCAL (UINT64_CAST 1) + +/* + * RR_PROT_CONF mask and shift definitions + */ + +#define RPCONF_DIRCMPDIS_SHFT 13 +#define RPCONF_DIRCMPDIS_MASK (UINT64_CAST 1 << 13) +#define RPCONF_FORCELOCAL (UINT64_CAST 1 << 12) +#define RPCONF_FLOCAL_SHFT 12 +#define RPCONF_METAID_SHFT 8 +#define RPCONF_METAID_MASK (UINT64_CAST 0xf << 8) +#define RPCONF_RESETOK(_L) (UINT64_CAST 1 << ((_L) - 1)) + +/* + * RR_GLOBAL_PORT_DEF mask and shift definitions + */ + +#define RGPD_MGLBLNHBR_ID_SHFT 12 /* -global neighbor ID */ +#define RGPD_MGLBLNHBR_ID_MASK (UINT64_CAST 0xf << 12) +#define RGPD_MGLBLNHBR_VLD_SHFT 11 /* -global neighbor Valid */ +#define RGPD_MGLBLNHBR_VLD_MASK (UINT64_CAST 0x1 << 11) +#define RGPD_MGLBLPORT_SHFT 8 /* -global neighbor Port */ +#define RGPD_MGLBLPORT_MASK (UINT64_CAST 0x7 << 8) +#define RGPD_PGLBLNHBR_ID_SHFT 4 /* +global neighbor ID */ +#define RGPD_PGLBLNHBR_ID_MASK (UINT64_CAST 0xf << 4) +#define RGPD_PGLBLNHBR_VLD_SHFT 3 /* +global neighbor Valid */ +#define RGPD_PGLBLNHBR_VLD_MASK (UINT64_CAST 0x1 << 3) +#define RGPD_PGLBLPORT_SHFT 0 /* +global neighbor Port */ +#define RGPD_PGLBLPORT_MASK (UINT64_CAST 0x7 << 0) + +#define GLBL_PARMS_REGS 2 /* Two Global Parms registers */ + +/* + * RR_GLOBAL_PARMS0 mask and shift definitions + */ + +#define RGPARM0_ARB_VALUE_SHFT 54 /* Local Block Arbitration State */ +#define RGPARM0_ARB_VALUE_MASK (UINT64_CAST 0x7 << 54) +#define RGPARM0_ROTATEARB_SHFT 53 /* Rotate Local Block Arbitration */ +#define RGPARM0_ROTATEARB_MASK (UINT64_CAST 0x1 << 53) +#define RGPARM0_FAIREN_SHFT 52 /* Fairness logic Enable */ +#define RGPARM0_FAIREN_MASK (UINT64_CAST 0x1 << 52) +#define RGPARM0_LOCGNTTO_SHFT 40 /* Local grant timeout */ +#define RGPARM0_LOCGNTTO_MASK (UINT64_CAST 0xfff << 40) +#define RGPARM0_DATELINE_SHFT 38 /* Dateline crossing router */ +#define RGPARM0_DATELINE_MASK (UINT64_CAST 0x1 << 38) +#define RGPARM0_MAXRETRY_SHFT 28 /* Max retry count */ +#define RGPARM0_MAXRETRY_MASK (UINT64_CAST 0x3ff << 28) +#define RGPARM0_URGWRAP_SHFT 20 /* Urgent wrap */ +#define RGPARM0_URGWRAP_MASK (UINT64_CAST 0xff << 20) +#define RGPARM0_DEADLKTO_SHFT 16 /* Deadlock timeout */ +#define RGPARM0_DEADLKTO_MASK (UINT64_CAST 0xf << 16) +#define RGPARM0_URGVAL_SHFT 12 /* Urgent value */ +#define RGPARM0_URGVAL_MASK (UINT64_CAST 0xf << 12) +#define RGPARM0_VCHSELEN_SHFT 11 /* VCH_SEL_EN */ +#define RGPARM0_VCHSELEN_MASK (UINT64_CAST 0x1 << 11) +#define RGPARM0_LOCURGTO_SHFT 9 /* Local urgent timeout */ +#define RGPARM0_LOCURGTO_MASK (UINT64_CAST 0x3 << 9) +#define RGPARM0_TAILVAL_SHFT 5 /* Tail value */ +#define RGPARM0_TAILVAL_MASK (UINT64_CAST 0xf << 5) +#define RGPARM0_CLOCK_SHFT 1 /* Global clock select */ +#define RGPARM0_CLOCK_MASK (UINT64_CAST 0xf << 1) +#define RGPARM0_BYPEN_SHFT 0 +#define RGPARM0_BYPEN_MASK (UINT64_CAST 1) /* Bypass enable */ + +/* + * RR_GLOBAL_PARMS1 shift and mask definitions + */ + +#define RGPARM1_TTOWRAP_SHFT 12 /* Tail timeout wrap */ +#define RGPARM1_TTOWRAP_MASK (UINT64_CAST 0xfffff << 12) +#define RGPARM1_AGERATE_SHFT 8 /* Age rate */ +#define RGPARM1_AGERATE_MASK (UINT64_CAST 0xf << 8) +#define RGPARM1_JSWSTAT_SHFT 0 /* JTAG Sw Register bits */ +#define RGPARM1_JSWSTAT_MASK (UINT64_CAST 0xff << 0) + +/* + * 
RR_DIAG_PARMS mask and shift definitions + */ + +#define RDPARM_ABSHISTOGRAM (UINT64_CAST 1 << 17) /* Absolute histgrm */ +#define RDPARM_DEADLOCKRESET (UINT64_CAST 1 << 16) /* Reset on deadlck */ +#define RDPARM_DISABLE(_L) (UINT64_CAST 1 << ((_L) + 7)) +#define RDPARM_SENDERROR(_L) (UINT64_CAST 1 << ((_L) - 1)) + +/* + * RR_DEBUG_ADDR mask and shift definitions + */ + +#define RDA_DATA_SHFT 10 /* Observed debug data */ +#define RDA_DATA_MASK (UINT64_CAST 0xffff << 10) +#define RDA_ADDR_SHFT 0 /* debug address for data */ +#define RDA_ADDR_MASK (UINT64_CAST 0x3ff << 0) + +/* + * RR_LB_TO_L2 mask and shift definitions + */ + +#define RLBTOL2_DATA_VLD_SHFT 32 /* data is valid for JTAG controller */ +#define RLBTOL2_DATA_VLD_MASK (UINT64_CAST 0x1 << 32) +#define RLBTOL2_DATA_SHFT 0 /* data bits for JTAG controller */ +#define RLBTOL2_DATA_MASK (UINT64_CAST 0xffffffff) + +/* + * RR_L2_TO_LB mask and shift definitions + */ + +#define RL2TOLB_DATA_VLD_SHFT 33 /* data is valid from JTAG controller */ +#define RL2TOLB_DATA_VLD_MASK (UINT64_CAST 0x1 << 33) +#define RL2TOLB_PARITY_SHFT 32 /* sw implemented parity for data */ +#define RL2TOLB_PARITY_MASK (UINT64_CAST 0x1 << 32) +#define RL2TOLB_DATA_SHFT 0 /* data bits from JTAG controller */ +#define RL2TOLB_DATA_MASK (UINT64_CAST 0xffffffff) + +/* + * RR_JBUS_CONTROL mask and shift definitions + */ -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC -#include +#define RJC_POS_BITS_SHFT 20 /* Router position bits */ +#define RJC_POS_BITS_MASK (UINT64_CAST 0xf << 20) +#define RJC_RD_DATA_STROBE_SHFT 16 /* count when read data is strobed in */ +#define RJC_RD_DATA_STROBE_MASK (UINT64_CAST 0xf << 16) +#define RJC_WE_OE_HOLD_SHFT 8 /* time OE or WE is held */ +#define RJC_WE_OE_HOLD_MASK (UINT64_CAST 0xff << 8) +#define RJC_ADDR_SET_HLD_SHFT 0 /* time address driven around OE/WE */ +#define RJC_ADDR_SET_HLD_MASK (UINT64_CAST 0xff) + +/* + * RR_SCRATCH_REGx mask and shift definitions + * note: these fields represent a software convention, and are not + * understood/interpreted by the hardware. 
+ */ + +#define RSCR0_BOOTED_SHFT 63 +#define RSCR0_BOOTED_MASK (UINT64_CAST 0x1 << RSCR0_BOOTED_SHFT) +#define RSCR0_LOCALID_SHFT 56 +#define RSCR0_LOCALID_MASK (UINT64_CAST 0x7f << RSCR0_LOCALID_SHFT) +#define RSCR0_UNUSED_SHFT 48 +#define RSCR0_UNUSED_MASK (UINT64_CAST 0xff << RSCR0_UNUSED_SHFT) +#define RSCR0_NIC_SHFT 0 +#define RSCR0_NIC_MASK (UINT64_CAST 0xffffffffffff) + +#define RSCR1_MODID_SHFT 0 +#define RSCR1_MODID_MASK (UINT64_CAST 0xffff) + +/* + * RR_VECTOR_HW_BAR mask and shift definitions + */ + +#define BAR_TX_SHFT 27 /* Barrier in trans(m)it when read */ +#define BAR_TX_MASK (UINT64_CAST 1 << BAR_TX_SHFT) +#define BAR_VLD_SHFT 26 /* Valid Configuration */ +#define BAR_VLD_MASK (UINT64_CAST 1 << BAR_VLD_SHFT) +#define BAR_SEQ_SHFT 24 /* Sequence number */ +#define BAR_SEQ_MASK (UINT64_CAST 3 << BAR_SEQ_SHFT) +#define BAR_LEAFSTATE_SHFT 18 /* Leaf State */ +#define BAR_LEAFSTATE_MASK (UINT64_CAST 0x3f << BAR_LEAFSTATE_SHFT) +#define BAR_PARENT_SHFT 14 /* Parent Port */ +#define BAR_PARENT_MASK (UINT64_CAST 0xf << BAR_PARENT_SHFT) +#define BAR_CHILDREN_SHFT 6 /* Child Select port bits */ +#define BAR_CHILDREN_MASK (UINT64_CAST 0xff << BAR_CHILDREN_SHFT) +#define BAR_LEAFCOUNT_SHFT 0 /* Leaf Count to trigger parent */ +#define BAR_LEAFCOUNT_MASK (UINT64_CAST 0x3f) + +/* + * RR_PORT_PARMS(_L) mask and shift definitions + */ + +#define RPPARM_MIPRESETEN_SHFT 29 /* Message In Progress reset enable */ +#define RPPARM_MIPRESETEN_MASK (UINT64_CAST 0x1 << 29) +#define RPPARM_UBAREN_SHFT 28 /* Enable user barrier requests */ +#define RPPARM_UBAREN_MASK (UINT64_CAST 0x1 << 28) +#define RPPARM_OUTPDTO_SHFT 24 /* Output Port Deadlock TO value */ +#define RPPARM_OUTPDTO_MASK (UINT64_CAST 0xf << 24) +#define RPPARM_PORTMATE_SHFT 21 /* Port Mate for the port */ +#define RPPARM_PORTMATE_MASK (UINT64_CAST 0x7 << 21) +#define RPPARM_HISTEN_SHFT 20 /* Histogram counter enable */ +#define RPPARM_HISTEN_MASK (UINT64_CAST 0x1 << 20) +#define RPPARM_HISTSEL_SHFT 18 +#define RPPARM_HISTSEL_MASK (UINT64_CAST 0x3 << 18) +#define RPPARM_DAMQHS_SHFT 16 +#define RPPARM_DAMQHS_MASK (UINT64_CAST 0x3 << 16) +#define RPPARM_NULLTO_SHFT 10 +#define RPPARM_NULLTO_MASK (UINT64_CAST 0x3f << 10) +#define RPPARM_MAXBURST_SHFT 0 +#define RPPARM_MAXBURST_MASK (UINT64_CAST 0x3ff) + +/* + * NOTE: Normally the kernel tracks only UTILIZATION statistics. + * The other 2 should not be used, except during any experimentation + * with the router. + */ +#define RPPARM_HISTSEL_AGE 0 /* Histogram age characterization. */ +#define RPPARM_HISTSEL_UTIL 1 /* Histogram link utilization */ +#define RPPARM_HISTSEL_DAMQ 2 /* Histogram DAMQ characterization. 
*/ + +/* + * RR_STATUS_ERROR(_L) and RR_ERROR_CLEAR(_L) mask and shift definitions + */ +#define RSERR_POWERNOK (UINT64_CAST 1 << 38) +#define RSERR_PORT_DEADLOCK (UINT64_CAST 1 << 37) +#define RSERR_WARMRESET (UINT64_CAST 1 << 36) +#define RSERR_LINKRESET (UINT64_CAST 1 << 35) +#define RSERR_RETRYTIMEOUT (UINT64_CAST 1 << 34) +#define RSERR_FIFOOVERFLOW (UINT64_CAST 1 << 33) +#define RSERR_ILLEGALPORT (UINT64_CAST 1 << 32) +#define RSERR_DEADLOCKTO_SHFT 28 +#define RSERR_DEADLOCKTO_MASK (UINT64_CAST 0xf << 28) +#define RSERR_RECVTAILTO_SHFT 24 +#define RSERR_RECVTAILTO_MASK (UINT64_CAST 0xf << 24) +#define RSERR_RETRYCNT_SHFT 16 +#define RSERR_RETRYCNT_MASK (UINT64_CAST 0xff << 16) +#define RSERR_CBERRCNT_SHFT 8 +#define RSERR_CBERRCNT_MASK (UINT64_CAST 0xff << 8) +#define RSERR_SNERRCNT_SHFT 0 +#define RSERR_SNERRCNT_MASK (UINT64_CAST 0xff << 0) + + +#define PORT_STATUS_UP (1 << 0) /* Router link up */ +#define PORT_STATUS_FENCE (1 << 1) /* Router link fenced */ +#define PORT_STATUS_RESETFAIL (1 << 2) /* Router link didnot + * come out of reset */ +#define PORT_STATUS_DISCFAIL (1 << 3) /* Router link failed after + * out of reset but before + * router tables were + * programmed + */ +#define PORT_STATUS_KERNFAIL (1 << 4) /* Router link failed + * after reset and the + * router tables were + * programmed + */ +#define PORT_STATUS_UNDEF (1 << 5) /* Unable to pinpoint + * why the router link + * went down + */ +#define PROBE_RESULT_BAD (-1) /* Set if any of the router + * links failed after reset + */ +#define PROBE_RESULT_GOOD (0) /* Set if all the router links + * which came out of reset + * are up + */ + +/* Should be enough for 256 CPUs */ +#define MAX_RTR_BREADTH 64 /* Max # of routers possible */ + +/* Get the require set of bits in a var. corr to a sequence of bits */ +#define GET_FIELD(var, fname) \ + ((var) >> fname##_SHFT & fname##_MASK >> fname##_SHFT) +/* Set the require set of bits in a var. corr to a sequence of bits */ +#define SET_FIELD(var, fname, fval) \ + ((var) = (var) & ~fname##_MASK | (uint64_t) (fval) << fname##_SHFT) + + +#ifndef __ASSEMBLY__ + +typedef struct router_map_ent_s { + uint64_t nic; + moduleid_t module; + slotid_t slot; +} router_map_ent_t; + +struct rr_status_error_fmt { + uint64_t rserr_unused : 30, + rserr_fifooverflow : 1, + rserr_illegalport : 1, + rserr_deadlockto : 4, + rserr_recvtailto : 4, + rserr_retrycnt : 8, + rserr_cberrcnt : 8, + rserr_snerrcnt : 8; +}; + +/* + * This type is used to store "absolute" counts of router events + */ +typedef int router_count_t; + +/* All utilizations are on a scale from 0 - 1023. 
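/*
 * Illustrative sketch, not part of the patch above: GET_FIELD() and
 * SET_FIELD() paste "_SHFT"/"_MASK" onto the field name, so they work
 * for any register field that follows that naming convention.  The two
 * fields used below are defined earlier in this header; the function
 * names are made up.
 */
static int router_chip_rev(router_reg_t status)
{
	/* (status >> RSRI_CHIPREV_SHFT) & 0xf */
	return (int) GET_FIELD(status, RSRI_CHIPREV);
}

static router_reg_t router_set_metaid(router_reg_t prot_conf, int metaid)
{
	/* clear the old RPCONF_METAID bits, then OR in the new value */
	SET_FIELD(prot_conf, RPCONF_METAID, metaid);
	return prot_conf;
}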
*/ +#define RP_BYPASS_UTIL 0 +#define RP_RCV_UTIL 1 +#define RP_SEND_UTIL 2 +#define RP_TOTAL_PKTS 3 /* Free running clock/packet counter */ + +#define RP_NUM_UTILS 3 + +#define RP_HIST_REGS 2 +#define RP_NUM_BUCKETS 4 +#define RP_HIST_TYPES 3 + +#define RP_AGE0 0 +#define RP_AGE1 1 +#define RP_AGE2 2 +#define RP_AGE3 3 + + +#define RR_UTIL_SCALE 1024 + +/* + * Router port-oriented information + */ +typedef struct router_port_info_s { + router_reg_t rp_histograms[RP_HIST_REGS];/* Port usage info */ + router_reg_t rp_port_error; /* Port error info */ + router_count_t rp_retry_errors; /* Total retry errors */ + router_count_t rp_sn_errors; /* Total sn errors */ + router_count_t rp_cb_errors; /* Total cb errors */ + int rp_overflows; /* Total count overflows */ + int rp_excess_err; /* Port has excessive errors */ + ushort rp_util[RP_NUM_BUCKETS];/* Port utilization */ +} router_port_info_t; + +#define ROUTER_INFO_VERSION 7 + +struct lboard_s; + +/* + * Router information + */ +typedef struct router_info_s { + char ri_version; /* structure version */ + cnodeid_t ri_cnode; /* cnode of its legal guardian hub */ + nasid_t ri_nasid; /* Nasid of same */ + char ri_ledcache; /* Last LED bitmap */ + char ri_leds; /* Current LED bitmap */ + char ri_portmask; /* Active port bitmap */ + router_reg_t ri_stat_rev_id; /* Status rev ID value */ + net_vec_t ri_vector; /* vector from guardian to router */ + int ri_writeid; /* router's vector write ID */ + int64_t ri_timebase; /* Time of first sample */ + int64_t ri_timestamp; /* Time of last sample */ + router_port_info_t ri_port[MAX_ROUTER_PORTS]; /* per port info */ + moduleid_t ri_module; /* Which module are we in? */ + slotid_t ri_slotnum; /* Which slot are we in? */ + router_reg_t ri_glbl_parms[GLBL_PARMS_REGS]; + /* Global parms0&1 register contents*/ + devfs_handle_t ri_vertex; /* hardware graph vertex */ + router_reg_t ri_prot_conf; /* protection config. register */ + int64_t ri_per_minute; /* Ticks per minute */ + + /* + * Everything below here is for kernel use only and may change at + * at any time with or without a change in teh revision number + * + * Any pointers or things that come and go with DEBUG must go at + * the bottom of the structure, below the user stuff. + */ + char ri_hist_type; /* histogram type */ + devfs_handle_t ri_guardian; /* guardian node for the router */ + int64_t ri_last_print; /* When did we last print */ + char ri_print; /* Should we print */ + char ri_just_blink; /* Should we blink the LEDs */ + +#ifdef DEBUG + int64_t ri_deltatime; /* Time it took to sample */ #endif + spinlock_t ri_lock; /* Lock for access to router info */ + net_vec_t *ri_vecarray; /* Pointer to array of vectors */ + struct lboard_s *ri_brd; /* Pointer to board structure */ + char * ri_name; /* This board's hwg path */ + unsigned char ri_port_maint[MAX_ROUTER_PORTS]; /* should we send a + message to availmon */ +} router_info_t; + + +/* Router info location specifiers */ + +#define RIP_PROMLOG 2 /* Router info in promlog */ +#define RIP_CONSOLE 4 /* Router info on console */ + +#define ROUTER_INFO_PRINT(_rip,_where) (_rip->ri_print |= _where) + /* Set the field used to check if a + * router info can be printed + */ +#define IS_ROUTER_INFO_PRINTED(_rip,_where) \ + (_rip->ri_print & _where) + /* Was the router info printed to + * the given location (_where) ? + * Mainly used to prevent duplicate + * router error states. 
+ */ +#define ROUTER_INFO_LOCK(_rip,_s) _s = mutex_spinlock(&(_rip->ri_lock)) + /* Take the lock on router info + * to gain exclusive access + */ +#define ROUTER_INFO_UNLOCK(_rip,_s) mutex_spinunlock(&(_rip->ri_lock),_s) + /* Release the lock on router info */ +/* + * Router info hanging in the nodepda + */ +typedef struct nodepda_router_info_s { + devfs_handle_t router_vhdl; /* vertex handle of the router */ + short router_port; /* port thru which we entered */ + short router_portmask; + moduleid_t router_module; /* module in which router is there */ + slotid_t router_slot; /* router slot */ + unsigned char router_type; /* kind of router */ + net_vec_t router_vector; /* vector from the guardian node */ + + router_info_t *router_infop; /* info hanging off the hwg vertex */ + struct nodepda_router_info_s *router_next; + /* pointer to next element */ +} nodepda_router_info_t; + +#define ROUTER_NAME_SIZE 20 /* Max size of a router name */ + +#define NORMAL_ROUTER_NAME "normal_router" +#define NULL_ROUTER_NAME "null_router" +#define META_ROUTER_NAME "meta_router" +#define REPEATER_ROUTER_NAME "repeater_router" +#define UNKNOWN_ROUTER_NAME "unknown_router" + +/* The following definitions are needed by the router traversing + * code either using the hardware graph or using vector operations. + */ +/* Structure of the router queue element */ +typedef struct router_elt_s { + union { + /* queue element structure during router probing */ + struct { + /* number-in-a-can (unique) for the router */ + nic_t nic; + /* vector route from the master hub to + * this router. + */ + net_vec_t vec; + /* port status */ + uint64_t status; + char port_status[MAX_ROUTER_PORTS + 1]; + } r_elt; + /* queue element structure during router guardian + * assignment + */ + struct { + /* vertex handle for the router */ + devfs_handle_t vhdl; + /* guardian for this router */ + devfs_handle_t guard; + /* vector router from the guardian to the router */ + net_vec_t vec; + } k_elt; + } u; + /* easy to use port status interpretation */ +} router_elt_t; + +/* structure of the router queue */ + +typedef struct router_queue_s { + char head; /* Point where a queue element is inserted */ + char tail; /* Point where a queue element is removed */ + int type; + router_elt_t array[MAX_RTR_BREADTH]; + /* Entries for queue elements */ +} router_queue_t; + + +#endif /* __ASSEMBLY__ */ + +/* + * RR_HISTOGRAM(_L) mask and shift definitions + * There are two 64 bit histogram registers, so the following macros take + * into account dealing with an array of 4 32 bit values indexed by _x + */ + +#define RHIST_BUCKET_SHFT(_x) (32 * ((_x) & 0x1)) +#define RHIST_BUCKET_MASK(_x) (UINT64_CAST 0xffffffff << RHIST_BUCKET_SHFT((_x) & 0x1)) +#define RHIST_GET_BUCKET(_x, _reg) \ + ((RHIST_BUCKET_MASK(_x) & ((_reg)[(_x) >> 1])) >> RHIST_BUCKET_SHFT(_x)) + +/* + * RR_RESET_MASK(_L) mask and shift definitions + */ + +#define RRM_RESETOK(_L) (UINT64_CAST 1 << ((_L) - 1)) +#define RRM_RESETOK_ALL ALL_PORTS + +/* + * RR_META_TABLE(_x) and RR_LOCAL_TABLE(_x) mask and shift definitions + */ + +#define RTABLE_SHFT(_L) (4 * ((_L) - 1)) +#define RTABLE_MASK(_L) (UINT64_CAST 0x7 << RTABLE_SHFT(_L)) + + +#define ROUTERINFO_STKSZ 4096 + +#ifndef __ASSEMBLY__ + +int router_reg_read(router_info_t *rip, int regno, router_reg_t *val); +int router_reg_write(router_info_t *rip, int regno, router_reg_t val); +int router_get_info(devfs_handle_t routerv, router_info_t *, int); +int router_init(cnodeid_t cnode,int writeid, nodepda_router_info_t *npda_rip); +int 
router_set_leds(router_info_t *rip); +void router_print_state(router_info_t *rip, int level, + void (*pf)(int, char *, ...),int print_where); +void capture_router_stats(router_info_t *rip); + + +int probe_routers(void); +void get_routername(unsigned char brd_type,char *rtrname); +void router_guardians_set(devfs_handle_t hwgraph_root); +int router_hist_reselect(router_info_t *, int64_t); +#endif /* __ASSEMBLY__ */ -#endif /* _ASM_SN_ROUTER_H */ +#endif /* _ASM_IA64_SN_ROUTER_H */ diff -Nru a/include/asm-ia64/sn/sgi.h b/include/asm-ia64/sn/sgi.h --- a/include/asm-ia64/sn/sgi.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/sgi.h Tue Mar 12 13:58:14 2002 @@ -4,13 +4,12 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SGI_H -#define _ASM_SN_SGI_H +#ifndef _ASM_IA64_SN_SGI_H +#define _ASM_IA64_SN_SGI_H #include @@ -95,9 +94,6 @@ bigger. This is NULL-terminated */ }; -#define MIN(_a,_b) ((_a)<(_b)?(_a):(_b)) - -typedef uint32_t app32_ptr_t; /* needed by edt.h */ typedef int64_t __psint_t; /* needed by klgraph.c */ typedef enum { B_FALSE, B_TRUE } boolean_t; @@ -105,8 +101,6 @@ #define ctob(x) ((uint64_t)(x)*NBPC) #define btoc(x) (((uint64_t)(x)+(NBPC-1))/NBPC) -typedef __psunsigned_t nic_data_t; - /* ** Possible return values from graph routines. @@ -129,10 +123,6 @@ * calls */ #define XG_WIDGET_PART_NUM 0xC102 /* KONA/xt_regs.h XG_XT_PART_NUM_VALUE */ -#ifndef TO_PHYS_MASK -#define TO_PHYS_MASK 0x0000000fffffffff -#endif - typedef uint64_t vhandl_t; @@ -159,7 +149,7 @@ typedef uint64_t mrlock_t; /* needed by devsupport.c */ #define HUB_PIO_CONVEYOR 0x1 -#define CNODEID_NONE (cnodeid_t)-1 +#define CNODEID_NONE ((cnodeid_t)-1) #define XTALK_PCI_PART_NUM "030-1275-" #define kdebug 0 @@ -177,7 +167,7 @@ #define kern_free(x) kfree(x) typedef cpuid_t cpu_cookie_t; -#define CPU_NONE -1 +#define CPU_NONE (-1) /* * mutext support mapping @@ -225,9 +215,6 @@ } } while(0) #endif /* DISABLE_ASSERT */ -#define PRINT_WARNING(x...) do { printk("WARNING : "); printk(x); } while(0) -#define PRINT_NOTICE(x...) do { printk("NOTICE : "); printk(x); } while(0) -#define PRINT_ALERT(x...) do { printk("ALERT : "); printk(x); } while(0) #define PRINT_PANIC panic #ifdef CONFIG_SMP @@ -238,4 +225,4 @@ #include /* for now */ -#endif /* _ASM_SN_SGI_H */ +#endif /* _ASM_IA64_SN_SGI_H */ diff -Nru a/include/asm-ia64/sn/simulator.h b/include/asm-ia64/sn/simulator.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/simulator.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,27 @@ +#ifndef _ASM_IA64_SN_SIMULATOR_H +#define _ASM_IA64_SN_SIMULATOR_H + +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved. 
+ */ + +#include + +#ifdef CONFIG_IA64_SGI_SN_SIM + +#define SNMAGIC 0xaeeeeeee8badbeefL +#define IS_RUNNING_ON_SIMULATOR() ({long sn; asm("mov %0=cpuid[%1]" : "=r"(sn) : "r"(2)); sn == SNMAGIC;}) + +#define SIMULATOR_SLEEP() asm("nop.i 0x8beef") + +#else + +#define IS_RUNNING_ON_SIMULATOR() (0) +#define SIMULATOR_SLEEP() + +#endif + +#endif /* _ASM_IA64_SN_SIMULATOR_H */ diff -Nru a/include/asm-ia64/sn/slotnum.h b/include/asm-ia64/sn/slotnum.h --- a/include/asm-ia64/sn/slotnum.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/sn/slotnum.h Tue Mar 12 13:58:16 2002 @@ -4,22 +4,23 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SLOTNUM_H -#define _ASM_SN_SLOTNUM_H +#ifndef _ASM_IA64_SN_SLOTNUM_H +#define _ASM_IA64_SN_SLOTNUM_H #include typedef unsigned char slotid_t; -#if defined (CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) +#if defined (CONFIG_IA64_SGI_SN1) #include +#elif defined (CONFIG_IA64_SGI_SN2) +#include #else #error <> -#endif /* !CONFIG_SGI_IP35 && !CONFIG_IA64_SGI_SN1 */ +#endif /* !CONFIG_IA64_SGI_SN1 */ -#endif /* _ASM_SN_SLOTNUM_H */ +#endif /* _ASM_IA64_SN_SLOTNUM_H */ diff -Nru a/include/asm-ia64/sn/sn1/addrs.h b/include/asm-ia64/sn/sn1/addrs.h --- a/include/asm-ia64/sn/sn1/addrs.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/addrs.h Tue Mar 12 13:58:15 2002 @@ -4,19 +4,21 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_ADDRS_H -#define _ASM_SN_SN1_ADDRS_H +#ifndef _ASM_IA64_SN_SN1_ADDRS_H +#define _ASM_IA64_SN_SN1_ADDRS_H +#include + +#ifdef CONFIG_IA64_SGI_SN1 /* - * IP35 (on a TRex) Address map + * SN1 (on a TRex) Address map * * This file contains a set of definitions and macros which are used * to reference into the major address spaces (CAC, HSPEC, IO, MSPEC, - * and UNCAC) used by the IP35 architecture. It also contains addresses + * and UNCAC) used by the SN1 architecture. It also contains addresses * for "major" statically locatable PROM/Kernel data structures, such as * the partition table, the configuration data structure, etc. * We make an implicit assumption that the processor using this file @@ -32,7 +34,6 @@ * appropriately. */ -#include /* * Some of the macros here need to be casted to appropriate types when used @@ -40,22 +41,14 @@ * use some new ANSI preprocessor stuff to paste these on where needed. 
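/*
 * Illustrative sketch, not part of the patch above: the usual shape of
 * an IS_RUNNING_ON_SIMULATOR() check from simulator.h -- skip or
 * shorten a hardware delay when the kernel is running under the
 * simulator.  The 100us figure and the function name are placeholders.
 */
static void hw_settle_delay(void)
{
	if (IS_RUNNING_ON_SIMULATOR())
		SIMULATOR_SLEEP();	/* special nop the simulator recognizes */
	else
		udelay(100);		/* real hardware: spin as usual */
}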
*/ -#if defined(_RUN_UNCACHED) -#define CAC_BASE 0x9600000000000000 -#else -#ifndef __ia64 -#define CAC_BASE 0xa800000000000000 -#else #define CAC_BASE 0xe000000000000000 -#endif -#endif - #define HSPEC_BASE 0xc0000b0000000000 #define HSPEC_SWIZ_BASE 0xc000030000000000 #define IO_BASE 0xc0000a0000000000 #define IO_SWIZ_BASE 0xc000020000000000 -#define MSPEC_BASE 0xc000000000000000 +#define MSPEC_BASE 0xc000090000000000 #define UNCAC_BASE 0xc000000000000000 +#define TO_PHYS_MASK 0x000000ffffffffff #define TO_PHYS(x) ( ((x) & TO_PHYS_MASK)) #define TO_CAC(x) (CAC_BASE | ((x) & TO_PHYS_MASK)) @@ -109,18 +102,14 @@ #define NASID_GET(_pa) (int) ((UINT64_CAST (_pa) >> \ NASID_SHFT) & NASID_BITMASK) -#if _LANGUAGE_C && !defined(_STANDALONE) -#ifndef REAL_HARDWARE -#define NODE_SWIN_BASE(nasid, widget) RAW_NODE_SWIN_BASE(nasid, widget) -#else +#ifndef __ASSEMBLY__ #define NODE_SWIN_BASE(nasid, widget) \ ((widget == 0) ? NODE_BWIN_BASE((nasid), SWIN0_BIGWIN) \ : RAW_NODE_SWIN_BASE(nasid, widget)) -#endif #else #define NODE_SWIN_BASE(nasid, widget) \ (NODE_IO_BASE(nasid) + (UINT64_CAST (widget) << SWIN_SIZE_BITS)) -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /* * The following definitions pertain to the IO special address @@ -155,7 +144,7 @@ /* * The following define the major position-independent aliases used - * in IP27. + * in SN1. * CALIAS -- Varies in size, points to the first n bytes of memory * on the reader's node. */ @@ -169,11 +158,6 @@ #define SN0_WIDGET_BASE(_nasid, _wid) (NODE_SWIN_BASE((_nasid), (_wid))) -#if _LANGUAGE_C -#define KERN_NMI_ADDR(nasid, slice) \ - TO_NODE_UNCAC((nasid), IP27_NMI_KREGS_OFFSET + \ - (IP27_NMI_KREGS_CPU_SIZE * (slice))) -#endif /* _LANGUAGE_C */ /* @@ -197,7 +181,7 @@ #define KL_UART_CMD LOCAL_HSPEC(HSPEC_UART_0) /* UART command reg */ #define KL_UART_DATA LOCAL_HSPEC(HSPEC_UART_1) /* UART data reg */ -#if !_LANGUAGE_ASSEMBLY +#if !__ASSEMBLY__ /* Address 0x400 to 0x1000 ualias points to cache error eframe + misc * CACHE_ERR_SP_PTR could either contain an address to the stack, or * the stack could start at CACHE_ERR_SP_PTR @@ -210,28 +194,9 @@ #define CACHE_ERR_SP (CACHE_ERR_SP_PTR - 16) #define CACHE_ERR_AREA_SIZE (ARCS_SPB_OFFSET - CACHE_ERR_EFRAME) -#endif /* !_LANGUAGE_ASSEMBLY */ +#endif /* !__ASSEMBLY__ */ + -/* Each CPU accesses UALIAS at a different physaddr, on 32k boundaries - * This determines the locations of the exception vectors - */ -#define UALIAS_FLIP_BASE UALIAS_BASE -#define UALIAS_FLIP_SHIFT 15 -#define UALIAS_FLIP_ADDR(_x) ((_x) ^ (cputoslice(getcpuid())<Key field is used for this purpose. - * Macros needed by IP27 device drivers to convert the + * Macros needed by SN1 device drivers to convert the * COMPONENT->Key field to the respective base address. * Key field looks as follows: * @@ -256,7 +221,7 @@ * is in place. */ -#if _LANGUAGE_C +#ifndef __ASSEMBLY__ #define uchar unsigned char @@ -301,8 +266,9 @@ #define PUT_INSTALL_STATUS(c,s) c->Revision = s #define GET_INSTALL_STATUS(c) c->Revision -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #endif /* _STANDALONE */ +#endif /* CONFIG_IA64_SGI_SN1 */ -#endif /* _ASM_SN_SN1_ADDRS_H */ +#endif /* _ASM_IA64_SN_SN1_ADDRS_H */ diff -Nru a/include/asm-ia64/sn/sn1/arch.h b/include/asm-ia64/sn/sn1/arch.h --- a/include/asm-ia64/sn/sn1/arch.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/arch.h Tue Mar 12 13:58:15 2002 @@ -4,29 +4,29 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. 
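/*
 * Illustrative sketch, not part of the patch above: how the addrs.h
 * helpers relate to each other.  TO_PHYS() keeps only the physical-
 * address bits, TO_CAC() turns that into a cached kernel address, and
 * NASID_GET() recovers the owning node number.  The variable and
 * function names are made up.
 */
static void classify_address(uint64_t pa)
{
	uint64_t phys   = TO_PHYS(pa);		/* pa & TO_PHYS_MASK */
	uint64_t cached = TO_CAC(phys);		/* CAC_BASE | phys */
	int	 nasid  = NASID_GET(phys);	/* node owning this page */

	(void) cached;				/* placeholders, unused here */
	(void) nasid;
}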
* - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_ARCH_H -#define _ASM_SN_SN1_ARCH_H +#ifndef _ASM_IA64_SN_SN1_ARCH_H +#define _ASM_IA64_SN_SN1_ARCH_H #if defined(N_MODE) #error "ERROR constants defined only for M-mode" #endif +#include +#include + +#define CPUS_PER_NODE 4 /* CPUs on a single hub */ +#define CPUS_PER_SUBNODE 2 /* CPUs on a single hub PI */ + /* * This is the maximum number of NASIDS that can be present in a system. + * This include ALL nodes in ALL partitions connected via NUMALINK. * (Highest NASID plus one.) */ #define MAX_NASIDS 128 /* - * MAXCPUS refers to the maximum number of CPUs in a single kernel. - * This is not necessarily the same as MAXNODES * CPUS_PER_NODE - */ -#define MAXCPUS 512 - -/* * This is the maximum number of nodes that can be part of a kernel. * Effectively, it's the maximum number of compact node ids (cnodeid_t). * This is not necessarily the same as MAX_NASIDS. @@ -40,6 +40,19 @@ #define MAX_NONPREMIUM_REGIONS 16 #define MAX_PREMIUM_REGIONS MAX_REGIONS +/* + * Slot constants for IP35 + */ + +#define MAX_MEM_SLOTS 8 /* max slots per node */ + +#if defined(N_MODE) +#error "N-mode not supported" +#endif + +#define SLOT_SHIFT (30) +#define SLOT_MIN_MEM_SIZE (64*1024*1024) + /* * MAX_PARITIONS refers to the maximum number of logically defined @@ -51,17 +64,14 @@ #define NASID_MASK_BYTES ((MAX_NASIDS + 7) / 8) /* - * Slot constants for IP35 + * New stuff in here from Irix sys/pfdat.h. */ +#define SLOT_PFNSHIFT (SLOT_SHIFT - PAGE_SHIFT) +#define PFN_NASIDSHFT (NASID_SHFT - PAGE_SHIFT) +#define slot_getbasepfn(node,slot) (mkpfn(COMPACT_TO_NASID_NODEID(node), slot< /* The secret password; used to release protection */ #define HUB_PASSWORD 0x53474972756c6573ull @@ -24,7 +22,6 @@ #define MAX_HUB_PATH 80 -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) #include #include #include @@ -40,19 +37,13 @@ #include #include -#else /* ! CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 */ - -<< BOMB! CONFIG_SGI_IP35 is only defined for IP35 >> - -#endif /* defined(CONFIG_SGI_IP35) */ - /* Translation of uncached attributes */ #define UATTR_HSPEC 0 #define UATTR_IO 1 #define UATTR_MSPEC 2 #define UATTR_UNCAC 3 -#if _LANGUAGE_ASSEMBLY +#if __ASSEMBLY__ /* * Get nasid into register, r (uses at) @@ -63,9 +54,9 @@ and r, LRI_NODEID_MASK; \ dsrl r, LRI_NODEID_SHFT -#endif /* _LANGUAGE_ASSEMBLY */ +#endif /* __ASSEMBLY__ */ -#if _LANGUAGE_C +#ifndef __ASSEMBLY__ #include @@ -78,6 +69,6 @@ void capture_hub_stats(cnodeid_t, struct nodepda_s *); void init_hub_stats(cnodeid_t, struct nodepda_s *); -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ -#endif /* _ASM_SN_SN1_BEDROCK_H */ +#endif /* _ASM_IA64_SN_SN1_BEDROCK_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubdev.h b/include/asm-ia64/sn/sn1/hubdev.h --- a/include/asm-ia64/sn/sn1/hubdev.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubdev.h Tue Mar 12 13:58:15 2002 @@ -4,12 +4,11 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
*/ -#ifndef _ASM_SN_SN1_HUBDEV_H -#define _ASM_SN_SN1_HUBDEV_H +#ifndef _ASM_IA64_SN_SN1_HUBDEV_H +#define _ASM_IA64_SN_SN1_HUBDEV_H extern void hubdev_init(void); extern void hubdev_register(int (*attach_method)(devfs_handle_t)); @@ -19,4 +18,4 @@ extern caddr_t hubdev_prombase_get(devfs_handle_t hub); extern cnodeid_t hubdev_cnodeid_get(devfs_handle_t hub); -#endif /* _ASM_SN_SN1_HUBDEV_H */ +#endif /* _ASM_IA64_SN_SN1_HUBDEV_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubio.h b/include/asm-ia64/sn/sn1/hubio.h --- a/include/asm-ia64/sn/sn1/hubio.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/sn/sn1/hubio.h Tue Mar 12 13:58:16 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ /************************************************************************ @@ -20,8 +19,8 @@ ************************************************************************/ -#ifndef _ASM_SN_SN1_HUBIO_H -#define _ASM_SN_SN1_HUBIO_H +#ifndef _ASM_IA64_SN_SN1_HUBIO_H +#define _ASM_IA64_SN_SN1_HUBIO_H #define IIO_WID 0x00400000 /* @@ -762,7 +761,7 @@ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /************************************************************************ * * @@ -2942,15 +2941,15 @@ typedef union ii_ilct_u { bdrkreg_t ii_ilct_regval; struct { - bdrkreg_t i_rsvd : 9; - bdrkreg_t i_test_err_capture : 1; - bdrkreg_t i_test_clear : 1; - bdrkreg_t i_test_flit : 3; - bdrkreg_t i_test_cberr : 1; - bdrkreg_t i_test_valid : 1; - bdrkreg_t i_test_data : 20; - bdrkreg_t i_test_mask : 8; - bdrkreg_t i_test_seed : 20; + bdrkreg_t i_test_seed : 20; + bdrkreg_t i_test_mask : 8; + bdrkreg_t i_test_data : 20; + bdrkreg_t i_test_valid : 1; + bdrkreg_t i_test_cberr : 1; + bdrkreg_t i_test_flit : 3; + bdrkreg_t i_test_clear : 1; + bdrkreg_t i_test_err_capture : 1; + bdrkreg_t i_rsvd : 9; } ii_ilct_fld_s; } ii_ilct_u_t; @@ -4935,7 +4934,7 @@ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /************************************************************************ * * @@ -5014,4 +5013,4 @@ -#endif /* _ASM_SN_SN1_HUBIO_H */ +#endif /* _ASM_IA64_SN_SN1_HUBIO_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubio_next.h b/include/asm-ia64/sn/sn1/hubio_next.h --- a/include/asm-ia64/sn/sn1/hubio_next.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/sn1/hubio_next.h Tue Mar 12 13:58:14 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBIO_NEXT_H -#define _ASM_SN_SN1_HUBIO_NEXT_H +#ifndef _ASM_IA64_SN_SN1_HUBIO_NEXT_H +#define _ASM_IA64_SN_SN1_HUBIO_NEXT_H /* * Slightly friendlier names for some common registers. @@ -64,7 +63,7 @@ #define IIO_BTE_NOTIFY_0 IIO_IBNA_0 /* Also BTE notification 0 */ #define IIO_BTE_INT_0 IIO_IBIA_0 /* Also BTE interrupt 0 */ #define IIO_BTE_OFF_0 0 /* Base offset from BTE 0 regs. 
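/*
 * Illustrative note, not part of the patch above: the reordering of
 * ii_ilct_fld_s follows the little-endian bitfield rule -- the first
 * member declared lands in the least significant bits -- so i_test_seed
 * now sits in the low 20 bits of the word.  A minimal sketch of the
 * same pattern, with made-up field names:
 */
typedef union example_hwreg {
	uint64_t regval;
	struct {
		uint64_t low_bits  : 20;	/* bits  0..19 on little-endian */
		uint64_t mid_bits  : 24;	/* bits 20..43 */
		uint64_t high_bits : 20;	/* bits 44..63 */
	} fld;
} example_hwreg_t;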
*/ -#define IIO_BTE_OFF_1 IIO_IBLS_1 - IIO_IBLS_0 /* Offset from base to BTE 1 */ +#define IIO_BTE_OFF_1 (IIO_IBLS_1 - IIO_IBLS_0) /* Offset from base to BTE 1 */ /* BTE register offsets from base */ #define BTEOFF_STAT 0 @@ -78,11 +77,16 @@ /* names used in hub_diags.c; carried over from SN0 */ #define IIO_BASE_BTE0 IIO_IBLS_0 #define IIO_BASE_BTE1 IIO_IBLS_1 -#if 0 -#define IIO_BASE IIO_WID -#define IIO_BASE_PERF IIO_IPCR /* IO Performance Control */ -#define IIO_PERF_CNT IIO_IPPR /* IO Performance Profiling */ -#endif + +/* + * Macro which takes the widget number, and returns the + * IO PRB address of that widget. + * value _x is expected to be a widget number in the range + * 0, 8 - 0xF + */ +#define IIO_IOPRB(_x) (IIO_IOPRB_0 + ( ( (_x) < HUB_WIDGET_ID_MIN ? \ + (_x) : \ + (_x) - (HUB_WIDGET_ID_MIN-1)) << 3) ) /* GFX Flow Control Node/Widget Register */ @@ -139,7 +143,7 @@ * redefined big window 7 as small window 0. XXX does this still apply for SN1?? */ -#define HUB_NUM_BIG_WINDOW IIO_NUM_ITTES - 1 +#define HUB_NUM_BIG_WINDOW (IIO_NUM_ITTES - 1) /* * Use the top big window as a surrogate for the first small window @@ -343,7 +347,7 @@ * CRBs. */ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /* * Easy access macros for CRBs, all 4 registers (A-D) @@ -389,7 +393,7 @@ #define icrbd_context ii_icrb0_d_fld_s.id_context #define d_regvalue ii_icrb0_d_regval -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /* Number of widgets supported by hub */ #define HUB_NUM_WIDGET 9 @@ -399,7 +403,7 @@ #define HUB_WIDGET_PART_NUM 0xc110 #define MAX_HUBS_PER_XBOW 2 -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /* A few more #defines for backwards compatibility */ #define iprb_t ii_iprb0_u_t #define iprb_regval ii_iprb0_regval @@ -430,11 +434,11 @@ #define IO_PERF_SETS 32 #if __KERNEL__ -#if _LANGUAGE_C +#ifndef __ASSEMBLY__ /* XXX moved over from SN/SN0/hubio.h -- each should be checked for SN1 */ #include #include -#include +#include #include /* Bit for the widget in inbound access register */ @@ -699,12 +703,9 @@ extern int hub_intr_connect( hub_intr_t intr_hdl, /* xtalk intr resource hndl */ - intr_func_t intr_func, /* xtalk intr handler */ - void *intr_arg, /* arg to intr handler */ xtalk_intr_setfunc_t setfunc, /* func to set intr hw */ - void *setfunc_arg, /* arg to setfunc */ - void *thread); /* intr thread to use */ + void *setfunc_arg); /* arg to setfunc */ extern void hub_intr_disconnect(hub_intr_t intr_hdl); @@ -756,6 +757,6 @@ extern void hub_widgetdev_shutdown(devfs_handle_t, int); extern int hub_dma_enabled(devfs_handle_t); -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #endif /* _KERNEL */ -#endif /* _ASM_SN_SN1_HUBIO_NEXT_H */ +#endif /* _ASM_IA64_SN_SN1_HUBIO_NEXT_H */ diff -Nru a/include/asm-ia64/sn/sn1/hublb.h b/include/asm-ia64/sn/sn1/hublb.h --- a/include/asm-ia64/sn/sn1/hublb.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hublb.h Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
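/*
 * Illustrative sketch, not part of the patch above: what the new
 * IIO_IOPRB() macro works out to, assuming HUB_WIDGET_ID_MIN is 8 (the
 * "0, 8 - 0xF" comment suggests widget 0 plus widgets 8..f).  Under
 * that assumption the PRB registers sit 8 bytes apart:
 *
 *	IIO_IOPRB(0x0) == IIO_IOPRB_0
 *	IIO_IOPRB(0x8) == IIO_IOPRB_0 + 0x08
 *	IIO_IOPRB(0xf) == IIO_IOPRB_0 + 0x40
 */
static uint64_t widget_prb_offset(int widget)
{
	return IIO_IOPRB(widget);	/* PRB register for this widget */
}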
*/ /************************************************************************ @@ -20,8 +19,8 @@ ************************************************************************/ -#ifndef _ASM_SN_SN1_HUBLB_H -#define _ASM_SN_SN1_HUBLB_H +#ifndef _ASM_IA64_SN_SN1_HUBLB_H +#define _ASM_IA64_SN_SN1_HUBLB_H #define LB_REV_ID 0x00600000 /* @@ -251,7 +250,7 @@ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /************************************************************************ * * @@ -1593,7 +1592,7 @@ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /************************************************************************ * * @@ -1605,4 +1604,4 @@ -#endif /* _ASM_SN_SN1_HUBLB_H */ +#endif /* _ASM_IA64_SN_SN1_HUBLB_H */ diff -Nru a/include/asm-ia64/sn/sn1/hublb_next.h b/include/asm-ia64/sn/sn1/hublb_next.h --- a/include/asm-ia64/sn/sn1/hublb_next.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hublb_next.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBLB_NEXT_H -#define _ASM_SN_SN1_HUBLB_NEXT_H +#ifndef _ASM_IA64_SN_SN1_HUBLB_NEXT_H +#define _ASM_IA64_SN_SN1_HUBLB_NEXT_H /********************************************************************** @@ -107,4 +106,4 @@ #define PIOTYPE_PROT_ERR 6 /* VECTOR_STATUS only */ #define PIOTYPE_UNKNOWN 7 /* VECTOR_STATUS only */ -#endif /* _ASM_SN_SN1_HUBLB_NEXT_H */ +#endif /* _ASM_IA64_SN_SN1_HUBLB_NEXT_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubmd.h b/include/asm-ia64/sn/sn1/hubmd.h --- a/include/asm-ia64/sn/sn1/hubmd.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubmd.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBMD_H -#define _ASM_SN_SN1_HUBMD_H +#ifndef _ASM_IA64_SN_SN1_HUBMD_H +#define _ASM_IA64_SN_SN1_HUBMD_H /************************************************************************ @@ -315,7 +314,7 @@ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /************************************************************************ * * @@ -2140,7 +2139,7 @@ * corresponds to the valid bit, and bit 1 of each two-bit field * * corresponds to the overrun bit. * * The rule for the valid bit is that it gets set whenever that error * - * occurs, regardless of whether a higher priority error has occurred. * + * occurs, regardless of whether a higher priority error has occurred. * * The rule for the overrun bit is that it gets set whenever we are * * unable to record the address information for this particular * * error, due to a previous error of the same or higher priority. * @@ -2463,7 +2462,7 @@ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /************************************************************************ * * @@ -2474,4 +2473,4 @@ -#endif /* _ASM_SN_SN1_HUBMD_H */ +#endif /* _ASM_IA64_SN_SN1_HUBMD_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubmd_next.h b/include/asm-ia64/sn/sn1/hubmd_next.h --- a/include/asm-ia64/sn/sn1/hubmd_next.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubmd_next.h Tue Mar 12 13:58:15 2002 @@ -4,13 +4,11 @@ * License. 
See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBMD_NEXT_H -#define _ASM_SN_SN1_HUBMD_NEXT_H +#ifndef _ASM_IA64_SN_SN1_HUBMD_NEXT_H +#define _ASM_IA64_SN_SN1_HUBMD_NEXT_H -#ifdef BRINGUP /* XXX moved over from SN/SN0/hubmd.h -- each should be checked for SN1 */ /* In fact, most of this stuff is wrong. Some is correct, such as * MD_PAGE_SIZE and MD_PAGE_NUM_SHFT. @@ -147,7 +145,7 @@ #define MD_SPROT_REFCNT_GET(value) ( \ ((value) & MD_SPROT_REFCNT_MASK) >> MD_SPROT_REFCNT_SHFT) -#if _LANGUAGE_C +#ifndef __ASSEMBLY__ #ifdef LITTLE_ENDIAN typedef union md_perf_sel { @@ -171,9 +169,8 @@ } md_perf_sel_t; #endif -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ -#endif /* BRINGUP */ /* Like SN0, SN1 supports a mostly-flat address space with 8 CPU-visible, evenly spaced, contiguous regions, or "software @@ -300,7 +297,7 @@ ***********************************************************************/ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /* Standard Directory Entries */ @@ -533,7 +530,7 @@ struct md_pdir_sparse_fmt pds_fmt; } md_pdir_t; -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /********************************************************************** @@ -568,7 +565,7 @@ #define MD_DIR_WAIT (UINT64_CAST 0x6) /* ptr format, hw-defined */ #define MD_DIR_POISONED (UINT64_CAST 0x7) /* ptr format, hw-defined */ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /* Convert format and state fields into a single "cacheline state" value, defined above */ @@ -578,7 +575,7 @@ MD_DIR_SHARED) #define MD_DIR_STATE(x) MD_FMT_ST_TO_STATE(MD_DIR_FORMAT(x), MD_DIR_STVAL(x)) -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ @@ -812,4 +809,4 @@ #define MFC_ADDR_SHFT 6 -#endif /* _ASM_SN_SN1_HUBMD_NEXT_H */ +#endif /* _ASM_IA64_SN_SN1_HUBMD_NEXT_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubni.h b/include/asm-ia64/sn/sn1/hubni.h --- a/include/asm-ia64/sn/sn1/hubni.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubni.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBNI_H -#define _ASM_SN_SN1_HUBNI_H +#ifndef _ASM_IA64_SN_SN1_HUBNI_H +#define _ASM_IA64_SN_SN1_HUBNI_H /************************************************************************ @@ -1000,7 +999,7 @@ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /************************************************************************ * * @@ -1615,7 +1614,7 @@ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /************************************************************************ * * @@ -1779,4 +1778,4 @@ -#endif /* _ASM_SN_SN1_HUBNI_H */ +#endif /* _ASM_IA64_SN_SN1_HUBNI_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubni_next.h b/include/asm-ia64/sn/sn1/hubni_next.h --- a/include/asm-ia64/sn/sn1/hubni_next.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubni_next.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. 
- * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBNI_NEXT_H -#define _ASM_SN_SN1_HUBNI_NEXT_H +#ifndef _ASM_IA64_SN_SN1_HUBNI_NEXT_H +#define _ASM_IA64_SN_SN1_HUBNI_NEXT_H #define NI_LOCAL_ENTRIES 128 #define NI_META_ENTRIES 1 @@ -67,7 +66,7 @@ NPE_EXTLONG_MASK | NPE_EXTSHORT_MASK |\ NPE_FIFOOVFLOW_MASK | NPE_TAILTO_MASK) -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /* NI_PORT_HEADER[AB] registers (not automatically generated) */ #ifdef LITTLE_ENDIAN @@ -172,4 +171,4 @@ 0x6 << NPP_NULL_TIMEOUT_SHFT | \ 0x3f0 << NPP_MAX_BURST_SHFT) -#endif /* _ASM_SN_SN1_HUBNI_NEXT_H */ +#endif /* _ASM_IA64_SN_SN1_HUBNI_NEXT_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubpi.h b/include/asm-ia64/sn/sn1/hubpi.h --- a/include/asm-ia64/sn/sn1/hubpi.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubpi.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBPI_H -#define _ASM_SN_SN1_HUBPI_H +#ifndef _ASM_IA64_SN_SN1_HUBPI_H +#define _ASM_IA64_SN_SN1_HUBPI_H /************************************************************************ * * @@ -551,7 +550,7 @@ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /************************************************************************ * * @@ -4248,7 +4247,7 @@ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /************************************************************************ * * @@ -4261,4 +4260,4 @@ #define PI_GFX_PAGE_ENABLE 0x0000010000000000LL -#endif /* _ASM_SN_SN1_HUBPI_H */ +#endif /* _ASM_IA64_SN_SN1_HUBPI_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubpi_next.h b/include/asm-ia64/sn/sn1/hubpi_next.h --- a/include/asm-ia64/sn/sn1/hubpi_next.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubpi_next.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBPI_NEXT_H -#define _ASM_SN_SN1_HUBPI_NEXT_H +#ifndef _ASM_IA64_SN_SN1_HUBPI_NEXT_H +#define _ASM_IA64_SN_SN1_HUBPI_NEXT_H /* define for remote PI_1 space. It is always half of a node_addressspace @@ -54,7 +53,7 @@ ((sts) & (PI_CRB_STS_I | PI_CRB_STS_H) | \ ((sts) & (PI_CRB_STS_A | PI_CRB_STS_R)) >> 1) -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /* * format of error stack and error status registers. */ @@ -329,4 +328,4 @@ /* Error stack address shift, for use with pi_stk_fmt.sk_addr */ #define ERR_STK_ADDR_SHFT 3 -#endif /* _ASM_SN_SN1_HUBPI_NEXT_H */ +#endif /* _ASM_IA64_SN_SN1_HUBPI_NEXT_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubspc.h b/include/asm-ia64/sn/sn1/hubspc.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/hubspc.h Tue Mar 12 13:58:15 2002 @@ -0,0 +1,24 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
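A change repeated through every hub header in this series is the switch from IRIX's _LANGUAGE_C / _LANGUAGE_ASSEMBLY tests to the Linux __ASSEMBLY__ convention. A minimal sketch of the dual-use header pattern these guards support is shown below; the header name, register offset and typedef are hypothetical, only the guard logic mirrors what the patch installs.

/* example_regs.h -- hypothetical dual-use header, for illustration only */
#ifndef _EXAMPLE_REGS_H
#define _EXAMPLE_REGS_H

/* Plain #defines are digestible by both the C compiler and the assembler. */
#define EX_STATUS_REG   0x0008

#ifndef __ASSEMBLY__
/*
 * Anything the assembler cannot parse (typedefs, structs, prototypes)
 * is fenced off.  Assembly sources see __ASSEMBLY__ defined, typically
 * through the assembler flags or an explicit #define at the top of the
 * .S file, so they skip this block automatically.
 */
typedef unsigned long ex_reg_t;
extern ex_reg_t ex_read_status(void);
#endif /* __ASSEMBLY__ */

#endif /* _EXAMPLE_REGS_H */

Note the polarity flip relative to the old code: with _LANGUAGE_C the C side had to define a symbol before including the header, while with __ASSEMBLY__ only the assembler side does, which is why the C-only blocks are now guarded by #ifndef rather than #ifdef.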
+ */ +#ifndef _ASM_IA64_SN_SN1_HUBSPC_H +#define _ASM_IA64_SN_SN1_HUBSPC_H + +typedef enum { + HUBSPC_REFCOUNTERS, + HUBSPC_PROM +} hubspc_subdevice_t; + + +/* + * Reference Counters + */ + +extern int refcounters_attach(devfs_handle_t hub); + +#endif /* _ASM_IA64_SN_SN1_HUBSPC_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubstat.h b/include/asm-ia64/sn/sn1/hubstat.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/hubstat.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,56 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2000 - 2001 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_SN1_HUBSTAT_H +#define _ASM_IA64_SN_SN1_HUBSTAT_H + +typedef int64_t hub_count_t; + +#define HUBSTAT_VERSION 1 + +typedef struct hubstat_s { + char hs_version; /* structure version */ + cnodeid_t hs_cnode; /* cnode of this hub */ + nasid_t hs_nasid; /* Nasid of same */ + int64_t hs_timebase; /* Time of first sample */ + int64_t hs_timestamp; /* Time of last sample */ + int64_t hs_per_minute; /* Ticks per minute */ + + union { + hubreg_t hs_niu_stat_rev_id; /* SN0: Status rev ID */ + hubreg_t hs_niu_port_status; /* SN1: Port status */ + } hs_niu; + + hub_count_t hs_ni_retry_errors; /* Total retry errors */ + hub_count_t hs_ni_sn_errors; /* Total sn errors */ + hub_count_t hs_ni_cb_errors; /* Total cb errors */ + int hs_ni_overflows; /* NI count overflows */ + hub_count_t hs_ii_sn_errors; /* Total sn errors */ + hub_count_t hs_ii_cb_errors; /* Total cb errors */ + int hs_ii_overflows; /* II count overflows */ + + /* + * Anything below this comment is intended for kernel internal-use + * only and may be changed at any time. + * + * Any members that contain pointers or are conditionally compiled + * need to be below here also. + */ + int64_t hs_last_print; /* When we last printed */ + char hs_print; /* Should we print */ + + char *hs_name; /* This hub's name */ + unsigned char hs_maint; /* Should we print to availmon */ +} hubstat_t; + +#define hs_ni_stat_rev_id hs_niu.hs_niu_stat_rev_id +#define hs_ni_port_status hs_niu.hs_niu_port_status + +extern struct file_operations hub_mon_fops; + +#endif /* _ASM_IA64_SN_SN1_HUBSTAT_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubxb.h b/include/asm-ia64/sn/sn1/hubxb.h --- a/include/asm-ia64/sn/sn1/hubxb.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubxb.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
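The new hubstat.h above exports cumulative NI/II error counters together with hs_timestamp and hs_per_minute tick bookkeeping. Assuming those fields mean what their comments say, a consumer could turn two snapshots into a per-minute error rate roughly as sketched below; the function itself is hypothetical, and only the field accesses come from the header.

/* Sketch only; assumes the new <asm/sn/sn1/hubstat.h> is on the include path. */
#include <asm/sn/sn1/hubstat.h>

/* Per-minute NI retry-error rate between two snapshots of the same hub. */
static long ni_retry_errors_per_minute(const hubstat_t *prev,
                                       const hubstat_t *cur)
{
        int64_t ticks = cur->hs_timestamp - prev->hs_timestamp;
        hub_count_t errs = cur->hs_ni_retry_errors - prev->hs_ni_retry_errors;

        if (ticks <= 0)
                return 0;
        /* hs_per_minute converts the elapsed tick count into minutes. */
        return (long)(errs * cur->hs_per_minute / ticks);
}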
*/ -#ifndef _ASM_SN_SN1_HUBXB_H -#define _ASM_SN_SN1_HUBXB_H +#ifndef _ASM_IA64_SN_SN1_HUBXB_H +#define _ASM_IA64_SN_SN1_HUBXB_H /************************************************************************ * * @@ -273,7 +272,7 @@ -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /************************************************************************ * * @@ -1247,7 +1246,7 @@ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ /************************************************************************ * * @@ -1286,4 +1285,4 @@ -#endif /* _ASM_SN_SN1_HUBXB_H */ +#endif /* _ASM_IA64_SN_SN1_HUBXB_H */ diff -Nru a/include/asm-ia64/sn/sn1/hubxb_next.h b/include/asm-ia64/sn/sn1/hubxb_next.h --- a/include/asm-ia64/sn/sn1/hubxb_next.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/hubxb_next.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_HUBXB_NEXT_H -#define _ASM_SN_SN1_HUBXB_NEXT_H +#ifndef _ASM_IA64_SN_SN1_HUBXB_NEXT_H +#define _ASM_IA64_SN_SN1_HUBXB_NEXT_H /* XB_FIRST_ERROR fe_source field encoding */ #define XVE_SOURCE_POQ0 0xf /* 1111 */ @@ -30,4 +29,4 @@ #define XBP_RESET_DEFAULTS 0x0008000080000021LL #define XBP_ACTIVE_DEFAULTS 0x00080000fffff021LL -#endif /* _ASM_SN_SN1_HUBXB_NEXT_H */ +#endif /* _ASM_IA64_SN_SN1_HUBXB_NEXT_H */ diff -Nru a/include/asm-ia64/sn/sn1/hwcntrs.h b/include/asm-ia64/sn/sn1/hwcntrs.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/hwcntrs.h Tue Mar 12 13:58:15 2002 @@ -0,0 +1,96 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
+ */ +#ifndef _ASM_IA64_SN_SN1_HWCNTRS_H +#define _ASM_IA64_SN_SN1_HWCNTRS_H + + +typedef uint64_t refcnt_t; + +#define SN0_REFCNT_MAX_COUNTERS 64 + +typedef struct sn0_refcnt_set { + refcnt_t refcnt[SN0_REFCNT_MAX_COUNTERS]; + uint64_t flags; + uint64_t reserved[4]; +} sn0_refcnt_set_t; + +typedef struct sn0_refcnt_buf { + sn0_refcnt_set_t refcnt_set; + uint64_t paddr; + uint64_t page_size; + cnodeid_t cnodeid; /* cnodeid + pad[3] use 64 bits */ + uint16_t pad[3]; + uint64_t reserved[4]; +} sn0_refcnt_buf_t; + +typedef struct sn0_refcnt_args { + uint64_t vaddr; + uint64_t len; + sn0_refcnt_buf_t* buf; + uint64_t reserved[4]; +} sn0_refcnt_args_t; + +/* + * Info needed by the user level program + * to mmap the refcnt buffer + */ + +#define RCB_INFO_GET 1 +#define RCB_SLOT_GET 2 + +typedef struct rcb_info { + uint64_t rcb_len; /* total refcnt buffer len in bytes */ + + int rcb_sw_sets; /* number of sw counter sets in buffer */ + int rcb_sw_counters_per_set; /* sw counters per set -- num_compact_nodes */ + int rcb_sw_counter_size; /* sizeof(refcnt_t) -- size of sw cntr */ + + int rcb_base_pages; /* number of base pages in node */ + int rcb_base_page_size; /* sw base page size */ + uint64_t rcb_base_paddr; /* base physical address for this node */ + + int rcb_cnodeid; /* cnodeid for this node */ + int rcb_granularity; /* hw page size used for counter sets */ + uint rcb_hw_counter_max; /* max hwcounter count (width mask) */ + int rcb_diff_threshold; /* current node differential threshold */ + int rcb_abs_threshold; /* current node absolute threshold */ + int rcb_num_slots; /* physmem slots */ + + int rcb_reserved[512]; + +} rcb_info_t; + +typedef struct rcb_slot { + uint64_t base; + uint64_t size; +} rcb_slot_t; + +#if defined(__KERNEL__) +typedef struct sn0_refcnt_args_32 { + uint64_t vaddr; + uint64_t len; + app32_ptr_t buf; + uint64_t reserved[4]; +} sn0_refcnt_args_32_t; + +/* Defines and Macros */ +/* A set of reference counts are for 4k bytes of physical memory */ +#define NBPREFCNTP 0x1000 +#define BPREFCNTPSHIFT 12 +#define bytes_to_refcntpages(x) (((__psunsigned_t)(x)+(NBPREFCNTP-1))>>BPREFCNTPSHIFT) +#define refcntpage_offset(x) ((__psunsigned_t)(x)&((NBPP-1)&~(NBPREFCNTP-1))) +#define align_to_refcntpage(x) ((__psunsigned_t)(x)&(~(NBPREFCNTP-1))) + +extern void migr_refcnt_read(sn0_refcnt_buf_t*); +extern void migr_refcnt_read_extended(sn0_refcnt_buf_t*); +extern int migr_refcnt_enabled(void); + +#endif /* __KERNEL__ */ + +#endif /* _ASM_IA64_SN_SN1_HWCNTRS_H */ diff -Nru a/include/asm-ia64/sn/sn1/intr.h b/include/asm-ia64/sn/sn1/intr.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/intr.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,237 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ +#ifndef _ASM_IA64_SN_SN1_INTR_H +#define _ASM_IA64_SN_SN1_INTR_H + +/* Subnode wildcard */ +#define SUBNODE_ANY (-1) + +/* Number of interrupt levels associated with each interrupt register. */ +#define N_INTPEND_BITS 64 + +#define INT_PEND0_BASELVL 0 +#define INT_PEND1_BASELVL 64 + +#define N_INTPENDJUNK_BITS 8 +#define INTPENDJUNK_CLRBIT 0x80 + +#include +#include +#include +#include + +#ifndef __ASSEMBLY__ +#define II_NAMELEN 24 + +/* + * Dispatch table entry - contains information needed to call an interrupt + * routine. 
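Looking back at the reference-counter macros in the new hwcntrs.h above: each sn0_refcnt_set covers NBPREFCNTP (4 KB) of physical memory, and bytes_to_refcntpages() / align_to_refcntpage() are ordinary round-up and round-down helpers on that granularity. A quick standalone check of the arithmetic, with __psunsigned_t approximated by unsigned long (refcntpage_offset() is omitted because it also depends on NBPP, which comes from elsewhere):

#include <stdio.h>

/* Shapes copied from the new hwcntrs.h, with __psunsigned_t -> unsigned long. */
#define NBPREFCNTP      0x1000          /* one refcnt set per 4 KB */
#define BPREFCNTPSHIFT  12
#define bytes_to_refcntpages(x) \
        ((((unsigned long)(x)) + (NBPREFCNTP - 1)) >> BPREFCNTPSHIFT)
#define align_to_refcntpage(x)  ((unsigned long)(x) & ~(NBPREFCNTP - 1UL))

int main(void)
{
        /* 1 byte still needs one counter page; 0x1001 bytes need two. */
        printf("%lu %lu %lu\n", bytes_to_refcntpages(1),
               bytes_to_refcntpages(0x1000), bytes_to_refcntpages(0x1001));
        /* Round an address down to its 4 KB counter-page boundary. */
        printf("%#lx\n", align_to_refcntpage(0x12345));   /* 0x12000 */
        return 0;
}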
+ */ +typedef struct intr_vector_s { + intr_func_t iv_func; /* Interrupt handler function */ + intr_func_t iv_prefunc; /* Interrupt handler prologue func */ + void *iv_arg; /* Argument to pass to handler */ + cpuid_t iv_mustruncpu; /* Where we must run. */ +} intr_vector_t; + +/* Interrupt information table. */ +typedef struct intr_info_s { + xtalk_intr_setfunc_t ii_setfunc; /* Function to set the interrupt + * destination and level register. + * It returns 0 (success) or an + * error code. + */ + void *ii_cookie; /* arg passed to setfunc */ + devfs_handle_t ii_owner_dev; /* device that owns this intr */ + char ii_name[II_NAMELEN]; /* Name of this intr. */ + int ii_flags; /* informational flags */ +} intr_info_t; + + +#define THD_CREATED 0x00000001 /* + * We've created a thread for this + * interrupt. + */ + +/* + * Bits for ii_flags: + */ +#define II_UNRESERVE 0 +#define II_RESERVE 1 /* Interrupt reserved. */ +#define II_INUSE 2 /* Interrupt connected */ +#define II_ERRORINT 4 /* INterrupt is an error condition */ +#define II_THREADED 8 /* Interrupt handler is threaded. */ + +/* + * Interrupt level wildcard + */ +#define INTRCONNECT_ANYBIT (-1) + +/* + * This structure holds information needed both to call and to maintain + * interrupts. The two are in separate arrays for the locality benefits. + * Since there's only one set of vectors per hub chip (but more than one + * CPU, the lock to change the vector tables must be here rather than in + * the PDA. + */ + +typedef struct intr_vecblk_s { + intr_vector_t vectors[N_INTPEND_BITS]; /* information needed to + call an intr routine. */ + intr_info_t info[N_INTPEND_BITS]; /* information needed only + to maintain interrupts. */ + spinlock_t vector_lock; /* Lock for this and the + masks in the PDA. */ + splfunc_t vector_spl; /* vector_lock req'd spl */ + int vector_state; /* Initialized to zero. + Set to INTR_INITED + by hubintr_init. + */ + int vector_count; /* Number of vectors + * reserved. + */ + int cpu_count[CPUS_PER_SUBNODE]; /* How many interrupts are + * connected to each CPU + */ + int ithreads_enabled; /* Are interrupt threads + * initialized on this node. + * and block? + */ +} intr_vecblk_t; + +/* Possible values for vector_state: */ +#define VECTOR_UNINITED 0 +#define VECTOR_INITED 1 +#define VECTOR_SET 2 + +#define hub_intrvect0 private.p_intmasks.dispatch0->vectors +#define hub_intrvect1 private.p_intmasks.dispatch1->vectors +#define hub_intrinfo0 private.p_intmasks.dispatch0->info +#define hub_intrinfo1 private.p_intmasks.dispatch1->info + +/* + * Macros to manipulate the interrupt register on the calling hub chip. + */ + +#define LOCAL_HUB_SEND_INTR(_level) LOCAL_HUB_S(PI_INT_PEND_MOD, \ + (0x100|(_level))) +#define REMOTE_HUB_PI_SEND_INTR(_hub, _sn, _level) \ + REMOTE_HUB_PI_S((_hub), _sn, PI_INT_PEND_MOD, (0x100|(_level))) + +#define REMOTE_CPU_SEND_INTR(_cpuid, _level) \ + REMOTE_HUB_PI_S(cpuid_to_nasid(_cpuid), \ + SUBNODE(cpuid_to_slice(_cpuid)), \ + PI_INT_PEND_MOD, (0x100|(_level))) + +/* + * When clearing the interrupt, make sure this clear does make it + * to the hub. Otherwise we could end up losing interrupts. + * We do an uncached load of the int_pend0 register to ensure this. + */ + +#define LOCAL_HUB_CLR_INTR(_level) \ + LOCAL_HUB_S(PI_INT_PEND_MOD, (_level)), \ + LOCAL_HUB_L(PI_INT_PEND0) +#define REMOTE_HUB_PI_CLR_INTR(_hub, _sn, _level) \ + REMOTE_HUB_PI_S((_hub), (_sn), PI_INT_PEND_MOD, (_level)), \ + REMOTE_HUB_PI_L((_hub), (_sn), PI_INT_PEND0) + +/* Special support for use by gfx driver only. 
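The comment above LOCAL_HUB_CLR_INTR() describes a classic posted-write pitfall: the store that clears the pending bit may still be sitting in a write buffer when the handler returns, so the macro follows it with an uncached load of PI_INT_PEND0 to push the write out to the hub. The same idiom in open-coded form might look like the sketch below; the offsets and the pointer are generic stand-ins, not the real LOCAL_HUB_S()/LOCAL_HUB_L() accessors.

/* Illustration only: the offsets are made up, and pi_base is assumed to
 * point at an uncached mapping of the hub's PI register space. */
#define EX_PI_INT_PEND_MOD      0x1     /* stand-in word offsets */
#define EX_PI_INT_PEND0         0x0

static inline void clear_hub_intr(volatile unsigned long *pi_base, int level)
{
        pi_base[EX_PI_INT_PEND_MOD] = level;    /* request the clear */
        (void)pi_base[EX_PI_INT_PEND0];         /* read back through the
                                                 * uncached mapping so the
                                                 * posted write reaches the
                                                 * hub before we move on */
}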
Supports special gfx hub interrupt. */ +extern void install_gfxintr(cpuid_t cpu, ilvl_t swlevel, intr_func_t intr_func, void *intr_arg); + +void setrtvector(intr_func_t func); + +/* + * Interrupt blocking + */ +extern void intr_block_bit(cpuid_t cpu, int bit); +extern void intr_unblock_bit(cpuid_t cpu, int bit); + +#endif /* __ASSEMBLY__ */ + +/* + * Hard-coded interrupt levels: + */ + +/* + * L0 = SW1 + * L1 = SW2 + * L2 = INT_PEND0 + * L3 = INT_PEND1 + * L4 = RTC + * L5 = Profiling Timer + * L6 = Hub Errors + * L7 = Count/Compare (T5 counters) + */ + + +/* INT_PEND0 hard-coded bits. */ +#ifdef DEBUG_INTR_TSTAMP +/* hard coded interrupt level for interrupt latency test interrupt */ +#define CPU_INTRLAT_B 62 +#define CPU_INTRLAT_A 61 +#endif + +/* Hardcoded bits required by software. */ +#define MSC_MESG_INTR 9 +#define CPU_ACTION_B 8 +#define CPU_ACTION_A 7 + +/* These are determined by hardware: */ +#define CC_PEND_B 6 +#define CC_PEND_A 5 +#define UART_INTR 4 +#define PG_MIG_INTR 3 +#define GFX_INTR_B 2 +#define GFX_INTR_A 1 +#define RESERVED_INTR 0 + +/* INT_PEND1 hard-coded bits: */ +#define MSC_PANIC_INTR 63 +#define NI_ERROR_INTR 62 +#define MD_COR_ERR_INTR 61 +#define COR_ERR_INTR_B 60 +#define COR_ERR_INTR_A 59 +#define CLK_ERR_INTR 58 + +# define NACK_INT_B 57 +# define NACK_INT_A 56 +# define LB_ERROR 55 +# define XB_ERROR 54 + +#define BRIDGE_ERROR_INTR 53 /* Setup by PROM to catch Bridge Errors */ + +#define IP27_INTR_0 52 /* Reserved for PROM use */ +#define IP27_INTR_1 51 /* (do not use in Kernel) */ +#define IP27_INTR_2 50 +#define IP27_INTR_3 49 +#define IP27_INTR_4 48 +#define IP27_INTR_5 47 +#define IP27_INTR_6 46 +#define IP27_INTR_7 45 + +#define TLB_INTR_B 44 /* used for tlb flush random */ +#define TLB_INTR_A 43 + +#define LLP_PFAIL_INTR_B 42 /* see ml/SN/SN0/sysctlr.c */ +#define LLP_PFAIL_INTR_A 41 + +#define NI_BRDCAST_ERR_B 40 +#define NI_BRDCAST_ERR_A 39 + +# define IO_ERROR_INTR 38 /* set up by prom */ +# define DEBUG_INTR_B 37 /* used by symmon to stop all cpus */ +# define DEBUG_INTR_A 36 + +// These aren't strictly accurate or complete. See the +// Synergy Spec. for details. +#define SGI_UART_IRQ (65) +#define SGI_HUB_ERROR_IRQ (182) + +#endif /* _ASM_IA64_SN_SN1_INTR_H */ diff -Nru a/include/asm-ia64/sn/sn1/intr_public.h b/include/asm-ia64/sn/sn1/intr_public.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/intr_public.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,53 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. + */ +#ifndef _ASM_IA64_SN_SN1_INTR_PUBLIC_H +#define _ASM_IA64_SN_SN1_INTR_PUBLIC_H + +/* REMEMBER: If you change these, the whole world needs to be recompiled. + * It would also require changing the hubspl.s code and SN0/intr.c + * Currently, the spl code has no support for multiple INTPEND1 masks. + */ + +#define N_INTPEND0_MASKS 1 +#define N_INTPEND1_MASKS 1 + +#define INTPEND0_MAXMASK (N_INTPEND0_MASKS - 1) +#define INTPEND1_MAXMASK (N_INTPEND1_MASKS - 1) + +#ifndef __ASSEMBLY__ +#include + +struct intr_vecblk_s; /* defined in asm/sn/intr.h */ + +/* + * The following are necessary to create the illusion of a CEL + * on the IP27 hub. We'll add more priority levels soon, but for + * now, any interrupt in a particular band effectively does an spl. 
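The hard-coded lists above give bit positions within INT_PEND0 and INT_PEND1, while INT_PEND0_BASELVL / INT_PEND1_BASELVL earlier in intr.h suggest the two 64-bit registers are treated as one linear space of 128 interrupt levels. Under that assumption (and it is only an assumption drawn from those defines), mapping a level to a register and bit is a divide and a modulo:

/* Sketch: levels 0-63 in INT_PEND0, 64-127 in INT_PEND1, per
 * N_INTPEND_BITS == 64 and the INT_PEND*_BASELVL defines. */
#define N_INTPEND_BITS          64
#define INT_PEND1_BASELVL       64

static inline int level_to_reg(int level) { return level / N_INTPEND_BITS; }
static inline int level_to_bit(int level) { return level % N_INTPEND_BITS; }

/* Example: MSC_PANIC_INTR is bit 63 of INT_PEND1, i.e. level 64 + 63:
 *   level_to_reg(INT_PEND1_BASELVL + 63) == 1
 *   level_to_bit(INT_PEND1_BASELVL + 63) == 63
 */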
+ * These must be in the PDA since they're different for each processor. + * Users of this structure must hold the vector_lock in the appropriate vector + * block before modifying the mask arrays. There's only one vector block + * for each Hub so a lock in the PDA wouldn't be adequate. + */ +typedef struct hub_intmasks_s { + /* + * The masks are stored with the lowest-priority (most inclusive) + * in the lowest-numbered masks (i.e., 0, 1, 2...). + */ + /* INT_PEND0: */ + hubreg_t intpend0_masks[N_INTPEND0_MASKS]; + /* INT_PEND1: */ + hubreg_t intpend1_masks[N_INTPEND1_MASKS]; + /* INT_PEND0: */ + struct intr_vecblk_s *dispatch0; + /* INT_PEND1: */ + struct intr_vecblk_s *dispatch1; +} hub_intmasks_t; + +#endif /* __ASSEMBLY__ */ +#endif /* _ASM_IA64_SN_SN1_INTR_PUBLIC_H */ diff -Nru a/include/asm-ia64/sn/sn1/ip27config.h b/include/asm-ia64/sn/sn1/ip27config.h --- a/include/asm-ia64/sn/sn1/ip27config.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/sn/sn1/ip27config.h Tue Mar 12 13:58:16 2002 @@ -4,12 +4,11 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_IP27CONFIG_H -#define _ASM_SN_SN1_IP27CONFIG_H +#ifndef _ASM_IA64_SN_SN1_IP27CONFIG_H +#define _ASM_IA64_SN_SN1_IP27CONFIG_H /* @@ -50,7 +49,7 @@ */ #define IP27_RTC_FREQ 1250 /* 800ns cycle time */ -#if _LANGUAGE_C +#ifndef __ASSEMBLY__ typedef struct ip27config_s { /* KEEP IN SYNC w/ start.s & below */ uint time_const; /* Time constant */ @@ -110,9 +109,9 @@ */ #define CONFIG_12P4I_NODE(n) (0) -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ -#if _LANGUAGE_ASSEMBLY +#if __ASSEMBLY__ .struct 0 /* KEEP IN SYNC WITH C structure */ ip27c_time_const: .word 0 @@ -137,7 +136,7 @@ ip27c_pvers_rev: .word 0 ip27c_config_type: .word 0 /* To recognize special configs */ -#endif /* _LANGUAGE_ASSEMBLY */ +#endif /* __ASSEMBLY__ */ /* * R10000 Configuration Cycle - These define the SYSAD values used @@ -245,7 +244,7 @@ #define CONFIG_FREQ_RTC IP27C_KHZ(IP27_RTC_FREQ) -#if _LANGUAGE_C +#ifndef __ASSEMBLY__ /* we are going to define all the known configs is a table * for building hex images we will pull out the particular @@ -258,7 +257,7 @@ */ /* these numbers are as the are ordered in the table below */ -#define IP27_CONFIG_UNKNOWN -1 +#define IP27_CONFIG_UNKNOWN (-1) #define IP27_CONFIG_SN1_1MB_200_400_200_TABLE 0 #define IP27_CONFIG_SN00_4MB_100_200_133_TABLE 1 #define IP27_CONFIG_SN1_4MB_200_400_267_TABLE 2 @@ -500,9 +499,9 @@ #define CONFIG_FPROM_WR ip_config_table[IP27_CONFIG_SN1_4MB_180_360_240_TABLE].fprom_wr #endif /* IP27_CONFIG_SN1_4MB_180_360_240 */ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ -#if _LANGUAGE_ASSEMBLY +#if __ASSEMBLY__ /* these need to be in here since we need assembly definitions * for building hex images (as required by start.s) @@ -653,6 +652,6 @@ #define CONFIG_FPROM_WR CONFIG_FPROM_ENABLE #endif /* IP27_CONFIG_SN1_4MB_180_360_240 */ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ -#endif /* _ASM_SN_SN1_IP27CONFIG_H */ +#endif /* _ASM_IA64_SN_SN1_IP27CONFIG_H */ diff -Nru a/include/asm-ia64/sn/sn1/kldir.h b/include/asm-ia64/sn/sn1/kldir.h --- a/include/asm-ia64/sn/sn1/kldir.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,222 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * 
License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -#ifndef _ASM_SN_SN1_KLDIR_H -#define _ASM_SN_SN1_KLDIR_H - -/* - * The upper portion of the memory map applies during boot - * only and is overwritten by IRIX/SYMMON. The minimum memory bank - * size on IP35 is 64M, which provides a limit on the amount of space - * the PROM can assume it has available. - * - * Most of the addresses below are defined as macros in this file, or - * in SN/addrs.h or SN/SN1/addrs.h. - * - * MEMORY MAP PER NODE - * - * 0x4000000 (64M) +-----------------------------------------+ - * | | - * | | - * | IO7 TEXT/DATA/BSS/stack | - * 0x3000000 (48M) +-----------------------------------------+ - * | Free | - * 0x2102000 (>33M) +-----------------------------------------+ - * | IP35 Topology (PCFG) + misc data | - * 0x2000000 (32M) +-----------------------------------------+ - * | IO7 BUFFERS FOR FLASH ENET IOC3 | - * 0x1F80000 (31.5M) +-----------------------------------------+ - * | Free | - * 0x1C00000 (28M) +-----------------------------------------+ - * | IP35 PROM TEXT/DATA/BSS/stack | - * 0x1A00000 (26M) +-----------------------------------------+ - * | Routing temp. space | - * 0x1800000 (24M) +-----------------------------------------+ - * | Diagnostics temp. space | - * 0x1500000 (21M) +-----------------------------------------+ - * | Free | - * 0x1400000 (20M) +-----------------------------------------+ - * | IO7 PROM temporary copy | - * 0x1300000 (19M) +-----------------------------------------+ - * | | - * | Free | - * | (UNIX DATA starts above 0x1000000) | - * | | - * +-----------------------------------------+ - * | UNIX DEBUG Version | - * 0x0310000 (3.1M) +-----------------------------------------+ - * | SYMMON, loaded just below UNIX | - * | (For UNIX Debug only) | - * | | - * | | - * 0x006C000 (432K) +-----------------------------------------+ - * | SYMMON STACK [NUM_CPU_PER_NODE] | - * | (For UNIX Debug only) | - * 0x004C000 (304K) +-----------------------------------------+ - * | | - * | | - * | UNIX NON-DEBUG Version | - * 0x0040000 (256K) +-----------------------------------------+ - * - * - * The lower portion of the memory map contains information that is - * permanent and is used by the IP35PROM, IO7PROM and IRIX. 
- * - * 0x40000 (256K) +-----------------------------------------+ - * | | - * | KLCONFIG (64K) | - * | | - * 0x30000 (192K) +-----------------------------------------+ - * | | - * | PI Error Spools (64K) | - * | | - * 0x20000 (128K) +-----------------------------------------+ - * | | - * | Unused | - * | | - * 0x19000 (100K) +-----------------------------------------+ - * | Early cache Exception stack (CPU 3)| - * 0x18800 (98K) +-----------------------------------------+ - * | cache error eframe (CPU 3) | - * 0x18400 (97K) +-----------------------------------------+ - * | Exception Handlers (CPU 3) | - * 0x18000 (96K) +-----------------------------------------+ - * | | - * | Unused | - * | | - * 0x13c00 (79K) +-----------------------------------------+ - * | GPDA (8k) | - * 0x11c00 (71K) +-----------------------------------------+ - * | Early cache Exception stack (CPU 2)| - * 0x10800 (66k) +-----------------------------------------+ - * | cache error eframe (CPU 2) | - * 0x10400 (65K) +-----------------------------------------+ - * | Exception Handlers (CPU 2) | - * 0x10000 (64K) +-----------------------------------------+ - * | | - * | Unused | - * | | - * 0x0b400 (45K) +-----------------------------------------+ - * | GDA (1k) | - * 0x0b000 (44K) +-----------------------------------------+ - * | NMI Eframe areas (4) | - * 0x0a000 (40K) +-----------------------------------------+ - * | NMI Register save areas (4) | - * 0x09000 (36K) +-----------------------------------------+ - * | Early cache Exception stack (CPU 1)| - * 0x08800 (34K) +-----------------------------------------+ - * | cache error eframe (CPU 1) | - * 0x08400 (33K) +-----------------------------------------+ - * | Exception Handlers (CPU 1) | - * 0x08000 (32K) +-----------------------------------------+ - * | | - * | | - * | Unused | - * | | - * | | - * 0x04000 (16K) +-----------------------------------------+ - * | NMI Handler (Protected Page) | - * 0x03000 (12K) +-----------------------------------------+ - * | ARCS PVECTORS (master node only) | - * 0x02c00 (11K) +-----------------------------------------+ - * | ARCS TVECTORS (master node only) | - * 0x02800 (10K) +-----------------------------------------+ - * | LAUNCH [NUM_CPU] | - * 0x02400 (9K) +-----------------------------------------+ - * | Low memory directory (KLDIR) | - * 0x02000 (8K) +-----------------------------------------+ - * | ARCS SPB (1K) | - * 0x01000 (4K) +-----------------------------------------+ - * | Early cache Exception stack (CPU 0)| - * 0x00800 (2k) +-----------------------------------------+ - * | cache error eframe (CPU 0) | - * 0x00400 (1K) +-----------------------------------------+ - * | Exception Handlers (CPU 0) | - * 0x00000 (0K) +-----------------------------------------+ - */ - -/* - * NOTE: To change the kernel load address, you must update: - * - the appropriate elspec files in irix/kern/master.d - * - NODEBUGUNIX_ADDR in SN/SN1/addrs.h - * - IP27_FREEMEM_OFFSET below - * - KERNEL_START_OFFSET below (if supporting cells) - */ - - -/* - * This is defined here because IP27_SYMMON_STK_SIZE must be at least what - * we define here. Since it's set up in the prom. We can't redefine it later - * and expect more space to be allocated. The way to find out the true size - * of the symmon stacks is to divide SYMMON_STK_SIZE by SYMMON_STK_STRIDE - * for a particular node. - */ -#define SYMMON_STACK_SIZE 0x8000 - -#if defined (PROM) || defined (SABLE) - -/* - * These defines are prom version dependent. 
No code other than the IP35 - * prom should attempt to use these values. - */ -#define IP27_LAUNCH_OFFSET 0x2400 -#define IP27_LAUNCH_SIZE 0x400 -#define IP27_LAUNCH_COUNT 4 -#define IP27_LAUNCH_STRIDE 0x100 /* could be as small as 0x80 */ - -#define IP27_KLCONFIG_OFFSET 0x30000 -#define IP27_KLCONFIG_SIZE 0x10000 -#define IP27_KLCONFIG_COUNT 1 -#define IP27_KLCONFIG_STRIDE 0 - -#define IP27_NMI_OFFSET 0x3000 -#define IP27_NMI_SIZE 0x100 -#define IP27_NMI_COUNT 4 -#define IP27_NMI_STRIDE 0x40 - -#define IP27_PI_ERROR_OFFSET 0x20000 -#define IP27_PI_ERROR_SIZE 0x10000 -#define IP27_PI_ERROR_COUNT 1 -#define IP27_PI_ERROR_STRIDE 0 - -#define IP27_SYMMON_STK_OFFSET 0x4c000 -#define IP27_SYMMON_STK_SIZE 0x20000 -#define IP27_SYMMON_STK_COUNT 4 -/* IP27_SYMMON_STK_STRIDE must be >= SYMMON_STACK_SIZE */ -#define IP27_SYMMON_STK_STRIDE 0x8000 - -#define IP27_FREEMEM_OFFSET 0x40000 -#define IP27_FREEMEM_SIZE -1 -#define IP27_FREEMEM_COUNT 1 -#define IP27_FREEMEM_STRIDE 0 - -#endif /* PROM || SABLE*/ -/* - * There will be only one of these in a partition so the IO7 must set it up. - */ -#define IO6_GDA_OFFSET 0xb000 -#define IO6_GDA_SIZE 0x400 -#define IO6_GDA_COUNT 1 -#define IO6_GDA_STRIDE 0 - -/* - * save area of kernel nmi regs in the prom format - */ -#define IP27_NMI_KREGS_OFFSET 0x9000 -#define IP27_NMI_KREGS_CPU_SIZE 0x400 -/* - * save area of kernel nmi regs in eframe format - */ -#define IP27_NMI_EFRAME_OFFSET 0xa000 -#define IP27_NMI_EFRAME_SIZE 0x400 - -#define GPDA_OFFSET 0x11c00 - -#endif /* _ASM_SN_SN1_KLDIR_H */ diff -Nru a/include/asm-ia64/sn/sn1/leds.h b/include/asm-ia64/sn/sn1/leds.h --- a/include/asm-ia64/sn/sn1/leds.h Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,35 +0,0 @@ -#ifndef _ASM_SN_SN1_LED_H -#define _ASM_SN_SN1_LED_H - -/* - * Copyright (C) 2000 Silicon Graphics, Inc - * Copyright (C) 2000 Jack Steiner (steiner@sgi.com) - */ - -#include - -#define LED0 0xc0000b00100000c0LL /* ZZZ fixme */ - - - -#define LED_AP_START 0x01 /* AP processor started */ -#define LED_AP_IDLE 0x01 - -/* - * Basic macros for flashing the LEDS on an SGI, SN1. - */ - -extern __inline__ void -HUB_SET_LED(int val) -{ - long *ledp; - int eid; - - eid = hard_smp_processor_id() & 3; - ledp = (long*) (LED0 + (eid<<3)); - *ledp = val; -} - - -#endif /* _ASM_SN_SN1_LED_H */ - diff -Nru a/include/asm-ia64/sn/sn1/mem_refcnt.h b/include/asm-ia64/sn/sn1/mem_refcnt.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/mem_refcnt.h Tue Mar 12 13:58:15 2002 @@ -0,0 +1,25 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
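Back in the kldir.h block removed above, the SYMMON stack constants are self-consistent with the memory-map diagram: dividing the region size by the per-CPU stride gives the four stacks named by IP27_SYMMON_STK_COUNT, each stride leaves room for a full SYMMON_STACK_SIZE stack, and the region ends exactly at the 0x6c000 line in the map. A standalone check of that arithmetic:

#include <stdio.h>

/* Values copied from the kldir.h hunk being deleted above. */
#define SYMMON_STACK_SIZE       0x8000
#define IP27_SYMMON_STK_OFFSET  0x4c000
#define IP27_SYMMON_STK_SIZE    0x20000
#define IP27_SYMMON_STK_STRIDE  0x8000

int main(void)
{
        /* Region size / stride = number of per-CPU stacks (4 per node). */
        printf("stacks: %d\n", IP27_SYMMON_STK_SIZE / IP27_SYMMON_STK_STRIDE);
        /* The stride must be at least one full stack. */
        printf("stride ok: %d\n", IP27_SYMMON_STK_STRIDE >= SYMMON_STACK_SIZE);
        /* 0x4c000 + 0x20000 = 0x6c000, the top of the SYMMON STACK region. */
        printf("region end: %#x\n",
               IP27_SYMMON_STK_OFFSET + IP27_SYMMON_STK_SIZE);
        return 0;
}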
+ */ +#ifndef _ASM_IA64_SN_SN1_MEM_REFCNT_H +#define _ASM_IA64_SN_SN1_MEM_REFCNT_H + +extern int mem_refcnt_attach(devfs_handle_t hub); +extern int mem_refcnt_open(devfs_handle_t *devp, mode_t oflag, int otyp, cred_t *crp); +extern int mem_refcnt_close(devfs_handle_t dev, int oflag, int otyp, cred_t *crp); +extern int mem_refcnt_mmap(devfs_handle_t dev, vhandl_t *vt, off_t off, size_t len, uint prot); +extern int mem_refcnt_unmap(devfs_handle_t dev, vhandl_t *vt); +extern int mem_refcnt_ioctl(devfs_handle_t dev, + int cmd, + void *arg, + int mode, + cred_t *cred_p, + int *rvalp); + + +#endif /* _ASM_IA64_SN_SN1_MEM_REFCNT_H */ diff -Nru a/include/asm-ia64/sn/sn1/mmzone_sn1.h b/include/asm-ia64/sn/sn1/mmzone_sn1.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/mmzone_sn1.h Tue Mar 12 13:58:15 2002 @@ -0,0 +1,149 @@ +#ifndef _ASM_IA64_SN_MMZONE_SN1_H +#define _ASM_IA64_SN_MMZONE_SN1_H + +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include + + +/* + * SGI SN1 Arch defined values + * + * An SN1 physical address is broken down as follows: + * + * +-----------------------------------------+ + * | | | | node offset | + * | unused | AS | node |-------------------| + * | | | | cn | clump offset | + * +-----------------------------------------+ + * 6 4 4 4 3 3 3 3 2 0 + * 3 4 3 0 9 3 2 0 9 0 + * + * bits 63-44 Unused - must be zero + * bits 43-40 Address space ID. Cached memory has a value of 0. + * Chipset & IO addresses have non-zero values. + * bits 39-33 Node number. Note that some configurations do NOT + * have a node zero. + * bits 32-0 Node offset. + * + * The node offset can be further broken down as: + * bits 32-30 Clump (bank) number. + * bits 29-0 Clump (bank) offset. + * + * A node consists of up to 8 clumps (banks) of memory. A clump may be empty, or may be + * populated with a single contiguous block of memory starting at clump + * offset 0. The size of the block is (2**n) * 64MB, where 0> SN1_NODE_SHIFT) & SN1_NODE_MASK) +#define SN1_NODE_CLUMP_NUMBER(addr) (((unsigned long)(addr) >>30) & 7) +#define SN1_NODE_OFFSET(addr) (((unsigned long)(addr)) & SN1_NODE_OFFSET_MASK) +#define SN1_KADDR(nasid, offset) (((unsigned long)(nasid)<> SN1_CHUNKSHIFT) + + +/* + * Given a kaddr, find the nid (compact nodeid) + */ +#ifdef CONFIG_IA64_SGI_SN_DEBUG +#define DISCONBUG(kaddr) panic("DISCONTIG BUG: line %d, %s. kaddr 0x%lx", \ + __LINE__, __FILE__, (long)(kaddr)) + +#define KVADDR_TO_NID(kaddr) ({long _ktn=(long)(kaddr); \ + kern_addr_valid(_ktn) ? \ + local_node_data->physical_node_map[SN1_NODE_NUMBER(_ktn)] :\ + (DISCONBUG(_ktn), 0UL);}) +#else +#define KVADDR_TO_NID(kaddr) (local_node_data->physical_node_map[SN1_NODE_NUMBER(kaddr)]) +#endif + + + +/* + * Given a kaddr, find the index into the clump_mem_map_base array of the page struct entry + * for the first page of the clump. + */ +#define PLAT_CLUMP_MEM_MAP_INDEX(kaddr) ({long _kmmi=(long)(kaddr); \ + KVADDR_TO_NID(_kmmi) * PLAT_CLUMPS_PER_NODE + \ + SN1_NODE_CLUMP_NUMBER(_kmmi);}) + + +/* + * Calculate a "goal" value to be passed to __alloc_bootmem_node for allocating structures on + * nodes so that they dont alias to the same line in the cache as the previous allocated structure. 
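The address-layout comment in the new mmzone_sn1.h above (bits 43-40 address space, bits 39-33 node, bits 32-0 node offset, with the clump number in bits 32-30) is enough to write a small decoder. The sketch below is built only from that comment; the macro names are invented for the illustration and are not the header's own SN1_* macros.

#include <stdio.h>

/* Field positions taken from the layout comment in mmzone_sn1.h. */
#define EX_AS(p)            (((p) >> 40) & 0xf)    /* bits 43-40 */
#define EX_NODE(p)          (((p) >> 33) & 0x7f)   /* bits 39-33 */
#define EX_NODE_OFFSET(p)   ((p) & ((1UL << 33) - 1))
#define EX_CLUMP(p)         (((p) >> 30) & 0x7)    /* bits 32-30 */
#define EX_CLUMP_OFFSET(p)  ((p) & ((1UL << 30) - 1))

int main(void)
{
        /* Cached memory (AS 0) on node 2, clump 3, offset 0x1000. */
        unsigned long paddr = (2UL << 33) | (3UL << 30) | 0x1000;

        printf("as=%lu node=%lu clump=%lu clump_off=%#lx node_off=%#lx\n",
               EX_AS(paddr), EX_NODE(paddr), EX_CLUMP(paddr),
               EX_CLUMP_OFFSET(paddr), EX_NODE_OFFSET(paddr));
        return 0;
}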
+ * This macro takes an address of the end of previous allocation, rounds it to a page boundary & + * changes the node number. + */ +#define PLAT_BOOTMEM_ALLOC_GOAL(cnode,kaddr) SN1_KADDR(PLAT_PXM_TO_PHYS_NODE_NUMBER(nid_to_pxm_map[cnodeid]), \ + (SN1_NODE_OFFSET(kaddr) + PAGE_SIZE - 1) >> PAGE_SHIFT << PAGE_SHIFT) + + + + +/* + * Convert a proximity domain number (from the ACPI tables) into a physical node number. + */ + +#define PLAT_PXM_TO_PHYS_NODE_NUMBER(pxm) (pxm) + +#endif /* _ASM_IA64_SN_MMZONE_SN1_H */ diff -Nru a/include/asm-ia64/sn/sn1/promlog.h b/include/asm-ia64/sn/sn1/promlog.h --- a/include/asm-ia64/sn/sn1/promlog.h Tue Mar 12 13:58:16 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,85 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -#ifndef _ASM_SN_SN1_PROMLOG_H -#define _ASM_SN_SN1_PROMLOG_H - -#include - -#define PROMLOG_MAGIC 0x504c4f49 -#define PROMLOG_VERSION 1 - -#define PROMLOG_OFFSET_MAGIC 0x10 -#define PROMLOG_OFFSET_VERSION 0x14 -#define PROMLOG_OFFSET_SEQUENCE 0x18 -#define PROMLOG_OFFSET_ENTRY0 0x100 - -#define PROMLOG_ERROR_NONE 0 -#define PROMLOG_ERROR_PROM -1 -#define PROMLOG_ERROR_MAGIC -2 -#define PROMLOG_ERROR_CORRUPT -3 -#define PROMLOG_ERROR_BOL -4 -#define PROMLOG_ERROR_EOL -5 -#define PROMLOG_ERROR_POS -6 -#define PROMLOG_ERROR_REPLACE -7 -#define PROMLOG_ERROR_COMPACT -8 -#define PROMLOG_ERROR_FULL -9 -#define PROMLOG_ERROR_ARG -10 -#define PROMLOG_ERROR_UNUSED -11 - -#define PROMLOG_TYPE_UNUSED 0xf -#define PROMLOG_TYPE_LOG 3 -#define PROMLOG_TYPE_LIST 2 -#define PROMLOG_TYPE_VAR 1 -#define PROMLOG_TYPE_DELETED 0 - -#define PROMLOG_TYPE_ANY 98 -#define PROMLOG_TYPE_INVALID 99 - -#define PROMLOG_KEY_MAX 14 -#define PROMLOG_VALUE_MAX 47 -#define PROMLOG_CPU_MAX 4 - -typedef struct promlog_header_s { - unsigned int unused[4]; - unsigned int magic; - unsigned int version; - unsigned int sequence; -} promlog_header_t; - -typedef unsigned int promlog_pos_t; - -typedef struct promlog_ent_s { /* PROM individual entry */ - uint type : 4; - uint cpu_num : 4; - char key[PROMLOG_KEY_MAX + 1]; - - char value[PROMLOG_VALUE_MAX + 1]; - -} promlog_ent_t; - -typedef struct promlog_s { /* Activation handle */ - fprom_t f; - int sector_base; - int cpu_num; - - int active; /* Active sector, 0 or 1 */ - - promlog_pos_t log_start; - promlog_pos_t log_end; - - promlog_pos_t alt_start; - promlog_pos_t alt_end; - - promlog_pos_t pos; - promlog_ent_t ent; -} promlog_t; - -#endif /* _ASM_SN_SN1_PROMLOG_H */ diff -Nru a/include/asm-ia64/sn/sn1/router.h b/include/asm-ia64/sn/sn1/router.h --- a/include/asm-ia64/sn/sn1/router.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,670 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -#ifndef _ASM_SN_SN1_ROUTER_H -#define _ASM_SN_SN1_ROUTER_H - -/* - * Router Register definitions - * - * Macro argument _L always stands for a link number (1 to 8, inclusive). 
- */ - -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) - -#include -#include -#include - -typedef uint64_t router_reg_t; - -#define MAX_ROUTERS 64 - -#define MAX_ROUTER_PATH 80 - -#define ROUTER_REG_CAST (volatile router_reg_t *) -#define PS_UINT_CAST (__psunsigned_t) -#define UINT64_CAST (uint64_t) -typedef signed char port_no_t; /* Type for router port number */ - -#elif _LANGUAGE_ASSEMBLY - -#define ROUTERREG_CAST -#define PS_UINT_CAST -#define UINT64_CAST - -#endif /* _LANGUAGE_C || _LANGUAGE_C_PLUS_PLUS */ - -#define MAX_ROUTER_PORTS (8) /* Max. number of ports on a router */ - -#define ALL_PORTS ((1 << MAX_ROUTER_PORTS) - 1) /* for 0 based references */ - -#define PORT_INVALID (-1) /* Invalid port number */ - -#define IS_META(_rp) ((_rp)->flags & PCFG_ROUTER_META) - -#define IS_REPEATER(_rp)((_rp)->flags & PCFG_ROUTER_REPEATER) - -/* - * RR_TURN makes a given number of clockwise turns (0 to 7) from an inport - * port to generate an output port. - * - * RR_DISTANCE returns the number of turns necessary (0 to 7) to go from - * an input port (_L1 = 1 to 8) to an output port ( _L2 = 1 to 8). - * - * These are written to work on unsigned data. - */ - -#define RR_TURN(_L, count) ((_L) + (count) > MAX_ROUTER_PORTS ? \ - (_L) + (count) - MAX_ROUTER_PORTS : \ - (_L) + (count)) - -#define RR_DISTANCE(_LS, _LD) ((_LD) >= (_LS) ? \ - (_LD) - (_LS) : \ - (_LD) + MAX_ROUTER_PORTS - (_LS)) - -/* Router register addresses */ - -#define RR_STATUS_REV_ID 0x00000 /* Status register and Revision ID */ -#define RR_PORT_RESET 0x00008 /* Multiple port reset */ -#define RR_PROT_CONF 0x00010 /* Inter-partition protection conf. */ -#define RR_GLOBAL_PORT_DEF 0x00018 /* Global Port definitions */ -#define RR_GLOBAL_PARMS0 0x00020 /* Parameters shared by all 8 ports */ -#define RR_GLOBAL_PARMS1 0x00028 /* Parameters shared by all 8 ports */ -#define RR_DIAG_PARMS 0x00030 /* Parameters for diag. 
testing */ -#define RR_DEBUG_ADDR 0x00038 /* Debug address select - debug port*/ -#define RR_LB_TO_L2 0x00040 /* Local Block to L2 cntrl intf reg */ -#define RR_L2_TO_LB 0x00048 /* L2 cntrl intf to Local Block reg */ -#define RR_JBUS_CONTROL 0x00050 /* read/write timing for JBUS intf */ - -#define RR_SCRATCH_REG0 0x00100 /* Scratch 0 is 64 bits */ -#define RR_SCRATCH_REG1 0x00108 /* Scratch 1 is 64 bits */ -#define RR_SCRATCH_REG2 0x00110 /* Scratch 2 is 64 bits */ -#define RR_SCRATCH_REG3 0x00118 /* Scratch 3 is 1 bit */ -#define RR_SCRATCH_REG4 0x00120 /* Scratch 4 is 1 bit */ - -#define RR_JBUS0(_D) (((_D) & 0x7) << 3 | 0x00200) /* JBUS0 addresses */ -#define RR_JBUS1(_D) (((_D) & 0x7) << 3 | 0x00240) /* JBUS1 addresses */ - -#define RR_SCRATCH_REG0_WZ 0x00500 /* Scratch 0 is 64 bits */ -#define RR_SCRATCH_REG1_WZ 0x00508 /* Scratch 1 is 64 bits */ -#define RR_SCRATCH_REG2_WZ 0x00510 /* Scratch 2 is 64 bits */ -#define RR_SCRATCH_REG3_SZ 0x00518 /* Scratch 3 is 1 bit */ -#define RR_SCRATCH_REG4_SZ 0x00520 /* Scratch 4 is 1 bit */ - -#define RR_VECTOR_HW_BAR(context) (0x08000 | (context)<<3) /* barrier config registers */ -/* Port-specific registers (_L is the link number from 1 to 8) */ - -#define RR_PORT_PARMS(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0000) /* LLP parameters */ -#define RR_STATUS_ERROR(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0008) /* Port-related errs */ -#define RR_CHANNEL_TEST(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0010) /* Port LLP chan test */ -#define RR_RESET_MASK(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0018) /* Remote reset mask */ -#define RR_HISTOGRAM0(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0020) /* Port usage histgrm */ -#define RR_HISTOGRAM1(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0028) /* Port usage histgrm */ -#define RR_HISTOGRAM0_WC(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0030) /* Port usage histgrm */ -#define RR_HISTOGRAM1_WC(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0038) /* Port usage histgrm */ -#define RR_ERROR_CLEAR(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0088) /* Read/clear errors */ -#define RR_GLOBAL_TABLE0(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0100) /* starting address of global table for this port */ -#define RR_GLOBAL_TABLE(_L, _x) (RR_GLOBAL_TABLE0(_L) + ((_x) << 3)) -#define RR_LOCAL_TABLE0(_L) (((_L+1) & 0xe) << 15 | ((_L+1) & 0x1) << 11 | 0x0200) /* starting address of local table for this port */ -#define RR_LOCAL_TABLE(_L, _x) (RR_LOCAL_TABLE0(_L) + ((_x) << 3)) - -#define RR_META_ENTRIES 16 - -#define RR_LOCAL_ENTRIES 128 - -/* - * RR_STATUS_REV_ID mask and shift definitions - */ - -#define RSRI_INPORT_SHFT 52 -#define RSRI_INPORT_MASK (UINT64_CAST 0xf << 52) -#define RSRI_LINKWORKING_BIT(_L) (35 + 2 * (_L)) -#define RSRI_LINKWORKING(_L) (UINT64_CAST 1 << (35 + 2 * (_L))) -#define RSRI_LINKRESETFAIL(_L) (UINT64_CAST 1 << (34 + 2 * (_L))) -#define RSRI_LSTAT_SHFT(_L) (34 + 2 * (_L)) -#define RSRI_LSTAT_MASK(_L) (UINT64_CAST 0x3 << 34 + 2 * (_L)) -#define RSRI_LOCALSBERROR (UINT64_CAST 1 << 35) -#define RSRI_LOCALSTUCK (UINT64_CAST 1 << 34) -#define RSRI_LOCALBADVEC (UINT64_CAST 1 << 33) -#define RSRI_LOCALTAILERR (UINT64_CAST 1 << 32) -#define RSRI_LOCAL_SHFT 32 -#define RSRI_LOCAL_MASK (UINT64_CAST 0xf << 32) -#define RSRI_CHIPREV_SHFT 28 -#define RSRI_CHIPREV_MASK (UINT64_CAST 0xf << 28) -#define RSRI_CHIPID_SHFT 12 -#define RSRI_CHIPID_MASK (UINT64_CAST 0xffff << 12) -#define RSRI_MFGID_SHFT 1 -#define 
RSRI_MFGID_MASK (UINT64_CAST 0x7ff << 1) - -#define RSRI_LSTAT_WENTDOWN 0 -#define RSRI_LSTAT_RESETFAIL 1 -#define RSRI_LSTAT_LINKUP 2 -#define RSRI_LSTAT_NOTUSED 3 - -/* - * RR_PORT_RESET mask definitions - */ - -#define RPRESET_WARM (UINT64_CAST 1 << 9) -#define RPRESET_LINK(_L) (UINT64_CAST 1 << (_L)) -#define RPRESET_LOCAL (UINT64_CAST 1) - -/* - * RR_PROT_CONF mask and shift definitions - */ - -#define RPCONF_DIRCMPDIS_SHFT 13 -#define RPCONF_DIRCMPDIS_MASK (UINT64_CAST 1 << 13) -#define RPCONF_FORCELOCAL (UINT64_CAST 1 << 12) -#define RPCONF_FLOCAL_SHFT 12 -#define RPCONF_METAID_SHFT 8 -#define RPCONF_METAID_MASK (UINT64_CAST 0xf << 8) -#define RPCONF_RESETOK(_L) (UINT64_CAST 1 << ((_L) - 1)) - -/* - * RR_GLOBAL_PORT_DEF mask and shift definitions - */ - -#define RGPD_MGLBLNHBR_ID_SHFT 12 /* -global neighbor ID */ -#define RGPD_MGLBLNHBR_ID_MASK (UINT64_CAST 0xf << 12) -#define RGPD_MGLBLNHBR_VLD_SHFT 11 /* -global neighbor Valid */ -#define RGPD_MGLBLNHBR_VLD_MASK (UINT64_CAST 0x1 << 11) -#define RGPD_MGLBLPORT_SHFT 8 /* -global neighbor Port */ -#define RGPD_MGLBLPORT_MASK (UINT64_CAST 0x7 << 8) -#define RGPD_PGLBLNHBR_ID_SHFT 4 /* +global neighbor ID */ -#define RGPD_PGLBLNHBR_ID_MASK (UINT64_CAST 0xf << 4) -#define RGPD_PGLBLNHBR_VLD_SHFT 3 /* +global neighbor Valid */ -#define RGPD_PGLBLNHBR_VLD_MASK (UINT64_CAST 0x1 << 3) -#define RGPD_PGLBLPORT_SHFT 0 /* +global neighbor Port */ -#define RGPD_PGLBLPORT_MASK (UINT64_CAST 0x7 << 0) - -#define GLBL_PARMS_REGS 2 /* Two Global Parms registers */ - -/* - * RR_GLOBAL_PARMS0 mask and shift definitions - */ - -#define RGPARM0_ARB_VALUE_SHFT 54 /* Local Block Arbitration State */ -#define RGPARM0_ARB_VALUE_MASK (UINT64_CAST 0x7 << 54) -#define RGPARM0_ROTATEARB_SHFT 53 /* Rotate Local Block Arbitration */ -#define RGPARM0_ROTATEARB_MASK (UINT64_CAST 0x1 << 53) -#define RGPARM0_FAIREN_SHFT 52 /* Fairness logic Enable */ -#define RGPARM0_FAIREN_MASK (UINT64_CAST 0x1 << 52) -#define RGPARM0_LOCGNTTO_SHFT 40 /* Local grant timeout */ -#define RGPARM0_LOCGNTTO_MASK (UINT64_CAST 0xfff << 40) -#define RGPARM0_DATELINE_SHFT 38 /* Dateline crossing router */ -#define RGPARM0_DATELINE_MASK (UINT64_CAST 0x1 << 38) -#define RGPARM0_MAXRETRY_SHFT 28 /* Max retry count */ -#define RGPARM0_MAXRETRY_MASK (UINT64_CAST 0x3ff << 28) -#define RGPARM0_URGWRAP_SHFT 20 /* Urgent wrap */ -#define RGPARM0_URGWRAP_MASK (UINT64_CAST 0xff << 20) -#define RGPARM0_DEADLKTO_SHFT 16 /* Deadlock timeout */ -#define RGPARM0_DEADLKTO_MASK (UINT64_CAST 0xf << 16) -#define RGPARM0_URGVAL_SHFT 12 /* Urgent value */ -#define RGPARM0_URGVAL_MASK (UINT64_CAST 0xf << 12) -#define RGPARM0_VCHSELEN_SHFT 11 /* VCH_SEL_EN */ -#define RGPARM0_VCHSELEN_MASK (UINT64_CAST 0x1 << 11) -#define RGPARM0_LOCURGTO_SHFT 9 /* Local urgent timeout */ -#define RGPARM0_LOCURGTO_MASK (UINT64_CAST 0x3 << 9) -#define RGPARM0_TAILVAL_SHFT 5 /* Tail value */ -#define RGPARM0_TAILVAL_MASK (UINT64_CAST 0xf << 5) -#define RGPARM0_CLOCK_SHFT 1 /* Global clock select */ -#define RGPARM0_CLOCK_MASK (UINT64_CAST 0xf << 1) -#define RGPARM0_BYPEN_SHFT 0 -#define RGPARM0_BYPEN_MASK (UINT64_CAST 1) /* Bypass enable */ - -/* - * RR_GLOBAL_PARMS1 shift and mask definitions - */ - -#define RGPARM1_TTOWRAP_SHFT 12 /* Tail timeout wrap */ -#define RGPARM1_TTOWRAP_MASK (UINT64_CAST 0xfffff << 12) -#define RGPARM1_AGERATE_SHFT 8 /* Age rate */ -#define RGPARM1_AGERATE_MASK (UINT64_CAST 0xf << 8) -#define RGPARM1_JSWSTAT_SHFT 0 /* JTAG Sw Register bits */ -#define RGPARM1_JSWSTAT_MASK (UINT64_CAST 0xff << 0) - -/* - * 
RR_DIAG_PARMS mask and shift definitions - */ - -#define RDPARM_ABSHISTOGRAM (UINT64_CAST 1 << 17) /* Absolute histgrm */ -#define RDPARM_DEADLOCKRESET (UINT64_CAST 1 << 16) /* Reset on deadlck */ -#define RDPARM_DISABLE(_L) (UINT64_CAST 1 << ((_L) + 7)) -#define RDPARM_SENDERROR(_L) (UINT64_CAST 1 << ((_L) - 1)) - -/* - * RR_DEBUG_ADDR mask and shift definitions - */ - -#define RDA_DATA_SHFT 10 /* Observed debug data */ -#define RDA_DATA_MASK (UINT64_CAST 0xffff << 10) -#define RDA_ADDR_SHFT 0 /* debug address for data */ -#define RDA_ADDR_MASK (UINT64_CAST 0x3ff << 0) - -/* - * RR_LB_TO_L2 mask and shift definitions - */ - -#define RLBTOL2_DATA_VLD_SHFT 32 /* data is valid for JTAG controller */ -#define RLBTOL2_DATA_VLD_MASK (UINT64_CAST 0x1 << 32) -#define RLBTOL2_DATA_SHFT 0 /* data bits for JTAG controller */ -#define RLBTOL2_DATA_MASK (UINT64_CAST 0xffffffff) - -/* - * RR_L2_TO_LB mask and shift definitions - */ - -#define RL2TOLB_DATA_VLD_SHFT 33 /* data is valid from JTAG controller */ -#define RL2TOLB_DATA_VLD_MASK (UINT64_CAST 0x1 << 33) -#define RL2TOLB_PARITY_SHFT 32 /* sw implemented parity for data */ -#define RL2TOLB_PARITY_MASK (UINT64_CAST 0x1 << 32) -#define RL2TOLB_DATA_SHFT 0 /* data bits from JTAG controller */ -#define RL2TOLB_DATA_MASK (UINT64_CAST 0xffffffff) - -/* - * RR_JBUS_CONTROL mask and shift definitions - */ - -#define RJC_POS_BITS_SHFT 20 /* Router position bits */ -#define RJC_POS_BITS_MASK (UINT64_CAST 0xf << 20) -#define RJC_RD_DATA_STROBE_SHFT 16 /* count when read data is strobed in */ -#define RJC_RD_DATA_STROBE_MASK (UINT64_CAST 0xf << 16) -#define RJC_WE_OE_HOLD_SHFT 8 /* time OE or WE is held */ -#define RJC_WE_OE_HOLD_MASK (UINT64_CAST 0xff << 8) -#define RJC_ADDR_SET_HLD_SHFT 0 /* time address driven around OE/WE */ -#define RJC_ADDR_SET_HLD_MASK (UINT64_CAST 0xff) - -/* - * RR_SCRATCH_REGx mask and shift definitions - * note: these fields represent a software convention, and are not - * understood/interpreted by the hardware. 
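The RR_TURN() / RR_DISTANCE() macros earlier in this removed router.h carry a small invariant worth spelling out: turning an input port by the distance to an output port lands exactly on that output port, with ports numbered 1 to 8 and the wrap-around handled on unsigned values. A quick standalone check:

#include <stdio.h>

#define MAX_ROUTER_PORTS 8

/* Copied from the router.h text above. */
#define RR_TURN(_L, count)     ((_L) + (count) > MAX_ROUTER_PORTS ? \
                                (_L) + (count) - MAX_ROUTER_PORTS : \
                                (_L) + (count))
#define RR_DISTANCE(_LS, _LD)  ((_LD) >= (_LS) ? \
                                (_LD) - (_LS) : \
                                (_LD) + MAX_ROUTER_PORTS - (_LS))

int main(void)
{
        unsigned int src, dst, ok = 1;

        for (src = 1; src <= MAX_ROUTER_PORTS; src++)
                for (dst = 1; dst <= MAX_ROUTER_PORTS; dst++)
                        if (RR_TURN(src, RR_DISTANCE(src, dst)) != dst)
                                ok = 0;
        printf("RR_TURN/RR_DISTANCE round-trip %s\n", ok ? "holds" : "fails");
        return 0;
}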
- */ - -#define RSCR0_BOOTED_SHFT 63 -#define RSCR0_BOOTED_MASK (UINT64_CAST 0x1 << RSCR0_BOOTED_SHFT) -#define RSCR0_LOCALID_SHFT 56 -#define RSCR0_LOCALID_MASK (UINT64_CAST 0x7f << RSCR0_LOCALID_SHFT) -#define RSCR0_UNUSED_SHFT 48 -#define RSCR0_UNUSED_MASK (UINT64_CAST 0xff << RSCR0_UNUSED_SHFT) -#define RSCR0_NIC_SHFT 0 -#define RSCR0_NIC_MASK (UINT64_CAST 0xffffffffffff) - -#define RSCR1_MODID_SHFT 0 -#define RSCR1_MODID_MASK (UINT64_CAST 0xffff) - -/* - * RR_VECTOR_HW_BAR mask and shift definitions - */ - -#define BAR_TX_SHFT 27 /* Barrier in trans(m)it when read */ -#define BAR_TX_MASK (UINT64_CAST 1 << BAR_TX_SHFT) -#define BAR_VLD_SHFT 26 /* Valid Configuration */ -#define BAR_VLD_MASK (UINT64_CAST 1 << BAR_VLD_SHFT) -#define BAR_SEQ_SHFT 24 /* Sequence number */ -#define BAR_SEQ_MASK (UINT64_CAST 3 << BAR_SEQ_SHFT) -#define BAR_LEAFSTATE_SHFT 18 /* Leaf State */ -#define BAR_LEAFSTATE_MASK (UINT64_CAST 0x3f << BAR_LEAFSTATE_SHFT) -#define BAR_PARENT_SHFT 14 /* Parent Port */ -#define BAR_PARENT_MASK (UINT64_CAST 0xf << BAR_PARENT_SHFT) -#define BAR_CHILDREN_SHFT 6 /* Child Select port bits */ -#define BAR_CHILDREN_MASK (UINT64_CAST 0xff << BAR_CHILDREN_SHFT) -#define BAR_LEAFCOUNT_SHFT 0 /* Leaf Count to trigger parent */ -#define BAR_LEAFCOUNT_MASK (UINT64_CAST 0x3f) - -/* - * RR_PORT_PARMS(_L) mask and shift definitions - */ - -#define RPPARM_MIPRESETEN_SHFT 29 /* Message In Progress reset enable */ -#define RPPARM_MIPRESETEN_MASK (UINT64_CAST 0x1 << 29) -#define RPPARM_UBAREN_SHFT 28 /* Enable user barrier requests */ -#define RPPARM_UBAREN_MASK (UINT64_CAST 0x1 << 28) -#define RPPARM_OUTPDTO_SHFT 24 /* Output Port Deadlock TO value */ -#define RPPARM_OUTPDTO_MASK (UINT64_CAST 0xf << 24) -#define RPPARM_PORTMATE_SHFT 21 /* Port Mate for the port */ -#define RPPARM_PORTMATE_MASK (UINT64_CAST 0x7 << 21) -#define RPPARM_HISTEN_SHFT 20 /* Histogram counter enable */ -#define RPPARM_HISTEN_MASK (UINT64_CAST 0x1 << 20) -#define RPPARM_HISTSEL_SHFT 18 -#define RPPARM_HISTSEL_MASK (UINT64_CAST 0x3 << 18) -#define RPPARM_DAMQHS_SHFT 16 -#define RPPARM_DAMQHS_MASK (UINT64_CAST 0x3 << 16) -#define RPPARM_NULLTO_SHFT 10 -#define RPPARM_NULLTO_MASK (UINT64_CAST 0x3f << 10) -#define RPPARM_MAXBURST_SHFT 0 -#define RPPARM_MAXBURST_MASK (UINT64_CAST 0x3ff) - -/* - * NOTE: Normally the kernel tracks only UTILIZATION statistics. - * The other 2 should not be used, except during any experimentation - * with the router. - */ -#define RPPARM_HISTSEL_AGE 0 /* Histogram age characterization. */ -#define RPPARM_HISTSEL_UTIL 1 /* Histogram link utilization */ -#define RPPARM_HISTSEL_DAMQ 2 /* Histogram DAMQ characterization. 
*/ - -/* - * RR_STATUS_ERROR(_L) and RR_ERROR_CLEAR(_L) mask and shift definitions - */ -#define RSERR_POWERNOK (UINT64_CAST 1 << 38) -#define RSERR_PORT_DEADLOCK (UINT64_CAST 1 << 37) -#define RSERR_WARMRESET (UINT64_CAST 1 << 36) -#define RSERR_LINKRESET (UINT64_CAST 1 << 35) -#define RSERR_RETRYTIMEOUT (UINT64_CAST 1 << 34) -#define RSERR_FIFOOVERFLOW (UINT64_CAST 1 << 33) -#define RSERR_ILLEGALPORT (UINT64_CAST 1 << 32) -#define RSERR_DEADLOCKTO_SHFT 28 -#define RSERR_DEADLOCKTO_MASK (UINT64_CAST 0xf << 28) -#define RSERR_RECVTAILTO_SHFT 24 -#define RSERR_RECVTAILTO_MASK (UINT64_CAST 0xf << 24) -#define RSERR_RETRYCNT_SHFT 16 -#define RSERR_RETRYCNT_MASK (UINT64_CAST 0xff << 16) -#define RSERR_CBERRCNT_SHFT 8 -#define RSERR_CBERRCNT_MASK (UINT64_CAST 0xff << 8) -#define RSERR_SNERRCNT_SHFT 0 -#define RSERR_SNERRCNT_MASK (UINT64_CAST 0xff << 0) - - -#define PORT_STATUS_UP (1 << 0) /* Router link up */ -#define PORT_STATUS_FENCE (1 << 1) /* Router link fenced */ -#define PORT_STATUS_RESETFAIL (1 << 2) /* Router link didnot - * come out of reset */ -#define PORT_STATUS_DISCFAIL (1 << 3) /* Router link failed after - * out of reset but before - * router tables were - * programmed - */ -#define PORT_STATUS_KERNFAIL (1 << 4) /* Router link failed - * after reset and the - * router tables were - * programmed - */ -#define PORT_STATUS_UNDEF (1 << 5) /* Unable to pinpoint - * why the router link - * went down - */ -#define PROBE_RESULT_BAD (-1) /* Set if any of the router - * links failed after reset - */ -#define PROBE_RESULT_GOOD (0) /* Set if all the router links - * which came out of reset - * are up - */ - -/* Should be enough for 256 CPUs */ -#define MAX_RTR_BREADTH 64 /* Max # of routers possible */ - -/* Get the require set of bits in a var. corr to a sequence of bits */ -#define GET_FIELD(var, fname) \ - ((var) >> fname##_SHFT & fname##_MASK >> fname##_SHFT) -/* Set the require set of bits in a var. corr to a sequence of bits */ -#define SET_FIELD(var, fname, fval) \ - ((var) = (var) & ~fname##_MASK | (uint64_t) (fval) << fname##_SHFT) - - -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) - -typedef struct router_map_ent_s { - uint64_t nic; - moduleid_t module; - slotid_t slot; -} router_map_ent_t; - -struct rr_status_error_fmt { - uint64_t rserr_unused : 30, - rserr_fifooverflow : 1, - rserr_illegalport : 1, - rserr_deadlockto : 4, - rserr_recvtailto : 4, - rserr_retrycnt : 8, - rserr_cberrcnt : 8, - rserr_snerrcnt : 8; -}; - -/* - * This type is used to store "absolute" counts of router events - */ -typedef int router_count_t; - -/* All utilizations are on a scale from 0 - 1023. 
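The GET_FIELD() / SET_FIELD() helpers above rely on token pasting: handing them a field name such as RSCR0_LOCALID makes the preprocessor pick up the matching _SHFT and _MASK defines. A standalone illustration with a made-up field of the same shape as RSCR0_LOCALID (7 bits starting at bit 56):

#include <stdio.h>
#include <stdint.h>

/* Same shape as the router.h helpers, with uint64_t for hubreg_t. */
#define GET_FIELD(var, fname) \
        ((var) >> fname##_SHFT & fname##_MASK >> fname##_SHFT)
#define SET_FIELD(var, fname, fval) \
        ((var) = (var) & ~fname##_MASK | (uint64_t)(fval) << fname##_SHFT)

/* Made-up field mirroring RSCR0_LOCALID: 7 bits starting at bit 56. */
#define EX_LOCALID_SHFT 56
#define EX_LOCALID_MASK ((uint64_t)0x7f << EX_LOCALID_SHFT)

int main(void)
{
        uint64_t reg = 0;

        SET_FIELD(reg, EX_LOCALID, 0x2a);   /* pastes EX_LOCALID_SHFT/_MASK */
        printf("reg=%#llx localid=%#llx\n",
               (unsigned long long)reg,
               (unsigned long long)GET_FIELD(reg, EX_LOCALID));
        return 0;
}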
*/ -#define RP_BYPASS_UTIL 0 -#define RP_RCV_UTIL 1 -#define RP_SEND_UTIL 2 -#define RP_TOTAL_PKTS 3 /* Free running clock/packet counter */ - -#define RP_NUM_UTILS 3 - -#define RP_HIST_REGS 2 -#define RP_NUM_BUCKETS 4 -#define RP_HIST_TYPES 3 - -#define RP_AGE0 0 -#define RP_AGE1 1 -#define RP_AGE2 2 -#define RP_AGE3 3 - - -#define RR_UTIL_SCALE 1024 - -/* - * Router port-oriented information - */ -typedef struct router_port_info_s { - router_reg_t rp_histograms[RP_HIST_REGS];/* Port usage info */ - router_reg_t rp_port_error; /* Port error info */ - router_count_t rp_retry_errors; /* Total retry errors */ - router_count_t rp_sn_errors; /* Total sn errors */ - router_count_t rp_cb_errors; /* Total cb errors */ - int rp_overflows; /* Total count overflows */ - int rp_excess_err; /* Port has excessive errors */ - ushort rp_util[RP_NUM_BUCKETS];/* Port utilization */ -} router_port_info_t; - -#define ROUTER_INFO_VERSION 7 - -struct lboard_s; - -/* - * Router information - */ -typedef struct router_info_s { - char ri_version; /* structure version */ - cnodeid_t ri_cnode; /* cnode of its legal guardian hub */ - nasid_t ri_nasid; /* Nasid of same */ - char ri_ledcache; /* Last LED bitmap */ - char ri_leds; /* Current LED bitmap */ - char ri_portmask; /* Active port bitmap */ - router_reg_t ri_stat_rev_id; /* Status rev ID value */ - net_vec_t ri_vector; /* vector from guardian to router */ - int ri_writeid; /* router's vector write ID */ - int64_t ri_timebase; /* Time of first sample */ - int64_t ri_timestamp; /* Time of last sample */ - router_port_info_t ri_port[MAX_ROUTER_PORTS]; /* per port info */ - moduleid_t ri_module; /* Which module are we in? */ - slotid_t ri_slotnum; /* Which slot are we in? */ - router_reg_t ri_glbl_parms[GLBL_PARMS_REGS]; - /* Global parms0&1 register contents*/ - devfs_handle_t ri_vertex; /* hardware graph vertex */ - router_reg_t ri_prot_conf; /* protection config. register */ - int64_t ri_per_minute; /* Ticks per minute */ - - /* - * Everything below here is for kernel use only and may change at - * at any time with or without a change in teh revision number - * - * Any pointers or things that come and go with DEBUG must go at - * the bottom of the structure, below the user stuff. - */ - char ri_hist_type; /* histogram type */ - devfs_handle_t ri_guardian; /* guardian node for the router */ - int64_t ri_last_print; /* When did we last print */ - char ri_print; /* Should we print */ - char ri_just_blink; /* Should we blink the LEDs */ - -#ifdef DEBUG - int64_t ri_deltatime; /* Time it took to sample */ -#endif - spinlock_t ri_lock; /* Lock for access to router info */ - net_vec_t *ri_vecarray; /* Pointer to array of vectors */ - struct lboard_s *ri_brd; /* Pointer to board structure */ - char * ri_name; /* This board's hwg path */ - unsigned char ri_port_maint[MAX_ROUTER_PORTS]; /* should we send a - message to availmon */ -} router_info_t; - - -/* Router info location specifiers */ - -#define RIP_PROMLOG 2 /* Router info in promlog */ -#define RIP_CONSOLE 4 /* Router info on console */ - -#define ROUTER_INFO_PRINT(_rip,_where) (_rip->ri_print |= _where) - /* Set the field used to check if a - * router info can be printed - */ -#define IS_ROUTER_INFO_PRINTED(_rip,_where) \ - (_rip->ri_print & _where) - /* Was the router info printed to - * the given location (_where) ? - * Mainly used to prevent duplicate - * router error states. 
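
A stand-alone user-space sketch (not part of the patch) of how the GET_FIELD/SET_FIELD helpers above are meant to be used: they paste "_SHFT"/"_MASK" onto the field name, so any register field that follows the FOO_SHFT/FOO_MASK naming convention in this header can be read or written generically. RSERR_RETRYCNT is one of the pairs defined earlier; UINT64_CAST is redefined locally here only so the sketch compiles on its own.

#include <stdint.h>
#include <stdio.h>

#define UINT64_CAST (uint64_t)

/* one field of RR_STATUS_ERROR, same values as in the header */
#define RSERR_RETRYCNT_SHFT 16
#define RSERR_RETRYCNT_MASK (UINT64_CAST 0xff << 16)

/* copied from the header above */
#define GET_FIELD(var, fname) \
        ((var) >> fname##_SHFT & fname##_MASK >> fname##_SHFT)
#define SET_FIELD(var, fname, fval) \
        ((var) = (var) & ~fname##_MASK | (uint64_t) (fval) << fname##_SHFT)

int main(void)
{
        uint64_t reg = 0;

        SET_FIELD(reg, RSERR_RETRYCNT, 0x2a);   /* store a retry count of 42 */
        printf("reg = 0x%016llx, retrycnt = %llu\n",
               (unsigned long long)reg,
               (unsigned long long)GET_FIELD(reg, RSERR_RETRYCNT));
        return 0;
}
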
- */ -#define ROUTER_INFO_LOCK(_rip,_s) _s = mutex_spinlock(&(_rip->ri_lock)) - /* Take the lock on router info - * to gain exclusive access - */ -#define ROUTER_INFO_UNLOCK(_rip,_s) mutex_spinunlock(&(_rip->ri_lock),_s) - /* Release the lock on router info */ -/* - * Router info hanging in the nodepda - */ -typedef struct nodepda_router_info_s { - devfs_handle_t router_vhdl; /* vertex handle of the router */ - short router_port; /* port thru which we entered */ - short router_portmask; - moduleid_t router_module; /* module in which router is there */ - slotid_t router_slot; /* router slot */ - unsigned char router_type; /* kind of router */ - net_vec_t router_vector; /* vector from the guardian node */ - - router_info_t *router_infop; /* info hanging off the hwg vertex */ - struct nodepda_router_info_s *router_next; - /* pointer to next element */ -} nodepda_router_info_t; - -#define ROUTER_NAME_SIZE 20 /* Max size of a router name */ - -#define NORMAL_ROUTER_NAME "normal_router" -#define NULL_ROUTER_NAME "null_router" -#define META_ROUTER_NAME "meta_router" -#define REPEATER_ROUTER_NAME "repeater_router" -#define UNKNOWN_ROUTER_NAME "unknown_router" - -/* The following definitions are needed by the router traversing - * code either using the hardware graph or using vector operations. - */ -/* Structure of the router queue element */ -typedef struct router_elt_s { - union { - /* queue element structure during router probing */ - struct { - /* number-in-a-can (unique) for the router */ - nic_t nic; - /* vector route from the master hub to - * this router. - */ - net_vec_t vec; - /* port status */ - uint64_t status; - char port_status[MAX_ROUTER_PORTS + 1]; - } r_elt; - /* queue element structure during router guardian - * assignment - */ - struct { - /* vertex handle for the router */ - devfs_handle_t vhdl; - /* guardian for this router */ - devfs_handle_t guard; - /* vector router from the guardian to the router */ - net_vec_t vec; - } k_elt; - } u; - /* easy to use port status interpretation */ -} router_elt_t; - -/* structure of the router queue */ - -typedef struct router_queue_s { - char head; /* Point where a queue element is inserted */ - char tail; /* Point where a queue element is removed */ - int type; - router_elt_t array[MAX_RTR_BREADTH]; - /* Entries for queue elements */ -} router_queue_t; - - -#endif /* _LANGUAGE_C || _LANGUAGE_C_PLUS_PLUS */ - -/* - * RR_HISTOGRAM(_L) mask and shift definitions - * There are two 64 bit histogram registers, so the following macros take - * into account dealing with an array of 4 32 bit values indexed by _x - */ - -#define RHIST_BUCKET_SHFT(_x) (32 * ((_x) & 0x1)) -#define RHIST_BUCKET_MASK(_x) (UINT64_CAST 0xffffffff << RHIST_BUCKET_SHFT((_x) & 0x1)) -#define RHIST_GET_BUCKET(_x, _reg) \ - ((RHIST_BUCKET_MASK(_x) & ((_reg)[(_x) >> 1])) >> RHIST_BUCKET_SHFT(_x)) - -/* - * RR_RESET_MASK(_L) mask and shift definitions - */ - -#define RRM_RESETOK(_L) (UINT64_CAST 1 << ((_L) - 1)) -#define RRM_RESETOK_ALL ALL_PORTS - -/* - * RR_META_TABLE(_x) and RR_LOCAL_TABLE(_x) mask and shift definitions - */ - -#define RTABLE_SHFT(_L) (4 * ((_L) - 1)) -#define RTABLE_MASK(_L) (UINT64_CAST 0x7 << RTABLE_SHFT(_L)) - - -#define ROUTERINFO_STKSZ 4096 - -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) -#if defined(_LANGUAGE_C_PLUS_PLUS) -extern "C" { -#endif - -int router_reg_read(router_info_t *rip, int regno, router_reg_t *val); -int router_reg_write(router_info_t *rip, int regno, router_reg_t val); -int router_get_info(devfs_handle_t routerv, 
router_info_t *, int); -int router_init(cnodeid_t cnode,int writeid, nodepda_router_info_t *npda_rip); -int router_set_leds(router_info_t *rip); -void router_print_state(router_info_t *rip, int level, - void (*pf)(int, char *, ...),int print_where); -void capture_router_stats(router_info_t *rip); - - -int probe_routers(void); -void get_routername(unsigned char brd_type,char *rtrname); -void router_guardians_set(devfs_handle_t hwgraph_root); -int router_hist_reselect(router_info_t *, int64_t); -#if defined(_LANGUAGE_C_PLUS_PLUS) -} -#endif -#endif /* _LANGUAGE_C || _LANGUAGE_C_PLUS_PLUS */ - -#endif /* _ASM_SN_SN1_ROUTER_H */ diff -Nru a/include/asm-ia64/sn/sn1/slotnum.h b/include/asm-ia64/sn/sn1/slotnum.h --- a/include/asm-ia64/sn/sn1/slotnum.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn1/slotnum.h Tue Mar 12 13:58:15 2002 @@ -4,12 +4,11 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SN1_SLOTNUM_H -#define _ASM_SN_SN1_SLOTNUM_H +#ifndef _ASM_IA64_SN_SN1_SLOTNUM_H +#define _ASM_IA64_SN_SN1_SLOTNUM_H #define SLOTNUM_MAXLENGTH 16 @@ -85,4 +84,4 @@ #endif /* __KERNEL__ */ -#endif /* _ASM_SN_SN1_SLOTNUM_H */ +#endif /* _ASM_IA64_SN_SN1_SLOTNUM_H */ diff -Nru a/include/asm-ia64/sn/sn1/sn1.h b/include/asm-ia64/sn/sn1/sn1.h --- a/include/asm-ia64/sn/sn1/sn1.h Tue Mar 12 13:58:14 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,34 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -/* - * sn1.h -- hardware specific defines for sn1 boards - * The defines used here are used to limit the size of - * various datastructures in the PROM. eg. KLCFGINFO, MPCONF etc. - */ - -#ifndef _ASM_SN_SN1_SN1_H -#define _ASM_SN_SN1_SN1_H - -extern xwidgetnum_t hub_widget_id(nasid_t); -extern nasid_t get_nasid(void); -extern int get_slice(void); -extern int is_fine_dirmode(void); -extern hubreg_t get_hub_chiprev(nasid_t nasid); -extern hubreg_t get_region(cnodeid_t); -extern hubreg_t nasid_to_region(nasid_t); -extern int verify_snchip_rev(void); -extern void ni_reset_port(void); - -#ifdef SN1_USE_POISON_BITS -extern int hub_bte_poison_ok(void); -#endif /* SN1_USE_POISON_BITS */ - -#endif /* _ASM_SN_SN1_SN1_H */ diff -Nru a/include/asm-ia64/sn/sn1/sn_private.h b/include/asm-ia64/sn/sn1/sn_private.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/sn_private.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,292 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. 
+ */ +#ifndef _ASM_IA64_SN_SN1_SN_PRIVATE_H +#define _ASM_IA64_SN_SN1_SN_PRIVATE_H + +#include +#include +#include + +extern nasid_t master_nasid; + +/* promif.c */ +#ifdef LATER +extern cpuid_t cpu_node_probe(cpumask_t *cpumask, int *numnodes); +#endif +extern void he_arcs_set_vectors(void); +extern void mem_init(void); +#ifdef LATER +extern int cpu_enabled(cpuid_t); +#endif +extern void cpu_unenable(cpuid_t); +extern nasid_t get_lowest_nasid(void); +extern __psunsigned_t get_master_bridge_base(void); +extern void set_master_bridge_base(void); +extern int check_nasid_equiv(nasid_t, nasid_t); +extern nasid_t get_console_nasid(void); +extern char get_console_pcislot(void); +#ifdef LATER +extern void intr_init_vecblk(nodepda_t *npda, cnodeid_t, int); +#endif + +extern int is_master_nasid_widget(nasid_t test_nasid, xwidgetnum_t test_wid); + +/* memsupport.c */ +extern void poison_state_alter_range(__psunsigned_t start, int len, int poison); +extern int memory_present(paddr_t); +extern int memory_read_accessible(paddr_t); +extern int memory_write_accessible(paddr_t); +extern void memory_set_access(paddr_t, int, int); +extern void show_dir_state(paddr_t, void (*)(char *, ...)); +extern void check_dir_state(nasid_t, int, void (*)(char *, ...)); +extern void set_dir_owner(paddr_t, int); +extern void set_dir_state(paddr_t, int); +extern void set_dir_state_POISONED(paddr_t); +extern void set_dir_state_UNOWNED(paddr_t); +extern int is_POISONED_dir_state(paddr_t); +extern int is_UNOWNED_dir_state(paddr_t); +extern void get_dir_ent(paddr_t paddr, int *state, + uint64_t *vec_ptr, hubreg_t *elo); + +/* intr.c */ +extern int intr_reserve_level(cpuid_t cpu, int level, int err, devfs_handle_t owner_dev, char *name); +extern void intr_unreserve_level(cpuid_t cpu, int level); +extern int intr_connect_level(cpuid_t cpu, int bit, ilvl_t mask_no, + intr_func_t intr_prefunc); +extern int intr_disconnect_level(cpuid_t cpu, int bit); +extern cpuid_t intr_heuristic(devfs_handle_t dev, device_desc_t dev_desc, + int req_bit,int intr_resflags,devfs_handle_t owner_dev, + char *intr_name,int *resp_bit); +extern void intr_block_bit(cpuid_t cpu, int bit); +extern void intr_unblock_bit(cpuid_t cpu, int bit); +extern void setrtvector(intr_func_t); +extern void install_cpuintr(cpuid_t cpu); +extern void install_dbgintr(cpuid_t cpu); +extern void install_tlbintr(cpuid_t cpu); +extern void hub_migrintr_init(cnodeid_t /*cnode*/); +extern int cause_intr_connect(int level, intr_func_t handler, uint intr_spl_mask); +extern int cause_intr_disconnect(int level); +extern void intr_reserve_hardwired(cnodeid_t); +extern void intr_clear_all(nasid_t); +extern void intr_dumpvec(cnodeid_t cnode, void (*pf)(char *, ...)); + +/* error_dump.c */ +extern char *hub_rrb_err_type[]; +extern char *hub_wrb_err_type[]; + +void nmi_dump(void); +void install_cpu_nmi_handler(int slice); + +/* klclock.c */ +extern void hub_rtc_init(cnodeid_t); + +/* bte.c */ +void bte_lateinit(void); +void bte_wait_for_xfer_completion(void *); + +/* klgraph.c */ +void klhwg_add_all_nodes(devfs_handle_t); +void klhwg_add_all_modules(devfs_handle_t); + +/* klidbg.c */ +void install_klidbg_functions(void); + +/* klnuma.c */ +extern void replicate_kernel_text(int numnodes); +extern __psunsigned_t get_freemem_start(cnodeid_t cnode); +extern void setup_replication_mask(int maxnodes); + +/* init.c */ +extern cnodeid_t get_compact_nodeid(void); /* get compact node id */ +extern void init_platform_nodepda(nodepda_t *npda, cnodeid_t node); +extern void init_platform_pda(cpuid_t 
cpu); +extern void per_cpu_init(void); +#ifdef LATER +extern cpumask_t boot_cpumask; +#endif +extern int is_fine_dirmode(void); +extern void update_node_information(cnodeid_t); + +#ifdef LATER +/* clksupport.c */ +extern void early_counter_intr(eframe_t *); +#endif + +/* hubio.c */ +extern void hubio_init(void); +extern void hub_merge_clean(nasid_t nasid); +extern void hub_set_piomode(nasid_t nasid, int conveyor); + +/* huberror.c */ +extern void hub_error_init(cnodeid_t); +extern void dump_error_spool(cpuid_t cpu, void (*pf)(char *, ...)); +extern void hubni_error_handler(char *, int); +extern int check_ni_errors(void); + +/* Used for debugger to signal upper software a breakpoint has taken place */ + +extern void *debugger_update; +extern __psunsigned_t debugger_stopped; + +/* + * IP27 piomap, created by hub_pio_alloc. + * xtalk_info MUST BE FIRST, since this structure is cast to a + * xtalk_piomap_s by generic xtalk routines. + */ +struct hub_piomap_s { + struct xtalk_piomap_s hpio_xtalk_info;/* standard crosstalk pio info */ + devfs_handle_t hpio_hub; /* which hub's mapping registers are set up */ + short hpio_holdcnt; /* count of current users of bigwin mapping */ + char hpio_bigwin_num;/* if big window map, which one */ + int hpio_flags; /* defined below */ +}; +/* hub_piomap flags */ +#define HUB_PIOMAP_IS_VALID 0x1 +#define HUB_PIOMAP_IS_BIGWINDOW 0x2 +#define HUB_PIOMAP_IS_FIXED 0x4 + +#define hub_piomap_xt_piomap(hp) (&hp->hpio_xtalk_info) +#define hub_piomap_hub_v(hp) (hp->hpio_hub) +#define hub_piomap_winnum(hp) (hp->hpio_bigwin_num) + +#if TBD + /* Ensure that hpio_xtalk_info is first */ + #assert (&(((struct hub_piomap_s *)0)->hpio_xtalk_info) == 0) +#endif + + +/* + * IP27 dmamap, created by hub_pio_alloc. + * xtalk_info MUST BE FIRST, since this structure is cast to a + * xtalk_dmamap_s by generic xtalk routines. + */ +struct hub_dmamap_s { + struct xtalk_dmamap_s hdma_xtalk_info;/* standard crosstalk dma info */ + devfs_handle_t hdma_hub; /* which hub we go through */ + int hdma_flags; /* defined below */ +}; +/* hub_dmamap flags */ +#define HUB_DMAMAP_IS_VALID 0x1 +#define HUB_DMAMAP_USED 0x2 +#define HUB_DMAMAP_IS_FIXED 0x4 + +#if TBD + /* Ensure that hdma_xtalk_info is first */ + #assert (&(((struct hub_dmamap_s *)0)->hdma_xtalk_info) == 0) +#endif + +/* + * IP27 interrupt handle, created by hub_intr_alloc. + * xtalk_info MUST BE FIRST, since this structure is cast to a + * xtalk_intr_s by generic xtalk routines. 
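
A minimal sketch, not from this patch, of why the comments above insist that hpio_xtalk_info / hdma_xtalk_info / i_xtalk_info MUST BE FIRST: a pointer to a structure may be converted to a pointer to its first member, so generic xtalk code can work on the embedded generic part while hub code keeps the full structure. All names below are invented for illustration.

#include <stdio.h>

struct generic_map {                    /* stands in for xtalk_piomap_s */
        int gm_widget;
};

struct hub_map {                        /* stands in for hub_piomap_s   */
        struct generic_map hm_generic;  /* MUST be first                */
        int hm_bigwin;
};

static int generic_widget(struct generic_map *g)
{
        return g->gm_widget;            /* knows nothing about hubs     */
}

int main(void)
{
        struct hub_map hm = { { 7 }, 3 };

        /* safe: &hm points at its first member */
        printf("widget %d, bigwin %d\n",
               generic_widget((struct generic_map *)&hm), hm.hm_bigwin);
        return 0;
}
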
+ */ +struct hub_intr_s { + struct xtalk_intr_s i_xtalk_info; /* standard crosstalk intr info */ + ilvl_t i_swlevel; /* software level for blocking intr */ + cpuid_t i_cpuid; /* which cpu */ + int i_bit; /* which bit */ + int i_flags; +}; +/* flag values */ +#define HUB_INTR_IS_ALLOCED 0x1 /* for debug: allocated */ +#define HUB_INTR_IS_CONNECTED 0x4 /* for debug: connected to a software driver */ + +#if TBD + /* Ensure that i_xtalk_info is first */ + #assert (&(((struct hub_intr_s *)0)->i_xtalk_info) == 0) +#endif + + +/* IP27 hub-specific information stored under INFO_LBL_HUB_INFO */ +/* TBD: IP27-dependent stuff currently in nodepda.h should be here */ +typedef struct hubinfo_s { + nodepda_t *h_nodepda; /* pointer to node's private data area */ + cnodeid_t h_cnodeid; /* compact nodeid */ + nasid_t h_nasid; /* nasid */ + + /* structures for PIO management */ + xwidgetnum_t h_widgetid; /* my widget # (as viewed from xbow) */ + struct hub_piomap_s h_small_window_piomap[HUB_WIDGET_ID_MAX+1]; + sv_t h_bwwait; /* wait for big window to free */ + spinlock_t h_bwlock; /* guard big window piomap's */ + spinlock_t h_crblock; /* gaurd CRB error handling */ + int h_num_big_window_fixed; /* count number of FIXED maps */ + struct hub_piomap_s h_big_window_piomap[HUB_NUM_BIG_WINDOW]; + hub_intr_t hub_ii_errintr; +} *hubinfo_t; + +#define hubinfo_get(vhdl, infoptr) ((void)hwgraph_info_get_LBL \ + (vhdl, INFO_LBL_NODE_INFO, (arbitrary_info_t *)infoptr)) + +#define hubinfo_set(vhdl, infoptr) (void)hwgraph_info_add_LBL \ + (vhdl, INFO_LBL_NODE_INFO, (arbitrary_info_t)infoptr) + +#define hubinfo_to_hubv(hinfo, hub_v) (hinfo->h_nodepda->node_vertex) + +/* + * Hub info PIO map access functions. + */ +#define hubinfo_bwin_piomap_get(hinfo, win) \ + (&hinfo->h_big_window_piomap[win]) +#define hubinfo_swin_piomap_get(hinfo, win) \ + (&hinfo->h_small_window_piomap[win]) + +/* IP27 cpu-specific information stored under INFO_LBL_CPU_INFO */ +/* TBD: IP27-dependent stuff currently in pda.h should be here */ +typedef struct cpuinfo_s { +#ifdef LATER + pda_t *ci_cpupda; /* pointer to CPU's private data area */ +#endif + cpuid_t ci_cpuid; /* CPU ID */ +} *cpuinfo_t; + +#define cpuinfo_get(vhdl, infoptr) ((void)hwgraph_info_get_LBL \ + (vhdl, INFO_LBL_CPU_INFO, (arbitrary_info_t *)infoptr)) + +#define cpuinfo_set(vhdl, infoptr) (void)hwgraph_info_add_LBL \ + (vhdl, INFO_LBL_CPU_INFO, (arbitrary_info_t)infoptr) + +/* Special initialization function for xswitch vertices created during startup. */ +extern void xswitch_vertex_init(devfs_handle_t xswitch); + +extern xtalk_provider_t hub_provider; + +/* du.c */ +int ducons_write(char *buf, int len); + +/* memerror.c */ + +extern void install_eccintr(cpuid_t cpu); +extern void memerror_get_stats(cnodeid_t cnode, + int *bank_stats, int *bank_stats_max); +extern void probe_md_errors(nasid_t); +/* sysctlr.c */ +extern void sysctlr_init(void); +extern void sysctlr_power_off(int sdonly); +extern void sysctlr_keepalive(void); + +#define valid_cpuid(_x) (((_x) >= 0) && ((_x) < maxcpus)) + +/* Useful definitions to get the memory dimm given a physical + * address. 
+ */ +#define paddr_dimm(_pa) ((_pa & MD_BANK_MASK) >> MD_BANK_SHFT) +#define paddr_cnode(_pa) (NASID_TO_COMPACT_NODEID(NASID_GET(_pa))) +extern void membank_pathname_get(paddr_t,char *); + +/* To redirect the output into the error buffer */ +#define errbuf_print(_s) printf("#%s",_s) + +extern void crbx(nasid_t nasid, void (*pf)(char *, ...)); +void bootstrap(void); + +/* sndrv.c */ +extern int sndrv_attach(devfs_handle_t vertex); + +#endif /* _ASM_IA64_SN_SN1_SN_PRIVATE_H */ diff -Nru a/include/asm-ia64/sn/sn1/synergy.h b/include/asm-ia64/sn/sn1/synergy.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn1/synergy.h Tue Mar 12 13:58:15 2002 @@ -0,0 +1,187 @@ +#ifndef _ASM_IA64_SN_SN1_SYNERGY_H +#define _ASM_IA64_SN_SN1_SYNERGY_H + +#include +#include +#include +#include + + +/* + * Definitions for the synergy asic driver + * + * These are for SGI platforms only. + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + + +#define SYNERGY_L4_BYTES (64UL*1024*1024) +#define SYNERGY_L4_WAYS 8 +#define SYNERGY_L4_BYTES_PER_WAY (SYNERGY_L4_BYTES/SYNERGY_L4_WAYS) +#define SYNERGY_BLOCK_SIZE 512UL + + +#define SSPEC_BASE (0xe0000000000UL) +#define LB_REG_BASE (SSPEC_BASE + 0x0) + +#define VEC_MASK3A_ADDR (0x2a0 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define VEC_MASK3B_ADDR (0x2a8 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define VEC_MASK3A (0x2a0) +#define VEC_MASK3B (0x2a8) + +#define VEC_MASK2A_ADDR (0x2b0 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define VEC_MASK2B_ADDR (0x2b8 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define VEC_MASK2A (0x2b0) +#define VEC_MASK2B (0x2b8) + +#define VEC_MASK1A_ADDR (0x2c0 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define VEC_MASK1B_ADDR (0x2c8 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define VEC_MASK1A (0x2c0) +#define VEC_MASK1B (0x2c8) + +#define VEC_MASK0A_ADDR (0x2d0 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define VEC_MASK0B_ADDR (0x2d8 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define VEC_MASK0A (0x2d0) +#define VEC_MASK0B (0x2d8) + +#define GBL_PERF_A_ADDR (0x330 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) +#define GBL_PERF_B_ADDR (0x338 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) + +#define WRITE_LOCAL_SYNERGY_REG(addr, value) __synergy_out(addr, value) + +#define HUB_L(_a) *(_a) +#define HUB_S(_a, _d) *(_a) = (_d) + +#define HSPEC_SYNERGY0_0 0x04000000 /* Synergy0 Registers */ +#define HSPEC_SYNERGY1_0 0x05000000 /* Synergy1 Registers */ +#define HS_SYNERGY_STRIDE (HSPEC_SYNERGY1_0 - HSPEC_SYNERGY0_0) +#define REMOTE_HSPEC(_n, _x) (HUBREG_CAST (RREG_BASE(_n) + (_x))) + +#define RREG_BASE(_n) (NODE_LREG_BASE(_n)) +#define NODE_LREG_BASE(_n) (NODE_HSPEC_BASE(_n) + 0x30000000) +#define NODE_HSPEC_BASE(_n) (HSPEC_BASE + NODE_OFFSET(_n)) +#ifndef HSPEC_BASE +#define HSPEC_BASE (SYN_UNCACHED_SPACE | HSPEC_BASE_SYN) +#endif +#define SYN_UNCACHED_SPACE 0xc000000000000000 +#define HSPEC_BASE_SYN 0x00000b0000000000 +#define NODE_OFFSET(_n) (UINT64_CAST (_n) << NODE_SIZE_BITS) +#define NODE_SIZE_BITS 33 + +#define SYN_TAG_DISABLE_WAY (SSPEC_BASE+0xae0) + + +#define RSYN_REG_OFFSET(fsb, reg) (((fsb) ? 
HSPEC_SYNERGY1_0 : HSPEC_SYNERGY0_0) | (reg)) + +#define REMOTE_SYNERGY_LOAD(nasid, fsb, reg) __remote_synergy_in(nasid, fsb, reg) +#define REMOTE_SYNERGY_STORE(nasid, fsb, reg, val) __remote_synergy_out(nasid, fsb, reg, val) + +static inline uint64_t +__remote_synergy_in(int nasid, int fsb, uint64_t reg) { + volatile uint64_t *addr; + + addr = (uint64_t *)(RREG_BASE(nasid) + RSYN_REG_OFFSET(fsb, reg)); + return (*addr); +} + +static inline void +__remote_synergy_out(int nasid, int fsb, uint64_t reg, uint64_t value) { + volatile uint64_t *addr; + + addr = (uint64_t *)(RREG_BASE(nasid) + RSYN_REG_OFFSET(fsb, (reg<<2))); + *(addr+0) = value >> 48; + *(addr+1) = value >> 32; + *(addr+2) = value >> 16; + *(addr+3) = value; + __ia64_mf_a(); +} + +/* XX this doesn't make a lot of sense. Which fsb? */ +static inline void +__synergy_out(unsigned long addr, unsigned long value) +{ + volatile unsigned long *adr = (unsigned long *) + (addr | __IA64_UNCACHED_OFFSET); + + *adr = value; + __ia64_mf_a(); +} + +#define READ_LOCAL_SYNERGY_REG(addr) __synergy_in(addr) + +/* XX this doesn't make a lot of sense. Which fsb? */ +static inline unsigned long +__synergy_in(unsigned long addr) +{ + unsigned long ret, *adr = (unsigned long *) + (addr | __IA64_UNCACHED_OFFSET); + + ret = *adr; + __ia64_mf_a(); + return ret; +} + +struct sn1_intr_action { + void (*handler)(int, void *, struct pt_regs *); + void *intr_arg; + unsigned long flags; + struct sn1_intr_action * next; +}; + +typedef struct synergy_da_s { + hub_intmasks_t s_intmasks; +}synergy_da_t; + +struct sn1_cnode_action_list { + spinlock_t action_list_lock; + struct sn1_intr_action *action_list; +}; + +/* + * ioctl cmds for node/hub/synergy/[01]/mon for synergy + * perf monitoring are defined in sndrv.h + */ + +/* multiplex the counters every 10 timer interrupts */ +#define SYNERGY_PERF_FREQ_DEFAULT 10 + +/* macros for synergy "mon" device ioctl handler */ +#define SYNERGY_PERF_INFO(_s, _f) (arbitrary_info_t)(((_s) << 16)|(_f)) +#define SYNERGY_PERF_INFO_CNODE(_x) (cnodeid_t)(((uint64_t)_x) >> 16) +#define SYNERGY_PERF_INFO_FSB(_x) (((uint64_t)_x) & 1) + +/* synergy perf control registers */ +#define PERF_CNTL0_A 0xab0UL /* control A on FSB0 */ +#define PERF_CNTL0_B 0xab8UL /* control B on FSB0 */ +#define PERF_CNTL1_A 0xac0UL /* control A on FSB1 */ +#define PERF_CNTL1_B 0xac8UL /* control B on FSB1 */ + +/* synergy perf counters */ +#define PERF_CNTR0_A 0xad0UL /* counter A on FSB0 */ +#define PERF_CNTR0_B 0xad8UL /* counter B on FSB0 */ +#define PERF_CNTR1_A 0xaf0UL /* counter A on FSB1 */ +#define PERF_CNTR1_B 0xaf8UL /* counter B on FSB1 */ + +/* Synergy perf data. 
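
Not part of the patch: a user-space sketch of the split-store pattern used by __remote_synergy_out() above, which delivers a 64-bit register value as four consecutive stores of 16-bit pieces, highest piece first. An ordinary array stands in for the remote register window, and the 16-bit masking that the hardware does implicitly is done explicitly here.

#include <stdint.h>
#include <stdio.h>

static uint64_t fake_window[4];         /* stands in for addr+0 .. addr+3 */

static void synergy_style_out(uint64_t value)
{
        fake_window[0] = (uint16_t)(value >> 48);
        fake_window[1] = (uint16_t)(value >> 32);
        fake_window[2] = (uint16_t)(value >> 16);
        fake_window[3] = (uint16_t)value;
}

static uint64_t reassemble(void)
{
        return (fake_window[0] << 48) | (fake_window[1] << 32) |
               (fake_window[2] << 16) |  fake_window[3];
}

int main(void)
{
        synergy_style_out(0x1122334455667788ULL);
        printf("0x%016llx\n", (unsigned long long)reassemble());
        return 0;
}
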
Each nodepda keeps a list of these */ +struct synergy_perf_s { + uint64_t intervals; /* count of active intervals for this event */ + uint64_t total_intervals;/* snapshot of total intervals */ + uint64_t modesel; /* mode and sel bits, both A and B registers */ + struct synergy_perf_s *next; /* next in circular linked list */ + uint64_t counts[2]; /* [0] is synergy-A counter, [1] synergy-B counter */ +}; + +typedef struct synergy_perf_s synergy_perf_t; + +typedef struct synergy_info_s synergy_info_t; + +extern void synergy_perf_init(void); +extern void synergy_perf_update(int); +extern struct file_operations synergy_mon_fops; + +#endif /* _ASM_IA64_SN_SN1_SYNERGY_H */ diff -Nru a/include/asm-ia64/sn/sn1/uart16550.h b/include/asm-ia64/sn/sn1/uart16550.h --- a/include/asm-ia64/sn/sn1/uart16550.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,228 +0,0 @@ -/* $Id$ - * - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam - */ - -#ifndef _ASM_SN_SN1_UART16550_H -#define _ASM_SN_SN1_UART16550_H - - -/* - * Definitions for 16550 chip - */ - - /* defined as offsets from the data register */ -#define REG_DAT 0 /* receive/transmit data */ -#define REG_ICR 1 /* interrupt control register */ -#define REG_ISR 2 /* interrupt status register */ -#define REG_FCR 2 /* fifo control register */ -#define REG_LCR 3 /* line control register */ -#define REG_MCR 4 /* modem control register */ -#define REG_LSR 5 /* line status register */ -#define REG_MSR 6 /* modem status register */ -#define REG_SCR 7 /* Scratch register */ -#define REG_DLL 0 /* divisor latch (lsb) */ -#define REG_DLH 1 /* divisor latch (msb) */ -#define REG_EFR 2 /* 16650 enhanced feature register */ - -/* - * 16450/16550 Registers Structure. 
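
The header being removed here describes a standard 16550-compatible UART. As an aside (not part of the patch), the usual programming sequence its register offsets imply is: raise LCR_DLAB to expose the divisor latch, write the divisor into REG_DLL/REG_DLH, drop DLAB, then select the line format. The sketch below assumes a generic 1.8432 MHz input clock; the SN1 board would instead derive its clock from the CLOCK_ACE/PRESCALER_DIVISOR values defined later in this header, and the uart[] array merely stands in for the memory-mapped register file. The LCR bit definitions it mirrors follow just below.

#include <stdint.h>
#include <stdio.h>

#define REG_LCR   3
#define REG_DLL   0
#define REG_DLH   1

#define LCR_DLAB  0x80
#define LCR_BITS8 0x03

static uint8_t uart[8];                 /* stand-in for the 16550 registers */

static void uart_set_baud(unsigned long clock_hz, unsigned long baud)
{
        unsigned int divisor = clock_hz / (16 * baud);

        uart[REG_LCR] |= LCR_DLAB;      /* expose the divisor latch        */
        uart[REG_DLL]  = divisor & 0xff;
        uart[REG_DLH]  = divisor >> 8;
        uart[REG_LCR] &= ~LCR_DLAB;     /* back to the data registers      */
        uart[REG_LCR] |= LCR_BITS8;     /* 8 data bits, 1 stop, no parity  */
}

int main(void)
{
        uart_set_baud(1843200, 9600);   /* classic clock -> divisor 12     */
        printf("DLL=%u DLH=%u LCR=0x%02x\n",
               uart[REG_DLL], uart[REG_DLH], uart[REG_LCR]);
        return 0;
}
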
- */ - -/* Line Control Register */ -#define LCR_WLS0 0x01 /*word length select bit 0 */ -#define LCR_WLS1 0x02 /*word length select bit 2 */ -#define LCR_STB 0x04 /* number of stop bits */ -#define LCR_PEN 0x08 /* parity enable */ -#define LCR_EPS 0x10 /* even parity select */ -#define LCR_SETBREAK 0x40 /* break key */ -#define LCR_DLAB 0x80 /* divisor latch access bit */ -#define LCR_RXLEN 0x03 /* # of data bits per received/xmitted char */ -#define LCR_STOP1 0x00 -#define LCR_STOP2 0x04 -#define LCR_PAREN 0x08 -#define LCR_PAREVN 0x10 -#define LCR_PARMARK 0x20 -#define LCR_SNDBRK 0x40 -#define LCR_DLAB 0x80 - - -#define LCR_BITS5 0x00 /* 5 bits per char */ -#define LCR_BITS6 0x01 /* 6 bits per char */ -#define LCR_BITS7 0x02 /* 7 bits per char */ -#define LCR_BITS8 0x03 /* 8 bits per char */ - -#define LCR_MASK_BITS_CHAR 0x03 -#define LCR_MASK_STOP_BITS 0x04 -#define LCR_MASK_PARITY_BITS 0x18 - - -/* Line Status Register */ -#define LSR_RCA 0x01 /* data ready */ -#define LSR_OVRRUN 0x02 /* overrun error */ -#define LSR_PARERR 0x04 /* parity error */ -#define LSR_FRMERR 0x08 /* framing error */ -#define LSR_BRKDET 0x10 /* a break has arrived */ -#define LSR_XHRE 0x20 /* tx hold reg is now empty */ -#define LSR_XSRE 0x40 /* tx shift reg is now empty */ -#define LSR_RFBE 0x80 /* rx FIFO Buffer error */ - -/* Interrupt Status Regisger */ -#define ISR_MSTATUS 0x00 -#define ISR_TxRDY 0x02 -#define ISR_RxRDY 0x04 -#define ISR_ERROR_INTR 0x08 -#define ISR_FFTMOUT 0x0c /* FIFO Timeout */ -#define ISR_RSTATUS 0x06 /* Receiver Line status */ - -/* Interrupt Enable Register */ -#define ICR_RIEN 0x01 /* Received Data Ready */ -#define ICR_TIEN 0x02 /* Tx Hold Register Empty */ -#define ICR_SIEN 0x04 /* Receiver Line Status */ -#define ICR_MIEN 0x08 /* Modem Status */ - -/* Modem Control Register */ -#define MCR_DTR 0x01 /* Data Terminal Ready */ -#define MCR_RTS 0x02 /* Request To Send */ -#define MCR_OUT1 0x04 /* Aux output - not used */ -#define MCR_OUT2 0x08 /* turns intr to 386 on/off */ -#define MCR_LOOP 0x10 /* loopback for diagnostics */ -#define MCR_AFE 0x20 /* Auto flow control enable */ - -/* Modem Status Register */ -#define MSR_DCTS 0x01 /* Delta Clear To Send */ -#define MSR_DDSR 0x02 /* Delta Data Set Ready */ -#define MSR_DRI 0x04 /* Trail Edge Ring Indicator */ -#define MSR_DDCD 0x08 /* Delta Data Carrier Detect */ -#define MSR_CTS 0x10 /* Clear To Send */ -#define MSR_DSR 0x20 /* Data Set Ready */ -#define MSR_RI 0x40 /* Ring Indicator */ -#define MSR_DCD 0x80 /* Data Carrier Detect */ - -#define DELTAS(x) ((x)&(MSR_DCTS|MSR_DDSR|MSR_DRI|MSR_DDCD)) -#define STATES(x) ((x)(MSR_CTS|MSR_DSR|MSR_RI|MSR_DCD)) - - -#define FCR_FIFOEN 0x01 /* enable receive/transmit fifo */ -#define FCR_RxFIFO 0x02 /* enable receive fifo */ -#define FCR_TxFIFO 0x04 /* enable transmit fifo */ -#define FCR_MODE1 0x08 /* change to mode 1 */ -#define RxLVL0 0x00 /* Rx fifo level at 1 */ -#define RxLVL1 0x40 /* Rx fifo level at 4 */ -#define RxLVL2 0x80 /* Rx fifo level at 8 */ -#define RxLVL3 0xc0 /* Rx fifo level at 14 */ - -#define FIFOEN (FCR_FIFOEN | FCR_RxFIFO | FCR_TxFIFO | RxLVL3 | FCR_MODE1) - -#define FCT_TxMASK 0x30 /* mask for Tx trigger */ -#define FCT_RxMASK 0xc0 /* mask for Rx trigger */ - -/* enhanced festures register */ -#define EFR_SFLOW 0x0f /* various S/w Flow Controls */ -#define EFR_EIC 0x10 /* Enhanced Interrupt Control bit */ -#define EFR_SCD 0x20 /* Special Character Detect */ -#define EFR_RTS 0x40 /* RTS flow control */ -#define EFR_CTS 0x80 /* CTS flow control */ - -/* Rx Tx software 
flow controls in 16650 enhanced mode */ -#define SFLOW_Tx0 0x00 /* no Xmit flow control */ -#define SFLOW_Tx1 0x08 /* Transmit Xon1, Xoff1 */ -#define SFLOW_Tx2 0x04 /* Transmit Xon2, Xoff2 */ -#define SFLOW_Tx3 0x0c /* Transmit Xon1,Xon2, Xoff1,Xoff2 */ -#define SFLOW_Rx0 0x00 /* no Rcv flow control */ -#define SFLOW_Rx1 0x02 /* Receiver compares Xon1, Xoff1 */ -#define SFLOW_Rx2 0x01 /* Receiver compares Xon2, Xoff2 */ - -#define ASSERT_DTR(x) (x |= MCR_DTR) -#define ASSERT_RTS(x) (x |= MCR_RTS) -#define DU_RTS_ASSERTED(x) (((x) & MCR_RTS) != 0) -#define DU_RTS_ASSERT(x) ((x) |= MCR_RTS) -#define DU_RTS_DEASSERT(x) ((x) &= ~MCR_RTS) - - -/* - * ioctl(fd, I_STR, arg) - * use the SIOC_RS422 and SIOC_EXTCLK combination to support MIDI - */ -#define SIOC ('z' << 8) /* z for z85130 */ -#define SIOC_EXTCLK (SIOC | 1) /* select/de-select external clock */ -#define SIOC_RS422 (SIOC | 2) /* select/de-select RS422 protocol */ -#define SIOC_ITIMER (SIOC | 3) /* upstream timer adjustment */ -#define SIOC_LOOPBACK (SIOC | 4) /* diagnostic loopback test mode */ - - -/* channel control register */ -#define DMA_INT_MASK 0xe0 /* ring intr mask */ -#define DMA_INT_TH25 0x20 /* 25% threshold */ -#define DMA_INT_TH50 0x40 /* 50% threshold */ -#define DMA_INT_TH75 0x60 /* 75% threshold */ -#define DMA_INT_EMPTY 0x80 /* ring buffer empty */ -#define DMA_INT_NEMPTY 0xa0 /* ring buffer not empty */ -#define DMA_INT_FULL 0xc0 /* ring buffer full */ -#define DMA_INT_NFULL 0xe0 /* ring buffer not full */ - -#define DMA_CHANNEL_RESET 0x400 /* reset dma channel */ -#define DMA_ENABLE 0x200 /* enable DMA */ - -/* peripheral controller intr status bits applicable to serial ports */ -#define ISA_SERIAL0_MASK 0x03f00000 /* mask for port #1 intrs */ -#define ISA_SERIAL0_DIR 0x00100000 /* device intr request */ -#define ISA_SERIAL0_Tx_THIR 0x00200000 /* Transmit DMA threshold */ -#define ISA_SERIAL0_Tx_PREQ 0x00400000 /* Transmit DMA pair req */ -#define ISA_SERIAL0_Tx_MEMERR 0x00800000 /* Transmit DMA memory err */ -#define ISA_SERIAL0_Rx_THIR 0x01000000 /* Receive DMA threshold */ -#define ISA_SERIAL0_Rx_OVERRUN 0x02000000 /* Receive DMA over-run */ - -#define ISA_SERIAL1_MASK 0xfc000000 /* mask for port #1 intrs */ -#define ISA_SERIAL1_DIR 0x04000000 /* device intr request */ -#define ISA_SERIAL1_Tx_THIR 0x08000000 /* Transmit DMA threshold */ -#define ISA_SERIAL1_Tx_PREQ 0x10000000 /* Transmit DMA pair req */ -#define ISA_SERIAL1_Tx_MEMERR 0x20000000 /* Transmit DMA memory err */ -#define ISA_SERIAL1_Rx_THIR 0x40000000 /* Receive DMA threshold */ -#define ISA_SERIAL1_Rx_OVERRUN 0x80000000 /* Receive DMA over-run */ - -#define MAX_RING_BLOCKS 128 /* 4096/32 */ -#define MAX_RING_SIZE 4096 - -/* DMA Input Control Byte */ -#define DMA_IC_OVRRUN 0x01 /* overrun error */ -#define DMA_IC_PARERR 0x02 /* parity error */ -#define DMA_IC_FRMERR 0x04 /* framing error */ -#define DMA_IC_BRKDET 0x08 /* a break has arrived */ -#define DMA_IC_VALID 0x80 /* pair is valid */ - -/* DMA Output Control Byte */ -#define DMA_OC_TxINTR 0x20 /* set Tx intr after processing byte */ -#define DMA_OC_INVALID 0x00 /* invalid pair */ -#define DMA_OC_WTHR 0x40 /* Write byte to THR */ -#define DMA_OC_WMCR 0x80 /* Write byte to MCR */ -#define DMA_OC_DELAY 0xc0 /* time delay before next xmit */ - -/* ring id's */ -#define RID_SERIAL0_TX 0x4 /* serial port 0, transmit ring buffer */ -#define RID_SERIAL0_RX 0x5 /* serial port 0, receive ring buffer */ -#define RID_SERIAL1_TX 0x6 /* serial port 1, transmit ring buffer */ -#define RID_SERIAL1_RX 0x7 /* 
serial port 1, receive ring buffer */ - -#define CLOCK_XIN 22 -#define PRESCALER_DIVISOR 3 -#define CLOCK_ACE 7333333 - -/* - * increment the ring offset. One way to do this would be to add b'100000. - * this would let the offset value roll over automatically when it reaches - * its maximum value (127). However when we use the offset, we must use - * the appropriate bits only by masking with 0xfe0. - * The other option is to shift the offset right by 5 bits and look at its - * value. Then increment if required and shift back - * note: 127 * 2^5 = 4064 - */ -#define INC_RING_POINTER(x) \ - ( ((x & 0xffe0) < 4064) ? (x += 32) : 0 ) - -#endif /* _ASM_SN_SN1_UART16550_H */ diff -Nru a/include/asm-ia64/sn/sn2/addrs.h b/include/asm-ia64/sn/sn2/addrs.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/addrs.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,153 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_SN2_ADDRS_H +#define _ASM_IA64_SN_SN2_ADDRS_H + +/* McKinley Address Format: + * + * 4 4 3 3 3 3 + * 9 8 8 7 6 5 0 + * +-+---------+----+--------------+ + * |0| Node ID | AS | Node Offset | + * +-+---------+----+--------------+ + * + * Node ID: If bit 38 = 1, is ICE, else is SHUB + * AS: Address Space Identifier. Used only if bit 38 = 0. + * b'00: Local Resources and MMR space + * bit 35 + * 0: Local resources space + * node id: + * 0: IA64/NT compatibility space + * 2: Local MMR Space + * 4: Local memory, regardless of local node id + * 1: Global MMR space + * b'01: GET space. + * b'10: AMO space. + * b'11: Cacheable memory space. + * + * NodeOffset: byte offset + */ + +#ifndef __ASSEMBLY__ +typedef union ia64_sn2_pa { + struct { + unsigned long off : 36; + unsigned long as : 2; + unsigned long nasid: 11; + unsigned long fill : 15; + } f; + unsigned long l; + void *p; +} ia64_sn2_pa_t; +#endif + +#define TO_PHYS_MASK 0x0001ffcfffffffff /* Note - clear AS bits */ + + +/* Regions determined by AS */ +#define LOCAL_MMR_SPACE 0xc000008000000000 /* Local MMR space */ +#define LOCAL_MEM_SPACE 0xc000010000000000 /* Local Memory space */ +#define GLOBAL_MMR_SPACE 0xc000000800000000 /* Global MMR space */ +#define GET_SPACE 0xc000001000000000 /* GET space */ +#define AMO_SPACE 0xc000002000000000 /* AMO space */ +#define CACHEABLE_MEM_SPACE 0xe000003000000000 /* Cacheable memory space */ +#define UNCACHED 0xc000000000000000 /* UnCacheable memory space */ + +/* SN2 address macros */ +#define NID_SHFT 38 +#define LOCAL_MMR_ADDR(a) (UNCACHED | LOCAL_MMR_SPACE | (a)) +#define LOCAL_MEM_ADDR(a) (LOCAL_MEM_SPACE | (a)) +#define REMOTE_ADDR(n,a) ((((unsigned long)(n))< */ +#define BWIN_SIZE_BITS 29 /* big window size: 512M */ +#define NASID_BITS 11 /* bits <48:38> */ +#define NASID_BITMASK (0x7ffULL) +#define NASID_SHFT NID_SHFT +#define NASID_META_BITS 0 /* ???? */ +#define NASID_LOCAL_BITS 7 /* same router as SN1 */ + +#define NODE_ADDRSPACE_SIZE (UINT64_CAST 1 << NODE_SIZE_BITS) +#define NASID_MASK (UINT64_CAST NASID_BITMASK << NASID_SHFT) +#define NASID_GET(_pa) (int) ((UINT64_CAST (_pa) >> \ + NASID_SHFT) & NASID_BITMASK) + +#define CHANGE_NASID(n,x) ({ia64_sn2_pa_t _v; _v.l = (long) (x); _v.f.nasid = n; _v.p;}) + +#ifndef __ASSEMBLY__ +#define NODE_SWIN_BASE(nasid, widget) \ + ((widget == 0) ? 
NODE_BWIN_BASE((nasid), SWIN0_BIGWIN) \ + : RAW_NODE_SWIN_BASE(nasid, widget)) +#else +#define NODE_SWIN_BASE(nasid, widget) \ + (NODE_IO_BASE(nasid) + (UINT64_CAST (widget) << SWIN_SIZE_BITS)) +#define LOCAL_SWIN_BASE(widget) \ + (UNCACHED | LOCAL_MMR_SPACE | ((UINT64_CAST (widget) << SWIN_SIZE_BITS))) +#endif /* __ASSEMBLY__ */ + +/* + * The following definitions pertain to the IO special address + * space. They define the location of the big and little windows + * of any given node. + */ + +#define BWIN_INDEX_BITS 3 +#define BWIN_SIZE (UINT64_CAST 1 << BWIN_SIZE_BITS) +#define BWIN_SIZEMASK (BWIN_SIZE - 1) +#define BWIN_WIDGET_MASK 0x7 +#define NODE_BWIN_BASE0(nasid) (NODE_IO_BASE(nasid) + BWIN_SIZE) +#define NODE_BWIN_BASE(nasid, bigwin) (NODE_BWIN_BASE0(nasid) + \ + (UINT64_CAST (bigwin) << BWIN_SIZE_BITS)) + +#define BWIN_WIDGETADDR(addr) ((addr) & BWIN_SIZEMASK) +#define BWIN_WINDOWNUM(addr) (((addr) >> BWIN_SIZE_BITS) & BWIN_WIDGET_MASK) + +/* + * Verify if addr belongs to large window address of node with "nasid" + * + * + * NOTE: "addr" is expected to be XKPHYS address, and NOT physical + * address + * + * + */ + +#define NODE_BWIN_ADDR(nasid, addr) \ + (((addr) >= NODE_BWIN_BASE0(nasid)) && \ + ((addr) < (NODE_BWIN_BASE(nasid, HUB_NUM_BIG_WINDOW) + \ + BWIN_SIZE))) + +#endif /* _ASM_IA64_SN_SN2_ADDRS_H */ diff -Nru a/include/asm-ia64/sn/sn2/arch.h b/include/asm-ia64/sn/sn2/arch.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/arch.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,66 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ +#ifndef _ASM_IA64_SN_SN2_ARCH_H +#define _ASM_IA64_SN_SN2_ARCH_H + +#include + + +#define CPUS_PER_NODE 4 /* CPUs on a single hub */ +#define CPUS_PER_SUBNODE 4 /* CPUs on a single hub PI */ + + +/* + * This is the maximum number of NASIDS that can be present in a system. + * (Highest NASID plus one.) + */ +#define MAX_NASIDS 2048 + + +/* + * This is the maximum number of nodes that can be part of a kernel. + * Effectively, it's the maximum number of compact node ids (cnodeid_t). + * This is not necessarily the same as MAX_NASIDS. + */ +#define MAX_COMPACT_NODES 128 + +/* + * MAX_REGIONS refers to the maximum number of hardware partitioned regions. + */ +#define MAX_REGIONS 64 +#define MAX_NONPREMIUM_REGIONS 16 +#define MAX_PREMIUM_REGIONS MAX_REGIONS + + +/* + * MAX_PARITIONS refers to the maximum number of logically defined + * partitions the system can support. + */ +#define MAX_PARTITIONS MAX_REGIONS + + +#define NASID_MASK_BYTES ((MAX_NASIDS + 7) / 8) + + +/* + * 1 FSB per SHUB, with up to 4 cpus per FSB. 
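
Not part of the patch: a small LP64 user-space sketch of how the ia64_sn2_pa_t union defined in addrs.h above decomposes an SN2 physical address into node offset, address-space and nasid fields, and how rewriting the nasid field (what CHANGE_NASID() does) retargets the address at another node. The bitfield layout assumed is the low-bits-first layout gcc uses on ia64.

#include <stdio.h>

typedef union ia64_sn2_pa {
        struct {
                unsigned long off  : 36;
                unsigned long as   :  2;
                unsigned long nasid: 11;
                unsigned long fill : 15;
        } f;
        unsigned long l;
} ia64_sn2_pa_t;

int main(void)
{
        ia64_sn2_pa_t pa;

        /* cacheable memory (AS = 3) at offset 0x1000 on nasid 5 */
        pa.l = (5UL << 38) | (3UL << 36) | 0x1000;
        printf("nasid %lu, as %lu, offset 0x%lx\n",
               (unsigned long)pa.f.nasid, (unsigned long)pa.f.as,
               (unsigned long)pa.f.off);

        pa.f.nasid = 9;                 /* retarget the address at nasid 9 */
        printf("new address 0x%lx\n", pa.l);
        return 0;
}
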
+ */ +#define NUM_SUBNODES 1 +#define SUBNODE_SHFT 0 +#define SUBNODE_MASK (0x0 << SUBNODE_SHFT) +#define LOCALCPU_SHFT 0 +#define LOCALCPU_MASK (0x3 << LOCALCPU_SHFT) +#define SUBNODE(slice) (((slice) & SUBNODE_MASK) >> SUBNODE_SHFT) +#define LOCALCPU(slice) (((slice) & LOCALCPU_MASK) >> LOCALCPU_SHFT) +#define TO_SLICE(subn, local) (((subn) << SUBNODE_SHFT) | \ + ((local) << LOCALCPU_SHFT)) + +typedef u64 mmr_t; + +#endif /* _ASM_IA64_SN_SN2_ARCH_H */ diff -Nru a/include/asm-ia64/sn/sn2/intr.h b/include/asm-ia64/sn/sn2/intr.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/intr.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,25 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ +#ifndef _ASM_IA64_SN_SN2_INTR_H +#define _ASM_IA64_SN_SN2_INTR_H + +#define SGI_UART_VECTOR (0xe9) +#define SGI_SHUB_ERROR_VECTOR (0xea) + +// These two IRQ's are used by partitioning. +#define SGI_XPC_NOTIFY (0xe7) +#define SGI_XPART_ACTIVATE (0x30) + +#define IA64_SN2_FIRST_DEVICE_VECTOR (0x31) +#define IA64_SN2_LAST_DEVICE_VECTOR (0xe6) + +#define SN2_IRQ_RESERVED (0x1) +#define SN2_IRQ_CONNECTED (0x2) + +#endif /* _ASM_IA64_SN_SN2_INTR_H */ diff -Nru a/include/asm-ia64/sn/sn2/mmzone_sn2.h b/include/asm-ia64/sn/sn2/mmzone_sn2.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/mmzone_sn2.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,165 @@ +#ifndef _ASM_IA64_SN_MMZONE_SN2_H +#define _ASM_IA64_SN_MMZONE_SN2_H + +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#include + + +/* + * SGI SN2 Arch defined values + * + * An SN2 physical address is broken down as follows: + * + * +-----------------------------------------+ + * | | | | node offset | + * | unused | node | AS |-------------------| + * | | | | cn | clump offset | + * +-----------------------------------------+ + * 6 4 4 3 3 3 3 3 3 0 + * 3 9 8 8 7 6 5 4 3 0 + * + * bits 63-49 Unused - must be zero + * bits 48-38 Node number. Note that some configurations do NOT + * have a node zero. + * bits 37-36 Address space ID. Cached memory has a value of 3 (!!!). + * Chipset & IO addresses have other values. + * (Yikes!! The hardware folks hate us...) + * bits 35-0 Node offset. + * + * The node offset can be further broken down as: + * bits 35-34 Clump (bank) number. + * bits 33-0 Clump (bank) offset. + * + * A node consists of up to 4 clumps (banks) of memory. A clump may be empty, or may be + * populated with a single contiguous block of memory starting at clump + * offset 0. The size of the block is (2**n) * 64MB, where 0> SN2_NODE_SHIFT) & SN2_NODE_MASK) +#define SN2_NODE_CLUMP_NUMBER(kaddr) (((unsigned long)(kaddr) >>34) & 3) +#define SN2_NODE_OFFSET(addr) (((unsigned long)(addr)) & SN2_NODE_OFFSET_MASK) +#define SN2_KADDR(nasid, offset) (((unsigned long)(nasid)<>2) | \ + (_p&SN2_NODE_OFFSET_MASK)) >>SN2_CHUNKSHIFT;}) + +/* + * Given a kaddr, find the nid (compact nodeid) + */ +#ifdef CONFIG_IA64_SGI_SN_DEBUG +#define DISCONBUG(kaddr) panic("DISCONTIG BUG: line %d, %s. 
kaddr 0x%lx", \ + __LINE__, __FILE__, (long)(kaddr)) + +#define KVADDR_TO_NID(kaddr) ({long _ktn=(long)(kaddr); \ + kern_addr_valid(_ktn) ? \ + local_node_data->physical_node_map[SN2_NODE_NUMBER(_ktn)] : \ + (DISCONBUG(_ktn), 0UL);}) +#else +#define KVADDR_TO_NID(kaddr) (local_node_data->physical_node_map[SN2_NODE_NUMBER(kaddr)]) +#endif + + + +/* + * Given a kaddr, find the index into the clump_mem_map_base array of the page struct entry + * for the first page of the clump. + */ +#define PLAT_CLUMP_MEM_MAP_INDEX(kaddr) ({long _kmmi=(long)(kaddr); \ + KVADDR_TO_NID(_kmmi) * PLAT_CLUMPS_PER_NODE + \ + SN2_NODE_CLUMP_NUMBER(_kmmi);}) + + + +/* + * Calculate a "goal" value to be passed to __alloc_bootmem_node for allocating structures on + * nodes so that they dont alias to the same line in the cache as the previous allocated structure. + * This macro takes an address of the end of previous allocation, rounds it to a page boundary & + * changes the node number. + */ +#define PLAT_BOOTMEM_ALLOC_GOAL(cnode,kaddr) SN2_KADDR(PLAT_PXM_TO_PHYS_NODE_NUMBER(nid_to_pxm_map[cnodeid]), \ + (SN2_NODE_OFFSET(kaddr) + PAGE_SIZE - 1) >> PAGE_SHIFT << PAGE_SHIFT) + + + + +/* + * Convert a proximity domain number (from the ACPI tables) into a physical node number. + * Note: on SN2, the promity domain number is the same as bits [8:1] of the NASID. The following + * algorithm relies on: + * - bit 0 of the NASID for cpu nodes is always 0 + * - bits [10:9] of all NASIDs in a partition are always the same + * - hard_smp_processor_id return the SAPIC of the current cpu & + * bits 0..11 contain the NASID. + * + * All of this complexity is because MS architectually limited proximity domain numbers to + * 8 bits. + */ + +#define PLAT_PXM_TO_PHYS_NODE_NUMBER(pxm) (((pxm)<<1) | (hard_smp_processor_id() & 0x300)) + +#endif /* _ASM_IA64_SN_MMZONE_SN2_H */ diff -Nru a/include/asm-ia64/sn/sn2/shub.h b/include/asm-ia64/sn/sn2/shub.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/shub.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,44 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001 Silicon Graphics, Inc. All rights reserved. + */ + + +#ifndef _ASM_IA64_SN_SN2_SHUB_H +#define _ASM_IA64_SN_SN2_SHUB_H + +#include /* shub mmr addresses and formats */ +#include +#include +#ifndef __ASSEMBLY__ +#include /* shub mmr struct defines */ +#endif + +/* + * Junk Bus Address Space + * The junk bus is used to access the PROM, LED's, and UART. It's + * accessed through the local block MMR space. The data path is + * 16 bits wide. This space requires address bits 31-27 to be set, and + * is further divided by address bits 26:15. + * The LED addresses are write-only. 
To read the LEDs, you need to use + * SH_JUNK_BUS_LED0-3, defined in shub_mmr.h + * + */ +#define SH_REAL_JUNK_BUS_LED0 0x7fed00000 +#define SH_REAL_JUNK_BUS_LED1 0x7fed10000 +#define SH_REAL_JUNK_BUS_LED2 0x7fed20000 +#define SH_REAL_JUNK_BUS_LED3 0x7fed30000 +#define SH_JUNK_BUS_UART0 0x7fed40000 +#define SH_JUNK_BUS_UART1 0x7fed40008 +#define SH_JUNK_BUS_UART2 0x7fed40010 +#define SH_JUNK_BUS_UART3 0x7fed40018 +#define SH_JUNK_BUS_UART4 0x7fed40020 +#define SH_JUNK_BUS_UART5 0x7fed40028 +#define SH_JUNK_BUS_UART6 0x7fed40030 +#define SH_JUNK_BUS_UART7 0x7fed40038 + +#endif /* _ASM_IA64_SN_SN2_SHUB_H */ diff -Nru a/include/asm-ia64/sn/sn2/shub_md.h b/include/asm-ia64/sn/sn2/shub_md.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/shub_md.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,278 @@ +/************************************************************************** + * * + * Copyright (C) 2001 Silicon Graphics, Inc. All rights reserved. * + * * + * These coded instructions, statements, and computer programs contain * + * unpublished proprietary information of Silicon Graphics, Inc., and * + * are protected by Federal copyright law. They may not be disclosed * + * to third parties or copied or duplicated in any form, in whole or * + * in part, without the prior written consent of Silicon Graphics, Inc. * + * * + **************************************************************************/ + +#ifndef _SHUB_MD_H +#define _SHUB_MD_H + +/* SN2 supports a mostly-flat address space with 4 CPU-visible, evenly spaced, + contiguous regions, or "software banks". On SN2, software bank n begins at + addresses n * 16GB, 0 <= n < 4. Each bank has a 16GB address space. If + the 4 dimms do not use up this space there will be holes between the + banks. Even with these holes the whole memory space within a bank is + not addressable address space. The top 1/32 of each bank is directory + memory space and is accessible through bist only. + + Physically a SN2 node board contains 2 daughter cards with 8 dimm sockets + each. A total of 16 dimm sockets arranged as 4 "DIMM banks" of 4 dimms + each. The data is stripped across the 4 memory busses so all dimms within + a dimm bank must have identical capacity dimms. Memory is increased or + decreased in sets of 4. Each dimm bank has 2 dimms on each side. + + Physical Dimm Bank layout. + DTR Card0 + ------------ + Dimm Bank 3 | MemYL3 | CS 3 + | MemXL3 | + |----------| + Dimm Bank 2 | MemYL2 | CS 2 + | MemXL2 | + |----------| + Dimm Bank 1 | MemYL1 | CS 1 + | MemXL1 | + |----------| + Dimm Bank 0 | MemYL0 | CS 0 + | MemXL0 | + ------------ + | | + BUS BUS + XL YL + | | + ------------ + | SHUB | + | MD | + ------------ + | | + BUS BUS + XR YR + | | + ------------ + Dimm Bank 0 | MemXR0 | CS 0 + | MemYR0 | + |----------| + Dimm Bank 1 | MemXR1 | CS 1 + | MemYR1 | + |----------| + Dimm Bank 2 | MemXR2 | CS 2 + | MemYR2 | + |----------| + Dimm Bank 3 | MemXR3 | CS 3 + | MemYR3 | + ------------ + DTR Card1 + + The dimms can be 1 or 2 sided dimms. The size and bankness is defined + separately for each dimm bank in the sh_[x,y,jnr]_dimm_cfg MMR register. + + Normally software bank 0 would map directly to physical dimm bank 0. The + software banks can map to the different physical dimm banks via the + DIMM[0-3]_CS field in SH_[x,y,jnr]_DIMM_CFG for each dimm slot. + + All the PROM's data structures (promlog variables, klconfig, etc.) + track memory by the physical dimm bank number. The kernel usually + tracks memory by the software bank number. 
+ + */ + + +/* Preprocessor macros */ +#define MD_MEM_BANKS 4 +#define MD_PHYS_BANKS_PER_DIMM 2 /* dimms may be 2 sided. */ +#define MD_NUM_PHYS_BANKS (MD_MEM_BANKS * MD_PHYS_BANKS_PER_DIMM) +#define MD_DIMMS_IN_SLOT 4 /* 4 dimms in each dimm bank. aka slot */ + +/* Address bits 35,34 control dimm bank access. */ +#define MD_BANK_SHFT 34 +#define MD_BANK_MASK (UINT64_CAST 0x3 << MD_BANK_SHFT ) +#define MD_BANK_GET(addr) (((addr) & MD_BANK_MASK) >> MD_BANK_SHFT) +#define MD_BANK_SIZE (UINT64_CAST 0x1 << MD_BANK_SHFT ) /* 16 gb */ +#define MD_BANK_OFFSET(_b) (UINT64_CAST (_b) << MD_BANK_SHFT) + +/*Address bit 12 selects side of dimm if 2bnk dimms present. */ +#define MD_PHYS_BANK_SEL_SHFT 12 +#define MD_PHYS_BANK_SEL_MASK (UINT64_CAST 0x1 << MD_PHYS_BANK_SEL_SHFT) + +/* Address bit 7 determines if data resides on X or Y memory system. + * If addr Bit 7 is set the data resides on Y memory system and + * the corresponing directory entry reside on the X. + */ +#define MD_X_OR_Y_SEL_SHFT 7 +#define MD_X_OR_Y_SEL_MASK (1 << MD_X_OR_Y_SEL_SHFT) + +/* Address bit 8 determines which directory entry of the pair the address + * corresponds to. If addr Bit 8 is set DirB corresponds to the memory address. + */ +#define MD_DIRA_OR_DIRB_SEL_SHFT 8 +#define MD_DIRA_OR_DIRB_SEL_MASK (1 << MD_DIRA_OR_DIRB_SEL_SHFT) + +/* Address bit 11 determines if corresponding directory entry resides + * on Left or Right memory bus. If addr Bit 11 is set the corresponding + * directory entry resides on Right memory bus. + */ +#define MD_L_OR_R_SEL_SHFT 11 +#define MD_L_OR_R_SEL_MASK (1 << MD_L_OR_R_SEL_SHFT) + +/* DRAM sizes. */ +#define MD_SZ_64_Mb 0x0 +#define MD_SZ_128_Mb 0x1 +#define MD_SZ_256_Mb 0x2 +#define MD_SZ_512_Mb 0x3 +#define MD_SZ_1024_Mb 0x4 +#define MD_SZ_2048_Mb 0x5 +#define MD_SZ_UNUSED 0x7 + +#define MD_DIMM_SIZE_BYTES(_size, _2bk) ( \ + ( (_size) == 7 ? 0 : ( 0x4000000L << (_size)) << (_2bk)))\ + +#define MD_DIMM_SIZE_MBYTES(_size, _2bk) ( \ + ( (_size) == 7 ? 
0 : ( 0x40L << (_size) ) << (_2bk))) \ + +/* The top 1/32 of each bank is directory memory, and not accessable + * via normal reads and writes */ +#define MD_DIMM_USER_SIZE(_size) ((_size) * 31 / 32) + +/* Minimum size of a populated bank is 64M (62M usable) */ +#define MIN_BANK_SIZE MD_DIMM_USER_SIZE((64 * 0x100000)) +#define MIN_BANK_STRING "62" + + +/*Possible values for FREQ field in sh_[x,y,jnr]_dimm_cfg regs */ +#define MD_DIMM_100_CL2_0 0x0 +#define MD_DIMM_133_CL2_0 0x1 +#define MD_DIMM_133_CL2_5 0x2 +#define MD_DIMM_160_CL2_0 0x3 +#define MD_DIMM_160_CL2_5 0x4 +#define MD_DIMM_160_CL3_0 0x5 +#define MD_DIMM_200_CL2_0 0x6 +#define MD_DIMM_200_CL2_5 0x7 +#define MD_DIMM_200_CL3_0 0x8 + +/* DIMM_CFG fields */ +#define MD_DIMM_SHFT(_dimm) ((_dimm) << 3) +#define MD_DIMM_SIZE_MASK(_dimm) \ + (SH_JNR_DIMM_CFG_DIMM0_SIZE_MASK << \ + (MD_DIMM_SHFT(_dimm))) + +#define MD_DIMM_2BK_MASK(_dimm) \ + (SH_JNR_DIMM_CFG_DIMM0_2BK_MASK << \ + MD_DIMM_SHFT(_dimm)) + +#define MD_DIMM_REV_MASK(_dimm) \ + (SH_JNR_DIMM_CFG_DIMM0_REV_MASK << \ + MD_DIMM_SHFT(_dimm)) + +#define MD_DIMM_CS_MASK(_dimm) \ + (SH_JNR_DIMM_CFG_DIMM0_CS_MASK << \ + MD_DIMM_SHFT(_dimm)) + +#define MD_DIMM_SIZE(_dimm, _cfg) \ + (((_cfg) & MD_DIMM_SIZE_MASK(_dimm)) \ + >> (MD_DIMM_SHFT(_dimm)+SH_JNR_DIMM_CFG_DIMM0_SIZE_SHFT)) + +#define MD_DIMM_TWO_SIDED(_dimm,_cfg) \ + ( ((_cfg) & MD_DIMM_2BK_MASK(_dimm)) \ + >> (MD_DIMM_SHFT(_dimm)+SH_JNR_DIMM_CFG_DIMM0_2BK_SHFT)) + +#define MD_DIMM_REVERSED(_dimm,_cfg) \ + (((_cfg) & MD_DIMM_REV_MASK(_dimm)) \ + >> (MD_DIMM_SHFT(_dimm)+SH_JNR_DIMM_CFG_DIMM0_REV_SHFT)) + +#define MD_DIMM_CS(_dimm,_cfg) \ + (((_cfg) & MD_DIMM_CS_MASK(_dimm)) \ + >> (MD_DIMM_SHFT(_dimm)+SH_JNR_DIMM_CFG_DIMM0_CS_SHFT)) + + + +/* Macros to set MMRs that must be set identically to others. */ +#define MD_SET_DIMM_CFG(_n, _value) { \ + REMOTE_HUB_S(_n, SH_X_DIMM_CFG,_value); \ + REMOTE_HUB_S(_n, SH_Y_DIMM_CFG, _value); \ + REMOTE_HUB_S(_n, SH_JNR_DIMM_CFG, _value);} + +#define MD_SET_DQCT_CFG(_n, _value) { \ + REMOTE_HUB_S(_n, SH_X_DQCT_CFG,_value); \ + REMOTE_HUB_S(_n, SH_Y_DQCT_CFG,_value); } + +#define MD_SET_CFG(_n, _value) { \ + REMOTE_HUB_S(_n, SH_X_CFG,_value); \ + REMOTE_HUB_S(_n, SH_Y_CFG,_value);} + +#define MD_SET_REFRESH_CONTROL(_n, _value) { \ + REMOTE_HUB_S(_n, SH_X_REFRESH_CONTROL, _value); \ + REMOTE_HUB_S(_n, SH_Y_REFRESH_CONTROL, _value);} + +#define MD_SET_DQ_MMR_DIR_COFIG(_n, _value) { \ + REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_DIR_CONFIG, _value); \ + REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_DIR_CONFIG, _value);} + +#define MD_SET_PIOWD_DIR_ENTRYS(_n, _value) { \ + REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY, _value);\ + REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY, _value);} + +/* + * There are 12 Node Presence MMRs, 4 in each primary DQ and 4 in the + * LB. The data in the left and right DQ MMRs and the LB must match. + */ +#define MD_SET_PRESENT_VEC(_n, _vec, _value) { \ + REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_DIR_PRESVEC0+((_vec)*0x10),\ + _value); \ + REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_DIR_PRESVEC0+((_vec)*0x10),\ + _value); \ + REMOTE_HUB_S(_n, SH_SHUBS_PRESENT0+((_vec)*0x80), _value);} +/* + * There are 16 Privilege Vector MMRs, 8 in each primary DQ. The data + * in the corresponding left and right DQ MMRs must match. Each MMR + * pair is used for a single partition. 
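
A stand-alone sketch (not part of the patch) of the DIMM size arithmetic encoded by MD_DIMM_SIZE_MBYTES() and MD_DIMM_USER_SIZE() above: size code n means (2**n) * 64MB, doubled for a two-sided DIMM, code 7 means the slot is unused, and the top 1/32 of each bank is directory memory that normal accesses cannot reach. The 62 MB usable figure for the smallest bank is where MIN_BANK_STRING comes from.

#include <stdio.h>

#define MD_SZ_UNUSED 0x7

/* same arithmetic as MD_DIMM_SIZE_MBYTES() above */
static unsigned long dimm_mbytes(int size_code, int two_sided)
{
        return size_code == MD_SZ_UNUSED ? 0 :
               (0x40UL << size_code) << two_sided;
}

/* same arithmetic as MD_DIMM_USER_SIZE(): top 1/32 holds directory memory */
static unsigned long user_mbytes(unsigned long mbytes)
{
        return mbytes * 31 / 32;
}

int main(void)
{
        int code;

        for (code = 0; code <= 5; code++)
                printf("size code %d: %4lu MB (%4lu MB usable), "
                       "2-sided %4lu MB\n",
                       code, dimm_mbytes(code, 0),
                       user_mbytes(dimm_mbytes(code, 0)),
                       dimm_mbytes(code, 1));
        return 0;
}
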
+ */ +#define MD_SET_PRI_VEC(_n, _vec, _value) { \ + REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_DIR_PRIVEC0+((_vec)*0x10),\ + _value); \ + REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_DIR_PRIVEC0+((_vec)*0x10),\ + _value);} +/* + * There are 16 Local/Remote MMRs, 8 in each primary DQ. The data in + * the corresponding left and right DQ MMRs must match. Each MMR pair + * is used for a single partition. + */ +#define MD_SET_LOC_VEC(_n, _vec, _value) { \ + REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_DIR_LOCVEC0+((_vec)*0x10),\ + _value); \ + REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_DIR_LOCVEC0+((_vec)*0x10),\ + _value);} + +/* Memory BIST CMDS */ +#define MD_DIMM_INIT_MODE_SET 0x0 +#define MD_DIMM_INIT_REFRESH 0x1 +#define MD_DIMM_INIT_PRECHARGE 0x2 +#define MD_DIMM_INIT_BURST_TERM 0x6 +#define MD_DIMM_INIT_NOP 0x7 +#define MD_DIMM_BIST_READ 0x10 +#define MD_FILL_DIR 0x20 +#define MD_FILL_DATA 0x30 +#define MD_FILL_DIR_ACCESS 0X40 +#define MD_READ_DIR_PAIR 0x50 +#define MD_READ_DIR_TAG 0x60 + +/* SH_MMRBIST_CTL macros */ +#define MD_BIST_FAIL(_n) (REMOTE_HUB_L(_n, SH_MMRBIST_CTL) & \ + SH_MMRBIST_CTL_FAIL_MASK) + +#define MD_BIST_IN_PROGRESS(_n) (REMOTE_HUB_L(_n, SH_MMRBIST_CTL) & \ + SH_MMRBIST_CTL_IN_PROGRESS_MASK) + +#define MD_BIST_MEM_IDLE(_n); (REMOTE_HUB_L(_n, SH_MMRBIST_CTL) & \ + SH_MMRBIST_CTL_MEM_IDLE_MASK) + +/* SH_MMRBIST_ERR macros */ +#define MD_BIST_MISCOMPARE(_n) (REMOTE_HUB_L(_n, SH_MMRBIST_ERR) & \ + SH_MMRBIST_ERR_DETECTED_MASK) + +#endif /* _SHUB_MD_H */ diff -Nru a/include/asm-ia64/sn/sn2/shub_mmr.h b/include/asm-ia64/sn/sn2/shub_mmr.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/shub_mmr.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,31597 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001 Silicon Graphics, Inc. All rights reserved. 
+ */ + + +#ifndef _ASM_IA64_SN_SN2_SHUB_MMR_H +#define _ASM_IA64_SN_SN2_SHUB_MMR_H + +/* ==================================================================== */ +/* Register "SH_FSB_BINIT_CONTROL" */ +/* FSB BINIT# Control */ +/* ==================================================================== */ + +#define SH_FSB_BINIT_CONTROL 0x0000000120010000 +#define SH_FSB_BINIT_CONTROL_MASK 0x0000000000000001 +#define SH_FSB_BINIT_CONTROL_INIT 0x0000000000000000 + +/* SH_FSB_BINIT_CONTROL_BINIT */ +/* Description: Assert the FSB's BINIT# Signal */ +#define SH_FSB_BINIT_CONTROL_BINIT_SHFT 0 +#define SH_FSB_BINIT_CONTROL_BINIT_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_FSB_RESET_CONTROL" */ +/* FSB Reset Control */ +/* ==================================================================== */ + +#define SH_FSB_RESET_CONTROL 0x0000000120010080 +#define SH_FSB_RESET_CONTROL_MASK 0x0000000000000001 +#define SH_FSB_RESET_CONTROL_INIT 0x0000000000000000 + +/* SH_FSB_RESET_CONTROL_RESET */ +/* Description: Assert the FSB's RESET# Signal */ +#define SH_FSB_RESET_CONTROL_RESET_SHFT 0 +#define SH_FSB_RESET_CONTROL_RESET_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_FSB_SYSTEM_AGENT_CONFIG" */ +/* FSB System Agent Configuration */ +/* ==================================================================== */ + +#define SH_FSB_SYSTEM_AGENT_CONFIG 0x0000000120010100 +#define SH_FSB_SYSTEM_AGENT_CONFIG_MASK 0x00003fff0187fff9 +#define SH_FSB_SYSTEM_AGENT_CONFIG_INIT 0x0000000000000000 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_RCNT_SCNT_EN */ +/* Description: RCNT/SCNT Assertion Enabled */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_RCNT_SCNT_EN_SHFT 0 +#define SH_FSB_SYSTEM_AGENT_CONFIG_RCNT_SCNT_EN_MASK 0x0000000000000001 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_BERR_ASSERT_EN */ +/* Description: BERR Assertion Enabled for Bus Errors */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_BERR_ASSERT_EN_SHFT 3 +#define SH_FSB_SYSTEM_AGENT_CONFIG_BERR_ASSERT_EN_MASK 0x0000000000000008 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_BERR_SAMPLING_EN */ +/* Description: BERR Sampling Enabled */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_BERR_SAMPLING_EN_SHFT 4 +#define SH_FSB_SYSTEM_AGENT_CONFIG_BERR_SAMPLING_EN_MASK 0x0000000000000010 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_ASSERT_EN */ +/* Description: BINIT Assertion Enabled */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_ASSERT_EN_SHFT 5 +#define SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_ASSERT_EN_MASK 0x0000000000000020 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_BNR_THROTTLING_EN */ +/* Description: stutter FSB request assertion */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_BNR_THROTTLING_EN_SHFT 6 +#define SH_FSB_SYSTEM_AGENT_CONFIG_BNR_THROTTLING_EN_MASK 0x0000000000000040 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_SHORT_HANG_EN */ +/* Description: use short duration hang timeout */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_SHORT_HANG_EN_SHFT 7 +#define SH_FSB_SYSTEM_AGENT_CONFIG_SHORT_HANG_EN_MASK 0x0000000000000080 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_INTA_RSP_DATA */ +/* Description: Interrupt Acknowledge Response Data */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_INTA_RSP_DATA_SHFT 8 +#define SH_FSB_SYSTEM_AGENT_CONFIG_INTA_RSP_DATA_MASK 0x000000000000ff00 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_IO_TRANS_RSP */ +/* Description: IO Transaction Response */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_IO_TRANS_RSP_SHFT 16 +#define SH_FSB_SYSTEM_AGENT_CONFIG_IO_TRANS_RSP_MASK 0x0000000000010000 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_XTPR_TRANS_RSP */ +/* 
Description: External Task Priority Register (xTPR) Transaction */ +/* Response */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_XTPR_TRANS_RSP_SHFT 17 +#define SH_FSB_SYSTEM_AGENT_CONFIG_XTPR_TRANS_RSP_MASK 0x0000000000020000 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_INTA_TRANS_RSP */ +/* Description: Interrupt Acknowledge Transaction Response */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_INTA_TRANS_RSP_SHFT 18 +#define SH_FSB_SYSTEM_AGENT_CONFIG_INTA_TRANS_RSP_MASK 0x0000000000040000 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_TDOT */ +/* Description: Throttle Data-bus Ownership Transitions */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_TDOT_SHFT 23 +#define SH_FSB_SYSTEM_AGENT_CONFIG_TDOT_MASK 0x0000000000800000 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_SERIALIZE_FSB_EN */ +/* Description: serialize processor transactions */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_SERIALIZE_FSB_EN_SHFT 24 +#define SH_FSB_SYSTEM_AGENT_CONFIG_SERIALIZE_FSB_EN_MASK 0x0000000001000000 + +/* SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_EVENT_ENABLES */ +/* Description: FSB error binit enables */ +#define SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_EVENT_ENABLES_SHFT 32 +#define SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_EVENT_ENABLES_MASK 0x00003fff00000000 + +/* ==================================================================== */ +/* Register "SH_FSB_VGA_REMAP" */ +/* FSB VGA Address Space Remap */ +/* ==================================================================== */ + +#define SH_FSB_VGA_REMAP 0x0000000120010180 +#define SH_FSB_VGA_REMAP_MASK 0x4001fffffffe0000 +#define SH_FSB_VGA_REMAP_INIT 0x0000000000000000 + +/* SH_FSB_VGA_REMAP_OFFSET */ +/* Description: VGA Remap Node Offset */ +#define SH_FSB_VGA_REMAP_OFFSET_SHFT 17 +#define SH_FSB_VGA_REMAP_OFFSET_MASK 0x0000000ffffe0000 + +/* SH_FSB_VGA_REMAP_ASID */ +/* Description: VGA Remap Address Space ID */ +#define SH_FSB_VGA_REMAP_ASID_SHFT 36 +#define SH_FSB_VGA_REMAP_ASID_MASK 0x0000003000000000 + +/* SH_FSB_VGA_REMAP_NID */ +/* Description: VGA Remap Node ID */ +#define SH_FSB_VGA_REMAP_NID_SHFT 38 +#define SH_FSB_VGA_REMAP_NID_MASK 0x0001ffc000000000 + +/* SH_FSB_VGA_REMAP_VGA_REMAPPING_ENABLED */ +/* Description: VGA Remapping Enabled */ +#define SH_FSB_VGA_REMAP_VGA_REMAPPING_ENABLED_SHFT 62 +#define SH_FSB_VGA_REMAP_VGA_REMAPPING_ENABLED_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_FSB_RESET_STATUS" */ +/* FSB Reset Status */ +/* ==================================================================== */ + +#define SH_FSB_RESET_STATUS 0x0000000120020000 +#define SH_FSB_RESET_STATUS_MASK 0x0000000000000001 +#define SH_FSB_RESET_STATUS_INIT 0x0000000000000000 + +/* SH_FSB_RESET_STATUS_RESET_IN_PROGRESS */ +/* Description: Reset in Progress */ +#define SH_FSB_RESET_STATUS_RESET_IN_PROGRESS_SHFT 0 +#define SH_FSB_RESET_STATUS_RESET_IN_PROGRESS_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_FSB_SYMMETRIC_AGENT_STATUS" */ +/* FSB Symmetric Agent Status */ +/* ==================================================================== */ + +#define SH_FSB_SYMMETRIC_AGENT_STATUS 0x0000000120020080 +#define SH_FSB_SYMMETRIC_AGENT_STATUS_MASK 0x0000000000000007 +#define SH_FSB_SYMMETRIC_AGENT_STATUS_INIT 0x0000000000000000 + +/* SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_0_ACTIVE */ +/* Description: CPU 0 Active. 
 */ +#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_0_ACTIVE_SHFT 0 +#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_0_ACTIVE_MASK 0x0000000000000001 + +/* SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_1_ACTIVE */ +/* Description: CPU 1 Active. */ +#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_1_ACTIVE_SHFT 1 +#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_1_ACTIVE_MASK 0x0000000000000002 + +/* SH_FSB_SYMMETRIC_AGENT_STATUS_CPUS_READY */ +/* Description: The Processors are Ready */ +#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPUS_READY_SHFT 2 +#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPUS_READY_MASK 0x0000000000000004 + +/* ==================================================================== */ +/* Register "SH_GFX_CREDIT_COUNT_0" */ +/* Graphics-write Credit Count for CPU 0 */ +/* ==================================================================== */ + +#define SH_GFX_CREDIT_COUNT_0 0x0000000120030000 +#define SH_GFX_CREDIT_COUNT_0_MASK 0x80000000000fffff +#define SH_GFX_CREDIT_COUNT_0_INIT 0x000000000000003f + +/* SH_GFX_CREDIT_COUNT_0_COUNT */ +/* Description: Credit Count */ +#define SH_GFX_CREDIT_COUNT_0_COUNT_SHFT 0 +#define SH_GFX_CREDIT_COUNT_0_COUNT_MASK 0x00000000000fffff + +/* SH_GFX_CREDIT_COUNT_0_RESET_GFX_STATE */ +/* Description: Reset GFX state */ +#define SH_GFX_CREDIT_COUNT_0_RESET_GFX_STATE_SHFT 63 +#define SH_GFX_CREDIT_COUNT_0_RESET_GFX_STATE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_GFX_CREDIT_COUNT_1" */ +/* Graphics-write Credit Count for CPU 1 */ +/* ==================================================================== */ + +#define SH_GFX_CREDIT_COUNT_1 0x0000000120030080 +#define SH_GFX_CREDIT_COUNT_1_MASK 0x80000000000fffff +#define SH_GFX_CREDIT_COUNT_1_INIT 0x000000000000003f + +/* SH_GFX_CREDIT_COUNT_1_COUNT */ +/* Description: Credit Count */ +#define SH_GFX_CREDIT_COUNT_1_COUNT_SHFT 0 +#define SH_GFX_CREDIT_COUNT_1_COUNT_MASK 0x00000000000fffff + +/* SH_GFX_CREDIT_COUNT_1_RESET_GFX_STATE */ +/* Description: Reset GFX state */ +#define SH_GFX_CREDIT_COUNT_1_RESET_GFX_STATE_SHFT 63 +#define SH_GFX_CREDIT_COUNT_1_RESET_GFX_STATE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_GFX_MODE_CNTRL_0" */ +/* Graphics credit mode and message ordering for CPU 0 */ +/* ==================================================================== */ + +#define SH_GFX_MODE_CNTRL_0 0x0000000120030100 +#define SH_GFX_MODE_CNTRL_0_MASK 0x0000000000000007 +#define SH_GFX_MODE_CNTRL_0_INIT 0x0000000000000003 + +/* SH_GFX_MODE_CNTRL_0_DWORD_CREDITS */ +/* Description: GFX credits are tracked by D-words */ +#define SH_GFX_MODE_CNTRL_0_DWORD_CREDITS_SHFT 0 +#define SH_GFX_MODE_CNTRL_0_DWORD_CREDITS_MASK 0x0000000000000001 + +/* SH_GFX_MODE_CNTRL_0_MIXED_MODE_CREDITS */ +/* Description: GFX credits are tracked by D-words and messages */ +#define SH_GFX_MODE_CNTRL_0_MIXED_MODE_CREDITS_SHFT 1 +#define SH_GFX_MODE_CNTRL_0_MIXED_MODE_CREDITS_MASK 0x0000000000000002 + +/* SH_GFX_MODE_CNTRL_0_RELAXED_ORDERING */ +/* Description: GFX message routing order */ +#define SH_GFX_MODE_CNTRL_0_RELAXED_ORDERING_SHFT 2 +#define SH_GFX_MODE_CNTRL_0_RELAXED_ORDERING_MASK 0x0000000000000004 + +/* ==================================================================== */ +/* Register "SH_GFX_MODE_CNTRL_1" */ +/* Graphics credit mode and message ordering for CPU 1 */ +/* ==================================================================== */ + +#define SH_GFX_MODE_CNTRL_1 0x0000000120030180 +#define 
SH_GFX_MODE_CNTRL_1_MASK 0x0000000000000007 +#define SH_GFX_MODE_CNTRL_1_INIT 0x0000000000000003 + +/* SH_GFX_MODE_CNTRL_1_DWORD_CREDITS */ +/* Description: GFX credits are tracked by D-words */ +#define SH_GFX_MODE_CNTRL_1_DWORD_CREDITS_SHFT 0 +#define SH_GFX_MODE_CNTRL_1_DWORD_CREDITS_MASK 0x0000000000000001 + +/* SH_GFX_MODE_CNTRL_1_MIXED_MODE_CREDITS */ +/* Description: GFX credits are tracked by D-words and messages */ +#define SH_GFX_MODE_CNTRL_1_MIXED_MODE_CREDITS_SHFT 1 +#define SH_GFX_MODE_CNTRL_1_MIXED_MODE_CREDITS_MASK 0x0000000000000002 + +/* SH_GFX_MODE_CNTRL_1_RELAXED_ORDERING */ +/* Description: GFX message routing order */ +#define SH_GFX_MODE_CNTRL_1_RELAXED_ORDERING_SHFT 2 +#define SH_GFX_MODE_CNTRL_1_RELAXED_ORDERING_MASK 0x0000000000000004 + +/* ==================================================================== */ +/* Register "SH_GFX_SKID_CREDIT_COUNT_0" */ +/* Graphics-write Skid Credit Count for CPU 0 */ +/* ==================================================================== */ + +#define SH_GFX_SKID_CREDIT_COUNT_0 0x0000000120030200 +#define SH_GFX_SKID_CREDIT_COUNT_0_MASK 0x00000000000fffff +#define SH_GFX_SKID_CREDIT_COUNT_0_INIT 0x0000000000000030 + +/* SH_GFX_SKID_CREDIT_COUNT_0_SKID */ +/* Description: Skid Credit Count */ +#define SH_GFX_SKID_CREDIT_COUNT_0_SKID_SHFT 0 +#define SH_GFX_SKID_CREDIT_COUNT_0_SKID_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_GFX_SKID_CREDIT_COUNT_1" */ +/* Graphics-write Skid Credit Count for CPU 1 */ +/* ==================================================================== */ + +#define SH_GFX_SKID_CREDIT_COUNT_1 0x0000000120030280 +#define SH_GFX_SKID_CREDIT_COUNT_1_MASK 0x00000000000fffff +#define SH_GFX_SKID_CREDIT_COUNT_1_INIT 0x0000000000000030 + +/* SH_GFX_SKID_CREDIT_COUNT_1_SKID */ +/* Description: Skid Credit Count */ +#define SH_GFX_SKID_CREDIT_COUNT_1_SKID_SHFT 0 +#define SH_GFX_SKID_CREDIT_COUNT_1_SKID_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_GFX_STALL_LIMIT_0" */ +/* Graphics-write Stall Limit for CPU 0 */ +/* ==================================================================== */ + +#define SH_GFX_STALL_LIMIT_0 0x0000000120030300 +#define SH_GFX_STALL_LIMIT_0_MASK 0x0000000003ffffff +#define SH_GFX_STALL_LIMIT_0_INIT 0x0000000000010000 + +/* SH_GFX_STALL_LIMIT_0_LIMIT */ +/* Description: Graphics Stall Limit for CPU 0 */ +#define SH_GFX_STALL_LIMIT_0_LIMIT_SHFT 0 +#define SH_GFX_STALL_LIMIT_0_LIMIT_MASK 0x0000000003ffffff + +/* ==================================================================== */ +/* Register "SH_GFX_STALL_LIMIT_1" */ +/* Graphics-write Stall Limit for CPU 1 */ +/* ==================================================================== */ + +#define SH_GFX_STALL_LIMIT_1 0x0000000120030380 +#define SH_GFX_STALL_LIMIT_1_MASK 0x0000000003ffffff +#define SH_GFX_STALL_LIMIT_1_INIT 0x0000000000010000 + +/* SH_GFX_STALL_LIMIT_1_LIMIT */ +/* Description: Graphics Stall Limit for CPU 1 */ +#define SH_GFX_STALL_LIMIT_1_LIMIT_SHFT 0 +#define SH_GFX_STALL_LIMIT_1_LIMIT_MASK 0x0000000003ffffff + +/* ==================================================================== */ +/* Register "SH_GFX_STALL_TIMER_0" */ +/* Graphics-write Stall Timer for CPU 0 */ +/* ==================================================================== */ + +#define SH_GFX_STALL_TIMER_0 0x0000000120030400 +#define SH_GFX_STALL_TIMER_0_MASK 0x0000000003ffffff +#define SH_GFX_STALL_TIMER_0_INIT 
0x0000000000000000 + +/* SH_GFX_STALL_TIMER_0_TIMER_VALUE */ +/* Description: Timer Value */ +#define SH_GFX_STALL_TIMER_0_TIMER_VALUE_SHFT 0 +#define SH_GFX_STALL_TIMER_0_TIMER_VALUE_MASK 0x0000000003ffffff + +/* ==================================================================== */ +/* Register "SH_GFX_STALL_TIMER_1" */ +/* Graphics-write Stall Timer for CPU 1 */ +/* ==================================================================== */ + +#define SH_GFX_STALL_TIMER_1 0x0000000120030480 +#define SH_GFX_STALL_TIMER_1_MASK 0x0000000003ffffff +#define SH_GFX_STALL_TIMER_1_INIT 0x0000000000000000 + +/* SH_GFX_STALL_TIMER_1_TIMER_VALUE */ +/* Description: Timer Value */ +#define SH_GFX_STALL_TIMER_1_TIMER_VALUE_SHFT 0 +#define SH_GFX_STALL_TIMER_1_TIMER_VALUE_MASK 0x0000000003ffffff + +/* ==================================================================== */ +/* Register "SH_GFX_WINDOW_0" */ +/* Graphics-write Window for CPU 0 */ +/* ==================================================================== */ + +#define SH_GFX_WINDOW_0 0x0000000120030500 +#define SH_GFX_WINDOW_0_MASK 0x8000000fff000000 +#define SH_GFX_WINDOW_0_INIT 0x0000000000000000 + +/* SH_GFX_WINDOW_0_BASE_ADDR */ +/* Description: Base Address for CPU 0's 16 MB Graphics Window */ +#define SH_GFX_WINDOW_0_BASE_ADDR_SHFT 24 +#define SH_GFX_WINDOW_0_BASE_ADDR_MASK 0x0000000fff000000 + +/* SH_GFX_WINDOW_0_GFX_WINDOW_EN */ +/* Description: Graphics Window Enabled */ +#define SH_GFX_WINDOW_0_GFX_WINDOW_EN_SHFT 63 +#define SH_GFX_WINDOW_0_GFX_WINDOW_EN_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_GFX_WINDOW_1" */ +/* Graphics-write Window for CPU 1 */ +/* ==================================================================== */ + +#define SH_GFX_WINDOW_1 0x0000000120030580 +#define SH_GFX_WINDOW_1_MASK 0x8000000fff000000 +#define SH_GFX_WINDOW_1_INIT 0x0000000000000000 + +/* SH_GFX_WINDOW_1_BASE_ADDR */ +/* Description: Base Address for CPU 1's 16 MB Graphics Window */ +#define SH_GFX_WINDOW_1_BASE_ADDR_SHFT 24 +#define SH_GFX_WINDOW_1_BASE_ADDR_MASK 0x0000000fff000000 + +/* SH_GFX_WINDOW_1_GFX_WINDOW_EN */ +/* Description: Graphics Window Enabled */ +#define SH_GFX_WINDOW_1_GFX_WINDOW_EN_SHFT 63 +#define SH_GFX_WINDOW_1_GFX_WINDOW_EN_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_GFX_INTERRUPT_TIMER_LIMIT_0" */ +/* Graphics-write Interrupt Limit for CPU 0 */ +/* ==================================================================== */ + +#define SH_GFX_INTERRUPT_TIMER_LIMIT_0 0x0000000120030600 +#define SH_GFX_INTERRUPT_TIMER_LIMIT_0_MASK 0x00000000000000ff +#define SH_GFX_INTERRUPT_TIMER_LIMIT_0_INIT 0x0000000000000040 + +/* SH_GFX_INTERRUPT_TIMER_LIMIT_0_INTERRUPT_TIMER_LIMIT */ +/* Description: GFX Interrupt Timer Limit */ +#define SH_GFX_INTERRUPT_TIMER_LIMIT_0_INTERRUPT_TIMER_LIMIT_SHFT 0 +#define SH_GFX_INTERRUPT_TIMER_LIMIT_0_INTERRUPT_TIMER_LIMIT_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_GFX_INTERRUPT_TIMER_LIMIT_1" */ +/* Graphics-write Interrupt Limit for CPU 1 */ +/* ==================================================================== */ + +#define SH_GFX_INTERRUPT_TIMER_LIMIT_1 0x0000000120030680 +#define SH_GFX_INTERRUPT_TIMER_LIMIT_1_MASK 0x00000000000000ff +#define SH_GFX_INTERRUPT_TIMER_LIMIT_1_INIT 0x0000000000000040 + +/* SH_GFX_INTERRUPT_TIMER_LIMIT_1_INTERRUPT_TIMER_LIMIT */ +/* Description: GFX 
Interrupt Timer Limit */ +#define SH_GFX_INTERRUPT_TIMER_LIMIT_1_INTERRUPT_TIMER_LIMIT_SHFT 0 +#define SH_GFX_INTERRUPT_TIMER_LIMIT_1_INTERRUPT_TIMER_LIMIT_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_GFX_WRITE_STATUS_0" */ +/* Graphics Write Status for CPU 0 */ +/* ==================================================================== */ + +#define SH_GFX_WRITE_STATUS_0 0x0000000120040000 +#define SH_GFX_WRITE_STATUS_0_MASK 0x8000000000000001 +#define SH_GFX_WRITE_STATUS_0_INIT 0x0000000000000000 + +/* SH_GFX_WRITE_STATUS_0_BUSY */ +/* Description: Busy */ +#define SH_GFX_WRITE_STATUS_0_BUSY_SHFT 0 +#define SH_GFX_WRITE_STATUS_0_BUSY_MASK 0x0000000000000001 + +/* SH_GFX_WRITE_STATUS_0_RE_ENABLE_GFX_STALL */ +/* Description: Re-enable GFX stall logic for this processor */ +#define SH_GFX_WRITE_STATUS_0_RE_ENABLE_GFX_STALL_SHFT 63 +#define SH_GFX_WRITE_STATUS_0_RE_ENABLE_GFX_STALL_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_GFX_WRITE_STATUS_1" */ +/* Graphics Write Status for CPU 1 */ +/* ==================================================================== */ + +#define SH_GFX_WRITE_STATUS_1 0x0000000120040080 +#define SH_GFX_WRITE_STATUS_1_MASK 0x8000000000000001 +#define SH_GFX_WRITE_STATUS_1_INIT 0x0000000000000000 + +/* SH_GFX_WRITE_STATUS_1_BUSY */ +/* Description: Busy */ +#define SH_GFX_WRITE_STATUS_1_BUSY_SHFT 0 +#define SH_GFX_WRITE_STATUS_1_BUSY_MASK 0x0000000000000001 + +/* SH_GFX_WRITE_STATUS_1_RE_ENABLE_GFX_STALL */ +/* Description: Re-enable GFX stall logic for this processor */ +#define SH_GFX_WRITE_STATUS_1_RE_ENABLE_GFX_STALL_SHFT 63 +#define SH_GFX_WRITE_STATUS_1_RE_ENABLE_GFX_STALL_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_II_INT0" */ +/* SHub II Interrupt 0 Registers */ +/* ==================================================================== */ + +#define SH_II_INT0 0x0000000110000000 +#define SH_II_INT0_MASK 0x00000000000001ff +#define SH_II_INT0_INIT 0x0000000000000000 + +/* SH_II_INT0_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_II_INT0_IDX_SHFT 0 +#define SH_II_INT0_IDX_MASK 0x00000000000000ff + +/* SH_II_INT0_SEND */ +/* Description: Send Interrupt Message to PI, This generates a puls */ +#define SH_II_INT0_SEND_SHFT 8 +#define SH_II_INT0_SEND_MASK 0x0000000000000100 + +/* ==================================================================== */ +/* Register "SH_II_INT0_CONFIG" */ +/* SHub II Interrupt 0 Config Registers */ +/* ==================================================================== */ + +#define SH_II_INT0_CONFIG 0x0000000110000080 +#define SH_II_INT0_CONFIG_MASK 0x0003ffffffefffff +#define SH_II_INT0_CONFIG_INIT 0x0000000000000000 + +/* SH_II_INT0_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_II_INT0_CONFIG_TYPE_SHFT 0 +#define SH_II_INT0_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_II_INT0_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_II_INT0_CONFIG_AGT_SHFT 3 +#define SH_II_INT0_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_II_INT0_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_II_INT0_CONFIG_PID_SHFT 4 +#define SH_II_INT0_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_II_INT0_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define 
SH_II_INT0_CONFIG_BASE_SHFT 21 +#define SH_II_INT0_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* ==================================================================== */ +/* Register "SH_II_INT0_ENABLE" */ +/* SHub II Interrupt 0 Enable Registers */ +/* ==================================================================== */ + +#define SH_II_INT0_ENABLE 0x0000000110000200 +#define SH_II_INT0_ENABLE_MASK 0x0000000000000001 +#define SH_II_INT0_ENABLE_INIT 0x0000000000000000 + +/* SH_II_INT0_ENABLE_II_ENABLE */ +/* Description: Enable II Interrupt */ +#define SH_II_INT0_ENABLE_II_ENABLE_SHFT 0 +#define SH_II_INT0_ENABLE_II_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_II_INT1" */ +/* SHub II Interrupt 1 Registers */ +/* ==================================================================== */ + +#define SH_II_INT1 0x0000000110000100 +#define SH_II_INT1_MASK 0x00000000000001ff +#define SH_II_INT1_INIT 0x0000000000000000 + +/* SH_II_INT1_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_II_INT1_IDX_SHFT 0 +#define SH_II_INT1_IDX_MASK 0x00000000000000ff + +/* SH_II_INT1_SEND */ +/* Description: Send Interrupt Message to PI, This generates a puls */ +#define SH_II_INT1_SEND_SHFT 8 +#define SH_II_INT1_SEND_MASK 0x0000000000000100 + +/* ==================================================================== */ +/* Register "SH_II_INT1_CONFIG" */ +/* SHub II Interrupt 1 Config Registers */ +/* ==================================================================== */ + +#define SH_II_INT1_CONFIG 0x0000000110000180 +#define SH_II_INT1_CONFIG_MASK 0x0003ffffffefffff +#define SH_II_INT1_CONFIG_INIT 0x0000000000000000 + +/* SH_II_INT1_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_II_INT1_CONFIG_TYPE_SHFT 0 +#define SH_II_INT1_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_II_INT1_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_II_INT1_CONFIG_AGT_SHFT 3 +#define SH_II_INT1_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_II_INT1_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_II_INT1_CONFIG_PID_SHFT 4 +#define SH_II_INT1_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_II_INT1_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_II_INT1_CONFIG_BASE_SHFT 21 +#define SH_II_INT1_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* ==================================================================== */ +/* Register "SH_II_INT1_ENABLE" */ +/* SHub II Interrupt 1 Enable Registers */ +/* ==================================================================== */ + +#define SH_II_INT1_ENABLE 0x0000000110000280 +#define SH_II_INT1_ENABLE_MASK 0x0000000000000001 +#define SH_II_INT1_ENABLE_INIT 0x0000000000000000 + +/* SH_II_INT1_ENABLE_II_ENABLE */ +/* Description: Enable II 1 Interrupt */ +#define SH_II_INT1_ENABLE_II_ENABLE_SHFT 0 +#define SH_II_INT1_ENABLE_II_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_INT_NODE_ID_CONFIG" */ +/* SHub Interrupt Node ID Configuration */ +/* ==================================================================== */ + +#define SH_INT_NODE_ID_CONFIG 0x0000000110000300 +#define SH_INT_NODE_ID_CONFIG_MASK 0x0000000000000fff +#define SH_INT_NODE_ID_CONFIG_INIT 0x0000000000000000 + +/* SH_INT_NODE_ID_CONFIG_NODE_ID */ +/* Description: Node ID for interrupt messages */ +#define 
SH_INT_NODE_ID_CONFIG_NODE_ID_SHFT 0 +#define SH_INT_NODE_ID_CONFIG_NODE_ID_MASK 0x00000000000007ff + +/* SH_INT_NODE_ID_CONFIG_ID_SEL */ +/* Description: Select node id for interrupt messages */ +#define SH_INT_NODE_ID_CONFIG_ID_SEL_SHFT 11 +#define SH_INT_NODE_ID_CONFIG_ID_SEL_MASK 0x0000000000000800 + +/* ==================================================================== */ +/* Register "SH_IPI_INT" */ +/* SHub Inter-Processor Interrupt Registers */ +/* ==================================================================== */ + +#define SH_IPI_INT 0x0000000110000380 +#define SH_IPI_INT_MASK 0x8ff3ffffffefffff +#define SH_IPI_INT_INIT 0x0000000000000000 + +/* SH_IPI_INT_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_IPI_INT_TYPE_SHFT 0 +#define SH_IPI_INT_TYPE_MASK 0x0000000000000007 + +/* SH_IPI_INT_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_IPI_INT_AGT_SHFT 3 +#define SH_IPI_INT_AGT_MASK 0x0000000000000008 + +/* SH_IPI_INT_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_IPI_INT_PID_SHFT 4 +#define SH_IPI_INT_PID_MASK 0x00000000000ffff0 + +/* SH_IPI_INT_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_IPI_INT_BASE_SHFT 21 +#define SH_IPI_INT_BASE_MASK 0x0003ffffffe00000 + +/* SH_IPI_INT_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_IPI_INT_IDX_SHFT 52 +#define SH_IPI_INT_IDX_MASK 0x0ff0000000000000 + +/* SH_IPI_INT_SEND */ +/* Description: Send Interrupt Message to PI, This generates a puls */ +#define SH_IPI_INT_SEND_SHFT 63 +#define SH_IPI_INT_SEND_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_IPI_INT_ENABLE" */ +/* SHub Inter-Processor Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_IPI_INT_ENABLE 0x0000000110000400 +#define SH_IPI_INT_ENABLE_MASK 0x0000000000000001 +#define SH_IPI_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_IPI_INT_ENABLE_PIO_ENABLE */ +/* Description: Enable PIO Interrupt */ +#define SH_IPI_INT_ENABLE_PIO_ENABLE_SHFT 0 +#define SH_IPI_INT_ENABLE_PIO_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT0_CONFIG" */ +/* SHub Local Interrupt 0 Registers */ +/* ==================================================================== */ + +#define SH_LOCAL_INT0_CONFIG 0x0000000110000480 +#define SH_LOCAL_INT0_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_LOCAL_INT0_CONFIG_INIT 0x0000000000000000 + +/* SH_LOCAL_INT0_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_LOCAL_INT0_CONFIG_TYPE_SHFT 0 +#define SH_LOCAL_INT0_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_LOCAL_INT0_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_LOCAL_INT0_CONFIG_AGT_SHFT 3 +#define SH_LOCAL_INT0_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT0_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_LOCAL_INT0_CONFIG_PID_SHFT 4 +#define SH_LOCAL_INT0_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_LOCAL_INT0_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_LOCAL_INT0_CONFIG_BASE_SHFT 21 +#define SH_LOCAL_INT0_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_LOCAL_INT0_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define 
SH_LOCAL_INT0_CONFIG_IDX_SHFT 52 +#define SH_LOCAL_INT0_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT0_ENABLE" */ +/* SHub Local Interrupt 0 Enable */ +/* ==================================================================== */ + +#define SH_LOCAL_INT0_ENABLE 0x0000000110000500 +#define SH_LOCAL_INT0_ENABLE_MASK 0x000000000000f7ff +#define SH_LOCAL_INT0_ENABLE_INIT 0x0000000000000000 + +/* SH_LOCAL_INT0_ENABLE_PI_HW_INT */ +/* Description: Enable PI Hardware interrupt */ +#define SH_LOCAL_INT0_ENABLE_PI_HW_INT_SHFT 0 +#define SH_LOCAL_INT0_ENABLE_PI_HW_INT_MASK 0x0000000000000001 + +/* SH_LOCAL_INT0_ENABLE_MD_HW_INT */ +/* Description: Enable MD Hardware interrupt */ +#define SH_LOCAL_INT0_ENABLE_MD_HW_INT_SHFT 1 +#define SH_LOCAL_INT0_ENABLE_MD_HW_INT_MASK 0x0000000000000002 + +/* SH_LOCAL_INT0_ENABLE_XN_HW_INT */ +/* Description: Enable XN Hardware interrupt */ +#define SH_LOCAL_INT0_ENABLE_XN_HW_INT_SHFT 2 +#define SH_LOCAL_INT0_ENABLE_XN_HW_INT_MASK 0x0000000000000004 + +/* SH_LOCAL_INT0_ENABLE_LB_HW_INT */ +/* Description: Enable LB Hardware interrupt */ +#define SH_LOCAL_INT0_ENABLE_LB_HW_INT_SHFT 3 +#define SH_LOCAL_INT0_ENABLE_LB_HW_INT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT0_ENABLE_II_HW_INT */ +/* Description: Enable II wrapper Hardware interrupt */ +#define SH_LOCAL_INT0_ENABLE_II_HW_INT_SHFT 4 +#define SH_LOCAL_INT0_ENABLE_II_HW_INT_MASK 0x0000000000000010 + +/* SH_LOCAL_INT0_ENABLE_PI_CE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT0_ENABLE_PI_CE_INT_SHFT 5 +#define SH_LOCAL_INT0_ENABLE_PI_CE_INT_MASK 0x0000000000000020 + +/* SH_LOCAL_INT0_ENABLE_MD_CE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT0_ENABLE_MD_CE_INT_SHFT 6 +#define SH_LOCAL_INT0_ENABLE_MD_CE_INT_MASK 0x0000000000000040 + +/* SH_LOCAL_INT0_ENABLE_XN_CE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT0_ENABLE_XN_CE_INT_SHFT 7 +#define SH_LOCAL_INT0_ENABLE_XN_CE_INT_MASK 0x0000000000000080 + +/* SH_LOCAL_INT0_ENABLE_PI_UCE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT0_ENABLE_PI_UCE_INT_SHFT 8 +#define SH_LOCAL_INT0_ENABLE_PI_UCE_INT_MASK 0x0000000000000100 + +/* SH_LOCAL_INT0_ENABLE_MD_UCE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT0_ENABLE_MD_UCE_INT_SHFT 9 +#define SH_LOCAL_INT0_ENABLE_MD_UCE_INT_MASK 0x0000000000000200 + +/* SH_LOCAL_INT0_ENABLE_XN_UCE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT0_ENABLE_XN_UCE_INT_SHFT 10 +#define SH_LOCAL_INT0_ENABLE_XN_UCE_INT_MASK 0x0000000000000400 + +/* SH_LOCAL_INT0_ENABLE_SYSTEM_SHUTDOWN_INT */ +/* Description: Enable System Shutdown Interrupt */ +#define SH_LOCAL_INT0_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12 +#define SH_LOCAL_INT0_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000 + +/* SH_LOCAL_INT0_ENABLE_UART_INT */ +/* Description: Enable Junk Bus UART Interrupt */ +#define SH_LOCAL_INT0_ENABLE_UART_INT_SHFT 13 +#define SH_LOCAL_INT0_ENABLE_UART_INT_MASK 0x0000000000002000 + +/* SH_LOCAL_INT0_ENABLE_L1_NMI_INT */ +/* Description: Enable L1 Controller NMI Interrupt */ +#define SH_LOCAL_INT0_ENABLE_L1_NMI_INT_SHFT 14 +#define SH_LOCAL_INT0_ENABLE_L1_NMI_INT_MASK 0x0000000000004000 + +/* SH_LOCAL_INT0_ENABLE_STOP_CLOCK */ +/* Description: Stop Clock Interrupt */ +#define SH_LOCAL_INT0_ENABLE_STOP_CLOCK_SHFT 15 +#define 
SH_LOCAL_INT0_ENABLE_STOP_CLOCK_MASK 0x0000000000008000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT1_CONFIG" */ +/* SHub Local Interrupt 1 Registers */ +/* ==================================================================== */ + +#define SH_LOCAL_INT1_CONFIG 0x0000000110000580 +#define SH_LOCAL_INT1_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_LOCAL_INT1_CONFIG_INIT 0x0000000000000000 + +/* SH_LOCAL_INT1_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_LOCAL_INT1_CONFIG_TYPE_SHFT 0 +#define SH_LOCAL_INT1_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_LOCAL_INT1_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_LOCAL_INT1_CONFIG_AGT_SHFT 3 +#define SH_LOCAL_INT1_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT1_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_LOCAL_INT1_CONFIG_PID_SHFT 4 +#define SH_LOCAL_INT1_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_LOCAL_INT1_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_LOCAL_INT1_CONFIG_BASE_SHFT 21 +#define SH_LOCAL_INT1_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_LOCAL_INT1_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_LOCAL_INT1_CONFIG_IDX_SHFT 52 +#define SH_LOCAL_INT1_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT1_ENABLE" */ +/* SHub Local Interrupt 1 Enable */ +/* ==================================================================== */ + +#define SH_LOCAL_INT1_ENABLE 0x0000000110000600 +#define SH_LOCAL_INT1_ENABLE_MASK 0x000000000000f7ff +#define SH_LOCAL_INT1_ENABLE_INIT 0x0000000000000000 + +/* SH_LOCAL_INT1_ENABLE_PI_HW_INT */ +/* Description: Enable PI Hardware interrupt */ +#define SH_LOCAL_INT1_ENABLE_PI_HW_INT_SHFT 0 +#define SH_LOCAL_INT1_ENABLE_PI_HW_INT_MASK 0x0000000000000001 + +/* SH_LOCAL_INT1_ENABLE_MD_HW_INT */ +/* Description: Enable MD Hardware interrupt */ +#define SH_LOCAL_INT1_ENABLE_MD_HW_INT_SHFT 1 +#define SH_LOCAL_INT1_ENABLE_MD_HW_INT_MASK 0x0000000000000002 + +/* SH_LOCAL_INT1_ENABLE_XN_HW_INT */ +/* Description: Enable XN Hardware interrupt */ +#define SH_LOCAL_INT1_ENABLE_XN_HW_INT_SHFT 2 +#define SH_LOCAL_INT1_ENABLE_XN_HW_INT_MASK 0x0000000000000004 + +/* SH_LOCAL_INT1_ENABLE_LB_HW_INT */ +/* Description: Enable LB Hardware interrupt */ +#define SH_LOCAL_INT1_ENABLE_LB_HW_INT_SHFT 3 +#define SH_LOCAL_INT1_ENABLE_LB_HW_INT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT1_ENABLE_II_HW_INT */ +/* Description: Enable II wrapper Hardware interrupt */ +#define SH_LOCAL_INT1_ENABLE_II_HW_INT_SHFT 4 +#define SH_LOCAL_INT1_ENABLE_II_HW_INT_MASK 0x0000000000000010 + +/* SH_LOCAL_INT1_ENABLE_PI_CE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT1_ENABLE_PI_CE_INT_SHFT 5 +#define SH_LOCAL_INT1_ENABLE_PI_CE_INT_MASK 0x0000000000000020 + +/* SH_LOCAL_INT1_ENABLE_MD_CE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT1_ENABLE_MD_CE_INT_SHFT 6 +#define SH_LOCAL_INT1_ENABLE_MD_CE_INT_MASK 0x0000000000000040 + +/* SH_LOCAL_INT1_ENABLE_XN_CE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT1_ENABLE_XN_CE_INT_SHFT 7 +#define SH_LOCAL_INT1_ENABLE_XN_CE_INT_MASK 0x0000000000000080 + +/* SH_LOCAL_INT1_ENABLE_PI_UCE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ 
+#define SH_LOCAL_INT1_ENABLE_PI_UCE_INT_SHFT 8 +#define SH_LOCAL_INT1_ENABLE_PI_UCE_INT_MASK 0x0000000000000100 + +/* SH_LOCAL_INT1_ENABLE_MD_UCE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT1_ENABLE_MD_UCE_INT_SHFT 9 +#define SH_LOCAL_INT1_ENABLE_MD_UCE_INT_MASK 0x0000000000000200 + +/* SH_LOCAL_INT1_ENABLE_XN_UCE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT1_ENABLE_XN_UCE_INT_SHFT 10 +#define SH_LOCAL_INT1_ENABLE_XN_UCE_INT_MASK 0x0000000000000400 + +/* SH_LOCAL_INT1_ENABLE_SYSTEM_SHUTDOWN_INT */ +/* Description: Enable System Shutdown Interrupt */ +#define SH_LOCAL_INT1_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12 +#define SH_LOCAL_INT1_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000 + +/* SH_LOCAL_INT1_ENABLE_UART_INT */ +/* Description: Enable Junk Bus UART Interrupt */ +#define SH_LOCAL_INT1_ENABLE_UART_INT_SHFT 13 +#define SH_LOCAL_INT1_ENABLE_UART_INT_MASK 0x0000000000002000 + +/* SH_LOCAL_INT1_ENABLE_L1_NMI_INT */ +/* Description: Enable L1 Controller NMI Interrupt */ +#define SH_LOCAL_INT1_ENABLE_L1_NMI_INT_SHFT 14 +#define SH_LOCAL_INT1_ENABLE_L1_NMI_INT_MASK 0x0000000000004000 + +/* SH_LOCAL_INT1_ENABLE_STOP_CLOCK */ +/* Description: Stop Clock Interrupt */ +#define SH_LOCAL_INT1_ENABLE_STOP_CLOCK_SHFT 15 +#define SH_LOCAL_INT1_ENABLE_STOP_CLOCK_MASK 0x0000000000008000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT2_CONFIG" */ +/* SHub Local Interrupt 2 Registers */ +/* ==================================================================== */ + +#define SH_LOCAL_INT2_CONFIG 0x0000000110000680 +#define SH_LOCAL_INT2_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_LOCAL_INT2_CONFIG_INIT 0x0000000000000000 + +/* SH_LOCAL_INT2_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_LOCAL_INT2_CONFIG_TYPE_SHFT 0 +#define SH_LOCAL_INT2_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_LOCAL_INT2_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_LOCAL_INT2_CONFIG_AGT_SHFT 3 +#define SH_LOCAL_INT2_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT2_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_LOCAL_INT2_CONFIG_PID_SHFT 4 +#define SH_LOCAL_INT2_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_LOCAL_INT2_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_LOCAL_INT2_CONFIG_BASE_SHFT 21 +#define SH_LOCAL_INT2_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_LOCAL_INT2_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_LOCAL_INT2_CONFIG_IDX_SHFT 52 +#define SH_LOCAL_INT2_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT2_ENABLE" */ +/* SHub Local Interrupt 2 Enable */ +/* ==================================================================== */ + +#define SH_LOCAL_INT2_ENABLE 0x0000000110000700 +#define SH_LOCAL_INT2_ENABLE_MASK 0x000000000000f7ff +#define SH_LOCAL_INT2_ENABLE_INIT 0x0000000000000000 + +/* SH_LOCAL_INT2_ENABLE_PI_HW_INT */ +/* Description: Enable PI Hardware interrupt */ +#define SH_LOCAL_INT2_ENABLE_PI_HW_INT_SHFT 0 +#define SH_LOCAL_INT2_ENABLE_PI_HW_INT_MASK 0x0000000000000001 + +/* SH_LOCAL_INT2_ENABLE_MD_HW_INT */ +/* Description: Enable MD Hardware interrupt */ +#define SH_LOCAL_INT2_ENABLE_MD_HW_INT_SHFT 1 +#define SH_LOCAL_INT2_ENABLE_MD_HW_INT_MASK 0x0000000000000002 + +/* 
SH_LOCAL_INT2_ENABLE_XN_HW_INT */ +/* Description: Enable XN Hardware interrupt */ +#define SH_LOCAL_INT2_ENABLE_XN_HW_INT_SHFT 2 +#define SH_LOCAL_INT2_ENABLE_XN_HW_INT_MASK 0x0000000000000004 + +/* SH_LOCAL_INT2_ENABLE_LB_HW_INT */ +/* Description: Enable LB Hardware interrupt */ +#define SH_LOCAL_INT2_ENABLE_LB_HW_INT_SHFT 3 +#define SH_LOCAL_INT2_ENABLE_LB_HW_INT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT2_ENABLE_II_HW_INT */ +/* Description: Enable II wrapper Hardware interrupt */ +#define SH_LOCAL_INT2_ENABLE_II_HW_INT_SHFT 4 +#define SH_LOCAL_INT2_ENABLE_II_HW_INT_MASK 0x0000000000000010 + +/* SH_LOCAL_INT2_ENABLE_PI_CE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT2_ENABLE_PI_CE_INT_SHFT 5 +#define SH_LOCAL_INT2_ENABLE_PI_CE_INT_MASK 0x0000000000000020 + +/* SH_LOCAL_INT2_ENABLE_MD_CE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT2_ENABLE_MD_CE_INT_SHFT 6 +#define SH_LOCAL_INT2_ENABLE_MD_CE_INT_MASK 0x0000000000000040 + +/* SH_LOCAL_INT2_ENABLE_XN_CE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT2_ENABLE_XN_CE_INT_SHFT 7 +#define SH_LOCAL_INT2_ENABLE_XN_CE_INT_MASK 0x0000000000000080 + +/* SH_LOCAL_INT2_ENABLE_PI_UCE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT2_ENABLE_PI_UCE_INT_SHFT 8 +#define SH_LOCAL_INT2_ENABLE_PI_UCE_INT_MASK 0x0000000000000100 + +/* SH_LOCAL_INT2_ENABLE_MD_UCE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT2_ENABLE_MD_UCE_INT_SHFT 9 +#define SH_LOCAL_INT2_ENABLE_MD_UCE_INT_MASK 0x0000000000000200 + +/* SH_LOCAL_INT2_ENABLE_XN_UCE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT2_ENABLE_XN_UCE_INT_SHFT 10 +#define SH_LOCAL_INT2_ENABLE_XN_UCE_INT_MASK 0x0000000000000400 + +/* SH_LOCAL_INT2_ENABLE_SYSTEM_SHUTDOWN_INT */ +/* Description: Enable System Shutdown Interrupt */ +#define SH_LOCAL_INT2_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12 +#define SH_LOCAL_INT2_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000 + +/* SH_LOCAL_INT2_ENABLE_UART_INT */ +/* Description: Enable Junk Bus UART Interrupt */ +#define SH_LOCAL_INT2_ENABLE_UART_INT_SHFT 13 +#define SH_LOCAL_INT2_ENABLE_UART_INT_MASK 0x0000000000002000 + +/* SH_LOCAL_INT2_ENABLE_L1_NMI_INT */ +/* Description: Enable L1 Controller NMI Interrupt */ +#define SH_LOCAL_INT2_ENABLE_L1_NMI_INT_SHFT 14 +#define SH_LOCAL_INT2_ENABLE_L1_NMI_INT_MASK 0x0000000000004000 + +/* SH_LOCAL_INT2_ENABLE_STOP_CLOCK */ +/* Description: Stop Clock Interrupt */ +#define SH_LOCAL_INT2_ENABLE_STOP_CLOCK_SHFT 15 +#define SH_LOCAL_INT2_ENABLE_STOP_CLOCK_MASK 0x0000000000008000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT3_CONFIG" */ +/* SHub Local Interrupt 3 Registers */ +/* ==================================================================== */ + +#define SH_LOCAL_INT3_CONFIG 0x0000000110000780 +#define SH_LOCAL_INT3_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_LOCAL_INT3_CONFIG_INIT 0x0000000000000000 + +/* SH_LOCAL_INT3_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_LOCAL_INT3_CONFIG_TYPE_SHFT 0 +#define SH_LOCAL_INT3_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_LOCAL_INT3_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_LOCAL_INT3_CONFIG_AGT_SHFT 3 +#define SH_LOCAL_INT3_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT3_CONFIG_PID */ +/* Description: Processor ID, same 
setting as on targeted McKinley */ +#define SH_LOCAL_INT3_CONFIG_PID_SHFT 4 +#define SH_LOCAL_INT3_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_LOCAL_INT3_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_LOCAL_INT3_CONFIG_BASE_SHFT 21 +#define SH_LOCAL_INT3_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_LOCAL_INT3_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_LOCAL_INT3_CONFIG_IDX_SHFT 52 +#define SH_LOCAL_INT3_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT3_ENABLE" */ +/* SHub Local Interrupt 3 Enable */ +/* ==================================================================== */ + +#define SH_LOCAL_INT3_ENABLE 0x0000000110000800 +#define SH_LOCAL_INT3_ENABLE_MASK 0x000000000000f7ff +#define SH_LOCAL_INT3_ENABLE_INIT 0x0000000000000000 + +/* SH_LOCAL_INT3_ENABLE_PI_HW_INT */ +/* Description: Enable PI Hardware interrupt */ +#define SH_LOCAL_INT3_ENABLE_PI_HW_INT_SHFT 0 +#define SH_LOCAL_INT3_ENABLE_PI_HW_INT_MASK 0x0000000000000001 + +/* SH_LOCAL_INT3_ENABLE_MD_HW_INT */ +/* Description: Enable MD Hardware interrupt */ +#define SH_LOCAL_INT3_ENABLE_MD_HW_INT_SHFT 1 +#define SH_LOCAL_INT3_ENABLE_MD_HW_INT_MASK 0x0000000000000002 + +/* SH_LOCAL_INT3_ENABLE_XN_HW_INT */ +/* Description: Enable XN Hardware interrupt */ +#define SH_LOCAL_INT3_ENABLE_XN_HW_INT_SHFT 2 +#define SH_LOCAL_INT3_ENABLE_XN_HW_INT_MASK 0x0000000000000004 + +/* SH_LOCAL_INT3_ENABLE_LB_HW_INT */ +/* Description: Enable LB Hardware interrupt */ +#define SH_LOCAL_INT3_ENABLE_LB_HW_INT_SHFT 3 +#define SH_LOCAL_INT3_ENABLE_LB_HW_INT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT3_ENABLE_II_HW_INT */ +/* Description: Enable II wrapper Hardware interrupt */ +#define SH_LOCAL_INT3_ENABLE_II_HW_INT_SHFT 4 +#define SH_LOCAL_INT3_ENABLE_II_HW_INT_MASK 0x0000000000000010 + +/* SH_LOCAL_INT3_ENABLE_PI_CE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT3_ENABLE_PI_CE_INT_SHFT 5 +#define SH_LOCAL_INT3_ENABLE_PI_CE_INT_MASK 0x0000000000000020 + +/* SH_LOCAL_INT3_ENABLE_MD_CE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT3_ENABLE_MD_CE_INT_SHFT 6 +#define SH_LOCAL_INT3_ENABLE_MD_CE_INT_MASK 0x0000000000000040 + +/* SH_LOCAL_INT3_ENABLE_XN_CE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT3_ENABLE_XN_CE_INT_SHFT 7 +#define SH_LOCAL_INT3_ENABLE_XN_CE_INT_MASK 0x0000000000000080 + +/* SH_LOCAL_INT3_ENABLE_PI_UCE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT3_ENABLE_PI_UCE_INT_SHFT 8 +#define SH_LOCAL_INT3_ENABLE_PI_UCE_INT_MASK 0x0000000000000100 + +/* SH_LOCAL_INT3_ENABLE_MD_UCE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT3_ENABLE_MD_UCE_INT_SHFT 9 +#define SH_LOCAL_INT3_ENABLE_MD_UCE_INT_MASK 0x0000000000000200 + +/* SH_LOCAL_INT3_ENABLE_XN_UCE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT3_ENABLE_XN_UCE_INT_SHFT 10 +#define SH_LOCAL_INT3_ENABLE_XN_UCE_INT_MASK 0x0000000000000400 + +/* SH_LOCAL_INT3_ENABLE_SYSTEM_SHUTDOWN_INT */ +/* Description: Enable System Shutdown Interrupt */ +#define SH_LOCAL_INT3_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12 +#define SH_LOCAL_INT3_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000 + +/* SH_LOCAL_INT3_ENABLE_UART_INT */ +/* Description: Enable Junk Bus UART Interrupt */ +#define SH_LOCAL_INT3_ENABLE_UART_INT_SHFT 
13 +#define SH_LOCAL_INT3_ENABLE_UART_INT_MASK 0x0000000000002000 + +/* SH_LOCAL_INT3_ENABLE_L1_NMI_INT */ +/* Description: Enable L1 Controller NMI Interrupt */ +#define SH_LOCAL_INT3_ENABLE_L1_NMI_INT_SHFT 14 +#define SH_LOCAL_INT3_ENABLE_L1_NMI_INT_MASK 0x0000000000004000 + +/* SH_LOCAL_INT3_ENABLE_STOP_CLOCK */ +/* Description: Stop Clock Interrupt */ +#define SH_LOCAL_INT3_ENABLE_STOP_CLOCK_SHFT 15 +#define SH_LOCAL_INT3_ENABLE_STOP_CLOCK_MASK 0x0000000000008000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT4_CONFIG" */ +/* SHub Local Interrupt 4 Registers */ +/* ==================================================================== */ + +#define SH_LOCAL_INT4_CONFIG 0x0000000110000880 +#define SH_LOCAL_INT4_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_LOCAL_INT4_CONFIG_INIT 0x0000000000000000 + +/* SH_LOCAL_INT4_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_LOCAL_INT4_CONFIG_TYPE_SHFT 0 +#define SH_LOCAL_INT4_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_LOCAL_INT4_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_LOCAL_INT4_CONFIG_AGT_SHFT 3 +#define SH_LOCAL_INT4_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT4_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_LOCAL_INT4_CONFIG_PID_SHFT 4 +#define SH_LOCAL_INT4_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_LOCAL_INT4_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_LOCAL_INT4_CONFIG_BASE_SHFT 21 +#define SH_LOCAL_INT4_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_LOCAL_INT4_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_LOCAL_INT4_CONFIG_IDX_SHFT 52 +#define SH_LOCAL_INT4_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT4_ENABLE" */ +/* SHub Local Interrupt 4 Enable */ +/* ==================================================================== */ + +#define SH_LOCAL_INT4_ENABLE 0x0000000110000900 +#define SH_LOCAL_INT4_ENABLE_MASK 0x000000000000f7ff +#define SH_LOCAL_INT4_ENABLE_INIT 0x0000000000000000 + +/* SH_LOCAL_INT4_ENABLE_PI_HW_INT */ +/* Description: Enable PI Hardware interrupt */ +#define SH_LOCAL_INT4_ENABLE_PI_HW_INT_SHFT 0 +#define SH_LOCAL_INT4_ENABLE_PI_HW_INT_MASK 0x0000000000000001 + +/* SH_LOCAL_INT4_ENABLE_MD_HW_INT */ +/* Description: Enable MD Hardware interrupt */ +#define SH_LOCAL_INT4_ENABLE_MD_HW_INT_SHFT 1 +#define SH_LOCAL_INT4_ENABLE_MD_HW_INT_MASK 0x0000000000000002 + +/* SH_LOCAL_INT4_ENABLE_XN_HW_INT */ +/* Description: Enable XN Hardware interrupt */ +#define SH_LOCAL_INT4_ENABLE_XN_HW_INT_SHFT 2 +#define SH_LOCAL_INT4_ENABLE_XN_HW_INT_MASK 0x0000000000000004 + +/* SH_LOCAL_INT4_ENABLE_LB_HW_INT */ +/* Description: Enable LB Hardware interrupt */ +#define SH_LOCAL_INT4_ENABLE_LB_HW_INT_SHFT 3 +#define SH_LOCAL_INT4_ENABLE_LB_HW_INT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT4_ENABLE_II_HW_INT */ +/* Description: Enable II wrapper Hardware interrupt */ +#define SH_LOCAL_INT4_ENABLE_II_HW_INT_SHFT 4 +#define SH_LOCAL_INT4_ENABLE_II_HW_INT_MASK 0x0000000000000010 + +/* SH_LOCAL_INT4_ENABLE_PI_CE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT4_ENABLE_PI_CE_INT_SHFT 5 +#define SH_LOCAL_INT4_ENABLE_PI_CE_INT_MASK 0x0000000000000020 + +/* SH_LOCAL_INT4_ENABLE_MD_CE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define 
SH_LOCAL_INT4_ENABLE_MD_CE_INT_SHFT 6 +#define SH_LOCAL_INT4_ENABLE_MD_CE_INT_MASK 0x0000000000000040 + +/* SH_LOCAL_INT4_ENABLE_XN_CE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT4_ENABLE_XN_CE_INT_SHFT 7 +#define SH_LOCAL_INT4_ENABLE_XN_CE_INT_MASK 0x0000000000000080 + +/* SH_LOCAL_INT4_ENABLE_PI_UCE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT4_ENABLE_PI_UCE_INT_SHFT 8 +#define SH_LOCAL_INT4_ENABLE_PI_UCE_INT_MASK 0x0000000000000100 + +/* SH_LOCAL_INT4_ENABLE_MD_UCE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT4_ENABLE_MD_UCE_INT_SHFT 9 +#define SH_LOCAL_INT4_ENABLE_MD_UCE_INT_MASK 0x0000000000000200 + +/* SH_LOCAL_INT4_ENABLE_XN_UCE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT4_ENABLE_XN_UCE_INT_SHFT 10 +#define SH_LOCAL_INT4_ENABLE_XN_UCE_INT_MASK 0x0000000000000400 + +/* SH_LOCAL_INT4_ENABLE_SYSTEM_SHUTDOWN_INT */ +/* Description: Enable System Shutdown Interrupt */ +#define SH_LOCAL_INT4_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12 +#define SH_LOCAL_INT4_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000 + +/* SH_LOCAL_INT4_ENABLE_UART_INT */ +/* Description: Enable Junk Bus UART Interrupt */ +#define SH_LOCAL_INT4_ENABLE_UART_INT_SHFT 13 +#define SH_LOCAL_INT4_ENABLE_UART_INT_MASK 0x0000000000002000 + +/* SH_LOCAL_INT4_ENABLE_L1_NMI_INT */ +/* Description: Enable L1 Controller NMI Interrupt */ +#define SH_LOCAL_INT4_ENABLE_L1_NMI_INT_SHFT 14 +#define SH_LOCAL_INT4_ENABLE_L1_NMI_INT_MASK 0x0000000000004000 + +/* SH_LOCAL_INT4_ENABLE_STOP_CLOCK */ +/* Description: Stop Clock Interrupt */ +#define SH_LOCAL_INT4_ENABLE_STOP_CLOCK_SHFT 15 +#define SH_LOCAL_INT4_ENABLE_STOP_CLOCK_MASK 0x0000000000008000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT5_CONFIG" */ +/* SHub Local Interrupt 5 Registers */ +/* ==================================================================== */ + +#define SH_LOCAL_INT5_CONFIG 0x0000000110000980 +#define SH_LOCAL_INT5_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_LOCAL_INT5_CONFIG_INIT 0x0000000000000000 + +/* SH_LOCAL_INT5_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_LOCAL_INT5_CONFIG_TYPE_SHFT 0 +#define SH_LOCAL_INT5_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_LOCAL_INT5_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_LOCAL_INT5_CONFIG_AGT_SHFT 3 +#define SH_LOCAL_INT5_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT5_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_LOCAL_INT5_CONFIG_PID_SHFT 4 +#define SH_LOCAL_INT5_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_LOCAL_INT5_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_LOCAL_INT5_CONFIG_BASE_SHFT 21 +#define SH_LOCAL_INT5_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_LOCAL_INT5_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_LOCAL_INT5_CONFIG_IDX_SHFT 52 +#define SH_LOCAL_INT5_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT5_ENABLE" */ +/* SHub Local Interrupt 5 Enable */ +/* ==================================================================== */ + +#define SH_LOCAL_INT5_ENABLE 0x0000000110000a00 +#define SH_LOCAL_INT5_ENABLE_MASK 0x000000000000f7ff +#define SH_LOCAL_INT5_ENABLE_INIT 0x0000000000000000 + +/* 
SH_LOCAL_INT5_ENABLE_PI_HW_INT */ +/* Description: Enable PI Hardware interrupt */ +#define SH_LOCAL_INT5_ENABLE_PI_HW_INT_SHFT 0 +#define SH_LOCAL_INT5_ENABLE_PI_HW_INT_MASK 0x0000000000000001 + +/* SH_LOCAL_INT5_ENABLE_MD_HW_INT */ +/* Description: Enable MD Hardware interrupt */ +#define SH_LOCAL_INT5_ENABLE_MD_HW_INT_SHFT 1 +#define SH_LOCAL_INT5_ENABLE_MD_HW_INT_MASK 0x0000000000000002 + +/* SH_LOCAL_INT5_ENABLE_XN_HW_INT */ +/* Description: Enable XN Hardware interrupt */ +#define SH_LOCAL_INT5_ENABLE_XN_HW_INT_SHFT 2 +#define SH_LOCAL_INT5_ENABLE_XN_HW_INT_MASK 0x0000000000000004 + +/* SH_LOCAL_INT5_ENABLE_LB_HW_INT */ +/* Description: Enable LB Hardware interrupt */ +#define SH_LOCAL_INT5_ENABLE_LB_HW_INT_SHFT 3 +#define SH_LOCAL_INT5_ENABLE_LB_HW_INT_MASK 0x0000000000000008 + +/* SH_LOCAL_INT5_ENABLE_II_HW_INT */ +/* Description: Enable II wrapper Hardware interrupt */ +#define SH_LOCAL_INT5_ENABLE_II_HW_INT_SHFT 4 +#define SH_LOCAL_INT5_ENABLE_II_HW_INT_MASK 0x0000000000000010 + +/* SH_LOCAL_INT5_ENABLE_PI_CE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT5_ENABLE_PI_CE_INT_SHFT 5 +#define SH_LOCAL_INT5_ENABLE_PI_CE_INT_MASK 0x0000000000000020 + +/* SH_LOCAL_INT5_ENABLE_MD_CE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT5_ENABLE_MD_CE_INT_SHFT 6 +#define SH_LOCAL_INT5_ENABLE_MD_CE_INT_MASK 0x0000000000000040 + +/* SH_LOCAL_INT5_ENABLE_XN_CE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT5_ENABLE_XN_CE_INT_SHFT 7 +#define SH_LOCAL_INT5_ENABLE_XN_CE_INT_MASK 0x0000000000000080 + +/* SH_LOCAL_INT5_ENABLE_PI_UCE_INT */ +/* Description: Enable PI Correctable Error Interrupt */ +#define SH_LOCAL_INT5_ENABLE_PI_UCE_INT_SHFT 8 +#define SH_LOCAL_INT5_ENABLE_PI_UCE_INT_MASK 0x0000000000000100 + +/* SH_LOCAL_INT5_ENABLE_MD_UCE_INT */ +/* Description: Enable MD Correctable Error Interrupt */ +#define SH_LOCAL_INT5_ENABLE_MD_UCE_INT_SHFT 9 +#define SH_LOCAL_INT5_ENABLE_MD_UCE_INT_MASK 0x0000000000000200 + +/* SH_LOCAL_INT5_ENABLE_XN_UCE_INT */ +/* Description: Enable XN Correctable Error Interrupt */ +#define SH_LOCAL_INT5_ENABLE_XN_UCE_INT_SHFT 10 +#define SH_LOCAL_INT5_ENABLE_XN_UCE_INT_MASK 0x0000000000000400 + +/* SH_LOCAL_INT5_ENABLE_SYSTEM_SHUTDOWN_INT */ +/* Description: Enable System Shutdown Interrupt */ +#define SH_LOCAL_INT5_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12 +#define SH_LOCAL_INT5_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000 + +/* SH_LOCAL_INT5_ENABLE_UART_INT */ +/* Description: Enable Junk Bus UART Interrupt */ +#define SH_LOCAL_INT5_ENABLE_UART_INT_SHFT 13 +#define SH_LOCAL_INT5_ENABLE_UART_INT_MASK 0x0000000000002000 + +/* SH_LOCAL_INT5_ENABLE_L1_NMI_INT */ +/* Description: Enable L1 Controller NMI Interrupt */ +#define SH_LOCAL_INT5_ENABLE_L1_NMI_INT_SHFT 14 +#define SH_LOCAL_INT5_ENABLE_L1_NMI_INT_MASK 0x0000000000004000 + +/* SH_LOCAL_INT5_ENABLE_STOP_CLOCK */ +/* Description: Stop Clock Interrupt */ +#define SH_LOCAL_INT5_ENABLE_STOP_CLOCK_SHFT 15 +#define SH_LOCAL_INT5_ENABLE_STOP_CLOCK_MASK 0x0000000000008000 + +/* ==================================================================== */ +/* Register "SH_PROC0_ERR_INT_CONFIG" */ +/* SHub Processor 0 Error Interrupt Registers */ +/* ==================================================================== */ + +#define SH_PROC0_ERR_INT_CONFIG 0x0000000110000a80 +#define SH_PROC0_ERR_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_PROC0_ERR_INT_CONFIG_INIT 0x0000000000000000 + +/* 
SH_PROC0_ERR_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROC0_ERR_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROC0_ERR_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROC0_ERR_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROC0_ERR_INT_CONFIG_AGT_SHFT 3 +#define SH_PROC0_ERR_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROC0_ERR_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROC0_ERR_INT_CONFIG_PID_SHFT 4 +#define SH_PROC0_ERR_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_PROC0_ERR_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_PROC0_ERR_INT_CONFIG_BASE_SHFT 21 +#define SH_PROC0_ERR_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROC0_ERR_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROC0_ERR_INT_CONFIG_IDX_SHFT 52 +#define SH_PROC0_ERR_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC1_ERR_INT_CONFIG" */ +/* SHub Processor 1 Error Interrupt Registers */ +/* ==================================================================== */ + +#define SH_PROC1_ERR_INT_CONFIG 0x0000000110000b00 +#define SH_PROC1_ERR_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_PROC1_ERR_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_PROC1_ERR_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROC1_ERR_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROC1_ERR_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROC1_ERR_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROC1_ERR_INT_CONFIG_AGT_SHFT 3 +#define SH_PROC1_ERR_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROC1_ERR_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROC1_ERR_INT_CONFIG_PID_SHFT 4 +#define SH_PROC1_ERR_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_PROC1_ERR_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_PROC1_ERR_INT_CONFIG_BASE_SHFT 21 +#define SH_PROC1_ERR_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROC1_ERR_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROC1_ERR_INT_CONFIG_IDX_SHFT 52 +#define SH_PROC1_ERR_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC2_ERR_INT_CONFIG" */ +/* SHub Processor 2 Error Interrupt Registers */ +/* ==================================================================== */ + +#define SH_PROC2_ERR_INT_CONFIG 0x0000000110000b80 +#define SH_PROC2_ERR_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_PROC2_ERR_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_PROC2_ERR_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROC2_ERR_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROC2_ERR_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROC2_ERR_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROC2_ERR_INT_CONFIG_AGT_SHFT 3 +#define SH_PROC2_ERR_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROC2_ERR_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROC2_ERR_INT_CONFIG_PID_SHFT 4 +#define SH_PROC2_ERR_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_PROC2_ERR_INT_CONFIG_BASE */ +/* Description: Optional interrupt 
vector area, 2MB aligned */ +#define SH_PROC2_ERR_INT_CONFIG_BASE_SHFT 21 +#define SH_PROC2_ERR_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROC2_ERR_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROC2_ERR_INT_CONFIG_IDX_SHFT 52 +#define SH_PROC2_ERR_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC3_ERR_INT_CONFIG" */ +/* SHub Processor 3 Error Interrupt Registers */ +/* ==================================================================== */ + +#define SH_PROC3_ERR_INT_CONFIG 0x0000000110000c00 +#define SH_PROC3_ERR_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_PROC3_ERR_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_PROC3_ERR_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROC3_ERR_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROC3_ERR_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROC3_ERR_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROC3_ERR_INT_CONFIG_AGT_SHFT 3 +#define SH_PROC3_ERR_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROC3_ERR_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROC3_ERR_INT_CONFIG_PID_SHFT 4 +#define SH_PROC3_ERR_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_PROC3_ERR_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_PROC3_ERR_INT_CONFIG_BASE_SHFT 21 +#define SH_PROC3_ERR_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROC3_ERR_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROC3_ERR_INT_CONFIG_IDX_SHFT 52 +#define SH_PROC3_ERR_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC0_ADV_INT_CONFIG" */ +/* SHub Processor 0 Advisory Interrupt Registers */ +/* ==================================================================== */ + +#define SH_PROC0_ADV_INT_CONFIG 0x0000000110000c80 +#define SH_PROC0_ADV_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_PROC0_ADV_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_PROC0_ADV_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROC0_ADV_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROC0_ADV_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROC0_ADV_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROC0_ADV_INT_CONFIG_AGT_SHFT 3 +#define SH_PROC0_ADV_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROC0_ADV_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROC0_ADV_INT_CONFIG_PID_SHFT 4 +#define SH_PROC0_ADV_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_PROC0_ADV_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_PROC0_ADV_INT_CONFIG_BASE_SHFT 21 +#define SH_PROC0_ADV_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROC0_ADV_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROC0_ADV_INT_CONFIG_IDX_SHFT 52 +#define SH_PROC0_ADV_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC1_ADV_INT_CONFIG" */ +/* SHub Processor 1 Advisory Interrupt Registers */ +/* ==================================================================== */ + +#define SH_PROC1_ADV_INT_CONFIG 0x0000000110000d00 +#define SH_PROC1_ADV_INT_CONFIG_MASK 
0x0ff3ffffffefffff +#define SH_PROC1_ADV_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_PROC1_ADV_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROC1_ADV_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROC1_ADV_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROC1_ADV_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROC1_ADV_INT_CONFIG_AGT_SHFT 3 +#define SH_PROC1_ADV_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROC1_ADV_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROC1_ADV_INT_CONFIG_PID_SHFT 4 +#define SH_PROC1_ADV_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_PROC1_ADV_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_PROC1_ADV_INT_CONFIG_BASE_SHFT 21 +#define SH_PROC1_ADV_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROC1_ADV_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROC1_ADV_INT_CONFIG_IDX_SHFT 52 +#define SH_PROC1_ADV_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC2_ADV_INT_CONFIG" */ +/* SHub Processor 2 Advisory Interrupt Registers */ +/* ==================================================================== */ + +#define SH_PROC2_ADV_INT_CONFIG 0x0000000110000d80 +#define SH_PROC2_ADV_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_PROC2_ADV_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_PROC2_ADV_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROC2_ADV_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROC2_ADV_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROC2_ADV_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROC2_ADV_INT_CONFIG_AGT_SHFT 3 +#define SH_PROC2_ADV_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROC2_ADV_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROC2_ADV_INT_CONFIG_PID_SHFT 4 +#define SH_PROC2_ADV_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_PROC2_ADV_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_PROC2_ADV_INT_CONFIG_BASE_SHFT 21 +#define SH_PROC2_ADV_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROC2_ADV_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROC2_ADV_INT_CONFIG_IDX_SHFT 52 +#define SH_PROC2_ADV_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC3_ADV_INT_CONFIG" */ +/* SHub Processor 3 Advisory Interrupt Registers */ +/* ==================================================================== */ + +#define SH_PROC3_ADV_INT_CONFIG 0x0000000110000e00 +#define SH_PROC3_ADV_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_PROC3_ADV_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_PROC3_ADV_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROC3_ADV_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROC3_ADV_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROC3_ADV_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROC3_ADV_INT_CONFIG_AGT_SHFT 3 +#define SH_PROC3_ADV_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROC3_ADV_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROC3_ADV_INT_CONFIG_PID_SHFT 4 +#define SH_PROC3_ADV_INT_CONFIG_PID_MASK 
0x00000000000ffff0 + +/* SH_PROC3_ADV_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_PROC3_ADV_INT_CONFIG_BASE_SHFT 21 +#define SH_PROC3_ADV_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROC3_ADV_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROC3_ADV_INT_CONFIG_IDX_SHFT 52 +#define SH_PROC3_ADV_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC0_ERR_INT_ENABLE" */ +/* SHub Processor 0 Error Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROC0_ERR_INT_ENABLE 0x0000000110000e80 +#define SH_PROC0_ERR_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROC0_ERR_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROC0_ERR_INT_ENABLE_PROC0_ERR_ENABLE */ +/* Description: Enable Processor 0 Error Interrupt */ +#define SH_PROC0_ERR_INT_ENABLE_PROC0_ERR_ENABLE_SHFT 0 +#define SH_PROC0_ERR_INT_ENABLE_PROC0_ERR_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PROC1_ERR_INT_ENABLE" */ +/* SHub Processor 1 Error Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROC1_ERR_INT_ENABLE 0x0000000110000f00 +#define SH_PROC1_ERR_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROC1_ERR_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROC1_ERR_INT_ENABLE_PROC1_ERR_ENABLE */ +/* Description: Enable Processor 1 Error Interrupt */ +#define SH_PROC1_ERR_INT_ENABLE_PROC1_ERR_ENABLE_SHFT 0 +#define SH_PROC1_ERR_INT_ENABLE_PROC1_ERR_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PROC2_ERR_INT_ENABLE" */ +/* SHub Processor 2 Error Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROC2_ERR_INT_ENABLE 0x0000000110000f80 +#define SH_PROC2_ERR_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROC2_ERR_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROC2_ERR_INT_ENABLE_PROC2_ERR_ENABLE */ +/* Description: Enable Processor 2 Error Interrupt */ +#define SH_PROC2_ERR_INT_ENABLE_PROC2_ERR_ENABLE_SHFT 0 +#define SH_PROC2_ERR_INT_ENABLE_PROC2_ERR_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PROC3_ERR_INT_ENABLE" */ +/* SHub Processor 3 Error Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROC3_ERR_INT_ENABLE 0x0000000110001000 +#define SH_PROC3_ERR_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROC3_ERR_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROC3_ERR_INT_ENABLE_PROC3_ERR_ENABLE */ +/* Description: Enable Processor 3 Error Interrupt */ +#define SH_PROC3_ERR_INT_ENABLE_PROC3_ERR_ENABLE_SHFT 0 +#define SH_PROC3_ERR_INT_ENABLE_PROC3_ERR_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PROC0_ADV_INT_ENABLE" */ +/* SHub Processor 0 Advisory Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROC0_ADV_INT_ENABLE 0x0000000110001080 +#define SH_PROC0_ADV_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROC0_ADV_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROC0_ADV_INT_ENABLE_PROC0_ADV_ENABLE */ +/* Description: Enable 
Processor 0 Advisory Interrupt */ +#define SH_PROC0_ADV_INT_ENABLE_PROC0_ADV_ENABLE_SHFT 0 +#define SH_PROC0_ADV_INT_ENABLE_PROC0_ADV_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PROC1_ADV_INT_ENABLE" */ +/* SHub Processor 1 Advisory Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROC1_ADV_INT_ENABLE 0x0000000110001100 +#define SH_PROC1_ADV_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROC1_ADV_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROC1_ADV_INT_ENABLE_PROC1_ADV_ENABLE */ +/* Description: Enable Processor 1 Advisory Interrupt */ +#define SH_PROC1_ADV_INT_ENABLE_PROC1_ADV_ENABLE_SHFT 0 +#define SH_PROC1_ADV_INT_ENABLE_PROC1_ADV_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PROC2_ADV_INT_ENABLE" */ +/* SHub Processor 2 Advisory Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROC2_ADV_INT_ENABLE 0x0000000110001180 +#define SH_PROC2_ADV_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROC2_ADV_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROC2_ADV_INT_ENABLE_PROC2_ADV_ENABLE */ +/* Description: Enable Processor 2 Advisory Interrupt */ +#define SH_PROC2_ADV_INT_ENABLE_PROC2_ADV_ENABLE_SHFT 0 +#define SH_PROC2_ADV_INT_ENABLE_PROC2_ADV_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PROC3_ADV_INT_ENABLE" */ +/* SHub Processor 3 Advisory Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROC3_ADV_INT_ENABLE 0x0000000110001200 +#define SH_PROC3_ADV_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROC3_ADV_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROC3_ADV_INT_ENABLE_PROC3_ADV_ENABLE */ +/* Description: Enable Processor 3 Advisory Interrupt */ +#define SH_PROC3_ADV_INT_ENABLE_PROC3_ADV_ENABLE_SHFT 0 +#define SH_PROC3_ADV_INT_ENABLE_PROC3_ADV_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PROFILE_INT_CONFIG" */ +/* SHub Profile Interrupt Configuration Registers */ +/* ==================================================================== */ + +#define SH_PROFILE_INT_CONFIG 0x0000000110001280 +#define SH_PROFILE_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_PROFILE_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_PROFILE_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_PROFILE_INT_CONFIG_TYPE_SHFT 0 +#define SH_PROFILE_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_PROFILE_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_PROFILE_INT_CONFIG_AGT_SHFT 3 +#define SH_PROFILE_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_PROFILE_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_PROFILE_INT_CONFIG_PID_SHFT 4 +#define SH_PROFILE_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_PROFILE_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_PROFILE_INT_CONFIG_BASE_SHFT 21 +#define SH_PROFILE_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_PROFILE_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_PROFILE_INT_CONFIG_IDX_SHFT 52 +#define SH_PROFILE_INT_CONFIG_IDX_MASK 0x0ff0000000000000 
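[Editor's note, not part of the patch:] The processor error/advisory and profile interrupt config registers above all share one layout: a 3-bit TYPE (0=INT, 2=PMI, 4=NMI, 5=INIT), an AGT bit that must stay 0 for SHub, the PID of the targeted McKinley, an optional 2MB-aligned vector BASE, and the IDX vector in the top byte. As a minimal sketch only, a value for one of these registers can be composed from the _SHFT/_MASK macros defined here; the helper name is hypothetical and the actual MMR store is left to platform code:

    /*
     * Sketch: build a SH_PROFILE_INT_CONFIG value from the field macros
     * above.  Only the SH_PROFILE_INT_CONFIG_* names come from this
     * header; writing the result to the MMR is not shown.
     */
    static inline unsigned long
    sh_profile_int_config_value(unsigned long type, unsigned long pid,
                                unsigned long idx)
    {
            unsigned long val = 0;

            /* TYPE: 0=INT, 2=PMI, 4=NMI, 5=INIT */
            val |= (type << SH_PROFILE_INT_CONFIG_TYPE_SHFT) &
                    SH_PROFILE_INT_CONFIG_TYPE_MASK;
            /* AGT must be 0 for SHub, so that bit is simply left clear */
            /* PID: same setting as on the targeted McKinley */
            val |= (pid << SH_PROFILE_INT_CONFIG_PID_SHFT) &
                    SH_PROFILE_INT_CONFIG_PID_MASK;
            /* IDX: interrupt vector delivered to the targeted processor */
            val |= (idx << SH_PROFILE_INT_CONFIG_IDX_SHFT) &
                    SH_PROFILE_INT_CONFIG_IDX_MASK;

            return val;
    }

The matching *_INT_ENABLE registers then gate delivery with their single enable bit (e.g. SH_PROFILE_INT_ENABLE_PROFILE_ENABLE_MASK). The enable register for the profile interrupt follows next in the patch.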
+ +/* ==================================================================== */ +/* Register "SH_PROFILE_INT_ENABLE" */ +/* SHub Profile Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_PROFILE_INT_ENABLE 0x0000000110001300 +#define SH_PROFILE_INT_ENABLE_MASK 0x0000000000000001 +#define SH_PROFILE_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_PROFILE_INT_ENABLE_PROFILE_ENABLE */ +/* Description: Enable Profile Interrupt */ +#define SH_PROFILE_INT_ENABLE_PROFILE_ENABLE_SHFT 0 +#define SH_PROFILE_INT_ENABLE_PROFILE_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_RTC0_INT_CONFIG" */ +/* SHub RTC 0 Interrupt Config Registers */ +/* ==================================================================== */ + +#define SH_RTC0_INT_CONFIG 0x0000000110001380 +#define SH_RTC0_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_RTC0_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_RTC0_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_RTC0_INT_CONFIG_TYPE_SHFT 0 +#define SH_RTC0_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_RTC0_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_RTC0_INT_CONFIG_AGT_SHFT 3 +#define SH_RTC0_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_RTC0_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_RTC0_INT_CONFIG_PID_SHFT 4 +#define SH_RTC0_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_RTC0_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_RTC0_INT_CONFIG_BASE_SHFT 21 +#define SH_RTC0_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_RTC0_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_RTC0_INT_CONFIG_IDX_SHFT 52 +#define SH_RTC0_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_RTC0_INT_ENABLE" */ +/* SHub RTC 0 Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_RTC0_INT_ENABLE 0x0000000110001400 +#define SH_RTC0_INT_ENABLE_MASK 0x0000000000000001 +#define SH_RTC0_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_RTC0_INT_ENABLE_RTC0_ENABLE */ +/* Description: Enable RTC 0 Interrupt */ +#define SH_RTC0_INT_ENABLE_RTC0_ENABLE_SHFT 0 +#define SH_RTC0_INT_ENABLE_RTC0_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_RTC1_INT_CONFIG" */ +/* SHub RTC 1 Interrupt Config Registers */ +/* ==================================================================== */ + +#define SH_RTC1_INT_CONFIG 0x0000000110001480 +#define SH_RTC1_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_RTC1_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_RTC1_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_RTC1_INT_CONFIG_TYPE_SHFT 0 +#define SH_RTC1_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_RTC1_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_RTC1_INT_CONFIG_AGT_SHFT 3 +#define SH_RTC1_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_RTC1_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_RTC1_INT_CONFIG_PID_SHFT 4 +#define SH_RTC1_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_RTC1_INT_CONFIG_BASE */ +/* Description: Optional 
interrupt vector area, 2MB aligned */ +#define SH_RTC1_INT_CONFIG_BASE_SHFT 21 +#define SH_RTC1_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_RTC1_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_RTC1_INT_CONFIG_IDX_SHFT 52 +#define SH_RTC1_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_RTC1_INT_ENABLE" */ +/* SHub RTC 1 Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_RTC1_INT_ENABLE 0x0000000110001500 +#define SH_RTC1_INT_ENABLE_MASK 0x0000000000000001 +#define SH_RTC1_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_RTC1_INT_ENABLE_RTC1_ENABLE */ +/* Description: Enable RTC 1 Interrupt */ +#define SH_RTC1_INT_ENABLE_RTC1_ENABLE_SHFT 0 +#define SH_RTC1_INT_ENABLE_RTC1_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_RTC2_INT_CONFIG" */ +/* SHub RTC 2 Interrupt Config Registers */ +/* ==================================================================== */ + +#define SH_RTC2_INT_CONFIG 0x0000000110001580 +#define SH_RTC2_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_RTC2_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_RTC2_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_RTC2_INT_CONFIG_TYPE_SHFT 0 +#define SH_RTC2_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_RTC2_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_RTC2_INT_CONFIG_AGT_SHFT 3 +#define SH_RTC2_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_RTC2_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_RTC2_INT_CONFIG_PID_SHFT 4 +#define SH_RTC2_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_RTC2_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_RTC2_INT_CONFIG_BASE_SHFT 21 +#define SH_RTC2_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_RTC2_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_RTC2_INT_CONFIG_IDX_SHFT 52 +#define SH_RTC2_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_RTC2_INT_ENABLE" */ +/* SHub RTC 2 Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_RTC2_INT_ENABLE 0x0000000110001600 +#define SH_RTC2_INT_ENABLE_MASK 0x0000000000000001 +#define SH_RTC2_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_RTC2_INT_ENABLE_RTC2_ENABLE */ +/* Description: Enable RTC 2 Interrupt */ +#define SH_RTC2_INT_ENABLE_RTC2_ENABLE_SHFT 0 +#define SH_RTC2_INT_ENABLE_RTC2_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_RTC3_INT_CONFIG" */ +/* SHub RTC 3 Interrupt Config Registers */ +/* ==================================================================== */ + +#define SH_RTC3_INT_CONFIG 0x0000000110001680 +#define SH_RTC3_INT_CONFIG_MASK 0x0ff3ffffffefffff +#define SH_RTC3_INT_CONFIG_INIT 0x0000000000000000 + +/* SH_RTC3_INT_CONFIG_TYPE */ +/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */ +#define SH_RTC3_INT_CONFIG_TYPE_SHFT 0 +#define SH_RTC3_INT_CONFIG_TYPE_MASK 0x0000000000000007 + +/* SH_RTC3_INT_CONFIG_AGT */ +/* Description: Agent, must be 0 for SHub */ +#define SH_RTC3_INT_CONFIG_AGT_SHFT 3 +#define 
SH_RTC3_INT_CONFIG_AGT_MASK 0x0000000000000008 + +/* SH_RTC3_INT_CONFIG_PID */ +/* Description: Processor ID, same setting as on targeted McKinley */ +#define SH_RTC3_INT_CONFIG_PID_SHFT 4 +#define SH_RTC3_INT_CONFIG_PID_MASK 0x00000000000ffff0 + +/* SH_RTC3_INT_CONFIG_BASE */ +/* Description: Optional interrupt vector area, 2MB aligned */ +#define SH_RTC3_INT_CONFIG_BASE_SHFT 21 +#define SH_RTC3_INT_CONFIG_BASE_MASK 0x0003ffffffe00000 + +/* SH_RTC3_INT_CONFIG_IDX */ +/* Description: Targeted McKinley interrupt vector */ +#define SH_RTC3_INT_CONFIG_IDX_SHFT 52 +#define SH_RTC3_INT_CONFIG_IDX_MASK 0x0ff0000000000000 + +/* ==================================================================== */ +/* Register "SH_RTC3_INT_ENABLE" */ +/* SHub RTC 3 Interrupt Enable Registers */ +/* ==================================================================== */ + +#define SH_RTC3_INT_ENABLE 0x0000000110001700 +#define SH_RTC3_INT_ENABLE_MASK 0x0000000000000001 +#define SH_RTC3_INT_ENABLE_INIT 0x0000000000000000 + +/* SH_RTC3_INT_ENABLE_RTC3_ENABLE */ +/* Description: Enable RTC 3 Interrupt */ +#define SH_RTC3_INT_ENABLE_RTC3_ENABLE_SHFT 0 +#define SH_RTC3_INT_ENABLE_RTC3_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_EVENT_OCCURRED" */ +/* SHub Interrupt Event Occurred */ +/* ==================================================================== */ + +#define SH_EVENT_OCCURRED 0x0000000110010000 +#define SH_EVENT_OCCURRED_MASK 0x000000007fffffff +#define SH_EVENT_OCCURRED_INIT 0x0000000000000000 + +/* SH_EVENT_OCCURRED_PI_HW_INT */ +/* Description: Pending PI Hardware interrupt */ +#define SH_EVENT_OCCURRED_PI_HW_INT_SHFT 0 +#define SH_EVENT_OCCURRED_PI_HW_INT_MASK 0x0000000000000001 + +/* SH_EVENT_OCCURRED_MD_HW_INT */ +/* Description: Pending MD Hardware interrupt */ +#define SH_EVENT_OCCURRED_MD_HW_INT_SHFT 1 +#define SH_EVENT_OCCURRED_MD_HW_INT_MASK 0x0000000000000002 + +/* SH_EVENT_OCCURRED_XN_HW_INT */ +/* Description: Pending XN Hardware interrupt */ +#define SH_EVENT_OCCURRED_XN_HW_INT_SHFT 2 +#define SH_EVENT_OCCURRED_XN_HW_INT_MASK 0x0000000000000004 + +/* SH_EVENT_OCCURRED_LB_HW_INT */ +/* Description: Pending LB Hardware interrupt */ +#define SH_EVENT_OCCURRED_LB_HW_INT_SHFT 3 +#define SH_EVENT_OCCURRED_LB_HW_INT_MASK 0x0000000000000008 + +/* SH_EVENT_OCCURRED_II_HW_INT */ +/* Description: Pending II wrapper Hardware interrupt */ +#define SH_EVENT_OCCURRED_II_HW_INT_SHFT 4 +#define SH_EVENT_OCCURRED_II_HW_INT_MASK 0x0000000000000010 + +/* SH_EVENT_OCCURRED_PI_CE_INT */ +/* Description: Pending PI Correctable Error Interrupt */ +#define SH_EVENT_OCCURRED_PI_CE_INT_SHFT 5 +#define SH_EVENT_OCCURRED_PI_CE_INT_MASK 0x0000000000000020 + +/* SH_EVENT_OCCURRED_MD_CE_INT */ +/* Description: Pending MD Correctable Error Interrupt */ +#define SH_EVENT_OCCURRED_MD_CE_INT_SHFT 6 +#define SH_EVENT_OCCURRED_MD_CE_INT_MASK 0x0000000000000040 + +/* SH_EVENT_OCCURRED_XN_CE_INT */ +/* Description: Pending XN Correctable Error Interrupt */ +#define SH_EVENT_OCCURRED_XN_CE_INT_SHFT 7 +#define SH_EVENT_OCCURRED_XN_CE_INT_MASK 0x0000000000000080 + +/* SH_EVENT_OCCURRED_PI_UCE_INT */ +/* Description: Pending PI Correctable Error Interrupt */ +#define SH_EVENT_OCCURRED_PI_UCE_INT_SHFT 8 +#define SH_EVENT_OCCURRED_PI_UCE_INT_MASK 0x0000000000000100 + +/* SH_EVENT_OCCURRED_MD_UCE_INT */ +/* Description: Pending MD Correctable Error Interrupt */ +#define SH_EVENT_OCCURRED_MD_UCE_INT_SHFT 9 +#define SH_EVENT_OCCURRED_MD_UCE_INT_MASK 
0x0000000000000200 + +/* SH_EVENT_OCCURRED_XN_UCE_INT */ +/* Description: Pending XN Correctable Error Interrupt */ +#define SH_EVENT_OCCURRED_XN_UCE_INT_SHFT 10 +#define SH_EVENT_OCCURRED_XN_UCE_INT_MASK 0x0000000000000400 + +/* SH_EVENT_OCCURRED_PROC0_ADV_INT */ +/* Description: Pending Processor 0 Advisory Interrupt */ +#define SH_EVENT_OCCURRED_PROC0_ADV_INT_SHFT 11 +#define SH_EVENT_OCCURRED_PROC0_ADV_INT_MASK 0x0000000000000800 + +/* SH_EVENT_OCCURRED_PROC1_ADV_INT */ +/* Description: Pending Processor 1 Advisory Interrupt */ +#define SH_EVENT_OCCURRED_PROC1_ADV_INT_SHFT 12 +#define SH_EVENT_OCCURRED_PROC1_ADV_INT_MASK 0x0000000000001000 + +/* SH_EVENT_OCCURRED_PROC2_ADV_INT */ +/* Description: Pending Processor 2 Advisory Interrupt */ +#define SH_EVENT_OCCURRED_PROC2_ADV_INT_SHFT 13 +#define SH_EVENT_OCCURRED_PROC2_ADV_INT_MASK 0x0000000000002000 + +/* SH_EVENT_OCCURRED_PROC3_ADV_INT */ +/* Description: Pending Processor 3 Advisory Interrupt */ +#define SH_EVENT_OCCURRED_PROC3_ADV_INT_SHFT 14 +#define SH_EVENT_OCCURRED_PROC3_ADV_INT_MASK 0x0000000000004000 + +/* SH_EVENT_OCCURRED_PROC0_ERR_INT */ +/* Description: Pending Processor 0 Error Interrupt */ +#define SH_EVENT_OCCURRED_PROC0_ERR_INT_SHFT 15 +#define SH_EVENT_OCCURRED_PROC0_ERR_INT_MASK 0x0000000000008000 + +/* SH_EVENT_OCCURRED_PROC1_ERR_INT */ +/* Description: Pending Processor 1 Error Interrupt */ +#define SH_EVENT_OCCURRED_PROC1_ERR_INT_SHFT 16 +#define SH_EVENT_OCCURRED_PROC1_ERR_INT_MASK 0x0000000000010000 + +/* SH_EVENT_OCCURRED_PROC2_ERR_INT */ +/* Description: Pending Processor 2 Error Interrupt */ +#define SH_EVENT_OCCURRED_PROC2_ERR_INT_SHFT 17 +#define SH_EVENT_OCCURRED_PROC2_ERR_INT_MASK 0x0000000000020000 + +/* SH_EVENT_OCCURRED_PROC3_ERR_INT */ +/* Description: Pending Processor 3 Error Interrupt */ +#define SH_EVENT_OCCURRED_PROC3_ERR_INT_SHFT 18 +#define SH_EVENT_OCCURRED_PROC3_ERR_INT_MASK 0x0000000000040000 + +/* SH_EVENT_OCCURRED_SYSTEM_SHUTDOWN_INT */ +/* Description: Pending System Shutdown Interrupt */ +#define SH_EVENT_OCCURRED_SYSTEM_SHUTDOWN_INT_SHFT 19 +#define SH_EVENT_OCCURRED_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000080000 + +/* SH_EVENT_OCCURRED_UART_INT */ +/* Description: Pending Junk Bus UART Interrupt */ +#define SH_EVENT_OCCURRED_UART_INT_SHFT 20 +#define SH_EVENT_OCCURRED_UART_INT_MASK 0x0000000000100000 + +/* SH_EVENT_OCCURRED_L1_NMI_INT */ +/* Description: Pending L1 Controller NMI Interrupt */ +#define SH_EVENT_OCCURRED_L1_NMI_INT_SHFT 21 +#define SH_EVENT_OCCURRED_L1_NMI_INT_MASK 0x0000000000200000 + +/* SH_EVENT_OCCURRED_STOP_CLOCK */ +/* Description: Pending Stop Clock Interrupt */ +#define SH_EVENT_OCCURRED_STOP_CLOCK_SHFT 22 +#define SH_EVENT_OCCURRED_STOP_CLOCK_MASK 0x0000000000400000 + +/* SH_EVENT_OCCURRED_RTC0_INT */ +/* Description: Pending RTC 0 Interrupt */ +#define SH_EVENT_OCCURRED_RTC0_INT_SHFT 23 +#define SH_EVENT_OCCURRED_RTC0_INT_MASK 0x0000000000800000 + +/* SH_EVENT_OCCURRED_RTC1_INT */ +/* Description: Pending RTC 1 Interrupt */ +#define SH_EVENT_OCCURRED_RTC1_INT_SHFT 24 +#define SH_EVENT_OCCURRED_RTC1_INT_MASK 0x0000000001000000 + +/* SH_EVENT_OCCURRED_RTC2_INT */ +/* Description: Pending RTC 2 Interrupt */ +#define SH_EVENT_OCCURRED_RTC2_INT_SHFT 25 +#define SH_EVENT_OCCURRED_RTC2_INT_MASK 0x0000000002000000 + +/* SH_EVENT_OCCURRED_RTC3_INT */ +/* Description: Pending RTC 3 Interrupt */ +#define SH_EVENT_OCCURRED_RTC3_INT_SHFT 26 +#define SH_EVENT_OCCURRED_RTC3_INT_MASK 0x0000000004000000 + +/* SH_EVENT_OCCURRED_PROFILE_INT */ +/* Description: Pending Profile 
Interrupt */ +#define SH_EVENT_OCCURRED_PROFILE_INT_SHFT 27 +#define SH_EVENT_OCCURRED_PROFILE_INT_MASK 0x0000000008000000 + +/* SH_EVENT_OCCURRED_IPI_INT */ +/* Description: Pending IPI Interrupt */ +#define SH_EVENT_OCCURRED_IPI_INT_SHFT 28 +#define SH_EVENT_OCCURRED_IPI_INT_MASK 0x0000000010000000 + +/* SH_EVENT_OCCURRED_II_INT0 */ +/* Description: Pending II 0 Interrupt */ +#define SH_EVENT_OCCURRED_II_INT0_SHFT 29 +#define SH_EVENT_OCCURRED_II_INT0_MASK 0x0000000020000000 + +/* SH_EVENT_OCCURRED_II_INT1 */ +/* Description: Pending II 1 Interrupt */ +#define SH_EVENT_OCCURRED_II_INT1_SHFT 30 +#define SH_EVENT_OCCURRED_II_INT1_MASK 0x0000000040000000 + +/* ==================================================================== */ +/* Register "SH_EVENT_OCCURRED_ALIAS" */ +/* SHub Interrupt Event Occurred Alias */ +/* ==================================================================== */ + +#define SH_EVENT_OCCURRED_ALIAS 0x0000000110010008 + +/* ==================================================================== */ +/* Register "SH_EVENT_OVERFLOW" */ +/* SHub Interrupt Event Occurred Overflow */ +/* ==================================================================== */ + +#define SH_EVENT_OVERFLOW 0x0000000110010080 +#define SH_EVENT_OVERFLOW_MASK 0x000000000fffffff +#define SH_EVENT_OVERFLOW_INIT 0x0000000000000000 + +/* SH_EVENT_OVERFLOW_PI_HW_INT */ +/* Description: Pending PI Hardware interrupt */ +#define SH_EVENT_OVERFLOW_PI_HW_INT_SHFT 0 +#define SH_EVENT_OVERFLOW_PI_HW_INT_MASK 0x0000000000000001 + +/* SH_EVENT_OVERFLOW_MD_HW_INT */ +/* Description: Pending MD Hardware interrupt */ +#define SH_EVENT_OVERFLOW_MD_HW_INT_SHFT 1 +#define SH_EVENT_OVERFLOW_MD_HW_INT_MASK 0x0000000000000002 + +/* SH_EVENT_OVERFLOW_XN_HW_INT */ +/* Description: Pending XN Hardware interrupt */ +#define SH_EVENT_OVERFLOW_XN_HW_INT_SHFT 2 +#define SH_EVENT_OVERFLOW_XN_HW_INT_MASK 0x0000000000000004 + +/* SH_EVENT_OVERFLOW_LB_HW_INT */ +/* Description: Pending LB Hardware interrupt */ +#define SH_EVENT_OVERFLOW_LB_HW_INT_SHFT 3 +#define SH_EVENT_OVERFLOW_LB_HW_INT_MASK 0x0000000000000008 + +/* SH_EVENT_OVERFLOW_II_HW_INT */ +/* Description: Pending II wrapper Hardware interrupt */ +#define SH_EVENT_OVERFLOW_II_HW_INT_SHFT 4 +#define SH_EVENT_OVERFLOW_II_HW_INT_MASK 0x0000000000000010 + +/* SH_EVENT_OVERFLOW_PI_CE_INT */ +/* Description: Pending PI Correctable Error Interrupt */ +#define SH_EVENT_OVERFLOW_PI_CE_INT_SHFT 5 +#define SH_EVENT_OVERFLOW_PI_CE_INT_MASK 0x0000000000000020 + +/* SH_EVENT_OVERFLOW_MD_CE_INT */ +/* Description: Pending MD Correctable Error Interrupt */ +#define SH_EVENT_OVERFLOW_MD_CE_INT_SHFT 6 +#define SH_EVENT_OVERFLOW_MD_CE_INT_MASK 0x0000000000000040 + +/* SH_EVENT_OVERFLOW_XN_CE_INT */ +/* Description: Pending XN Correctable Error Interrupt */ +#define SH_EVENT_OVERFLOW_XN_CE_INT_SHFT 7 +#define SH_EVENT_OVERFLOW_XN_CE_INT_MASK 0x0000000000000080 + +/* SH_EVENT_OVERFLOW_PI_UCE_INT */ +/* Description: Pending PI Correctable Error Interrupt */ +#define SH_EVENT_OVERFLOW_PI_UCE_INT_SHFT 8 +#define SH_EVENT_OVERFLOW_PI_UCE_INT_MASK 0x0000000000000100 + +/* SH_EVENT_OVERFLOW_MD_UCE_INT */ +/* Description: Pending MD Correctable Error Interrupt */ +#define SH_EVENT_OVERFLOW_MD_UCE_INT_SHFT 9 +#define SH_EVENT_OVERFLOW_MD_UCE_INT_MASK 0x0000000000000200 + +/* SH_EVENT_OVERFLOW_XN_UCE_INT */ +/* Description: Pending XN Correctable Error Interrupt */ +#define SH_EVENT_OVERFLOW_XN_UCE_INT_SHFT 10 +#define SH_EVENT_OVERFLOW_XN_UCE_INT_MASK 0x0000000000000400 + +/* 
SH_EVENT_OVERFLOW_PROC0_ADV_INT */ +/* Description: Pending Processor 0 Advisory Interrupt */ +#define SH_EVENT_OVERFLOW_PROC0_ADV_INT_SHFT 11 +#define SH_EVENT_OVERFLOW_PROC0_ADV_INT_MASK 0x0000000000000800 + +/* SH_EVENT_OVERFLOW_PROC1_ADV_INT */ +/* Description: Pending Processor 1 Advisory Interrupt */ +#define SH_EVENT_OVERFLOW_PROC1_ADV_INT_SHFT 12 +#define SH_EVENT_OVERFLOW_PROC1_ADV_INT_MASK 0x0000000000001000 + +/* SH_EVENT_OVERFLOW_PROC2_ADV_INT */ +/* Description: Pending Processor 2 Advisory Interrupt */ +#define SH_EVENT_OVERFLOW_PROC2_ADV_INT_SHFT 13 +#define SH_EVENT_OVERFLOW_PROC2_ADV_INT_MASK 0x0000000000002000 + +/* SH_EVENT_OVERFLOW_PROC3_ADV_INT */ +/* Description: Pending Processor 3 Advisory Interrupt */ +#define SH_EVENT_OVERFLOW_PROC3_ADV_INT_SHFT 14 +#define SH_EVENT_OVERFLOW_PROC3_ADV_INT_MASK 0x0000000000004000 + +/* SH_EVENT_OVERFLOW_PROC0_ERR_INT */ +/* Description: Pending Processor 0 Error Interrupt */ +#define SH_EVENT_OVERFLOW_PROC0_ERR_INT_SHFT 15 +#define SH_EVENT_OVERFLOW_PROC0_ERR_INT_MASK 0x0000000000008000 + +/* SH_EVENT_OVERFLOW_PROC1_ERR_INT */ +/* Description: Pending Processor 1 Error Interrupt */ +#define SH_EVENT_OVERFLOW_PROC1_ERR_INT_SHFT 16 +#define SH_EVENT_OVERFLOW_PROC1_ERR_INT_MASK 0x0000000000010000 + +/* SH_EVENT_OVERFLOW_PROC2_ERR_INT */ +/* Description: Pending Processor 2 Error Interrupt */ +#define SH_EVENT_OVERFLOW_PROC2_ERR_INT_SHFT 17 +#define SH_EVENT_OVERFLOW_PROC2_ERR_INT_MASK 0x0000000000020000 + +/* SH_EVENT_OVERFLOW_PROC3_ERR_INT */ +/* Description: Pending Processor 3 Error Interrupt */ +#define SH_EVENT_OVERFLOW_PROC3_ERR_INT_SHFT 18 +#define SH_EVENT_OVERFLOW_PROC3_ERR_INT_MASK 0x0000000000040000 + +/* SH_EVENT_OVERFLOW_SYSTEM_SHUTDOWN_INT */ +/* Description: Pending System Shutdown Interrupt */ +#define SH_EVENT_OVERFLOW_SYSTEM_SHUTDOWN_INT_SHFT 19 +#define SH_EVENT_OVERFLOW_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000080000 + +/* SH_EVENT_OVERFLOW_UART_INT */ +/* Description: Pending Junk Bus UART Interrupt */ +#define SH_EVENT_OVERFLOW_UART_INT_SHFT 20 +#define SH_EVENT_OVERFLOW_UART_INT_MASK 0x0000000000100000 + +/* SH_EVENT_OVERFLOW_L1_NMI_INT */ +/* Description: Pending L1 Controller NMI Interrupt */ +#define SH_EVENT_OVERFLOW_L1_NMI_INT_SHFT 21 +#define SH_EVENT_OVERFLOW_L1_NMI_INT_MASK 0x0000000000200000 + +/* SH_EVENT_OVERFLOW_STOP_CLOCK */ +/* Description: Pending Stop Clock Interrupt */ +#define SH_EVENT_OVERFLOW_STOP_CLOCK_SHFT 22 +#define SH_EVENT_OVERFLOW_STOP_CLOCK_MASK 0x0000000000400000 + +/* SH_EVENT_OVERFLOW_RTC0_INT */ +/* Description: Pending RTC 0 Interrupt */ +#define SH_EVENT_OVERFLOW_RTC0_INT_SHFT 23 +#define SH_EVENT_OVERFLOW_RTC0_INT_MASK 0x0000000000800000 + +/* SH_EVENT_OVERFLOW_RTC1_INT */ +/* Description: Pending RTC 1 Interrupt */ +#define SH_EVENT_OVERFLOW_RTC1_INT_SHFT 24 +#define SH_EVENT_OVERFLOW_RTC1_INT_MASK 0x0000000001000000 + +/* SH_EVENT_OVERFLOW_RTC2_INT */ +/* Description: Pending RTC 2 Interrupt */ +#define SH_EVENT_OVERFLOW_RTC2_INT_SHFT 25 +#define SH_EVENT_OVERFLOW_RTC2_INT_MASK 0x0000000002000000 + +/* SH_EVENT_OVERFLOW_RTC3_INT */ +/* Description: Pending RTC 3 Interrupt */ +#define SH_EVENT_OVERFLOW_RTC3_INT_SHFT 26 +#define SH_EVENT_OVERFLOW_RTC3_INT_MASK 0x0000000004000000 + +/* SH_EVENT_OVERFLOW_PROFILE_INT */ +/* Description: Pending Profile Interrupt */ +#define SH_EVENT_OVERFLOW_PROFILE_INT_SHFT 27 +#define SH_EVENT_OVERFLOW_PROFILE_INT_MASK 0x0000000008000000 + +/* ==================================================================== */ +/* Register 
"SH_EVENT_OVERFLOW_ALIAS" */ +/* SHub Interrupt Event Occurred Overflow Alias */ +/* ==================================================================== */ + +#define SH_EVENT_OVERFLOW_ALIAS 0x0000000110010088 + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_TIME" */ +/* Junk Bus Timing */ +/* ==================================================================== */ + +#define SH_JUNK_BUS_TIME 0x0000000110020000 +#define SH_JUNK_BUS_TIME_MASK 0x00000000ffffffff +#define SH_JUNK_BUS_TIME_INIT 0x0000000040404040 + +/* SH_JUNK_BUS_TIME_FPROM_SETUP_HOLD */ +/* Description: Fprom_Setup_Hold */ +#define SH_JUNK_BUS_TIME_FPROM_SETUP_HOLD_SHFT 0 +#define SH_JUNK_BUS_TIME_FPROM_SETUP_HOLD_MASK 0x00000000000000ff + +/* SH_JUNK_BUS_TIME_FPROM_ENABLE */ +/* Description: Fprom_Enable */ +#define SH_JUNK_BUS_TIME_FPROM_ENABLE_SHFT 8 +#define SH_JUNK_BUS_TIME_FPROM_ENABLE_MASK 0x000000000000ff00 + +/* SH_JUNK_BUS_TIME_UART_SETUP_HOLD */ +/* Description: Uart_Setup_Hold */ +#define SH_JUNK_BUS_TIME_UART_SETUP_HOLD_SHFT 16 +#define SH_JUNK_BUS_TIME_UART_SETUP_HOLD_MASK 0x0000000000ff0000 + +/* SH_JUNK_BUS_TIME_UART_ENABLE */ +/* Description: Uart_Enable */ +#define SH_JUNK_BUS_TIME_UART_ENABLE_SHFT 24 +#define SH_JUNK_BUS_TIME_UART_ENABLE_MASK 0x00000000ff000000 + +/* ==================================================================== */ +/* Register "SH_JUNK_LATCH_TIME" */ +/* Junk Bus Latch Timing */ +/* ==================================================================== */ + +#define SH_JUNK_LATCH_TIME 0x0000000110020080 +#define SH_JUNK_LATCH_TIME_MASK 0x0000000000000007 +#define SH_JUNK_LATCH_TIME_INIT 0x0000000000000002 + +/* SH_JUNK_LATCH_TIME_SETUP_HOLD */ +/* Description: Setup and Hold Time */ +#define SH_JUNK_LATCH_TIME_SETUP_HOLD_SHFT 0 +#define SH_JUNK_LATCH_TIME_SETUP_HOLD_MASK 0x0000000000000007 + +/* ==================================================================== */ +/* Register "SH_JUNK_NACK_RESET" */ +/* Junk Bus Nack Counter Reset */ +/* ==================================================================== */ + +#define SH_JUNK_NACK_RESET 0x0000000110020100 +#define SH_JUNK_NACK_RESET_MASK 0x0000000000000001 +#define SH_JUNK_NACK_RESET_INIT 0x0000000000000000 + +/* SH_JUNK_NACK_RESET_PULSE */ +/* Description: Junk bus nack counter reset */ +#define SH_JUNK_NACK_RESET_PULSE_SHFT 0 +#define SH_JUNK_NACK_RESET_PULSE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_LED0" */ +/* Junk Bus LED0 */ +/* ==================================================================== */ + +#define SH_JUNK_BUS_LED0 0x0000000110030000 +#define SH_JUNK_BUS_LED0_MASK 0x00000000000000ff +#define SH_JUNK_BUS_LED0_INIT 0x0000000000000000 + +/* SH_JUNK_BUS_LED0_LED0_DATA */ +/* Description: LED0_data */ +#define SH_JUNK_BUS_LED0_LED0_DATA_SHFT 0 +#define SH_JUNK_BUS_LED0_LED0_DATA_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_LED1" */ +/* Junk Bus LED1 */ +/* ==================================================================== */ + +#define SH_JUNK_BUS_LED1 0x0000000110030080 +#define SH_JUNK_BUS_LED1_MASK 0x00000000000000ff +#define SH_JUNK_BUS_LED1_INIT 0x0000000000000000 + +/* SH_JUNK_BUS_LED1_LED1_DATA */ +/* Description: LED1_data */ +#define SH_JUNK_BUS_LED1_LED1_DATA_SHFT 0 +#define SH_JUNK_BUS_LED1_LED1_DATA_MASK 0x00000000000000ff + +/* 
==================================================================== */ +/* Register "SH_JUNK_BUS_LED2" */ +/* Junk Bus LED2 */ +/* ==================================================================== */ + +#define SH_JUNK_BUS_LED2 0x0000000110030100 +#define SH_JUNK_BUS_LED2_MASK 0x00000000000000ff +#define SH_JUNK_BUS_LED2_INIT 0x0000000000000000 + +/* SH_JUNK_BUS_LED2_LED2_DATA */ +/* Description: LED2_data */ +#define SH_JUNK_BUS_LED2_LED2_DATA_SHFT 0 +#define SH_JUNK_BUS_LED2_LED2_DATA_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_LED3" */ +/* Junk Bus LED3 */ +/* ==================================================================== */ + +#define SH_JUNK_BUS_LED3 0x0000000110030180 +#define SH_JUNK_BUS_LED3_MASK 0x00000000000000ff +#define SH_JUNK_BUS_LED3_INIT 0x0000000000000000 + +/* SH_JUNK_BUS_LED3_LED3_DATA */ +/* Description: LED3_data */ +#define SH_JUNK_BUS_LED3_LED3_DATA_SHFT 0 +#define SH_JUNK_BUS_LED3_LED3_DATA_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_JUNK_ERROR_STATUS" */ +/* Junk Bus Error Status */ +/* ==================================================================== */ + +#define SH_JUNK_ERROR_STATUS 0x0000000110030200 +#define SH_JUNK_ERROR_STATUS_MASK 0x1fff7fffffffffff +#define SH_JUNK_ERROR_STATUS_INIT 0x0000000000000000 + +/* SH_JUNK_ERROR_STATUS_ADDRESS */ +/* Description: Failing junk bus address */ +#define SH_JUNK_ERROR_STATUS_ADDRESS_SHFT 0 +#define SH_JUNK_ERROR_STATUS_ADDRESS_MASK 0x00007fffffffffff + +/* SH_JUNK_ERROR_STATUS_CMD */ +/* Description: Junk bus command */ +#define SH_JUNK_ERROR_STATUS_CMD_SHFT 48 +#define SH_JUNK_ERROR_STATUS_CMD_MASK 0x00ff000000000000 + +/* SH_JUNK_ERROR_STATUS_MODE */ +/* Description: Mode */ +#define SH_JUNK_ERROR_STATUS_MODE_SHFT 56 +#define SH_JUNK_ERROR_STATUS_MODE_MASK 0x0100000000000000 + +/* SH_JUNK_ERROR_STATUS_STATUS */ +/* Description: Status */ +#define SH_JUNK_ERROR_STATUS_STATUS_SHFT 57 +#define SH_JUNK_ERROR_STATUS_STATUS_MASK 0x1e00000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_STAT" */ +/* This register describes the LLP status. */ +/* ==================================================================== */ + +#define SH_NI0_LLP_STAT 0x0000000150000000 +#define SH_NI0_LLP_STAT_MASK 0x000000000000000f +#define SH_NI0_LLP_STAT_INIT 0x0000000000000000 + +/* SH_NI0_LLP_STAT_LINK_RESET_STATE */ +/* Description: Status of LLP link. */ +#define SH_NI0_LLP_STAT_LINK_RESET_STATE_SHFT 0 +#define SH_NI0_LLP_STAT_LINK_RESET_STATE_MASK 0x000000000000000f + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_RESET" */ +/* Writing issues a reset to the network interface */ +/* ==================================================================== */ + +#define SH_NI0_LLP_RESET 0x0000000150000008 +#define SH_NI0_LLP_RESET_MASK 0x0000000000000003 +#define SH_NI0_LLP_RESET_INIT 0x0000000000000000 + +/* SH_NI0_LLP_RESET_LINK */ +/* Description: Send Link Reset. Generates a pulse. */ +#define SH_NI0_LLP_RESET_LINK_SHFT 0 +#define SH_NI0_LLP_RESET_LINK_MASK 0x0000000000000001 + +/* SH_NI0_LLP_RESET_WARM */ +/* Description: Send Warm Reset. Generates a pulse. 
*/ +#define SH_NI0_LLP_RESET_WARM_SHFT 1 +#define SH_NI0_LLP_RESET_WARM_MASK 0x0000000000000002 + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_RESET_EN" */ +/* Controls LLP warm reset propagation */ +/* ==================================================================== */ + +#define SH_NI0_LLP_RESET_EN 0x0000000150000010 +#define SH_NI0_LLP_RESET_EN_MASK 0x0000000000000001 +#define SH_NI0_LLP_RESET_EN_INIT 0x0000000000000001 + +/* SH_NI0_LLP_RESET_EN_OK */ +/* Description: Allow LLP warm reset to reset SHUB */ +#define SH_NI0_LLP_RESET_EN_OK_SHFT 0 +#define SH_NI0_LLP_RESET_EN_OK_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CHAN_MODE" */ +/* Sets the signaling mode of LLP and channel */ +/* ==================================================================== */ + +#define SH_NI0_LLP_CHAN_MODE 0x0000000150000018 +#define SH_NI0_LLP_CHAN_MODE_MASK 0x000000000000001f +#define SH_NI0_LLP_CHAN_MODE_INIT 0x0000000000000000 + +/* SH_NI0_LLP_CHAN_MODE_BITMODE32 */ +/* Description: Enables 32-bit (plus sideband) channel phits */ +#define SH_NI0_LLP_CHAN_MODE_BITMODE32_SHFT 0 +#define SH_NI0_LLP_CHAN_MODE_BITMODE32_MASK 0x0000000000000001 + +/* SH_NI0_LLP_CHAN_MODE_AC_ENCODE */ +/* Description: Enables nearly dc-free encoding for AC-coupling */ +#define SH_NI0_LLP_CHAN_MODE_AC_ENCODE_SHFT 1 +#define SH_NI0_LLP_CHAN_MODE_AC_ENCODE_MASK 0x0000000000000002 + +/* SH_NI0_LLP_CHAN_MODE_ENABLE_TUNING */ +/* Description: Enables automatic tuning of channel skew. */ +#define SH_NI0_LLP_CHAN_MODE_ENABLE_TUNING_SHFT 2 +#define SH_NI0_LLP_CHAN_MODE_ENABLE_TUNING_MASK 0x0000000000000004 + +/* SH_NI0_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD */ +/* Description: Enables remote fine tune updates */ +#define SH_NI0_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD_SHFT 3 +#define SH_NI0_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD_MASK 0x0000000000000008 + +/* SH_NI0_LLP_CHAN_MODE_ENABLE_CLKQUAD */ +/* Description: Enables quadrature clock in the pfssd */ +#define SH_NI0_LLP_CHAN_MODE_ENABLE_CLKQUAD_SHFT 4 +#define SH_NI0_LLP_CHAN_MODE_ENABLE_CLKQUAD_MASK 0x0000000000000010 + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CONFIG" */ +/* Sets the configuration of LLP and channel */ +/* ==================================================================== */ + +#define SH_NI0_LLP_CONFIG 0x0000000150000020 +#define SH_NI0_LLP_CONFIG_MASK 0x0000003fffffffff +#define SH_NI0_LLP_CONFIG_INIT 0x00000007fc6ffd00 + +/* SH_NI0_LLP_CONFIG_MAXBURST */ +#define SH_NI0_LLP_CONFIG_MAXBURST_SHFT 0 +#define SH_NI0_LLP_CONFIG_MAXBURST_MASK 0x00000000000003ff + +/* SH_NI0_LLP_CONFIG_MAXRETRY */ +#define SH_NI0_LLP_CONFIG_MAXRETRY_SHFT 10 +#define SH_NI0_LLP_CONFIG_MAXRETRY_MASK 0x00000000000ffc00 + +/* SH_NI0_LLP_CONFIG_NULLTIMEOUT */ +#define SH_NI0_LLP_CONFIG_NULLTIMEOUT_SHFT 20 +#define SH_NI0_LLP_CONFIG_NULLTIMEOUT_MASK 0x0000000003f00000 + +/* SH_NI0_LLP_CONFIG_FTU_TIME */ +#define SH_NI0_LLP_CONFIG_FTU_TIME_SHFT 26 +#define SH_NI0_LLP_CONFIG_FTU_TIME_MASK 0x0000003ffc000000 + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_TEST_CTL" */ +/* ==================================================================== */ + +#define SH_NI0_LLP_TEST_CTL 0x0000000150000028 +#define SH_NI0_LLP_TEST_CTL_MASK 0x7ff3f3ffffffffff +#define SH_NI0_LLP_TEST_CTL_INIT 0x000000000a5fffff + +/* SH_NI0_LLP_TEST_CTL_PATTERN */ +/* Description: Send channel 
data pattern */ +#define SH_NI0_LLP_TEST_CTL_PATTERN_SHFT 0 +#define SH_NI0_LLP_TEST_CTL_PATTERN_MASK 0x000000ffffffffff + +/* SH_NI0_LLP_TEST_CTL_SEND_TEST_MODE */ +/* Description: Enables continuous send of data */ +#define SH_NI0_LLP_TEST_CTL_SEND_TEST_MODE_SHFT 40 +#define SH_NI0_LLP_TEST_CTL_SEND_TEST_MODE_MASK 0x0000030000000000 + +/* SH_NI0_LLP_TEST_CTL_WIRE_SEL */ +#define SH_NI0_LLP_TEST_CTL_WIRE_SEL_SHFT 44 +#define SH_NI0_LLP_TEST_CTL_WIRE_SEL_MASK 0x0003f00000000000 + +/* SH_NI0_LLP_TEST_CTL_LFSR_MODE */ +#define SH_NI0_LLP_TEST_CTL_LFSR_MODE_SHFT 52 +#define SH_NI0_LLP_TEST_CTL_LFSR_MODE_MASK 0x0030000000000000 + +/* SH_NI0_LLP_TEST_CTL_NOISE_MODE */ +#define SH_NI0_LLP_TEST_CTL_NOISE_MODE_SHFT 54 +#define SH_NI0_LLP_TEST_CTL_NOISE_MODE_MASK 0x00c0000000000000 + +/* SH_NI0_LLP_TEST_CTL_ARMCAPTURE */ +/* Description: Enable Capture of Next MicroPacket */ +#define SH_NI0_LLP_TEST_CTL_ARMCAPTURE_SHFT 56 +#define SH_NI0_LLP_TEST_CTL_ARMCAPTURE_MASK 0x0100000000000000 + +/* SH_NI0_LLP_TEST_CTL_CAPTURECBONLY */ +/* Description: Only capture a micropacket with a Check Byte error */ +#define SH_NI0_LLP_TEST_CTL_CAPTURECBONLY_SHFT 57 +#define SH_NI0_LLP_TEST_CTL_CAPTURECBONLY_MASK 0x0200000000000000 + +/* SH_NI0_LLP_TEST_CTL_SENDCBERROR */ +/* Description: Sends a single error */ +#define SH_NI0_LLP_TEST_CTL_SENDCBERROR_SHFT 58 +#define SH_NI0_LLP_TEST_CTL_SENDCBERROR_MASK 0x0400000000000000 + +/* SH_NI0_LLP_TEST_CTL_SENDSNERROR */ +/* Description: Sends a single sequence number error */ +#define SH_NI0_LLP_TEST_CTL_SENDSNERROR_SHFT 59 +#define SH_NI0_LLP_TEST_CTL_SENDSNERROR_MASK 0x0800000000000000 + +/* SH_NI0_LLP_TEST_CTL_FAKESNERROR */ +/* Description: Causes receiver to pretend it saw a sn error */ +#define SH_NI0_LLP_TEST_CTL_FAKESNERROR_SHFT 60 +#define SH_NI0_LLP_TEST_CTL_FAKESNERROR_MASK 0x1000000000000000 + +/* SH_NI0_LLP_TEST_CTL_CAPTURED */ +/* Description: Indicates a Valid Micropacket was captured */ +#define SH_NI0_LLP_TEST_CTL_CAPTURED_SHFT 61 +#define SH_NI0_LLP_TEST_CTL_CAPTURED_MASK 0x2000000000000000 + +/* SH_NI0_LLP_TEST_CTL_CBERROR */ +/* Description: Indicates a Micropacket with a CB error was capture */ +#define SH_NI0_LLP_TEST_CTL_CBERROR_SHFT 62 +#define SH_NI0_LLP_TEST_CTL_CBERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CAPT_WD1" */ +/* low order 64-bit captured word */ +/* ==================================================================== */ + +#define SH_NI0_LLP_CAPT_WD1 0x0000000150000030 +#define SH_NI0_LLP_CAPT_WD1_MASK 0xffffffffffffffff +#define SH_NI0_LLP_CAPT_WD1_INIT 0x0000000000000000 + +/* SH_NI0_LLP_CAPT_WD1_DATA */ +/* Description: low order 64-bit captured word */ +#define SH_NI0_LLP_CAPT_WD1_DATA_SHFT 0 +#define SH_NI0_LLP_CAPT_WD1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CAPT_WD2" */ +/* high order 64-bit captured word */ +/* ==================================================================== */ + +#define SH_NI0_LLP_CAPT_WD2 0x0000000150000038 +#define SH_NI0_LLP_CAPT_WD2_MASK 0xffffffffffffffff +#define SH_NI0_LLP_CAPT_WD2_INIT 0x0000000000000000 + +/* SH_NI0_LLP_CAPT_WD2_DATA */ +/* Description: high order 64-bit captured word */ +#define SH_NI0_LLP_CAPT_WD2_DATA_SHFT 0 +#define SH_NI0_LLP_CAPT_WD2_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CAPT_SBCB" */ +/* captured 
sideband, sequence, and CRC */ +/* ==================================================================== */ + +#define SH_NI0_LLP_CAPT_SBCB 0x0000000150000040 +#define SH_NI0_LLP_CAPT_SBCB_MASK 0x0000001fffffffff +#define SH_NI0_LLP_CAPT_SBCB_INIT 0x0000000000000000 + +/* SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVSBSN */ +/* Description: sideband and sequence */ +#define SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVSBSN_SHFT 0 +#define SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVSBSN_MASK 0x000000000000ffff + +/* SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVCRC */ +/* Description: CRC */ +#define SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVCRC_SHFT 16 +#define SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVCRC_MASK 0x00000000ffff0000 + +/* SH_NI0_LLP_CAPT_SBCB_SENTALLCBERRORS */ +/* Description: All CB errors have been sent */ +#define SH_NI0_LLP_CAPT_SBCB_SENTALLCBERRORS_SHFT 32 +#define SH_NI0_LLP_CAPT_SBCB_SENTALLCBERRORS_MASK 0x0000000100000000 + +/* SH_NI0_LLP_CAPT_SBCB_SENTALLSNERRORS */ +/* Description: All SN errors have been sent */ +#define SH_NI0_LLP_CAPT_SBCB_SENTALLSNERRORS_SHFT 33 +#define SH_NI0_LLP_CAPT_SBCB_SENTALLSNERRORS_MASK 0x0000000200000000 + +/* SH_NI0_LLP_CAPT_SBCB_FAKEDALLSNERRORS */ +/* Description: All faked SN errors have been sent */ +#define SH_NI0_LLP_CAPT_SBCB_FAKEDALLSNERRORS_SHFT 34 +#define SH_NI0_LLP_CAPT_SBCB_FAKEDALLSNERRORS_MASK 0x0000000400000000 + +/* SH_NI0_LLP_CAPT_SBCB_CHARGEOVERFLOW */ +/* Description: wire charge counter overflowed, valid if llp_mode e */ +#define SH_NI0_LLP_CAPT_SBCB_CHARGEOVERFLOW_SHFT 35 +#define SH_NI0_LLP_CAPT_SBCB_CHARGEOVERFLOW_MASK 0x0000000800000000 + +/* SH_NI0_LLP_CAPT_SBCB_CHARGEUNDERFLOW */ +/* Description: wire charge counter underflowed, valid if llp_mode */ +/* enabled */ +#define SH_NI0_LLP_CAPT_SBCB_CHARGEUNDERFLOW_SHFT 36 +#define SH_NI0_LLP_CAPT_SBCB_CHARGEUNDERFLOW_MASK 0x0000001000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_ERR" */ +/* ==================================================================== */ + +#define SH_NI0_LLP_ERR 0x0000000150000048 +#define SH_NI0_LLP_ERR_MASK 0x001fffffffffffff +#define SH_NI0_LLP_ERR_INIT 0x0000000000000000 + +/* SH_NI0_LLP_ERR_RX_SN_ERR_COUNT */ +/* Description: Counts the sequence number errors received */ +#define SH_NI0_LLP_ERR_RX_SN_ERR_COUNT_SHFT 0 +#define SH_NI0_LLP_ERR_RX_SN_ERR_COUNT_MASK 0x00000000000000ff + +/* SH_NI0_LLP_ERR_RX_CB_ERR_COUNT */ +/* Description: Counts the check byte errors received */ +#define SH_NI0_LLP_ERR_RX_CB_ERR_COUNT_SHFT 8 +#define SH_NI0_LLP_ERR_RX_CB_ERR_COUNT_MASK 0x000000000000ff00 + +/* SH_NI0_LLP_ERR_RETRY_COUNT */ +/* Description: Counts the retries */ +#define SH_NI0_LLP_ERR_RETRY_COUNT_SHFT 16 +#define SH_NI0_LLP_ERR_RETRY_COUNT_MASK 0x0000000000ff0000 + +/* SH_NI0_LLP_ERR_RETRY_TIMEOUT */ +/* Description: Indicates a retry timeout has occured */ +#define SH_NI0_LLP_ERR_RETRY_TIMEOUT_SHFT 24 +#define SH_NI0_LLP_ERR_RETRY_TIMEOUT_MASK 0x0000000001000000 + +/* SH_NI0_LLP_ERR_RCV_LINK_RESET */ +/* Description: Indicates a link reset has been received */ +#define SH_NI0_LLP_ERR_RCV_LINK_RESET_SHFT 25 +#define SH_NI0_LLP_ERR_RCV_LINK_RESET_MASK 0x0000000002000000 + +/* SH_NI0_LLP_ERR_SQUASH */ +/* Description: Indicates a micropacket was squashed */ +#define SH_NI0_LLP_ERR_SQUASH_SHFT 26 +#define SH_NI0_LLP_ERR_SQUASH_MASK 0x0000000004000000 + +/* SH_NI0_LLP_ERR_POWER_NOT_OK */ +/* Description: Detects and traps a loss of power_OK */ +#define SH_NI0_LLP_ERR_POWER_NOT_OK_SHFT 27 +#define SH_NI0_LLP_ERR_POWER_NOT_OK_MASK 0x0000000008000000 
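[Editor's note, not part of the patch:] SH_NI0_LLP_ERR packs the per-link LLP error counters and status bits into a single 64-bit word (its remaining WIRE_CNT and WIRE_OVERFLOW fields continue immediately below). As a sketch only, a raw value can be unpacked with the _SHFT/_MASK pairs; the function name is hypothetical and the MMR load is omitted, only the SH_NI0_LLP_ERR_* macros are taken from this header:

    /*
     * Sketch: extract the packed error counters from a raw
     * SH_NI0_LLP_ERR value using the field macros above.
     */
    static inline void
    sh_ni0_llp_err_decode(unsigned long err, unsigned long *sn_errs,
                          unsigned long *cb_errs, unsigned long *retries)
    {
            /* sequence number errors received on the link */
            *sn_errs = (err & SH_NI0_LLP_ERR_RX_SN_ERR_COUNT_MASK) >>
                        SH_NI0_LLP_ERR_RX_SN_ERR_COUNT_SHFT;
            /* check byte errors received */
            *cb_errs = (err & SH_NI0_LLP_ERR_RX_CB_ERR_COUNT_MASK) >>
                        SH_NI0_LLP_ERR_RX_CB_ERR_COUNT_SHFT;
            /* micropacket retries */
            *retries = (err & SH_NI0_LLP_ERR_RETRY_COUNT_MASK) >>
                        SH_NI0_LLP_ERR_RETRY_COUNT_SHFT;
    }

Single-bit conditions such as SH_NI0_LLP_ERR_RETRY_TIMEOUT_MASK or SH_NI0_LLP_ERR_POWER_NOT_OK_MASK can simply be tested with a bitwise AND against the same raw value.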
+ +/* SH_NI0_LLP_ERR_WIRE_CNT */ +/* Description: counts the errors detected on a single wire test */ +#define SH_NI0_LLP_ERR_WIRE_CNT_SHFT 28 +#define SH_NI0_LLP_ERR_WIRE_CNT_MASK 0x000ffffff0000000 + +/* SH_NI0_LLP_ERR_WIRE_OVERFLOW */ +/* Description: wire_error_cnt has overflowed */ +#define SH_NI0_LLP_ERR_WIRE_OVERFLOW_SHFT 52 +#define SH_NI0_LLP_ERR_WIRE_OVERFLOW_MASK 0x0010000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_STAT" */ +/* This register describes the LLP status. */ +/* ==================================================================== */ + +#define SH_NI1_LLP_STAT 0x0000000150002000 +#define SH_NI1_LLP_STAT_MASK 0x000000000000000f +#define SH_NI1_LLP_STAT_INIT 0x0000000000000000 + +/* SH_NI1_LLP_STAT_LINK_RESET_STATE */ +/* Description: Status of LLP link. */ +#define SH_NI1_LLP_STAT_LINK_RESET_STATE_SHFT 0 +#define SH_NI1_LLP_STAT_LINK_RESET_STATE_MASK 0x000000000000000f + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_RESET" */ +/* Writing issues a reset to the network interface */ +/* ==================================================================== */ + +#define SH_NI1_LLP_RESET 0x0000000150002008 +#define SH_NI1_LLP_RESET_MASK 0x0000000000000003 +#define SH_NI1_LLP_RESET_INIT 0x0000000000000000 + +/* SH_NI1_LLP_RESET_LINK */ +/* Description: Send Link Reset. Generates a pulse. */ +#define SH_NI1_LLP_RESET_LINK_SHFT 0 +#define SH_NI1_LLP_RESET_LINK_MASK 0x0000000000000001 + +/* SH_NI1_LLP_RESET_WARM */ +/* Description: Send Warm Reset. Generates a pulse. */ +#define SH_NI1_LLP_RESET_WARM_SHFT 1 +#define SH_NI1_LLP_RESET_WARM_MASK 0x0000000000000002 + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_RESET_EN" */ +/* Controls LLP warm reset propagation */ +/* ==================================================================== */ + +#define SH_NI1_LLP_RESET_EN 0x0000000150002010 +#define SH_NI1_LLP_RESET_EN_MASK 0x0000000000000001 +#define SH_NI1_LLP_RESET_EN_INIT 0x0000000000000001 + +/* SH_NI1_LLP_RESET_EN_OK */ +/* Description: Allow LLP warm reset to reset SHUB */ +#define SH_NI1_LLP_RESET_EN_OK_SHFT 0 +#define SH_NI1_LLP_RESET_EN_OK_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CHAN_MODE" */ +/* Sets the signaling mode of LLP and channel */ +/* ==================================================================== */ + +#define SH_NI1_LLP_CHAN_MODE 0x0000000150002018 +#define SH_NI1_LLP_CHAN_MODE_MASK 0x000000000000001f +#define SH_NI1_LLP_CHAN_MODE_INIT 0x0000000000000000 + +/* SH_NI1_LLP_CHAN_MODE_BITMODE32 */ +/* Description: Enables 32-bit (plus sideband) channel phits */ +#define SH_NI1_LLP_CHAN_MODE_BITMODE32_SHFT 0 +#define SH_NI1_LLP_CHAN_MODE_BITMODE32_MASK 0x0000000000000001 + +/* SH_NI1_LLP_CHAN_MODE_AC_ENCODE */ +/* Description: Enables nearly dc-free encoding for AC-coupling */ +#define SH_NI1_LLP_CHAN_MODE_AC_ENCODE_SHFT 1 +#define SH_NI1_LLP_CHAN_MODE_AC_ENCODE_MASK 0x0000000000000002 + +/* SH_NI1_LLP_CHAN_MODE_ENABLE_TUNING */ +/* Description: Enables automatic tuning of channel skew. 
*/ +#define SH_NI1_LLP_CHAN_MODE_ENABLE_TUNING_SHFT 2 +#define SH_NI1_LLP_CHAN_MODE_ENABLE_TUNING_MASK 0x0000000000000004 + +/* SH_NI1_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD */ +/* Description: Enables remote fine tune updates */ +#define SH_NI1_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD_SHFT 3 +#define SH_NI1_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD_MASK 0x0000000000000008 + +/* SH_NI1_LLP_CHAN_MODE_ENABLE_CLKQUAD */ +/* Description: Enables quadrature clock in the pfssd */ +#define SH_NI1_LLP_CHAN_MODE_ENABLE_CLKQUAD_SHFT 4 +#define SH_NI1_LLP_CHAN_MODE_ENABLE_CLKQUAD_MASK 0x0000000000000010 + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CONFIG" */ +/* Sets the configuration of LLP and channel */ +/* ==================================================================== */ + +#define SH_NI1_LLP_CONFIG 0x0000000150002020 +#define SH_NI1_LLP_CONFIG_MASK 0x0000003fffffffff +#define SH_NI1_LLP_CONFIG_INIT 0x00000007fc6ffd00 + +/* SH_NI1_LLP_CONFIG_MAXBURST */ +#define SH_NI1_LLP_CONFIG_MAXBURST_SHFT 0 +#define SH_NI1_LLP_CONFIG_MAXBURST_MASK 0x00000000000003ff + +/* SH_NI1_LLP_CONFIG_MAXRETRY */ +#define SH_NI1_LLP_CONFIG_MAXRETRY_SHFT 10 +#define SH_NI1_LLP_CONFIG_MAXRETRY_MASK 0x00000000000ffc00 + +/* SH_NI1_LLP_CONFIG_NULLTIMEOUT */ +#define SH_NI1_LLP_CONFIG_NULLTIMEOUT_SHFT 20 +#define SH_NI1_LLP_CONFIG_NULLTIMEOUT_MASK 0x0000000003f00000 + +/* SH_NI1_LLP_CONFIG_FTU_TIME */ +#define SH_NI1_LLP_CONFIG_FTU_TIME_SHFT 26 +#define SH_NI1_LLP_CONFIG_FTU_TIME_MASK 0x0000003ffc000000 + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_TEST_CTL" */ +/* ==================================================================== */ + +#define SH_NI1_LLP_TEST_CTL 0x0000000150002028 +#define SH_NI1_LLP_TEST_CTL_MASK 0x7ff3f3ffffffffff +#define SH_NI1_LLP_TEST_CTL_INIT 0x000000000a5fffff + +/* SH_NI1_LLP_TEST_CTL_PATTERN */ +/* Description: Send channel data pattern */ +#define SH_NI1_LLP_TEST_CTL_PATTERN_SHFT 0 +#define SH_NI1_LLP_TEST_CTL_PATTERN_MASK 0x000000ffffffffff + +/* SH_NI1_LLP_TEST_CTL_SEND_TEST_MODE */ +/* Description: Enables continuous send of data */ +#define SH_NI1_LLP_TEST_CTL_SEND_TEST_MODE_SHFT 40 +#define SH_NI1_LLP_TEST_CTL_SEND_TEST_MODE_MASK 0x0000030000000000 + +/* SH_NI1_LLP_TEST_CTL_WIRE_SEL */ +#define SH_NI1_LLP_TEST_CTL_WIRE_SEL_SHFT 44 +#define SH_NI1_LLP_TEST_CTL_WIRE_SEL_MASK 0x0003f00000000000 + +/* SH_NI1_LLP_TEST_CTL_LFSR_MODE */ +#define SH_NI1_LLP_TEST_CTL_LFSR_MODE_SHFT 52 +#define SH_NI1_LLP_TEST_CTL_LFSR_MODE_MASK 0x0030000000000000 + +/* SH_NI1_LLP_TEST_CTL_NOISE_MODE */ +#define SH_NI1_LLP_TEST_CTL_NOISE_MODE_SHFT 54 +#define SH_NI1_LLP_TEST_CTL_NOISE_MODE_MASK 0x00c0000000000000 + +/* SH_NI1_LLP_TEST_CTL_ARMCAPTURE */ +/* Description: Enable Capture of Next MicroPacket */ +#define SH_NI1_LLP_TEST_CTL_ARMCAPTURE_SHFT 56 +#define SH_NI1_LLP_TEST_CTL_ARMCAPTURE_MASK 0x0100000000000000 + +/* SH_NI1_LLP_TEST_CTL_CAPTURECBONLY */ +/* Description: Only capture a micropacket with a Check Byte error */ +#define SH_NI1_LLP_TEST_CTL_CAPTURECBONLY_SHFT 57 +#define SH_NI1_LLP_TEST_CTL_CAPTURECBONLY_MASK 0x0200000000000000 + +/* SH_NI1_LLP_TEST_CTL_SENDCBERROR */ +/* Description: Sends a single error */ +#define SH_NI1_LLP_TEST_CTL_SENDCBERROR_SHFT 58 +#define SH_NI1_LLP_TEST_CTL_SENDCBERROR_MASK 0x0400000000000000 + +/* SH_NI1_LLP_TEST_CTL_SENDSNERROR */ +/* Description: Sends a single sequence number error */ +#define SH_NI1_LLP_TEST_CTL_SENDSNERROR_SHFT 59 +#define 
SH_NI1_LLP_TEST_CTL_SENDSNERROR_MASK 0x0800000000000000 + +/* SH_NI1_LLP_TEST_CTL_FAKESNERROR */ +/* Description: Causes receiver to pretend it saw a sn error */ +#define SH_NI1_LLP_TEST_CTL_FAKESNERROR_SHFT 60 +#define SH_NI1_LLP_TEST_CTL_FAKESNERROR_MASK 0x1000000000000000 + +/* SH_NI1_LLP_TEST_CTL_CAPTURED */ +/* Description: Indicates a Valid Micropacket was captured */ +#define SH_NI1_LLP_TEST_CTL_CAPTURED_SHFT 61 +#define SH_NI1_LLP_TEST_CTL_CAPTURED_MASK 0x2000000000000000 + +/* SH_NI1_LLP_TEST_CTL_CBERROR */ +/* Description: Indicates a Micropacket with a CB error was capture */ +#define SH_NI1_LLP_TEST_CTL_CBERROR_SHFT 62 +#define SH_NI1_LLP_TEST_CTL_CBERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CAPT_WD1" */ +/* low order 64-bit captured word */ +/* ==================================================================== */ + +#define SH_NI1_LLP_CAPT_WD1 0x0000000150002030 +#define SH_NI1_LLP_CAPT_WD1_MASK 0xffffffffffffffff +#define SH_NI1_LLP_CAPT_WD1_INIT 0x0000000000000000 + +/* SH_NI1_LLP_CAPT_WD1_DATA */ +/* Description: low order 64-bit captured word */ +#define SH_NI1_LLP_CAPT_WD1_DATA_SHFT 0 +#define SH_NI1_LLP_CAPT_WD1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CAPT_WD2" */ +/* high order 64-bit captured word */ +/* ==================================================================== */ + +#define SH_NI1_LLP_CAPT_WD2 0x0000000150002038 +#define SH_NI1_LLP_CAPT_WD2_MASK 0xffffffffffffffff +#define SH_NI1_LLP_CAPT_WD2_INIT 0x0000000000000000 + +/* SH_NI1_LLP_CAPT_WD2_DATA */ +/* Description: high order 64-bit captured word */ +#define SH_NI1_LLP_CAPT_WD2_DATA_SHFT 0 +#define SH_NI1_LLP_CAPT_WD2_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CAPT_SBCB" */ +/* captured sideband, sequence, and CRC */ +/* ==================================================================== */ + +#define SH_NI1_LLP_CAPT_SBCB 0x0000000150002040 +#define SH_NI1_LLP_CAPT_SBCB_MASK 0x0000001fffffffff +#define SH_NI1_LLP_CAPT_SBCB_INIT 0x0000000000000000 + +/* SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVSBSN */ +/* Description: sideband and sequence */ +#define SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVSBSN_SHFT 0 +#define SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVSBSN_MASK 0x000000000000ffff + +/* SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVCRC */ +/* Description: CRC */ +#define SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVCRC_SHFT 16 +#define SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVCRC_MASK 0x00000000ffff0000 + +/* SH_NI1_LLP_CAPT_SBCB_SENTALLCBERRORS */ +/* Description: All CB errors have been sent */ +#define SH_NI1_LLP_CAPT_SBCB_SENTALLCBERRORS_SHFT 32 +#define SH_NI1_LLP_CAPT_SBCB_SENTALLCBERRORS_MASK 0x0000000100000000 + +/* SH_NI1_LLP_CAPT_SBCB_SENTALLSNERRORS */ +/* Description: All SN errors have been sent */ +#define SH_NI1_LLP_CAPT_SBCB_SENTALLSNERRORS_SHFT 33 +#define SH_NI1_LLP_CAPT_SBCB_SENTALLSNERRORS_MASK 0x0000000200000000 + +/* SH_NI1_LLP_CAPT_SBCB_FAKEDALLSNERRORS */ +/* Description: All faked SN errors have been sent */ +#define SH_NI1_LLP_CAPT_SBCB_FAKEDALLSNERRORS_SHFT 34 +#define SH_NI1_LLP_CAPT_SBCB_FAKEDALLSNERRORS_MASK 0x0000000400000000 + +/* SH_NI1_LLP_CAPT_SBCB_CHARGEOVERFLOW */ +/* Description: wire charge counter overflowed, valid if llp_mode e */ +#define SH_NI1_LLP_CAPT_SBCB_CHARGEOVERFLOW_SHFT 35 +#define SH_NI1_LLP_CAPT_SBCB_CHARGEOVERFLOW_MASK 0x0000000800000000 + +/* 
SH_NI1_LLP_CAPT_SBCB_CHARGEUNDERFLOW */ +/* Description: wire charge counter underflowed, valid if llp_mode */ +/* enabled */ +#define SH_NI1_LLP_CAPT_SBCB_CHARGEUNDERFLOW_SHFT 36 +#define SH_NI1_LLP_CAPT_SBCB_CHARGEUNDERFLOW_MASK 0x0000001000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_ERR" */ +/* ==================================================================== */ + +#define SH_NI1_LLP_ERR 0x0000000150002048 +#define SH_NI1_LLP_ERR_MASK 0x001fffffffffffff +#define SH_NI1_LLP_ERR_INIT 0x0000000000000000 + +/* SH_NI1_LLP_ERR_RX_SN_ERR_COUNT */ +/* Description: Counts the sequence number errors received */ +#define SH_NI1_LLP_ERR_RX_SN_ERR_COUNT_SHFT 0 +#define SH_NI1_LLP_ERR_RX_SN_ERR_COUNT_MASK 0x00000000000000ff + +/* SH_NI1_LLP_ERR_RX_CB_ERR_COUNT */ +/* Description: Counts the check byte errors received */ +#define SH_NI1_LLP_ERR_RX_CB_ERR_COUNT_SHFT 8 +#define SH_NI1_LLP_ERR_RX_CB_ERR_COUNT_MASK 0x000000000000ff00 + +/* SH_NI1_LLP_ERR_RETRY_COUNT */ +/* Description: Counts the retries */ +#define SH_NI1_LLP_ERR_RETRY_COUNT_SHFT 16 +#define SH_NI1_LLP_ERR_RETRY_COUNT_MASK 0x0000000000ff0000 + +/* SH_NI1_LLP_ERR_RETRY_TIMEOUT */ +/* Description: Indicates a retry timeout has occured */ +#define SH_NI1_LLP_ERR_RETRY_TIMEOUT_SHFT 24 +#define SH_NI1_LLP_ERR_RETRY_TIMEOUT_MASK 0x0000000001000000 + +/* SH_NI1_LLP_ERR_RCV_LINK_RESET */ +/* Description: Indicates a link reset has been received */ +#define SH_NI1_LLP_ERR_RCV_LINK_RESET_SHFT 25 +#define SH_NI1_LLP_ERR_RCV_LINK_RESET_MASK 0x0000000002000000 + +/* SH_NI1_LLP_ERR_SQUASH */ +/* Description: Indicates a micropacket was squashed */ +#define SH_NI1_LLP_ERR_SQUASH_SHFT 26 +#define SH_NI1_LLP_ERR_SQUASH_MASK 0x0000000004000000 + +/* SH_NI1_LLP_ERR_POWER_NOT_OK */ +/* Description: Detects and traps a loss of power_OK */ +#define SH_NI1_LLP_ERR_POWER_NOT_OK_SHFT 27 +#define SH_NI1_LLP_ERR_POWER_NOT_OK_MASK 0x0000000008000000 + +/* SH_NI1_LLP_ERR_WIRE_CNT */ +/* Description: counts the errors detected on a single wire test */ +#define SH_NI1_LLP_ERR_WIRE_CNT_SHFT 28 +#define SH_NI1_LLP_ERR_WIRE_CNT_MASK 0x000ffffff0000000 + +/* SH_NI1_LLP_ERR_WIRE_OVERFLOW */ +/* Description: wire_error_cnt has overflowed */ +#define SH_NI1_LLP_ERR_WIRE_OVERFLOW_SHFT 52 +#define SH_NI1_LLP_ERR_WIRE_OVERFLOW_MASK 0x0010000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_LLP_TO_FIFO02_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_LLP_TO_FIFO02_FLOW 0x0000000150001010 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: 
Force Credit on VC2 from debit cntr */ +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_LLP_TO_FIFO13_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_LLP_TO_FIFO13_FLOW 0x0000000150001020 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_LLP_DEBIT_FLOW" */ +/* 
==================================================================== */ + +#define SH_XNNI0_LLP_DEBIT_FLOW 0x0000000150001030 +#define SH_XNNI0_LLP_DEBIT_FLOW_MASK 0x1f1f1f1f1f1f1f1f +#define SH_XNNI0_LLP_DEBIT_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_DYN_SHFT 0 +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_DYN_MASK 0x000000000000001f + +/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_CAP_SHFT 8 +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_CAP_MASK 0x0000000000001f00 + +/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_DYN */ +/* Description: vc1 debit dynamic value */ +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_DYN_SHFT 16 +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_DYN_MASK 0x00000000001f0000 + +/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_CAP */ +/* Description: vc1 debit captured value */ +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_CAP_SHFT 24 +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_CAP_MASK 0x000000001f000000 + +/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_DYN_SHFT 32 +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_DYN_MASK 0x0000001f00000000 + +/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_CAP_SHFT 40 +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_CAP_MASK 0x00001f0000000000 + +/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_DYN */ +/* Description: vc3 debit dynamic value */ +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_DYN_SHFT 48 +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_DYN_MASK 0x001f000000000000 + +/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_CAP */ +/* Description: vc3 debit captured value */ +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_CAP_SHFT 56 +#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_CAP_MASK 0x1f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_LINK_0_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_LINK_0_FLOW 0x0000000150001040 +#define SH_XNNI0_LINK_0_FLOW_MASK 0x000000007f7f7fbf +#define SH_XNNI0_LINK_0_FLOW_INIT 0x0000000000001800 + +/* SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on vc0 from debit cntr */ +#define SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 Limit Test */ +#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_TEST_SHFT 8 +#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_TEST_MASK 0x0000000000007f00 + +/* SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_DYN */ +/* Description: Dynamic vc0 credit value */ +#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_DYN_SHFT 16 +#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_DYN_MASK 0x00000000007f0000 + +/* SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_CAP */ +/* Description: Captured vc0 credit */ +#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_CAP_SHFT 24 +#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_CAP_MASK 0x000000007f000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_LINK_1_FLOW" */ +/* 
==================================================================== */ + +#define SH_XNNI0_LINK_1_FLOW 0x0000000150001050 +#define SH_XNNI0_LINK_1_FLOW_MASK 0x000000007f7f7fbf +#define SH_XNNI0_LINK_1_FLOW_INIT 0x0000000000001800 + +/* SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_WITHHOLD */ +/* Description: vc1 withhold */ +#define SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_WITHHOLD_SHFT 0 +#define SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED */ +/* Description: Force Credit on vc1 from debit cntr */ +#define SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED_SHFT 7 +#define SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_TEST */ +/* Description: vc1 Limit Test */ +#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_TEST_SHFT 8 +#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_TEST_MASK 0x0000000000007f00 + +/* SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_DYN */ +/* Description: Dynamic vc1 credit value */ +#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_DYN_SHFT 16 +#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_DYN_MASK 0x00000000007f0000 + +/* SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_CAP */ +/* Description: Captured vc1 credit */ +#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_CAP_SHFT 24 +#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_CAP_MASK 0x000000007f000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_LINK_2_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_LINK_2_FLOW 0x0000000150001060 +#define SH_XNNI0_LINK_2_FLOW_MASK 0x000000007f7f7fbf +#define SH_XNNI0_LINK_2_FLOW_INIT 0x0000000000001800 + +/* SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_WITHHOLD_SHFT 0 +#define SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on vc2 from debit cntr */ +#define SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 7 +#define SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 Limit Test */ +#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_TEST_SHFT 8 +#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_TEST_MASK 0x0000000000007f00 + +/* SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_DYN */ +/* Description: Dynamic vc2 credit value */ +#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_DYN_SHFT 16 +#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_DYN_MASK 0x00000000007f0000 + +/* SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_CAP */ +/* Description: Captured vc2 credit */ +#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_CAP_SHFT 24 +#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_CAP_MASK 0x000000007f000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_LINK_3_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_LINK_3_FLOW 0x0000000150001070 +#define SH_XNNI0_LINK_3_FLOW_MASK 0x000000007f7f7fbf +#define SH_XNNI0_LINK_3_FLOW_INIT 0x0000000000001800 + +/* SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_WITHHOLD */ +/* Description: vc3 withhold */ +#define SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_WITHHOLD_SHFT 0 +#define SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED */ +/* Description: Force Credit on vc3 from debit cntr */ +#define SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED_SHFT 7 +#define SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED_MASK 0x0000000000000080 + +/* 
SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_TEST */ +/* Description: vc3 Limit Test */ +#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_TEST_SHFT 8 +#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_TEST_MASK 0x0000000000007f00 + +/* SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_DYN */ +/* Description: Dynamic vc3 credit value */ +#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_DYN_SHFT 16 +#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_DYN_MASK 0x00000000007f0000 + +/* SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_CAP */ +/* Description: Captured vc3 credit */ +#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_CAP_SHFT 24 +#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_CAP_MASK 0x000000007f000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_LLP_TO_FIFO02_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_LLP_TO_FIFO02_FLOW 0x0000000150003010 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_LLP_TO_FIFO13_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_LLP_TO_FIFO13_FLOW 0x0000000150003020 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* 
SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_LLP_DEBIT_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_LLP_DEBIT_FLOW 0x0000000150003030 +#define SH_XNNI1_LLP_DEBIT_FLOW_MASK 0x1f1f1f1f1f1f1f1f +#define SH_XNNI1_LLP_DEBIT_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_DYN_SHFT 0 +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_DYN_MASK 0x000000000000001f + +/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_CAP_SHFT 8 +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_CAP_MASK 0x0000000000001f00 + +/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_DYN */ +/* Description: vc1 debit dynamic value */ +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_DYN_SHFT 16 +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_DYN_MASK 0x00000000001f0000 + +/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_CAP */ +/* Description: vc1 debit captured value */ +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_CAP_SHFT 24 +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_CAP_MASK 0x000000001f000000 + +/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_DYN_SHFT 32 +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_DYN_MASK 0x0000001f00000000 + +/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_CAP_SHFT 40 +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_CAP_MASK 0x00001f0000000000 + +/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_DYN */ +/* Description: vc3 debit dynamic value */ +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_DYN_SHFT 48 +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_DYN_MASK 
0x001f000000000000 + +/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_CAP */ +/* Description: vc3 debit captured value */ +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_CAP_SHFT 56 +#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_CAP_MASK 0x1f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_LINK_0_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_LINK_0_FLOW 0x0000000150003040 +#define SH_XNNI1_LINK_0_FLOW_MASK 0x000000007f7f7fbf +#define SH_XNNI1_LINK_0_FLOW_INIT 0x0000000000001800 + +/* SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on vc0 from debit cntr */ +#define SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 Limit Test */ +#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_TEST_SHFT 8 +#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_TEST_MASK 0x0000000000007f00 + +/* SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_DYN */ +/* Description: Dynamic vc0 credit value */ +#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_DYN_SHFT 16 +#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_DYN_MASK 0x00000000007f0000 + +/* SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_CAP */ +/* Description: Captured vc0 credit */ +#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_CAP_SHFT 24 +#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_CAP_MASK 0x000000007f000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_LINK_1_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_LINK_1_FLOW 0x0000000150003050 +#define SH_XNNI1_LINK_1_FLOW_MASK 0x000000007f7f7fbf +#define SH_XNNI1_LINK_1_FLOW_INIT 0x0000000000001800 + +/* SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_WITHHOLD */ +/* Description: vc1 withhold */ +#define SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_WITHHOLD_SHFT 0 +#define SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED */ +/* Description: Force Credit on vc1 from debit cntr */ +#define SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED_SHFT 7 +#define SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_TEST */ +/* Description: vc1 Limit Test */ +#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_TEST_SHFT 8 +#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_TEST_MASK 0x0000000000007f00 + +/* SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_DYN */ +/* Description: Dynamic vc1 credit value */ +#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_DYN_SHFT 16 +#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_DYN_MASK 0x00000000007f0000 + +/* SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_CAP */ +/* Description: Captured vc1 credit */ +#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_CAP_SHFT 24 +#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_CAP_MASK 0x000000007f000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_LINK_2_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_LINK_2_FLOW 0x0000000150003060 +#define SH_XNNI1_LINK_2_FLOW_MASK 0x000000007f7f7fbf +#define SH_XNNI1_LINK_2_FLOW_INIT 0x0000000000001800 + +/* SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define 
SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_WITHHOLD_SHFT 0 +#define SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on vc2 from debit cntr */ +#define SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 7 +#define SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 Limit Test */ +#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_TEST_SHFT 8 +#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_TEST_MASK 0x0000000000007f00 + +/* SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_DYN */ +/* Description: Dynamic vc2 credit value */ +#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_DYN_SHFT 16 +#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_DYN_MASK 0x00000000007f0000 + +/* SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_CAP */ +/* Description: Captured vc2 credit */ +#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_CAP_SHFT 24 +#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_CAP_MASK 0x000000007f000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_LINK_3_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_LINK_3_FLOW 0x0000000150003070 +#define SH_XNNI1_LINK_3_FLOW_MASK 0x000000007f7f7fbf +#define SH_XNNI1_LINK_3_FLOW_INIT 0x0000000000001800 + +/* SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_WITHHOLD */ +/* Description: vc3 withhold */ +#define SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_WITHHOLD_SHFT 0 +#define SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED */ +/* Description: Force Credit on vc3 from debit cntr */ +#define SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED_SHFT 7 +#define SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_TEST */ +/* Description: vc3 Limit Test */ +#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_TEST_SHFT 8 +#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_TEST_MASK 0x0000000000007f00 + +/* SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_DYN */ +/* Description: Dynamic vc3 credit value */ +#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_DYN_SHFT 16 +#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_DYN_MASK 0x00000000007f0000 + +/* SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_CAP */ +/* Description: Captured vc3 credit */ +#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_CAP_SHFT 24 +#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_CAP_MASK 0x000000007f000000 + +/* ==================================================================== */ +/* Register "SH_IILB_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#define SH_IILB_LOCAL_TABLE 0x0000000150020000 +#define SH_IILB_LOCAL_TABLE_MASK 0x800000000000003f +#define SH_IILB_LOCAL_TABLE_MEMDEPTH 128 +#define SH_IILB_LOCAL_TABLE_INIT 0x0000000000000000 + +/* SH_IILB_LOCAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_IILB_LOCAL_TABLE_DIR0_SHFT 0 +#define SH_IILB_LOCAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_IILB_LOCAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_IILB_LOCAL_TABLE_V0_SHFT 4 +#define SH_IILB_LOCAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_IILB_LOCAL_TABLE_NI_SEL0 */ +/* Description: ni select for requests */ +#define SH_IILB_LOCAL_TABLE_NI_SEL0_SHFT 5 +#define SH_IILB_LOCAL_TABLE_NI_SEL0_MASK 0x0000000000000020 + +/* SH_IILB_LOCAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_IILB_LOCAL_TABLE_VALID_SHFT 63 +#define SH_IILB_LOCAL_TABLE_VALID_MASK 
0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_IILB_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#define SH_IILB_GLOBAL_TABLE 0x0000000150020400 +#define SH_IILB_GLOBAL_TABLE_MASK 0x800000000000003f +#define SH_IILB_GLOBAL_TABLE_MEMDEPTH 16 +#define SH_IILB_GLOBAL_TABLE_INIT 0x0000000000000000 + +/* SH_IILB_GLOBAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_IILB_GLOBAL_TABLE_DIR0_SHFT 0 +#define SH_IILB_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_IILB_GLOBAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_IILB_GLOBAL_TABLE_V0_SHFT 4 +#define SH_IILB_GLOBAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_IILB_GLOBAL_TABLE_NI_SEL0 */ +/* Description: ni select for requests */ +#define SH_IILB_GLOBAL_TABLE_NI_SEL0_SHFT 5 +#define SH_IILB_GLOBAL_TABLE_NI_SEL0_MASK 0x0000000000000020 + +/* SH_IILB_GLOBAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_IILB_GLOBAL_TABLE_VALID_SHFT 63 +#define SH_IILB_GLOBAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_IILB_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#define SH_IILB_OVER_RIDE_TABLE 0x0000000150020480 +#define SH_IILB_OVER_RIDE_TABLE_MASK 0x800000000000003f +#define SH_IILB_OVER_RIDE_TABLE_INIT 0x8000000000000000 + +/* SH_IILB_OVER_RIDE_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_IILB_OVER_RIDE_TABLE_DIR0_SHFT 0 +#define SH_IILB_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_IILB_OVER_RIDE_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_IILB_OVER_RIDE_TABLE_V0_SHFT 4 +#define SH_IILB_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010 + +/* SH_IILB_OVER_RIDE_TABLE_NI_SEL0 */ +/* Description: ni select */ +#define SH_IILB_OVER_RIDE_TABLE_NI_SEL0_SHFT 5 +#define SH_IILB_OVER_RIDE_TABLE_NI_SEL0_MASK 0x0000000000000020 + +/* SH_IILB_OVER_RIDE_TABLE_ENABLE */ +/* Description: Indicates that this entry is enabled */ +#define SH_IILB_OVER_RIDE_TABLE_ENABLE_SHFT 63 +#define SH_IILB_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_IILB_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#define SH_IILB_RSP_PLANE_HINT 0x0000000150020488 +#define SH_IILB_RSP_PLANE_HINT_MASK 0x0000000000000000 +#define SH_IILB_RSP_PLANE_HINT_INIT 0x0000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#define SH_PI_LOCAL_TABLE 0x0000000150021000 +#define SH_PI_LOCAL_TABLE_MASK 0x8000000000003f3f +#define SH_PI_LOCAL_TABLE_MEMDEPTH 128 +#define SH_PI_LOCAL_TABLE_INIT 0x0000000000000000 + +/* SH_PI_LOCAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_PI_LOCAL_TABLE_DIR0_SHFT 0 +#define SH_PI_LOCAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_PI_LOCAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_PI_LOCAL_TABLE_V0_SHFT 4 
+#define SH_PI_LOCAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_PI_LOCAL_TABLE_NI_SEL0 */ +/* Description: ni select for requests */ +#define SH_PI_LOCAL_TABLE_NI_SEL0_SHFT 5 +#define SH_PI_LOCAL_TABLE_NI_SEL0_MASK 0x0000000000000020 + +/* SH_PI_LOCAL_TABLE_DIR1 */ +#define SH_PI_LOCAL_TABLE_DIR1_SHFT 8 +#define SH_PI_LOCAL_TABLE_DIR1_MASK 0x0000000000000f00 + +/* SH_PI_LOCAL_TABLE_V1 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_PI_LOCAL_TABLE_V1_SHFT 12 +#define SH_PI_LOCAL_TABLE_V1_MASK 0x0000000000001000 + +/* SH_PI_LOCAL_TABLE_NI_SEL1 */ +/* Description: ni select for plane-hint 1 */ +#define SH_PI_LOCAL_TABLE_NI_SEL1_SHFT 13 +#define SH_PI_LOCAL_TABLE_NI_SEL1_MASK 0x0000000000002000 + +/* SH_PI_LOCAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_PI_LOCAL_TABLE_VALID_SHFT 63 +#define SH_PI_LOCAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#define SH_PI_GLOBAL_TABLE 0x0000000150021400 +#define SH_PI_GLOBAL_TABLE_MASK 0x8000000000003f3f +#define SH_PI_GLOBAL_TABLE_MEMDEPTH 16 +#define SH_PI_GLOBAL_TABLE_INIT 0x0000000000000000 + +/* SH_PI_GLOBAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_PI_GLOBAL_TABLE_DIR0_SHFT 0 +#define SH_PI_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_PI_GLOBAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_PI_GLOBAL_TABLE_V0_SHFT 4 +#define SH_PI_GLOBAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_PI_GLOBAL_TABLE_NI_SEL0 */ +/* Description: ni select for requests */ +#define SH_PI_GLOBAL_TABLE_NI_SEL0_SHFT 5 +#define SH_PI_GLOBAL_TABLE_NI_SEL0_MASK 0x0000000000000020 + +/* SH_PI_GLOBAL_TABLE_DIR1 */ +#define SH_PI_GLOBAL_TABLE_DIR1_SHFT 8 +#define SH_PI_GLOBAL_TABLE_DIR1_MASK 0x0000000000000f00 + +/* SH_PI_GLOBAL_TABLE_V1 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_PI_GLOBAL_TABLE_V1_SHFT 12 +#define SH_PI_GLOBAL_TABLE_V1_MASK 0x0000000000001000 + +/* SH_PI_GLOBAL_TABLE_NI_SEL1 */ +/* Description: ni select for plane-hint 1 */ +#define SH_PI_GLOBAL_TABLE_NI_SEL1_SHFT 13 +#define SH_PI_GLOBAL_TABLE_NI_SEL1_MASK 0x0000000000002000 + +/* SH_PI_GLOBAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_PI_GLOBAL_TABLE_VALID_SHFT 63 +#define SH_PI_GLOBAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#define SH_PI_OVER_RIDE_TABLE 0x0000000150021480 +#define SH_PI_OVER_RIDE_TABLE_MASK 0x8000000000003f3f +#define SH_PI_OVER_RIDE_TABLE_INIT 0x8000000000002000 + +/* SH_PI_OVER_RIDE_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_PI_OVER_RIDE_TABLE_DIR0_SHFT 0 +#define SH_PI_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_PI_OVER_RIDE_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_PI_OVER_RIDE_TABLE_V0_SHFT 4 +#define SH_PI_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010 + +/* SH_PI_OVER_RIDE_TABLE_NI_SEL0 */ +/* Description: ni select */ +#define SH_PI_OVER_RIDE_TABLE_NI_SEL0_SHFT 5 +#define SH_PI_OVER_RIDE_TABLE_NI_SEL0_MASK 0x0000000000000020 + +/* 
SH_PI_OVER_RIDE_TABLE_DIR1 */ +#define SH_PI_OVER_RIDE_TABLE_DIR1_SHFT 8 +#define SH_PI_OVER_RIDE_TABLE_DIR1_MASK 0x0000000000000f00 + +/* SH_PI_OVER_RIDE_TABLE_V1 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_PI_OVER_RIDE_TABLE_V1_SHFT 12 +#define SH_PI_OVER_RIDE_TABLE_V1_MASK 0x0000000000001000 + +/* SH_PI_OVER_RIDE_TABLE_NI_SEL1 */ +/* Description: ni select */ +#define SH_PI_OVER_RIDE_TABLE_NI_SEL1_SHFT 13 +#define SH_PI_OVER_RIDE_TABLE_NI_SEL1_MASK 0x0000000000002000 + +/* SH_PI_OVER_RIDE_TABLE_ENABLE */ +/* Description: Indicates that this entry is enabled */ +#define SH_PI_OVER_RIDE_TABLE_ENABLE_SHFT 63 +#define SH_PI_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#define SH_PI_RSP_PLANE_HINT 0x0000000150021488 +#define SH_PI_RSP_PLANE_HINT_MASK 0x0000000000000001 +#define SH_PI_RSP_PLANE_HINT_INIT 0x0000000000000000 + +/* SH_PI_RSP_PLANE_HINT_INVERT */ +/* Description: Invert Response Plane Hint */ +#define SH_PI_RSP_PLANE_HINT_INVERT_SHFT 0 +#define SH_PI_RSP_PLANE_HINT_INVERT_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_NI0_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#define SH_NI0_LOCAL_TABLE 0x0000000150022000 +#define SH_NI0_LOCAL_TABLE_MASK 0x800000000000001f +#define SH_NI0_LOCAL_TABLE_MEMDEPTH 128 +#define SH_NI0_LOCAL_TABLE_INIT 0x0000000000000000 + +/* SH_NI0_LOCAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_NI0_LOCAL_TABLE_DIR0_SHFT 0 +#define SH_NI0_LOCAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_NI0_LOCAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_NI0_LOCAL_TABLE_V0_SHFT 4 +#define SH_NI0_LOCAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_NI0_LOCAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_NI0_LOCAL_TABLE_VALID_SHFT 63 +#define SH_NI0_LOCAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#define SH_NI0_GLOBAL_TABLE 0x0000000150022400 +#define SH_NI0_GLOBAL_TABLE_MASK 0x800000000000001f +#define SH_NI0_GLOBAL_TABLE_MEMDEPTH 16 +#define SH_NI0_GLOBAL_TABLE_INIT 0x0000000000000000 + +/* SH_NI0_GLOBAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_NI0_GLOBAL_TABLE_DIR0_SHFT 0 +#define SH_NI0_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_NI0_GLOBAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_NI0_GLOBAL_TABLE_V0_SHFT 4 +#define SH_NI0_GLOBAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_NI0_GLOBAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_NI0_GLOBAL_TABLE_VALID_SHFT 63 +#define SH_NI0_GLOBAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#define SH_NI0_OVER_RIDE_TABLE 
0x0000000150022480 +#define SH_NI0_OVER_RIDE_TABLE_MASK 0x800000000000001f +#define SH_NI0_OVER_RIDE_TABLE_INIT 0x8000000000000000 + +/* SH_NI0_OVER_RIDE_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_NI0_OVER_RIDE_TABLE_DIR0_SHFT 0 +#define SH_NI0_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_NI0_OVER_RIDE_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_NI0_OVER_RIDE_TABLE_V0_SHFT 4 +#define SH_NI0_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010 + +/* SH_NI0_OVER_RIDE_TABLE_ENABLE */ +/* Description: Indicates that this entry is enabled */ +#define SH_NI0_OVER_RIDE_TABLE_ENABLE_SHFT 63 +#define SH_NI0_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#define SH_NI0_RSP_PLANE_HINT 0x0000000150022488 +#define SH_NI0_RSP_PLANE_HINT_MASK 0x0000000000000000 +#define SH_NI0_RSP_PLANE_HINT_INIT 0x0000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#define SH_NI1_LOCAL_TABLE 0x0000000150023000 +#define SH_NI1_LOCAL_TABLE_MASK 0x800000000000001f +#define SH_NI1_LOCAL_TABLE_MEMDEPTH 128 +#define SH_NI1_LOCAL_TABLE_INIT 0x0000000000000000 + +/* SH_NI1_LOCAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_NI1_LOCAL_TABLE_DIR0_SHFT 0 +#define SH_NI1_LOCAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_NI1_LOCAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_NI1_LOCAL_TABLE_V0_SHFT 4 +#define SH_NI1_LOCAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_NI1_LOCAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_NI1_LOCAL_TABLE_VALID_SHFT 63 +#define SH_NI1_LOCAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#define SH_NI1_GLOBAL_TABLE 0x0000000150023400 +#define SH_NI1_GLOBAL_TABLE_MASK 0x800000000000001f +#define SH_NI1_GLOBAL_TABLE_MEMDEPTH 16 +#define SH_NI1_GLOBAL_TABLE_INIT 0x0000000000000000 + +/* SH_NI1_GLOBAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_NI1_GLOBAL_TABLE_DIR0_SHFT 0 +#define SH_NI1_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_NI1_GLOBAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_NI1_GLOBAL_TABLE_V0_SHFT 4 +#define SH_NI1_GLOBAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_NI1_GLOBAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_NI1_GLOBAL_TABLE_VALID_SHFT 63 +#define SH_NI1_GLOBAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#define SH_NI1_OVER_RIDE_TABLE 0x0000000150023480 +#define SH_NI1_OVER_RIDE_TABLE_MASK 0x800000000000001f +#define SH_NI1_OVER_RIDE_TABLE_INIT 0x8000000000000000 + +/* SH_NI1_OVER_RIDE_TABLE_DIR0 */ +/* 
Description: Direction field for next chip */ +#define SH_NI1_OVER_RIDE_TABLE_DIR0_SHFT 0 +#define SH_NI1_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_NI1_OVER_RIDE_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_NI1_OVER_RIDE_TABLE_V0_SHFT 4 +#define SH_NI1_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010 + +/* SH_NI1_OVER_RIDE_TABLE_ENABLE */ +/* Description: Indicates that this entry is enabled */ +#define SH_NI1_OVER_RIDE_TABLE_ENABLE_SHFT 63 +#define SH_NI1_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#define SH_NI1_RSP_PLANE_HINT 0x0000000150023488 +#define SH_NI1_RSP_PLANE_HINT_MASK 0x0000000000000000 +#define SH_NI1_RSP_PLANE_HINT_INIT 0x0000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#define SH_MD_LOCAL_TABLE 0x0000000150024000 +#define SH_MD_LOCAL_TABLE_MASK 0x8000000000003f3f +#define SH_MD_LOCAL_TABLE_MEMDEPTH 128 +#define SH_MD_LOCAL_TABLE_INIT 0x0000000000000000 + +/* SH_MD_LOCAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_MD_LOCAL_TABLE_DIR0_SHFT 0 +#define SH_MD_LOCAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_MD_LOCAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_MD_LOCAL_TABLE_V0_SHFT 4 +#define SH_MD_LOCAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_MD_LOCAL_TABLE_NI_SEL0 */ +/* Description: ni select for requests */ +#define SH_MD_LOCAL_TABLE_NI_SEL0_SHFT 5 +#define SH_MD_LOCAL_TABLE_NI_SEL0_MASK 0x0000000000000020 + +/* SH_MD_LOCAL_TABLE_DIR1 */ +#define SH_MD_LOCAL_TABLE_DIR1_SHFT 8 +#define SH_MD_LOCAL_TABLE_DIR1_MASK 0x0000000000000f00 + +/* SH_MD_LOCAL_TABLE_V1 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_MD_LOCAL_TABLE_V1_SHFT 12 +#define SH_MD_LOCAL_TABLE_V1_MASK 0x0000000000001000 + +/* SH_MD_LOCAL_TABLE_NI_SEL1 */ +/* Description: ni select for plane-hint 1 */ +#define SH_MD_LOCAL_TABLE_NI_SEL1_SHFT 13 +#define SH_MD_LOCAL_TABLE_NI_SEL1_MASK 0x0000000000002000 + +/* SH_MD_LOCAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_MD_LOCAL_TABLE_VALID_SHFT 63 +#define SH_MD_LOCAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#define SH_MD_GLOBAL_TABLE 0x0000000150024400 +#define SH_MD_GLOBAL_TABLE_MASK 0x8000000000003f3f +#define SH_MD_GLOBAL_TABLE_MEMDEPTH 16 +#define SH_MD_GLOBAL_TABLE_INIT 0x0000000000000000 + +/* SH_MD_GLOBAL_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_MD_GLOBAL_TABLE_DIR0_SHFT 0 +#define SH_MD_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_MD_GLOBAL_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_MD_GLOBAL_TABLE_V0_SHFT 4 +#define SH_MD_GLOBAL_TABLE_V0_MASK 0x0000000000000010 + +/* SH_MD_GLOBAL_TABLE_NI_SEL0 */ +/* Description: ni select for requests */ +#define SH_MD_GLOBAL_TABLE_NI_SEL0_SHFT 5 +#define SH_MD_GLOBAL_TABLE_NI_SEL0_MASK 
0x0000000000000020 + +/* SH_MD_GLOBAL_TABLE_DIR1 */ +#define SH_MD_GLOBAL_TABLE_DIR1_SHFT 8 +#define SH_MD_GLOBAL_TABLE_DIR1_MASK 0x0000000000000f00 + +/* SH_MD_GLOBAL_TABLE_V1 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_MD_GLOBAL_TABLE_V1_SHFT 12 +#define SH_MD_GLOBAL_TABLE_V1_MASK 0x0000000000001000 + +/* SH_MD_GLOBAL_TABLE_NI_SEL1 */ +/* Description: ni select for plane-hint 1 */ +#define SH_MD_GLOBAL_TABLE_NI_SEL1_SHFT 13 +#define SH_MD_GLOBAL_TABLE_NI_SEL1_MASK 0x0000000000002000 + +/* SH_MD_GLOBAL_TABLE_VALID */ +/* Description: Indicates that this entry is valid */ +#define SH_MD_GLOBAL_TABLE_VALID_SHFT 63 +#define SH_MD_GLOBAL_TABLE_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#define SH_MD_OVER_RIDE_TABLE 0x0000000150024480 +#define SH_MD_OVER_RIDE_TABLE_MASK 0x8000000000003f3f +#define SH_MD_OVER_RIDE_TABLE_INIT 0x8000000000002000 + +/* SH_MD_OVER_RIDE_TABLE_DIR0 */ +/* Description: Direction field for next chip */ +#define SH_MD_OVER_RIDE_TABLE_DIR0_SHFT 0 +#define SH_MD_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f + +/* SH_MD_OVER_RIDE_TABLE_V0 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_MD_OVER_RIDE_TABLE_V0_SHFT 4 +#define SH_MD_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010 + +/* SH_MD_OVER_RIDE_TABLE_NI_SEL0 */ +/* Description: ni select */ +#define SH_MD_OVER_RIDE_TABLE_NI_SEL0_SHFT 5 +#define SH_MD_OVER_RIDE_TABLE_NI_SEL0_MASK 0x0000000000000020 + +/* SH_MD_OVER_RIDE_TABLE_DIR1 */ +#define SH_MD_OVER_RIDE_TABLE_DIR1_SHFT 8 +#define SH_MD_OVER_RIDE_TABLE_DIR1_MASK 0x0000000000000f00 + +/* SH_MD_OVER_RIDE_TABLE_V1 */ +/* Description: Low bit of virtual channel for next chip */ +#define SH_MD_OVER_RIDE_TABLE_V1_SHFT 12 +#define SH_MD_OVER_RIDE_TABLE_V1_MASK 0x0000000000001000 + +/* SH_MD_OVER_RIDE_TABLE_NI_SEL1 */ +/* Description: ni select */ +#define SH_MD_OVER_RIDE_TABLE_NI_SEL1_SHFT 13 +#define SH_MD_OVER_RIDE_TABLE_NI_SEL1_MASK 0x0000000000002000 + +/* SH_MD_OVER_RIDE_TABLE_ENABLE */ +/* Description: Indicates that this entry is enabled */ +#define SH_MD_OVER_RIDE_TABLE_ENABLE_SHFT 63 +#define SH_MD_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#define SH_MD_RSP_PLANE_HINT 0x0000000150024488 +#define SH_MD_RSP_PLANE_HINT_MASK 0x0000000000000001 +#define SH_MD_RSP_PLANE_HINT_INIT 0x0000000000000000 + +/* SH_MD_RSP_PLANE_HINT_INVERT */ +/* Description: Invert Response Plane Hint */ +#define SH_MD_RSP_PLANE_HINT_INVERT_SHFT 0 +#define SH_MD_RSP_PLANE_HINT_INVERT_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_LB_LIQ_CTL" */ +/* Local Block LIQ Control */ +/* ==================================================================== */ + +#define SH_LB_LIQ_CTL 0x0000000110040000 +#define SH_LB_LIQ_CTL_MASK 0x0000000000070f1f +#define SH_LB_LIQ_CTL_INIT 0x0000000000000000 + +/* SH_LB_LIQ_CTL_LIQ_REQ_CTL */ +/* Description: LIQ Request Control */ +#define SH_LB_LIQ_CTL_LIQ_REQ_CTL_SHFT 0 +#define SH_LB_LIQ_CTL_LIQ_REQ_CTL_MASK 0x000000000000001f + +/* 
SH_LB_LIQ_CTL_LIQ_RPL_CTL */ +/* Description: LIQ Reply Control */ +#define SH_LB_LIQ_CTL_LIQ_RPL_CTL_SHFT 8 +#define SH_LB_LIQ_CTL_LIQ_RPL_CTL_MASK 0x0000000000000f00 + +/* SH_LB_LIQ_CTL_FORCE_RQ_CREDIT */ +/* Description: Force request credit */ +#define SH_LB_LIQ_CTL_FORCE_RQ_CREDIT_SHFT 16 +#define SH_LB_LIQ_CTL_FORCE_RQ_CREDIT_MASK 0x0000000000010000 + +/* SH_LB_LIQ_CTL_FORCE_RP_CREDIT */ +/* Description: Force reply credit */ +#define SH_LB_LIQ_CTL_FORCE_RP_CREDIT_SHFT 17 +#define SH_LB_LIQ_CTL_FORCE_RP_CREDIT_MASK 0x0000000000020000 + +/* SH_LB_LIQ_CTL_FORCE_LINVV_CREDIT */ +/* Description: Force linvv credit */ +#define SH_LB_LIQ_CTL_FORCE_LINVV_CREDIT_SHFT 18 +#define SH_LB_LIQ_CTL_FORCE_LINVV_CREDIT_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_LB_LOQ_CTL" */ +/* Local Block LOQ Control */ +/* ==================================================================== */ + +#define SH_LB_LOQ_CTL 0x0000000110040080 +#define SH_LB_LOQ_CTL_MASK 0x0000000000000003 +#define SH_LB_LOQ_CTL_INIT 0x0000000000000000 + +/* SH_LB_LOQ_CTL_LOQ_REQ_CTL */ +/* Description: LOQ Request Control */ +#define SH_LB_LOQ_CTL_LOQ_REQ_CTL_SHFT 0 +#define SH_LB_LOQ_CTL_LOQ_REQ_CTL_MASK 0x0000000000000001 + +/* SH_LB_LOQ_CTL_LOQ_RPL_CTL */ +/* Description: LOQ Reply Control */ +#define SH_LB_LOQ_CTL_LOQ_RPL_CTL_SHFT 1 +#define SH_LB_LOQ_CTL_LOQ_RPL_CTL_MASK 0x0000000000000002 + +/* ==================================================================== */ +/* Register "SH_LB_MAX_REP_CREDIT_CNT" */ +/* Maximum number of reply credits from XN */ +/* ==================================================================== */ + +#define SH_LB_MAX_REP_CREDIT_CNT 0x0000000110040100 +#define SH_LB_MAX_REP_CREDIT_CNT_MASK 0x000000000000001f +#define SH_LB_MAX_REP_CREDIT_CNT_INIT 0x000000000000001f + +/* SH_LB_MAX_REP_CREDIT_CNT_MAX_CNT */ +/* Description: Max reply credits */ +#define SH_LB_MAX_REP_CREDIT_CNT_MAX_CNT_SHFT 0 +#define SH_LB_MAX_REP_CREDIT_CNT_MAX_CNT_MASK 0x000000000000001f + +/* ==================================================================== */ +/* Register "SH_LB_MAX_REQ_CREDIT_CNT" */ +/* Maximum number of request credits from XN */ +/* ==================================================================== */ + +#define SH_LB_MAX_REQ_CREDIT_CNT 0x0000000110040180 +#define SH_LB_MAX_REQ_CREDIT_CNT_MASK 0x000000000000001f +#define SH_LB_MAX_REQ_CREDIT_CNT_INIT 0x000000000000001f + +/* SH_LB_MAX_REQ_CREDIT_CNT_MAX_CNT */ +/* Description: Max request credits */ +#define SH_LB_MAX_REQ_CREDIT_CNT_MAX_CNT_SHFT 0 +#define SH_LB_MAX_REQ_CREDIT_CNT_MAX_CNT_MASK 0x000000000000001f + +/* ==================================================================== */ +/* Register "SH_PIO_TIME_OUT" */ +/* Local Block PIO time out value */ +/* ==================================================================== */ + +#define SH_PIO_TIME_OUT 0x0000000110040200 +#define SH_PIO_TIME_OUT_MASK 0x000000000000ffff +#define SH_PIO_TIME_OUT_INIT 0x0000000000000400 + +/* SH_PIO_TIME_OUT_VALUE */ +/* Description: PIO time out value */ +#define SH_PIO_TIME_OUT_VALUE_SHFT 0 +#define SH_PIO_TIME_OUT_VALUE_MASK 0x000000000000ffff + +/* ==================================================================== */ +/* Register "SH_PIO_NACK_RESET" */ +/* Local Block PIO Reset for nack counters */ +/* ==================================================================== */ + +#define SH_PIO_NACK_RESET 0x0000000110040280 +#define SH_PIO_NACK_RESET_MASK 0x0000000000000001 +#define 
SH_PIO_NACK_RESET_INIT 0x0000000000000000 + +/* SH_PIO_NACK_RESET_PULSE */ +/* Description: PIO nack counter reset */ +#define SH_PIO_NACK_RESET_PULSE_SHFT 0 +#define SH_PIO_NACK_RESET_PULSE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_CONVEYOR_BELT_TIME_OUT" */ +/* Local Block conveyor belt time out value */ +/* ==================================================================== */ + +#define SH_CONVEYOR_BELT_TIME_OUT 0x0000000110040300 +#define SH_CONVEYOR_BELT_TIME_OUT_MASK 0x0000000000000fff +#define SH_CONVEYOR_BELT_TIME_OUT_INIT 0x0000000000000000 + +/* SH_CONVEYOR_BELT_TIME_OUT_VALUE */ +/* Description: Conveyor belt time out value */ +#define SH_CONVEYOR_BELT_TIME_OUT_VALUE_SHFT 0 +#define SH_CONVEYOR_BELT_TIME_OUT_VALUE_MASK 0x0000000000000fff + +/* ==================================================================== */ +/* Register "SH_LB_CREDIT_STATUS" */ +/* Credit Counter Status Register */ +/* ==================================================================== */ + +#define SH_LB_CREDIT_STATUS 0x0000000110050000 +#define SH_LB_CREDIT_STATUS_MASK 0x000000000ffff3df +#define SH_LB_CREDIT_STATUS_INIT 0x0000000000000000 + +/* SH_LB_CREDIT_STATUS_LIQ_RQ_CREDIT */ +/* Description: LIQ request queue credit counter */ +#define SH_LB_CREDIT_STATUS_LIQ_RQ_CREDIT_SHFT 0 +#define SH_LB_CREDIT_STATUS_LIQ_RQ_CREDIT_MASK 0x000000000000001f + +/* SH_LB_CREDIT_STATUS_LIQ_RP_CREDIT */ +/* Description: LIQ reply queue credit counter */ +#define SH_LB_CREDIT_STATUS_LIQ_RP_CREDIT_SHFT 6 +#define SH_LB_CREDIT_STATUS_LIQ_RP_CREDIT_MASK 0x00000000000003c0 + +/* SH_LB_CREDIT_STATUS_LINVV_CREDIT */ +/* Description: LINVV credit counter */ +#define SH_LB_CREDIT_STATUS_LINVV_CREDIT_SHFT 12 +#define SH_LB_CREDIT_STATUS_LINVV_CREDIT_MASK 0x000000000003f000 + +/* SH_LB_CREDIT_STATUS_LOQ_RQ_CREDIT */ +/* Description: LOQ request queue credit counter */ +#define SH_LB_CREDIT_STATUS_LOQ_RQ_CREDIT_SHFT 18 +#define SH_LB_CREDIT_STATUS_LOQ_RQ_CREDIT_MASK 0x00000000007c0000 + +/* SH_LB_CREDIT_STATUS_LOQ_RP_CREDIT */ +/* Description: LOQ reply queue credit counter */ +#define SH_LB_CREDIT_STATUS_LOQ_RP_CREDIT_SHFT 23 +#define SH_LB_CREDIT_STATUS_LOQ_RP_CREDIT_MASK 0x000000000f800000 + +/* ==================================================================== */ +/* Register "SH_LB_DEBUG_LOCAL_SEL" */ +/* LB Debug Port Select */ +/* ==================================================================== */ + +#define SH_LB_DEBUG_LOCAL_SEL 0x0000000110050080 +#define SH_LB_DEBUG_LOCAL_SEL_MASK 0xf777777777777777 +#define SH_LB_DEBUG_LOCAL_SEL_INIT 0x0000000000000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_CHIPLET_SEL_SHFT 0 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_CHIPLET_SEL_SHFT 8 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_NIBBLE_SEL_MASK 
0x0000000000007000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_CHIPLET_SEL_SHFT 16 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_CHIPLET_SEL_SHFT 24 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_CHIPLET_SEL_SHFT 32 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_CHIPLET_SEL_SHFT 40 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_CHIPLET_SEL_SHFT 48 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_CHIPLET_SEL_SHFT 56 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* SH_LB_DEBUG_LOCAL_SEL_TRIGGER_ENABLE */ +/* Description: Enable trigger on bit 32 of Analyzer data */ +#define SH_LB_DEBUG_LOCAL_SEL_TRIGGER_ENABLE_SHFT 63 +#define SH_LB_DEBUG_LOCAL_SEL_TRIGGER_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_LB_DEBUG_PERF_SEL" */ +/* LB Debug Port Performance Select */ +/* ==================================================================== */ + +#define SH_LB_DEBUG_PERF_SEL 0x0000000110050100 +#define SH_LB_DEBUG_PERF_SEL_MASK 0x7777777777777777 +#define SH_LB_DEBUG_PERF_SEL_INIT 0x0000000000000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE0_CHIPLET_SEL_SHFT 0 +#define 
SH_LB_DEBUG_PERF_SEL_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE1_CHIPLET_SEL_SHFT 8 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE2_CHIPLET_SEL_SHFT 16 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE3_CHIPLET_SEL_SHFT 24 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE4_CHIPLET_SEL_SHFT 32 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE5_CHIPLET_SEL_SHFT 40 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE6_CHIPLET_SEL_SHFT 48 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE7_CHIPLET_SEL_SHFT 56 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_LB_DEBUG_PERF_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_LB_DEBUG_PERF_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_LB_DEBUG_PERF_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_LB_DEBUG_TRIG_SEL" */ +/* LB Debug 
Trigger Select */ +/* ==================================================================== */ + +#define SH_LB_DEBUG_TRIG_SEL 0x0000000110050180 +#define SH_LB_DEBUG_TRIG_SEL_MASK 0x7777777777777777 +#define SH_LB_DEBUG_TRIG_SEL_INIT 0x0000000000000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER0_CHIPLET_SEL_SHFT 0 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL_SHFT 4 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER1_CHIPLET_SEL_SHFT 8 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL_SHFT 12 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER2_CHIPLET_SEL_SHFT 16 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL_SHFT 20 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER3_CHIPLET_SEL_SHFT 24 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL_SHFT 28 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER4_CHIPLET_SEL_SHFT 32 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL_SHFT 36 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER5_CHIPLET_SEL_SHFT 40 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL_SHFT 44 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER6_CHIPLET_SEL_SHFT 48 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL_SHFT 52 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define 
SH_LB_DEBUG_TRIG_SEL_TRIGGER7_CHIPLET_SEL_SHFT 56 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_LB_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL_SHFT 60 +#define SH_LB_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_1" */ +/* LB Error capture information: HDR1 */ +/* ==================================================================== */ + +#define SH_LB_ERROR_DETAIL_1 0x0000000110050200 +#define SH_LB_ERROR_DETAIL_1_MASK 0x8003073fff3fffff +#define SH_LB_ERROR_DETAIL_1_INIT 0x0000000000000000 + +/* SH_LB_ERROR_DETAIL_1_COMMAND */ +/* Description: COMMAND */ +#define SH_LB_ERROR_DETAIL_1_COMMAND_SHFT 0 +#define SH_LB_ERROR_DETAIL_1_COMMAND_MASK 0x00000000000000ff + +/* SH_LB_ERROR_DETAIL_1_SUPPL */ +/* Description: SUPPLEMENTAL */ +#define SH_LB_ERROR_DETAIL_1_SUPPL_SHFT 8 +#define SH_LB_ERROR_DETAIL_1_SUPPL_MASK 0x00000000003fff00 + +/* SH_LB_ERROR_DETAIL_1_SOURCE */ +/* Description: SOURCE */ +#define SH_LB_ERROR_DETAIL_1_SOURCE_SHFT 24 +#define SH_LB_ERROR_DETAIL_1_SOURCE_MASK 0x0000003fff000000 + +/* SH_LB_ERROR_DETAIL_1_DEST */ +/* Description: DEST */ +#define SH_LB_ERROR_DETAIL_1_DEST_SHFT 40 +#define SH_LB_ERROR_DETAIL_1_DEST_MASK 0x0000070000000000 + +/* SH_LB_ERROR_DETAIL_1_HDR_ERR */ +/* Description: HDR_ERR */ +#define SH_LB_ERROR_DETAIL_1_HDR_ERR_SHFT 48 +#define SH_LB_ERROR_DETAIL_1_HDR_ERR_MASK 0x0001000000000000 + +/* SH_LB_ERROR_DETAIL_1_DATA_ERR */ +/* Description: DATA_ERR */ +#define SH_LB_ERROR_DETAIL_1_DATA_ERR_SHFT 49 +#define SH_LB_ERROR_DETAIL_1_DATA_ERR_MASK 0x0002000000000000 + +/* SH_LB_ERROR_DETAIL_1_VALID */ +/* Description: VALID */ +#define SH_LB_ERROR_DETAIL_1_VALID_SHFT 63 +#define SH_LB_ERROR_DETAIL_1_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_2" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#define SH_LB_ERROR_DETAIL_2 0x0000000110050280 +#define SH_LB_ERROR_DETAIL_2_MASK 0x00007fffffffffff +#define SH_LB_ERROR_DETAIL_2_INIT 0x0000000000000000 + +/* SH_LB_ERROR_DETAIL_2_ADDRESS */ +/* Description: ADDRESS */ +#define SH_LB_ERROR_DETAIL_2_ADDRESS_SHFT 0 +#define SH_LB_ERROR_DETAIL_2_ADDRESS_MASK 0x00007fffffffffff + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_3" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#define SH_LB_ERROR_DETAIL_3 0x0000000110050300 +#define SH_LB_ERROR_DETAIL_3_MASK 0xffffffffffffffff +#define SH_LB_ERROR_DETAIL_3_INIT 0x0000000000000000 + +/* SH_LB_ERROR_DETAIL_3_DATA */ +/* Description: DATA */ +#define SH_LB_ERROR_DETAIL_3_DATA_SHFT 0 +#define SH_LB_ERROR_DETAIL_3_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_4" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#define SH_LB_ERROR_DETAIL_4 0x0000000110050380 +#define SH_LB_ERROR_DETAIL_4_MASK 0xffffffffffffffff +#define SH_LB_ERROR_DETAIL_4_INIT 0x0000000000000000 + +/* SH_LB_ERROR_DETAIL_4_ROUTE */ +/* Description: ROUTE */ +#define SH_LB_ERROR_DETAIL_4_ROUTE_SHFT 0 +#define SH_LB_ERROR_DETAIL_4_ROUTE_MASK 
0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_5" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#define SH_LB_ERROR_DETAIL_5 0x0000000110050400 +#define SH_LB_ERROR_DETAIL_5_MASK 0x000000000000007f +#define SH_LB_ERROR_DETAIL_5_INIT 0x0000000000000000 + +/* SH_LB_ERROR_DETAIL_5_READ_RETRY */ +/* Description: Read retry error */ +#define SH_LB_ERROR_DETAIL_5_READ_RETRY_SHFT 0 +#define SH_LB_ERROR_DETAIL_5_READ_RETRY_MASK 0x0000000000000001 + +/* SH_LB_ERROR_DETAIL_5_PTC1_WRITE */ +/* Description: PTC1 write error */ +#define SH_LB_ERROR_DETAIL_5_PTC1_WRITE_SHFT 1 +#define SH_LB_ERROR_DETAIL_5_PTC1_WRITE_MASK 0x0000000000000002 + +/* SH_LB_ERROR_DETAIL_5_WRITE_RETRY */ +/* Description: Write retry error */ +#define SH_LB_ERROR_DETAIL_5_WRITE_RETRY_SHFT 2 +#define SH_LB_ERROR_DETAIL_5_WRITE_RETRY_MASK 0x0000000000000004 + +/* SH_LB_ERROR_DETAIL_5_COUNT_A_OVERFLOW */ +/* Description: Nack A counter overflow error */ +#define SH_LB_ERROR_DETAIL_5_COUNT_A_OVERFLOW_SHFT 3 +#define SH_LB_ERROR_DETAIL_5_COUNT_A_OVERFLOW_MASK 0x0000000000000008 + +/* SH_LB_ERROR_DETAIL_5_COUNT_B_OVERFLOW */ +/* Description: Nack B counter overflow error */ +#define SH_LB_ERROR_DETAIL_5_COUNT_B_OVERFLOW_SHFT 4 +#define SH_LB_ERROR_DETAIL_5_COUNT_B_OVERFLOW_MASK 0x0000000000000010 + +/* SH_LB_ERROR_DETAIL_5_NACK_A_TIMEOUT */ +/* Description: Nack A counter timeout error */ +#define SH_LB_ERROR_DETAIL_5_NACK_A_TIMEOUT_SHFT 5 +#define SH_LB_ERROR_DETAIL_5_NACK_A_TIMEOUT_MASK 0x0000000000000020 + +/* SH_LB_ERROR_DETAIL_5_NACK_B_TIMEOUT */ +/* Description: Nack B counter timeout error */ +#define SH_LB_ERROR_DETAIL_5_NACK_B_TIMEOUT_SHFT 6 +#define SH_LB_ERROR_DETAIL_5_NACK_B_TIMEOUT_MASK 0x0000000000000040 + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_MASK" */ +/* LB Error Mask */ +/* ==================================================================== */ + +#define SH_LB_ERROR_MASK 0x0000000110050480 +#define SH_LB_ERROR_MASK_MASK 0x00000000007fffff +#define SH_LB_ERROR_MASK_INIT 0x00000000007fffff + +/* SH_LB_ERROR_MASK_RQ_BAD_CMD */ +/* Description: RQ_BAD_CMD */ +#define SH_LB_ERROR_MASK_RQ_BAD_CMD_SHFT 0 +#define SH_LB_ERROR_MASK_RQ_BAD_CMD_MASK 0x0000000000000001 + +/* SH_LB_ERROR_MASK_RP_BAD_CMD */ +/* Description: RP_BAD_CMD */ +#define SH_LB_ERROR_MASK_RP_BAD_CMD_SHFT 1 +#define SH_LB_ERROR_MASK_RP_BAD_CMD_MASK 0x0000000000000002 + +/* SH_LB_ERROR_MASK_RQ_SHORT */ +/* Description: RQ_SHORT */ +#define SH_LB_ERROR_MASK_RQ_SHORT_SHFT 2 +#define SH_LB_ERROR_MASK_RQ_SHORT_MASK 0x0000000000000004 + +/* SH_LB_ERROR_MASK_RP_SHORT */ +/* Description: RP_SHORT */ +#define SH_LB_ERROR_MASK_RP_SHORT_SHFT 3 +#define SH_LB_ERROR_MASK_RP_SHORT_MASK 0x0000000000000008 + +/* SH_LB_ERROR_MASK_RQ_LONG */ +/* Description: RQ_LONG */ +#define SH_LB_ERROR_MASK_RQ_LONG_SHFT 4 +#define SH_LB_ERROR_MASK_RQ_LONG_MASK 0x0000000000000010 + +/* SH_LB_ERROR_MASK_RP_LONG */ +/* Description: RP_LONG */ +#define SH_LB_ERROR_MASK_RP_LONG_SHFT 5 +#define SH_LB_ERROR_MASK_RP_LONG_MASK 0x0000000000000020 + +/* SH_LB_ERROR_MASK_RQ_BAD_DATA */ +/* Description: RQ_BAD_DATA */ +#define SH_LB_ERROR_MASK_RQ_BAD_DATA_SHFT 6 +#define SH_LB_ERROR_MASK_RQ_BAD_DATA_MASK 0x0000000000000040 + +/* SH_LB_ERROR_MASK_RP_BAD_DATA */ +/* Description: RP_BAD_DATA */ +#define SH_LB_ERROR_MASK_RP_BAD_DATA_SHFT 7 +#define SH_LB_ERROR_MASK_RP_BAD_DATA_MASK 0x0000000000000080 + 
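Each field in the registers above and below is described by a *_SHFT/*_MASK pair, and the intended use is the usual mask-then-shift on a 64-bit MMR read. A minimal sketch follows (illustrative only, not part of the patch hunk itself); the shub_mmr_read() accessor is a hypothetical stand-in for whatever 64-bit read helper the platform code actually provides.

#include <linux/types.h>

/* Hypothetical 64-bit MMR read accessor -- assumed here purely for illustration. */
extern u64 shub_mmr_read(unsigned long mmr);

/* Extract the LIQ request-queue credit count from SH_LB_CREDIT_STATUS. */
static inline u64 lb_liq_rq_credit(void)
{
	u64 status = shub_mmr_read(SH_LB_CREDIT_STATUS);

	return (status & SH_LB_CREDIT_STATUS_LIQ_RQ_CREDIT_MASK) >>
	       SH_LB_CREDIT_STATUS_LIQ_RQ_CREDIT_SHFT;
}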
+/* SH_LB_ERROR_MASK_RQ_BAD_ADDR */ +/* Description: RQ_BAD_ADDR */ +#define SH_LB_ERROR_MASK_RQ_BAD_ADDR_SHFT 8 +#define SH_LB_ERROR_MASK_RQ_BAD_ADDR_MASK 0x0000000000000100 + +/* SH_LB_ERROR_MASK_RQ_TIME_OUT */ +/* Description: RQ_TIME_OUT */ +#define SH_LB_ERROR_MASK_RQ_TIME_OUT_SHFT 9 +#define SH_LB_ERROR_MASK_RQ_TIME_OUT_MASK 0x0000000000000200 + +/* SH_LB_ERROR_MASK_LINVV_OVERFLOW */ +/* Description: LINVV_OVERFLOW */ +#define SH_LB_ERROR_MASK_LINVV_OVERFLOW_SHFT 10 +#define SH_LB_ERROR_MASK_LINVV_OVERFLOW_MASK 0x0000000000000400 + +/* SH_LB_ERROR_MASK_UNEXPECTED_LINV */ +/* Description: UNEXPECTED_LINV */ +#define SH_LB_ERROR_MASK_UNEXPECTED_LINV_SHFT 11 +#define SH_LB_ERROR_MASK_UNEXPECTED_LINV_MASK 0x0000000000000800 + +/* SH_LB_ERROR_MASK_PTC_1_TIMEOUT */ +/* Description: PTC_1 Time out */ +#define SH_LB_ERROR_MASK_PTC_1_TIMEOUT_SHFT 12 +#define SH_LB_ERROR_MASK_PTC_1_TIMEOUT_MASK 0x0000000000001000 + +/* SH_LB_ERROR_MASK_JUNK_BUS_ERR */ +/* Description: Junk Bus error */ +#define SH_LB_ERROR_MASK_JUNK_BUS_ERR_SHFT 13 +#define SH_LB_ERROR_MASK_JUNK_BUS_ERR_MASK 0x0000000000002000 + +/* SH_LB_ERROR_MASK_PIO_CB_ERR */ +/* Description: PIO Conveyor Belt operation error */ +#define SH_LB_ERROR_MASK_PIO_CB_ERR_SHFT 14 +#define SH_LB_ERROR_MASK_PIO_CB_ERR_MASK 0x0000000000004000 + +/* SH_LB_ERROR_MASK_VECTOR_RQ_ROUTE_ERROR */ +/* Description: Vector request Route data was invalid */ +#define SH_LB_ERROR_MASK_VECTOR_RQ_ROUTE_ERROR_SHFT 15 +#define SH_LB_ERROR_MASK_VECTOR_RQ_ROUTE_ERROR_MASK 0x0000000000008000 + +/* SH_LB_ERROR_MASK_VECTOR_RP_ROUTE_ERROR */ +/* Description: Vector reply Route data was invalid */ +#define SH_LB_ERROR_MASK_VECTOR_RP_ROUTE_ERROR_SHFT 16 +#define SH_LB_ERROR_MASK_VECTOR_RP_ROUTE_ERROR_MASK 0x0000000000010000 + +/* SH_LB_ERROR_MASK_GCLK_DROP */ +/* Description: Gclk drop error */ +#define SH_LB_ERROR_MASK_GCLK_DROP_SHFT 17 +#define SH_LB_ERROR_MASK_GCLK_DROP_MASK 0x0000000000020000 + +/* SH_LB_ERROR_MASK_RQ_FIFO_ERROR */ +/* Description: Request queue FIFO error */ +#define SH_LB_ERROR_MASK_RQ_FIFO_ERROR_SHFT 18 +#define SH_LB_ERROR_MASK_RQ_FIFO_ERROR_MASK 0x0000000000040000 + +/* SH_LB_ERROR_MASK_RP_FIFO_ERROR */ +/* Description: Reply queue FIFO error */ +#define SH_LB_ERROR_MASK_RP_FIFO_ERROR_SHFT 19 +#define SH_LB_ERROR_MASK_RP_FIFO_ERROR_MASK 0x0000000000080000 + +/* SH_LB_ERROR_MASK_UNEXP_VALID */ +/* Description: Unexpected valid error */ +#define SH_LB_ERROR_MASK_UNEXP_VALID_SHFT 20 +#define SH_LB_ERROR_MASK_UNEXP_VALID_MASK 0x0000000000100000 + +/* SH_LB_ERROR_MASK_RQ_CREDIT_OVERFLOW */ +/* Description: Request queue credit overflow */ +#define SH_LB_ERROR_MASK_RQ_CREDIT_OVERFLOW_SHFT 21 +#define SH_LB_ERROR_MASK_RQ_CREDIT_OVERFLOW_MASK 0x0000000000200000 + +/* SH_LB_ERROR_MASK_RP_CREDIT_OVERFLOW */ +/* Description: Reply queue credit overflow */ +#define SH_LB_ERROR_MASK_RP_CREDIT_OVERFLOW_SHFT 22 +#define SH_LB_ERROR_MASK_RP_CREDIT_OVERFLOW_MASK 0x0000000000400000 + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_OVERFLOW" */ +/* LB Error Overflow */ +/* ==================================================================== */ + +#define SH_LB_ERROR_OVERFLOW 0x0000000110050500 +#define SH_LB_ERROR_OVERFLOW_MASK 0x00000000007fffff +#define SH_LB_ERROR_OVERFLOW_INIT 0x0000000000000000 + +/* SH_LB_ERROR_OVERFLOW_RQ_BAD_CMD_OVRFL */ +/* Description: RQ_BAD_CMD_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RQ_BAD_CMD_OVRFL_SHFT 0 +#define SH_LB_ERROR_OVERFLOW_RQ_BAD_CMD_OVRFL_MASK 0x0000000000000001 + +/* 
SH_LB_ERROR_OVERFLOW_RP_BAD_CMD_OVRFL */ +/* Description: RP_BAD_CMD_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RP_BAD_CMD_OVRFL_SHFT 1 +#define SH_LB_ERROR_OVERFLOW_RP_BAD_CMD_OVRFL_MASK 0x0000000000000002 + +/* SH_LB_ERROR_OVERFLOW_RQ_SHORT_OVRFL */ +/* Description: RQ_SHORT_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RQ_SHORT_OVRFL_SHFT 2 +#define SH_LB_ERROR_OVERFLOW_RQ_SHORT_OVRFL_MASK 0x0000000000000004 + +/* SH_LB_ERROR_OVERFLOW_RP_SHORT_OVRFL */ +/* Description: RP_SHORT_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RP_SHORT_OVRFL_SHFT 3 +#define SH_LB_ERROR_OVERFLOW_RP_SHORT_OVRFL_MASK 0x0000000000000008 + +/* SH_LB_ERROR_OVERFLOW_RQ_LONG_OVRFL */ +/* Description: RQ_LONG_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RQ_LONG_OVRFL_SHFT 4 +#define SH_LB_ERROR_OVERFLOW_RQ_LONG_OVRFL_MASK 0x0000000000000010 + +/* SH_LB_ERROR_OVERFLOW_RP_LONG_OVRFL */ +/* Description: RP_LONG_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RP_LONG_OVRFL_SHFT 5 +#define SH_LB_ERROR_OVERFLOW_RP_LONG_OVRFL_MASK 0x0000000000000020 + +/* SH_LB_ERROR_OVERFLOW_RQ_BAD_DATA_OVRFL */ +/* Description: RQ_BAD_DATA_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RQ_BAD_DATA_OVRFL_SHFT 6 +#define SH_LB_ERROR_OVERFLOW_RQ_BAD_DATA_OVRFL_MASK 0x0000000000000040 + +/* SH_LB_ERROR_OVERFLOW_RP_BAD_DATA_OVRFL */ +/* Description: RP_BAD_DATA_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RP_BAD_DATA_OVRFL_SHFT 7 +#define SH_LB_ERROR_OVERFLOW_RP_BAD_DATA_OVRFL_MASK 0x0000000000000080 + +/* SH_LB_ERROR_OVERFLOW_RQ_BAD_ADDR_OVRFL */ +/* Description: RQ_BAD_ADDR_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RQ_BAD_ADDR_OVRFL_SHFT 8 +#define SH_LB_ERROR_OVERFLOW_RQ_BAD_ADDR_OVRFL_MASK 0x0000000000000100 + +/* SH_LB_ERROR_OVERFLOW_RQ_TIME_OUT_OVRFL */ +/* Description: RQ_TIME_OUT_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_RQ_TIME_OUT_OVRFL_SHFT 9 +#define SH_LB_ERROR_OVERFLOW_RQ_TIME_OUT_OVRFL_MASK 0x0000000000000200 + +/* SH_LB_ERROR_OVERFLOW_LINVV_OVERFLOW_OVRFL */ +/* Description: LINVV_OVERFLOW_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_LINVV_OVERFLOW_OVRFL_SHFT 10 +#define SH_LB_ERROR_OVERFLOW_LINVV_OVERFLOW_OVRFL_MASK 0x0000000000000400 + +/* SH_LB_ERROR_OVERFLOW_UNEXPECTED_LINV_OVRFL */ +/* Description: UNEXPECTED_LINV_OVRFL */ +#define SH_LB_ERROR_OVERFLOW_UNEXPECTED_LINV_OVRFL_SHFT 11 +#define SH_LB_ERROR_OVERFLOW_UNEXPECTED_LINV_OVRFL_MASK 0x0000000000000800 + +/* SH_LB_ERROR_OVERFLOW_PTC_1_TIMEOUT_OVRFL */ +/* Description: PTC_1 Time out overflow */ +#define SH_LB_ERROR_OVERFLOW_PTC_1_TIMEOUT_OVRFL_SHFT 12 +#define SH_LB_ERROR_OVERFLOW_PTC_1_TIMEOUT_OVRFL_MASK 0x0000000000001000 + +/* SH_LB_ERROR_OVERFLOW_JUNK_BUS_ERR_OVRFL */ +/* Description: Junk Bus error overflow */ +#define SH_LB_ERROR_OVERFLOW_JUNK_BUS_ERR_OVRFL_SHFT 13 +#define SH_LB_ERROR_OVERFLOW_JUNK_BUS_ERR_OVRFL_MASK 0x0000000000002000 + +/* SH_LB_ERROR_OVERFLOW_PIO_CB_ERR_OVRFL */ +/* Description: PIO Conveyor Belt operation error overflow */ +#define SH_LB_ERROR_OVERFLOW_PIO_CB_ERR_OVRFL_SHFT 14 +#define SH_LB_ERROR_OVERFLOW_PIO_CB_ERR_OVRFL_MASK 0x0000000000004000 + +/* SH_LB_ERROR_OVERFLOW_VECTOR_RQ_ROUTE_ERROR_OVRFL */ +/* Description: Vector request Route data was invalid overflow */ +#define SH_LB_ERROR_OVERFLOW_VECTOR_RQ_ROUTE_ERROR_OVRFL_SHFT 15 +#define SH_LB_ERROR_OVERFLOW_VECTOR_RQ_ROUTE_ERROR_OVRFL_MASK 0x0000000000008000 + +/* SH_LB_ERROR_OVERFLOW_VECTOR_RP_ROUTE_ERROR_OVRFL */ +/* Description: Vector reply Route data was invalid overflow */ +#define SH_LB_ERROR_OVERFLOW_VECTOR_RP_ROUTE_ERROR_OVRFL_SHFT 16 +#define SH_LB_ERROR_OVERFLOW_VECTOR_RP_ROUTE_ERROR_OVRFL_MASK 0x0000000000010000 + +/* 
SH_LB_ERROR_OVERFLOW_GCLK_DROP_OVRFL */ +/* Description: Gclk drop error overflow */ +#define SH_LB_ERROR_OVERFLOW_GCLK_DROP_OVRFL_SHFT 17 +#define SH_LB_ERROR_OVERFLOW_GCLK_DROP_OVRFL_MASK 0x0000000000020000 + +/* SH_LB_ERROR_OVERFLOW_RQ_FIFO_ERROR_OVRFL */ +/* Description: Request queue FIFO error overflow */ +#define SH_LB_ERROR_OVERFLOW_RQ_FIFO_ERROR_OVRFL_SHFT 18 +#define SH_LB_ERROR_OVERFLOW_RQ_FIFO_ERROR_OVRFL_MASK 0x0000000000040000 + +/* SH_LB_ERROR_OVERFLOW_RP_FIFO_ERROR_OVRFL */ +/* Description: Reply queue FIFO error overflow */ +#define SH_LB_ERROR_OVERFLOW_RP_FIFO_ERROR_OVRFL_SHFT 19 +#define SH_LB_ERROR_OVERFLOW_RP_FIFO_ERROR_OVRFL_MASK 0x0000000000080000 + +/* SH_LB_ERROR_OVERFLOW_UNEXP_VALID_OVRFL */ +/* Description: Unexpected valid error overflow */ +#define SH_LB_ERROR_OVERFLOW_UNEXP_VALID_OVRFL_SHFT 20 +#define SH_LB_ERROR_OVERFLOW_UNEXP_VALID_OVRFL_MASK 0x0000000000100000 + +/* SH_LB_ERROR_OVERFLOW_RQ_CREDIT_OVERFLOW_OVRFL */ +/* Description: Request queue credit overflow */ +#define SH_LB_ERROR_OVERFLOW_RQ_CREDIT_OVERFLOW_OVRFL_SHFT 21 +#define SH_LB_ERROR_OVERFLOW_RQ_CREDIT_OVERFLOW_OVRFL_MASK 0x0000000000200000 + +/* SH_LB_ERROR_OVERFLOW_RP_CREDIT_OVERFLOW_OVRFL */ +/* Description: Reply queue credit overflow */ +#define SH_LB_ERROR_OVERFLOW_RP_CREDIT_OVERFLOW_OVRFL_SHFT 22 +#define SH_LB_ERROR_OVERFLOW_RP_CREDIT_OVERFLOW_OVRFL_MASK 0x0000000000400000 + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_OVERFLOW_ALIAS" */ +/* LB Error Overflow */ +/* ==================================================================== */ + +#define SH_LB_ERROR_OVERFLOW_ALIAS 0x0000000110050508 + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_SUMMARY" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#define SH_LB_ERROR_SUMMARY 0x0000000110050580 +#define SH_LB_ERROR_SUMMARY_MASK 0x00000000007fffff +#define SH_LB_ERROR_SUMMARY_INIT 0x0000000000000000 + +/* SH_LB_ERROR_SUMMARY_RQ_BAD_CMD */ +/* Description: RQ_BAD_CMD */ +#define SH_LB_ERROR_SUMMARY_RQ_BAD_CMD_SHFT 0 +#define SH_LB_ERROR_SUMMARY_RQ_BAD_CMD_MASK 0x0000000000000001 + +/* SH_LB_ERROR_SUMMARY_RP_BAD_CMD */ +/* Description: RP_BAD_CMD */ +#define SH_LB_ERROR_SUMMARY_RP_BAD_CMD_SHFT 1 +#define SH_LB_ERROR_SUMMARY_RP_BAD_CMD_MASK 0x0000000000000002 + +/* SH_LB_ERROR_SUMMARY_RQ_SHORT */ +/* Description: RQ_SHORT */ +#define SH_LB_ERROR_SUMMARY_RQ_SHORT_SHFT 2 +#define SH_LB_ERROR_SUMMARY_RQ_SHORT_MASK 0x0000000000000004 + +/* SH_LB_ERROR_SUMMARY_RP_SHORT */ +/* Description: RP_SHORT */ +#define SH_LB_ERROR_SUMMARY_RP_SHORT_SHFT 3 +#define SH_LB_ERROR_SUMMARY_RP_SHORT_MASK 0x0000000000000008 + +/* SH_LB_ERROR_SUMMARY_RQ_LONG */ +/* Description: RQ_LONG */ +#define SH_LB_ERROR_SUMMARY_RQ_LONG_SHFT 4 +#define SH_LB_ERROR_SUMMARY_RQ_LONG_MASK 0x0000000000000010 + +/* SH_LB_ERROR_SUMMARY_RP_LONG */ +/* Description: RP_LONG */ +#define SH_LB_ERROR_SUMMARY_RP_LONG_SHFT 5 +#define SH_LB_ERROR_SUMMARY_RP_LONG_MASK 0x0000000000000020 + +/* SH_LB_ERROR_SUMMARY_RQ_BAD_DATA */ +/* Description: RQ_BAD_DATA */ +#define SH_LB_ERROR_SUMMARY_RQ_BAD_DATA_SHFT 6 +#define SH_LB_ERROR_SUMMARY_RQ_BAD_DATA_MASK 0x0000000000000040 + +/* SH_LB_ERROR_SUMMARY_RP_BAD_DATA */ +/* Description: RP_BAD_DATA */ +#define SH_LB_ERROR_SUMMARY_RP_BAD_DATA_SHFT 7 +#define SH_LB_ERROR_SUMMARY_RP_BAD_DATA_MASK 0x0000000000000080 + +/* SH_LB_ERROR_SUMMARY_RQ_BAD_ADDR */ +/* Description: RQ_BAD_ADDR */ +#define 
SH_LB_ERROR_SUMMARY_RQ_BAD_ADDR_SHFT 8 +#define SH_LB_ERROR_SUMMARY_RQ_BAD_ADDR_MASK 0x0000000000000100 + +/* SH_LB_ERROR_SUMMARY_RQ_TIME_OUT */ +/* Description: RQ_TIME_OUT */ +#define SH_LB_ERROR_SUMMARY_RQ_TIME_OUT_SHFT 9 +#define SH_LB_ERROR_SUMMARY_RQ_TIME_OUT_MASK 0x0000000000000200 + +/* SH_LB_ERROR_SUMMARY_LINVV_OVERFLOW */ +/* Description: LINVV_OVERFLOW */ +#define SH_LB_ERROR_SUMMARY_LINVV_OVERFLOW_SHFT 10 +#define SH_LB_ERROR_SUMMARY_LINVV_OVERFLOW_MASK 0x0000000000000400 + +/* SH_LB_ERROR_SUMMARY_UNEXPECTED_LINV */ +/* Description: UNEXPECTED_LINV */ +#define SH_LB_ERROR_SUMMARY_UNEXPECTED_LINV_SHFT 11 +#define SH_LB_ERROR_SUMMARY_UNEXPECTED_LINV_MASK 0x0000000000000800 + +/* SH_LB_ERROR_SUMMARY_PTC_1_TIMEOUT */ +/* Description: PTC_1 Time out */ +#define SH_LB_ERROR_SUMMARY_PTC_1_TIMEOUT_SHFT 12 +#define SH_LB_ERROR_SUMMARY_PTC_1_TIMEOUT_MASK 0x0000000000001000 + +/* SH_LB_ERROR_SUMMARY_JUNK_BUS_ERR */ +/* Description: Junk Bus error */ +#define SH_LB_ERROR_SUMMARY_JUNK_BUS_ERR_SHFT 13 +#define SH_LB_ERROR_SUMMARY_JUNK_BUS_ERR_MASK 0x0000000000002000 + +/* SH_LB_ERROR_SUMMARY_PIO_CB_ERR */ +/* Description: PIO Conveyor Belt operation error */ +#define SH_LB_ERROR_SUMMARY_PIO_CB_ERR_SHFT 14 +#define SH_LB_ERROR_SUMMARY_PIO_CB_ERR_MASK 0x0000000000004000 + +/* SH_LB_ERROR_SUMMARY_VECTOR_RQ_ROUTE_ERROR */ +/* Description: Vector request Route data was invalid */ +#define SH_LB_ERROR_SUMMARY_VECTOR_RQ_ROUTE_ERROR_SHFT 15 +#define SH_LB_ERROR_SUMMARY_VECTOR_RQ_ROUTE_ERROR_MASK 0x0000000000008000 + +/* SH_LB_ERROR_SUMMARY_VECTOR_RP_ROUTE_ERROR */ +/* Description: Vector reply Route data was invalid */ +#define SH_LB_ERROR_SUMMARY_VECTOR_RP_ROUTE_ERROR_SHFT 16 +#define SH_LB_ERROR_SUMMARY_VECTOR_RP_ROUTE_ERROR_MASK 0x0000000000010000 + +/* SH_LB_ERROR_SUMMARY_GCLK_DROP */ +/* Description: Gclk drop error */ +#define SH_LB_ERROR_SUMMARY_GCLK_DROP_SHFT 17 +#define SH_LB_ERROR_SUMMARY_GCLK_DROP_MASK 0x0000000000020000 + +/* SH_LB_ERROR_SUMMARY_RQ_FIFO_ERROR */ +/* Description: Request queue FIFO error */ +#define SH_LB_ERROR_SUMMARY_RQ_FIFO_ERROR_SHFT 18 +#define SH_LB_ERROR_SUMMARY_RQ_FIFO_ERROR_MASK 0x0000000000040000 + +/* SH_LB_ERROR_SUMMARY_RP_FIFO_ERROR */ +/* Description: Reply queue FIFO error */ +#define SH_LB_ERROR_SUMMARY_RP_FIFO_ERROR_SHFT 19 +#define SH_LB_ERROR_SUMMARY_RP_FIFO_ERROR_MASK 0x0000000000080000 + +/* SH_LB_ERROR_SUMMARY_UNEXP_VALID */ +/* Description: Unexpected valid error */ +#define SH_LB_ERROR_SUMMARY_UNEXP_VALID_SHFT 20 +#define SH_LB_ERROR_SUMMARY_UNEXP_VALID_MASK 0x0000000000100000 + +/* SH_LB_ERROR_SUMMARY_RQ_CREDIT_OVERFLOW */ +/* Description: Request queue credit overflow */ +#define SH_LB_ERROR_SUMMARY_RQ_CREDIT_OVERFLOW_SHFT 21 +#define SH_LB_ERROR_SUMMARY_RQ_CREDIT_OVERFLOW_MASK 0x0000000000200000 + +/* SH_LB_ERROR_SUMMARY_RP_CREDIT_OVERFLOW */ +/* Description: Reply queue credit overflow */ +#define SH_LB_ERROR_SUMMARY_RP_CREDIT_OVERFLOW_SHFT 22 +#define SH_LB_ERROR_SUMMARY_RP_CREDIT_OVERFLOW_MASK 0x0000000000400000 + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_SUMMARY_ALIAS" */ +/* LB Error Bits Alias */ +/* ==================================================================== */ + +#define SH_LB_ERROR_SUMMARY_ALIAS 0x0000000110050588 + +/* ==================================================================== */ +/* Register "SH_LB_FIRST_ERROR" */ +/* LB First Error */ +/* ==================================================================== */ + +#define SH_LB_FIRST_ERROR 0x0000000110050600 
+#define SH_LB_FIRST_ERROR_MASK 0x00000000007fffff +#define SH_LB_FIRST_ERROR_INIT 0x0000000000000000 + +/* SH_LB_FIRST_ERROR_RQ_BAD_CMD */ +/* Description: RQ_BAD_CMD */ +#define SH_LB_FIRST_ERROR_RQ_BAD_CMD_SHFT 0 +#define SH_LB_FIRST_ERROR_RQ_BAD_CMD_MASK 0x0000000000000001 + +/* SH_LB_FIRST_ERROR_RP_BAD_CMD */ +/* Description: RP_BAD_CMD */ +#define SH_LB_FIRST_ERROR_RP_BAD_CMD_SHFT 1 +#define SH_LB_FIRST_ERROR_RP_BAD_CMD_MASK 0x0000000000000002 + +/* SH_LB_FIRST_ERROR_RQ_SHORT */ +/* Description: RQ_SHORT */ +#define SH_LB_FIRST_ERROR_RQ_SHORT_SHFT 2 +#define SH_LB_FIRST_ERROR_RQ_SHORT_MASK 0x0000000000000004 + +/* SH_LB_FIRST_ERROR_RP_SHORT */ +/* Description: RP_SHORT */ +#define SH_LB_FIRST_ERROR_RP_SHORT_SHFT 3 +#define SH_LB_FIRST_ERROR_RP_SHORT_MASK 0x0000000000000008 + +/* SH_LB_FIRST_ERROR_RQ_LONG */ +/* Description: RQ_LONG */ +#define SH_LB_FIRST_ERROR_RQ_LONG_SHFT 4 +#define SH_LB_FIRST_ERROR_RQ_LONG_MASK 0x0000000000000010 + +/* SH_LB_FIRST_ERROR_RP_LONG */ +/* Description: RP_LONG */ +#define SH_LB_FIRST_ERROR_RP_LONG_SHFT 5 +#define SH_LB_FIRST_ERROR_RP_LONG_MASK 0x0000000000000020 + +/* SH_LB_FIRST_ERROR_RQ_BAD_DATA */ +/* Description: RQ_BAD_DATA */ +#define SH_LB_FIRST_ERROR_RQ_BAD_DATA_SHFT 6 +#define SH_LB_FIRST_ERROR_RQ_BAD_DATA_MASK 0x0000000000000040 + +/* SH_LB_FIRST_ERROR_RP_BAD_DATA */ +/* Description: RP_BAD_DATA */ +#define SH_LB_FIRST_ERROR_RP_BAD_DATA_SHFT 7 +#define SH_LB_FIRST_ERROR_RP_BAD_DATA_MASK 0x0000000000000080 + +/* SH_LB_FIRST_ERROR_RQ_BAD_ADDR */ +/* Description: RQ_BAD_ADDR */ +#define SH_LB_FIRST_ERROR_RQ_BAD_ADDR_SHFT 8 +#define SH_LB_FIRST_ERROR_RQ_BAD_ADDR_MASK 0x0000000000000100 + +/* SH_LB_FIRST_ERROR_RQ_TIME_OUT */ +/* Description: RQ_TIME_OUT */ +#define SH_LB_FIRST_ERROR_RQ_TIME_OUT_SHFT 9 +#define SH_LB_FIRST_ERROR_RQ_TIME_OUT_MASK 0x0000000000000200 + +/* SH_LB_FIRST_ERROR_LINVV_OVERFLOW */ +/* Description: LINVV_OVERFLOW */ +#define SH_LB_FIRST_ERROR_LINVV_OVERFLOW_SHFT 10 +#define SH_LB_FIRST_ERROR_LINVV_OVERFLOW_MASK 0x0000000000000400 + +/* SH_LB_FIRST_ERROR_UNEXPECTED_LINV */ +/* Description: UNEXPECTED_LINV */ +#define SH_LB_FIRST_ERROR_UNEXPECTED_LINV_SHFT 11 +#define SH_LB_FIRST_ERROR_UNEXPECTED_LINV_MASK 0x0000000000000800 + +/* SH_LB_FIRST_ERROR_PTC_1_TIMEOUT */ +/* Description: PTC_1 Time out */ +#define SH_LB_FIRST_ERROR_PTC_1_TIMEOUT_SHFT 12 +#define SH_LB_FIRST_ERROR_PTC_1_TIMEOUT_MASK 0x0000000000001000 + +/* SH_LB_FIRST_ERROR_JUNK_BUS_ERR */ +/* Description: Junk Bus error */ +#define SH_LB_FIRST_ERROR_JUNK_BUS_ERR_SHFT 13 +#define SH_LB_FIRST_ERROR_JUNK_BUS_ERR_MASK 0x0000000000002000 + +/* SH_LB_FIRST_ERROR_PIO_CB_ERR */ +/* Description: PIO Conveyor Belt operation error */ +#define SH_LB_FIRST_ERROR_PIO_CB_ERR_SHFT 14 +#define SH_LB_FIRST_ERROR_PIO_CB_ERR_MASK 0x0000000000004000 + +/* SH_LB_FIRST_ERROR_VECTOR_RQ_ROUTE_ERROR */ +/* Description: Vector request Route data was invalid */ +#define SH_LB_FIRST_ERROR_VECTOR_RQ_ROUTE_ERROR_SHFT 15 +#define SH_LB_FIRST_ERROR_VECTOR_RQ_ROUTE_ERROR_MASK 0x0000000000008000 + +/* SH_LB_FIRST_ERROR_VECTOR_RP_ROUTE_ERROR */ +/* Description: Vector reply Route data was invalid */ +#define SH_LB_FIRST_ERROR_VECTOR_RP_ROUTE_ERROR_SHFT 16 +#define SH_LB_FIRST_ERROR_VECTOR_RP_ROUTE_ERROR_MASK 0x0000000000010000 + +/* SH_LB_FIRST_ERROR_GCLK_DROP */ +/* Description: Gclk drop error */ +#define SH_LB_FIRST_ERROR_GCLK_DROP_SHFT 17 +#define SH_LB_FIRST_ERROR_GCLK_DROP_MASK 0x0000000000020000 + +/* SH_LB_FIRST_ERROR_RQ_FIFO_ERROR */ +/* Description: Request queue FIFO error */ +#define 
SH_LB_FIRST_ERROR_RQ_FIFO_ERROR_SHFT 18 +#define SH_LB_FIRST_ERROR_RQ_FIFO_ERROR_MASK 0x0000000000040000 + +/* SH_LB_FIRST_ERROR_RP_FIFO_ERROR */ +/* Description: Reply queue FIFO error */ +#define SH_LB_FIRST_ERROR_RP_FIFO_ERROR_SHFT 19 +#define SH_LB_FIRST_ERROR_RP_FIFO_ERROR_MASK 0x0000000000080000 + +/* SH_LB_FIRST_ERROR_UNEXP_VALID */ +/* Description: Unexpected valid error */ +#define SH_LB_FIRST_ERROR_UNEXP_VALID_SHFT 20 +#define SH_LB_FIRST_ERROR_UNEXP_VALID_MASK 0x0000000000100000 + +/* SH_LB_FIRST_ERROR_RQ_CREDIT_OVERFLOW */ +/* Description: Request queue credit overflow */ +#define SH_LB_FIRST_ERROR_RQ_CREDIT_OVERFLOW_SHFT 21 +#define SH_LB_FIRST_ERROR_RQ_CREDIT_OVERFLOW_MASK 0x0000000000200000 + +/* SH_LB_FIRST_ERROR_RP_CREDIT_OVERFLOW */ +/* Description: Reply queue credit overflow */ +#define SH_LB_FIRST_ERROR_RP_CREDIT_OVERFLOW_SHFT 22 +#define SH_LB_FIRST_ERROR_RP_CREDIT_OVERFLOW_MASK 0x0000000000400000 + +/* ==================================================================== */ +/* Register "SH_LB_LAST_CREDIT" */ +/* Credit counter status register */ +/* ==================================================================== */ + +#define SH_LB_LAST_CREDIT 0x0000000110050680 +#define SH_LB_LAST_CREDIT_MASK 0x000000000ffff3df +#define SH_LB_LAST_CREDIT_INIT 0x0000000000000000 + +/* SH_LB_LAST_CREDIT_LIQ_RQ_CREDIT */ +/* Description: LIQ request queue credit counter */ +#define SH_LB_LAST_CREDIT_LIQ_RQ_CREDIT_SHFT 0 +#define SH_LB_LAST_CREDIT_LIQ_RQ_CREDIT_MASK 0x000000000000001f + +/* SH_LB_LAST_CREDIT_LIQ_RP_CREDIT */ +/* Description: LIQ reply queue credit counter */ +#define SH_LB_LAST_CREDIT_LIQ_RP_CREDIT_SHFT 6 +#define SH_LB_LAST_CREDIT_LIQ_RP_CREDIT_MASK 0x00000000000003c0 + +/* SH_LB_LAST_CREDIT_LINVV_CREDIT */ +/* Description: LINVV credit counter */ +#define SH_LB_LAST_CREDIT_LINVV_CREDIT_SHFT 12 +#define SH_LB_LAST_CREDIT_LINVV_CREDIT_MASK 0x000000000003f000 + +/* SH_LB_LAST_CREDIT_LOQ_RQ_CREDIT */ +/* Description: LOQ request queue credit counter */ +#define SH_LB_LAST_CREDIT_LOQ_RQ_CREDIT_SHFT 18 +#define SH_LB_LAST_CREDIT_LOQ_RQ_CREDIT_MASK 0x00000000007c0000 + +/* SH_LB_LAST_CREDIT_LOQ_RP_CREDIT */ +/* Description: LOQ reply queue credit counter */ +#define SH_LB_LAST_CREDIT_LOQ_RP_CREDIT_SHFT 23 +#define SH_LB_LAST_CREDIT_LOQ_RP_CREDIT_MASK 0x000000000f800000 + +/* ==================================================================== */ +/* Register "SH_LB_NACK_STATUS" */ +/* Nack Counter Status Register */ +/* ==================================================================== */ + +#define SH_LB_NACK_STATUS 0x0000000110050700 +#define SH_LB_NACK_STATUS_MASK 0x3fffffff0fff0fff +#define SH_LB_NACK_STATUS_INIT 0x0000000000000000 + +/* SH_LB_NACK_STATUS_PIO_NACK_A */ +/* Description: PIO nackA counter */ +#define SH_LB_NACK_STATUS_PIO_NACK_A_SHFT 0 +#define SH_LB_NACK_STATUS_PIO_NACK_A_MASK 0x0000000000000fff + +/* SH_LB_NACK_STATUS_PIO_NACK_B */ +/* Description: PIO nackB counter */ +#define SH_LB_NACK_STATUS_PIO_NACK_B_SHFT 16 +#define SH_LB_NACK_STATUS_PIO_NACK_B_MASK 0x000000000fff0000 + +/* SH_LB_NACK_STATUS_JUNK_NACK */ +/* Description: Junk bus nack counter */ +#define SH_LB_NACK_STATUS_JUNK_NACK_SHFT 32 +#define SH_LB_NACK_STATUS_JUNK_NACK_MASK 0x0000ffff00000000 + +/* SH_LB_NACK_STATUS_CB_TIMEOUT_COUNT */ +/* Description: Conveyor belt time out counter */ +#define SH_LB_NACK_STATUS_CB_TIMEOUT_COUNT_SHFT 48 +#define SH_LB_NACK_STATUS_CB_TIMEOUT_COUNT_MASK 0x0fff000000000000 + +/* SH_LB_NACK_STATUS_CB_STATE */ +/* Description: Conveyor belt state */ 
+#define SH_LB_NACK_STATUS_CB_STATE_SHFT 60 +#define SH_LB_NACK_STATUS_CB_STATE_MASK 0x3000000000000000 + +/* ==================================================================== */ +/* Register "SH_LB_TRIGGER_COMPARE" */ +/* LB Test-point Trigger Compare */ +/* ==================================================================== */ + +#define SH_LB_TRIGGER_COMPARE 0x0000000110050780 +#define SH_LB_TRIGGER_COMPARE_MASK 0x00000000ffffffff +#define SH_LB_TRIGGER_COMPARE_INIT 0x0000000000000000 + +/* SH_LB_TRIGGER_COMPARE_MASK */ +/* Description: Mask to select Debug bits for trigger generation */ +#define SH_LB_TRIGGER_COMPARE_MASK_SHFT 0 +#define SH_LB_TRIGGER_COMPARE_MASK_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_LB_TRIGGER_DATA" */ +/* LB Test-point Trigger Compare Data */ +/* ==================================================================== */ + +#define SH_LB_TRIGGER_DATA 0x0000000110050800 +#define SH_LB_TRIGGER_DATA_MASK 0x00000000ffffffff +#define SH_LB_TRIGGER_DATA_INIT 0x00000000ffffffff + +/* SH_LB_TRIGGER_DATA_COMPARE_PATTERN */ +/* Description: debug bit pattern for trigger generation */ +#define SH_LB_TRIGGER_DATA_COMPARE_PATTERN_SHFT 0 +#define SH_LB_TRIGGER_DATA_COMPARE_PATTERN_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PI_AEC_CONFIG" */ +/* PI Adaptive Error Correction Configuration */ +/* ==================================================================== */ + +#define SH_PI_AEC_CONFIG 0x0000000120050000 +#define SH_PI_AEC_CONFIG_MASK 0x0000000000000007 +#define SH_PI_AEC_CONFIG_INIT 0x0000000000000000 + +/* SH_PI_AEC_CONFIG_MODE */ +/* Description: AEC Operation Mode */ +#define SH_PI_AEC_CONFIG_MODE_SHFT 0 +#define SH_PI_AEC_CONFIG_MODE_MASK 0x0000000000000007 + +/* ==================================================================== */ +/* Register "SH_PI_AFI_ERROR_MASK" */ +/* PI AFI Error Mask */ +/* ==================================================================== */ + +#define SH_PI_AFI_ERROR_MASK 0x0000000120050080 +#define SH_PI_AFI_ERROR_MASK_MASK 0x00000007ffe00000 +#define SH_PI_AFI_ERROR_MASK_INIT 0x00000007ffe00000 + +/* SH_PI_AFI_ERROR_MASK_HUNG_BUS */ +/* Description: FSB is hung */ +#define SH_PI_AFI_ERROR_MASK_HUNG_BUS_SHFT 21 +#define SH_PI_AFI_ERROR_MASK_HUNG_BUS_MASK 0x0000000000200000 + +/* SH_PI_AFI_ERROR_MASK_RSP_PARITY */ +/* Description: Parity error detected during response phase */ +#define SH_PI_AFI_ERROR_MASK_RSP_PARITY_SHFT 22 +#define SH_PI_AFI_ERROR_MASK_RSP_PARITY_MASK 0x0000000000400000 + +/* SH_PI_AFI_ERROR_MASK_IOQ_OVERRUN */ +/* Description: Overrun error detected on IOQ */ +#define SH_PI_AFI_ERROR_MASK_IOQ_OVERRUN_SHFT 23 +#define SH_PI_AFI_ERROR_MASK_IOQ_OVERRUN_MASK 0x0000000000800000 + +/* SH_PI_AFI_ERROR_MASK_REQ_FORMAT */ +/* Description: FSB request format not supported */ +#define SH_PI_AFI_ERROR_MASK_REQ_FORMAT_SHFT 24 +#define SH_PI_AFI_ERROR_MASK_REQ_FORMAT_MASK 0x0000000001000000 + +/* SH_PI_AFI_ERROR_MASK_ADDR_ACCESS */ +/* Description: Access to Address is not supported */ +#define SH_PI_AFI_ERROR_MASK_ADDR_ACCESS_SHFT 25 +#define SH_PI_AFI_ERROR_MASK_ADDR_ACCESS_MASK 0x0000000002000000 + +/* SH_PI_AFI_ERROR_MASK_REQ_PARITY */ +/* Description: Parity error detected during request phase */ +#define SH_PI_AFI_ERROR_MASK_REQ_PARITY_SHFT 26 +#define SH_PI_AFI_ERROR_MASK_REQ_PARITY_MASK 0x0000000004000000 + +/* SH_PI_AFI_ERROR_MASK_ADDR_PARITY */ +/* Description: Parity 
error detected on address */ +#define SH_PI_AFI_ERROR_MASK_ADDR_PARITY_SHFT 27 +#define SH_PI_AFI_ERROR_MASK_ADDR_PARITY_MASK 0x0000000008000000 + +/* SH_PI_AFI_ERROR_MASK_SHUB_FSB_DQE */ +/* Description: SHUB_FSB_DQE */ +#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_DQE_SHFT 28 +#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_DQE_MASK 0x0000000010000000 + +/* SH_PI_AFI_ERROR_MASK_SHUB_FSB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_UCE_SHFT 29 +#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_UCE_MASK 0x0000000020000000 + +/* SH_PI_AFI_ERROR_MASK_SHUB_FSB_CE */ +/* Description: A correctable ECC error was detected */ +#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_CE_SHFT 30 +#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_CE_MASK 0x0000000040000000 + +/* SH_PI_AFI_ERROR_MASK_LIVELOCK */ +/* Description: AFI livelock error was detected */ +#define SH_PI_AFI_ERROR_MASK_LIVELOCK_SHFT 31 +#define SH_PI_AFI_ERROR_MASK_LIVELOCK_MASK 0x0000000080000000 + +/* SH_PI_AFI_ERROR_MASK_BAD_SNOOP */ +/* Description: AFI bad snoop error was detected */ +#define SH_PI_AFI_ERROR_MASK_BAD_SNOOP_SHFT 32 +#define SH_PI_AFI_ERROR_MASK_BAD_SNOOP_MASK 0x0000000100000000 + +/* SH_PI_AFI_ERROR_MASK_FSB_TBL_MISS */ +/* Description: AFI FSB request table miss error was detected */ +#define SH_PI_AFI_ERROR_MASK_FSB_TBL_MISS_SHFT 33 +#define SH_PI_AFI_ERROR_MASK_FSB_TBL_MISS_MASK 0x0000000200000000 + +/* SH_PI_AFI_ERROR_MASK_MSG_LEN */ +/* Description: Runt or Obese message received from SIC */ +#define SH_PI_AFI_ERROR_MASK_MSG_LEN_SHFT 34 +#define SH_PI_AFI_ERROR_MASK_MSG_LEN_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_PI_AFI_TEST_POINT_COMPARE" */ +/* PI AFI Test Point Compare */ +/* ==================================================================== */ + +#define SH_PI_AFI_TEST_POINT_COMPARE 0x0000000120050100 +#define SH_PI_AFI_TEST_POINT_COMPARE_MASK 0xffffffffffffffff +#define SH_PI_AFI_TEST_POINT_COMPARE_INIT 0xffffffff00000000 + +/* SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_MASK */ +/* Description: Mask to select Debug bits for trigger generation */ +#define SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_MASK_SHFT 0 +#define SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_MASK_MASK 0x00000000ffffffff + +/* SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_PATTERN */ +/* Description: debug bit pattern for trigger generation */ +#define SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_PATTERN_SHFT 32 +#define SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_PATTERN_MASK 0xffffffff00000000 + +/* ==================================================================== */ +/* Register "SH_PI_AFI_TEST_POINT_SELECT" */ +/* PI AFI Test Point Select */ +/* ==================================================================== */ + +#define SH_PI_AFI_TEST_POINT_SELECT 0x0000000120050180 +#define SH_PI_AFI_TEST_POINT_SELECT_MASK 0xff7f7f7f7f7f7f7f +#define SH_PI_AFI_TEST_POINT_SELECT_INIT 0x0000000000000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL */ +/* Description: Nibble 0: Word Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 0 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x000000000000000f + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0: Nibble Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL */ +/* Description: Nibble 1: Word Select */ +#define 
SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 8 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000f00 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1: Nibble Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL */ +/* Description: Nibble 2: Word Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 16 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x00000000000f0000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2: Nibble Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL */ +/* Description: Nibble 3: Word Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 24 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x000000000f000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3: Nibble Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL */ +/* Description: Nibble 4: Word Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 32 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000f00000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4: Nibble Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL */ +/* Description: Nibble 5: Word Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 40 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x00000f0000000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5: Nibble Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL */ +/* Description: Nibble 6: Word Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 48 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x000f000000000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6: Nibble Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL */ +/* Description: Nibble 7: Word Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 56 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0f00000000000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7: Nibble Select */ +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* SH_PI_AFI_TEST_POINT_SELECT_TRIGGER_ENABLE */ +/* Description: Trigger Enabled */ +#define SH_PI_AFI_TEST_POINT_SELECT_TRIGGER_ENABLE_SHFT 63 +#define SH_PI_AFI_TEST_POINT_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000 + +/* 
==================================================================== */ +/* Register "SH_PI_AFI_TEST_POINT_TRIGGER_SELECT" */ +/* PI CRBC Test Point Trigger Select */ +/* ==================================================================== */ + +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT 0x0000000120050200 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_MASK 0x7f7f7f7f7f7f7f7f +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_INIT 0x0000000000000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_SHFT 0 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_MASK 0x000000000000000f + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_SHFT 4 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_SHFT 8 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000f00 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_SHFT 12 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_SHFT 16 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_MASK 0x00000000000f0000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_SHFT 20 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_SHFT 24 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_MASK 0x000000000f000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_SHFT 28 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_SHFT 32 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_MASK 0x0000000f00000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_SHFT 36 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_SHFT 40 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_MASK 0x00000f0000000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define 
SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_SHFT 44 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_SHFT 48 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_MASK 0x000f000000000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_SHFT 52 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_SHFT 56 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_MASK 0x0f00000000000000 + +/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_SHFT 60 +#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_AUTO_REPLY_ENABLE" */ +/* PI Auto Reply Enable */ +/* ==================================================================== */ + +#define SH_PI_AUTO_REPLY_ENABLE 0x0000000120050280 +#define SH_PI_AUTO_REPLY_ENABLE_MASK 0x0000000000000001 +#define SH_PI_AUTO_REPLY_ENABLE_INIT 0x0000000000000000 + +/* SH_PI_AUTO_REPLY_ENABLE_AUTO_REPLY_ENABLE */ +/* Description: Auto Reply Enabled */ +#define SH_PI_AUTO_REPLY_ENABLE_AUTO_REPLY_ENABLE_SHFT 0 +#define SH_PI_AUTO_REPLY_ENABLE_AUTO_REPLY_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_PI_CAM_CONTROL" */ +/* CRB CAM MMR Access Control */ +/* ==================================================================== */ + +#define SH_PI_CAM_CONTROL 0x0000000120050300 +#define SH_PI_CAM_CONTROL_MASK 0x800000000000037f +#define SH_PI_CAM_CONTROL_INIT 0x0000000000000000 + +/* SH_PI_CAM_CONTROL_CAM_INDX */ +/* Description: CRB CAM Index to perform read/write on. */ +#define SH_PI_CAM_CONTROL_CAM_INDX_SHFT 0 +#define SH_PI_CAM_CONTROL_CAM_INDX_MASK 0x000000000000007f + +/* SH_PI_CAM_CONTROL_CAM_WRITE */ +/* Description: Is CRB CAM MMR function a write. */ +#define SH_PI_CAM_CONTROL_CAM_WRITE_SHFT 8 +#define SH_PI_CAM_CONTROL_CAM_WRITE_MASK 0x0000000000000100 + +/* SH_PI_CAM_CONTROL_RRB_RD_XFER_CLEAR */ +/* Description: Clear RRB read transfer pending. 
*/ +#define SH_PI_CAM_CONTROL_RRB_RD_XFER_CLEAR_SHFT 9 +#define SH_PI_CAM_CONTROL_RRB_RD_XFER_CLEAR_MASK 0x0000000000000200 + +/* SH_PI_CAM_CONTROL_START */ +/* Description: Start CRB CAM read/write operation */ +#define SH_PI_CAM_CONTROL_START_SHFT 63 +#define SH_PI_CAM_CONTROL_START_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBC_TEST_POINT_COMPARE" */ +/* PI CRBC Test Point Compare */ +/* ==================================================================== */ + +#define SH_PI_CRBC_TEST_POINT_COMPARE 0x0000000120050380 +#define SH_PI_CRBC_TEST_POINT_COMPARE_MASK 0xffffffffffffffff +#define SH_PI_CRBC_TEST_POINT_COMPARE_INIT 0xffffffff00000000 + +/* SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_MASK */ +/* Description: Mask to select Debug bits for trigger generation */ +#define SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_MASK_SHFT 0 +#define SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_MASK_MASK 0x00000000ffffffff + +/* SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_PATTERN */ +/* Description: debug bit pattern for trigger generation */ +#define SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_PATTERN_SHFT 32 +#define SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_PATTERN_MASK 0xffffffff00000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBC_TEST_POINT_SELECT" */ +/* PI CRBC Test Point Select */ +/* ==================================================================== */ + +#define SH_PI_CRBC_TEST_POINT_SELECT 0x0000000120050400 +#define SH_PI_CRBC_TEST_POINT_SELECT_MASK 0xf777777777777777 +#define SH_PI_CRBC_TEST_POINT_SELECT_INIT 0x0000000000000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 0 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 8 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 16 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 24 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define 
SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 32 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 40 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 48 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 56 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* SH_PI_CRBC_TEST_POINT_SELECT_TRIGGER_ENABLE */ +/* Description: Enable trigger on bit 32 of Analyzer data */ +#define SH_PI_CRBC_TEST_POINT_SELECT_TRIGGER_ENABLE_SHFT 63 +#define SH_PI_CRBC_TEST_POINT_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT" */ +/* PI CRBC Test Point Trigger Select */ +/* ==================================================================== */ + +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT 0x0000000120050480 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_MASK 0x7777777777777777 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_INIT 0x0000000000000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_SHFT 0 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_SHFT 4 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define 
SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_SHFT 8 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_SHFT 12 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_SHFT 16 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_SHFT 20 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_SHFT 24 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_SHFT 28 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_SHFT 32 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_SHFT 36 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_SHFT 40 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_SHFT 44 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_SHFT 48 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_SHFT 52 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_SHFT 56 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble 
select */ +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_SHFT 60 +#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_ERROR_MASK" */ +/* PI CRBP Error Mask */ +/* ==================================================================== */ + +#define SH_PI_CRBP_ERROR_MASK 0x0000000120050500 +#define SH_PI_CRBP_ERROR_MASK_MASK 0x00000000001fffff +#define SH_PI_CRBP_ERROR_MASK_INIT 0x00000000001fffff + +/* SH_PI_CRBP_ERROR_MASK_FSB_PROTO_ERR */ +/* Description: Mask detection internal protocol table misses */ +#define SH_PI_CRBP_ERROR_MASK_FSB_PROTO_ERR_SHFT 0 +#define SH_PI_CRBP_ERROR_MASK_FSB_PROTO_ERR_MASK 0x0000000000000001 + +/* SH_PI_CRBP_ERROR_MASK_GFX_RP_ERR */ +/* Description: Mask graphic reply error detection */ +#define SH_PI_CRBP_ERROR_MASK_GFX_RP_ERR_SHFT 1 +#define SH_PI_CRBP_ERROR_MASK_GFX_RP_ERR_MASK 0x0000000000000002 + +/* SH_PI_CRBP_ERROR_MASK_XB_PROTO_ERR */ +/* Description: Mask detection of external protocol table misses */ +#define SH_PI_CRBP_ERROR_MASK_XB_PROTO_ERR_SHFT 2 +#define SH_PI_CRBP_ERROR_MASK_XB_PROTO_ERR_MASK 0x0000000000000004 + +/* SH_PI_CRBP_ERROR_MASK_MEM_RP_ERR */ +/* Description: Mask memory error reply message detection */ +#define SH_PI_CRBP_ERROR_MASK_MEM_RP_ERR_SHFT 3 +#define SH_PI_CRBP_ERROR_MASK_MEM_RP_ERR_MASK 0x0000000000000008 + +/* SH_PI_CRBP_ERROR_MASK_PIO_RP_ERR */ +/* Description: Mask PIO reply error message detection */ +#define SH_PI_CRBP_ERROR_MASK_PIO_RP_ERR_SHFT 4 +#define SH_PI_CRBP_ERROR_MASK_PIO_RP_ERR_MASK 0x0000000000000010 + +/* SH_PI_CRBP_ERROR_MASK_MEM_TO_ERR */ +/* Description: Mask memory time-out detection */ +#define SH_PI_CRBP_ERROR_MASK_MEM_TO_ERR_SHFT 5 +#define SH_PI_CRBP_ERROR_MASK_MEM_TO_ERR_MASK 0x0000000000000020 + +/* SH_PI_CRBP_ERROR_MASK_PIO_TO_ERR */ +/* Description: Mask PIO time-out detection */ +#define SH_PI_CRBP_ERROR_MASK_PIO_TO_ERR_SHFT 6 +#define SH_PI_CRBP_ERROR_MASK_PIO_TO_ERR_MASK 0x0000000000000040 + +/* SH_PI_CRBP_ERROR_MASK_FSB_SHUB_UCE */ +/* Description: Mask un-correctable ECC error detection */ +#define SH_PI_CRBP_ERROR_MASK_FSB_SHUB_UCE_SHFT 7 +#define SH_PI_CRBP_ERROR_MASK_FSB_SHUB_UCE_MASK 0x0000000000000080 + +/* SH_PI_CRBP_ERROR_MASK_FSB_SHUB_CE */ +/* Description: Mask correctable ECC error detection */ +#define SH_PI_CRBP_ERROR_MASK_FSB_SHUB_CE_SHFT 8 +#define SH_PI_CRBP_ERROR_MASK_FSB_SHUB_CE_MASK 0x0000000000000100 + +/* SH_PI_CRBP_ERROR_MASK_MSG_COLOR_ERR */ +/* Description: Mask detection of color errors */ +#define SH_PI_CRBP_ERROR_MASK_MSG_COLOR_ERR_SHFT 9 +#define SH_PI_CRBP_ERROR_MASK_MSG_COLOR_ERR_MASK 0x0000000000000200 + +/* SH_PI_CRBP_ERROR_MASK_MD_RQ_Q_OFLOW */ +/* Description: Mask MD Request input buffer over flow error */ +#define SH_PI_CRBP_ERROR_MASK_MD_RQ_Q_OFLOW_SHFT 10 +#define SH_PI_CRBP_ERROR_MASK_MD_RQ_Q_OFLOW_MASK 0x0000000000000400 + +/* SH_PI_CRBP_ERROR_MASK_MD_RP_Q_OFLOW */ +/* Description: Mask MD Reply input buffer over flow error */ +#define SH_PI_CRBP_ERROR_MASK_MD_RP_Q_OFLOW_SHFT 11 +#define SH_PI_CRBP_ERROR_MASK_MD_RP_Q_OFLOW_MASK 0x0000000000000800 + +/* SH_PI_CRBP_ERROR_MASK_XN_RQ_Q_OFLOW */ +/* Description: Mask XN Request input buffer over flow error */ +#define SH_PI_CRBP_ERROR_MASK_XN_RQ_Q_OFLOW_SHFT 12 +#define SH_PI_CRBP_ERROR_MASK_XN_RQ_Q_OFLOW_MASK 0x0000000000001000 + +/* SH_PI_CRBP_ERROR_MASK_XN_RP_Q_OFLOW */ +/* Description: Mask XN Reply input buffer over flow error */ +#define 
SH_PI_CRBP_ERROR_MASK_XN_RP_Q_OFLOW_SHFT 13 +#define SH_PI_CRBP_ERROR_MASK_XN_RP_Q_OFLOW_MASK 0x0000000000002000 + +/* SH_PI_CRBP_ERROR_MASK_NACK_OFLOW */ +/* Description: Mask NACK over flow error */ +#define SH_PI_CRBP_ERROR_MASK_NACK_OFLOW_SHFT 14 +#define SH_PI_CRBP_ERROR_MASK_NACK_OFLOW_MASK 0x0000000000004000 + +/* SH_PI_CRBP_ERROR_MASK_GFX_INT_0 */ +/* Description: Mask GFX transfer interrupt for CPU 0 */ +#define SH_PI_CRBP_ERROR_MASK_GFX_INT_0_SHFT 15 +#define SH_PI_CRBP_ERROR_MASK_GFX_INT_0_MASK 0x0000000000008000 + +/* SH_PI_CRBP_ERROR_MASK_GFX_INT_1 */ +/* Description: Mask GFX transfer interrupt for CPU 1 */ +#define SH_PI_CRBP_ERROR_MASK_GFX_INT_1_SHFT 16 +#define SH_PI_CRBP_ERROR_MASK_GFX_INT_1_MASK 0x0000000000010000 + +/* SH_PI_CRBP_ERROR_MASK_MD_RQ_CRD_OFLOW */ +/* Description: Mask MD Request Credit Overflow Error */ +#define SH_PI_CRBP_ERROR_MASK_MD_RQ_CRD_OFLOW_SHFT 17 +#define SH_PI_CRBP_ERROR_MASK_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000 + +/* SH_PI_CRBP_ERROR_MASK_MD_RP_CRD_OFLOW */ +/* Description: Mask MD Reply Credit Overflow Error */ +#define SH_PI_CRBP_ERROR_MASK_MD_RP_CRD_OFLOW_SHFT 18 +#define SH_PI_CRBP_ERROR_MASK_MD_RP_CRD_OFLOW_MASK 0x0000000000040000 + +/* SH_PI_CRBP_ERROR_MASK_XN_RQ_CRD_OFLOW */ +/* Description: Mask XN Request Credit Overflow Error */ +#define SH_PI_CRBP_ERROR_MASK_XN_RQ_CRD_OFLOW_SHFT 19 +#define SH_PI_CRBP_ERROR_MASK_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000 + +/* SH_PI_CRBP_ERROR_MASK_XN_RP_CRD_OFLOW */ +/* Description: Mask XN Reply Credit Overflow Error */ +#define SH_PI_CRBP_ERROR_MASK_XN_RP_CRD_OFLOW_SHFT 20 +#define SH_PI_CRBP_ERROR_MASK_XN_RP_CRD_OFLOW_MASK 0x0000000000100000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_FSB_PIPE_COMPARE" */ +/* CRBP FSB Pipe Compare */ +/* ==================================================================== */ + +#define SH_PI_CRBP_FSB_PIPE_COMPARE 0x0000000120050580 +#define SH_PI_CRBP_FSB_PIPE_COMPARE_MASK 0x001fffffffffffff +#define SH_PI_CRBP_FSB_PIPE_COMPARE_INIT 0x0000000000000000 + +/* SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_ADDRESS */ +/* Description: Address A or B to compare against */ +#define SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_ADDRESS_SHFT 0 +#define SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_ADDRESS_MASK 0x00007fffffffffff + +/* SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_REQ */ +/* Description: REQa or REQb value to compare against */ +#define SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_REQ_SHFT 47 +#define SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_REQ_MASK 0x001f800000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_FSB_PIPE_MASK" */ +/* CRBP Compare Mask */ +/* ==================================================================== */ + +#define SH_PI_CRBP_FSB_PIPE_MASK 0x0000000120050600 +#define SH_PI_CRBP_FSB_PIPE_MASK_MASK 0x001fffffffffffff +#define SH_PI_CRBP_FSB_PIPE_MASK_INIT 0x0000000000000000 + +/* SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_ADDRESS_MASK */ +/* Description: Address A or B mask values */ +#define SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_ADDRESS_MASK_SHFT 0 +#define SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_ADDRESS_MASK_MASK 0x00007fffffffffff + +/* SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_REQ_MASK */ +/* Description: REQa or REQb mask values */ +#define SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_REQ_MASK_SHFT 47 +#define SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_REQ_MASK_MASK 0x001f800000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_TEST_POINT_COMPARE" */ +/* PI 
CRBP Test Point Compare */ +/* ==================================================================== */ + +#define SH_PI_CRBP_TEST_POINT_COMPARE 0x0000000120050680 +#define SH_PI_CRBP_TEST_POINT_COMPARE_MASK 0xffffffffffffffff +#define SH_PI_CRBP_TEST_POINT_COMPARE_INIT 0xffffffff00000000 + +/* SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_MASK */ +/* Description: Mask to select Debug bits for trigger generation */ +#define SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_MASK_SHFT 0 +#define SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_MASK_MASK 0x00000000ffffffff + +/* SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_PATTERN */ +/* Description: debug bit pattern for trigger generation */ +#define SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_PATTERN_SHFT 32 +#define SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_PATTERN_MASK 0xffffffff00000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_TEST_POINT_SELECT" */ +/* PI CRBP Test Point Select */ +/* ==================================================================== */ + +#define SH_PI_CRBP_TEST_POINT_SELECT 0x0000000120050700 +#define SH_PI_CRBP_TEST_POINT_SELECT_MASK 0xf777777777777777 +#define SH_PI_CRBP_TEST_POINT_SELECT_INIT 0x0000000000000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 0 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 8 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 16 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 24 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 32 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ 
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 40 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 48 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 56 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* SH_PI_CRBP_TEST_POINT_SELECT_TRIGGER_ENABLE */ +/* Description: Enable trigger on bit 32 of Analyzer data */ +#define SH_PI_CRBP_TEST_POINT_SELECT_TRIGGER_ENABLE_SHFT 63 +#define SH_PI_CRBP_TEST_POINT_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT" */ +/* PI CRBP Test Point Trigger Select */ +/* ==================================================================== */ + +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT 0x0000000120050780 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_MASK 0x7777777777777777 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_INIT 0x0000000000000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_SHFT 0 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_SHFT 4 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_SHFT 8 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_SHFT 12 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ 
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_SHFT 16 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_SHFT 20 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_SHFT 24 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_SHFT 28 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_SHFT 32 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_SHFT 36 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_SHFT 40 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_SHFT 44 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_SHFT 48 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_SHFT 52 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_SHFT 56 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_SHFT 60 +#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_XB_PIPE_COMPARE_0" */ +/* CRBP XB Pipe Compare */ +/* ==================================================================== */ + +#define SH_PI_CRBP_XB_PIPE_COMPARE_0 0x0000000120050800 +#define SH_PI_CRBP_XB_PIPE_COMPARE_0_MASK 0x007fffffffffffff +#define 
SH_PI_CRBP_XB_PIPE_COMPARE_0_INIT 0x0000000000000000 + +/* SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_ADDRESS */ +/* Description: Address to compare against */ +#define SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_ADDRESS_SHFT 0 +#define SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_ADDRESS_MASK 0x00007fffffffffff + +/* SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_COMMAND */ +/* Description: SN2NET Command to compare against */ +#define SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_COMMAND_SHFT 47 +#define SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_COMMAND_MASK 0x007f800000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_XB_PIPE_COMPARE_1" */ +/* CRBP XB Pipe Compare */ +/* ==================================================================== */ + +#define SH_PI_CRBP_XB_PIPE_COMPARE_1 0x0000000120050880 +#define SH_PI_CRBP_XB_PIPE_COMPARE_1_MASK 0x000001ff3fff3fff +#define SH_PI_CRBP_XB_PIPE_COMPARE_1_INIT 0x0000000000000000 + +/* SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SOURCE */ +/* Description: Source to compare against */ +#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SOURCE_SHFT 0 +#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SOURCE_MASK 0x0000000000003fff + +/* SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SUPPLEMENTAL */ +/* Description: Supplemental to compare against */ +#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SUPPLEMENTAL_SHFT 16 +#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SUPPLEMENTAL_MASK 0x000000003fff0000 + +/* SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_ECHO */ +/* Description: Echo to compare against */ +#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_ECHO_SHFT 32 +#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_ECHO_MASK 0x000001ff00000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_XB_PIPE_MASK_0" */ +/* CRBP Compare Mask Register 1 */ +/* ==================================================================== */ + +#define SH_PI_CRBP_XB_PIPE_MASK_0 0x0000000120050900 +#define SH_PI_CRBP_XB_PIPE_MASK_0_MASK 0x007fffffffffffff +#define SH_PI_CRBP_XB_PIPE_MASK_0_INIT 0x0000000000000000 + +/* SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_ADDRESS_MASK */ +/* Description: Address to compare against */ +#define SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_ADDRESS_MASK_SHFT 0 +#define SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_ADDRESS_MASK_MASK 0x00007fffffffffff + +/* SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_COMMAND_MASK */ +/* Description: SN2NET Command to compare against */ +#define SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_COMMAND_MASK_SHFT 47 +#define SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_COMMAND_MASK_MASK 0x007f800000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_XB_PIPE_MASK_1" */ +/* CRBP XB Pipe Compare Mask Register 1 */ +/* ==================================================================== */ + +#define SH_PI_CRBP_XB_PIPE_MASK_1 0x0000000120050980 +#define SH_PI_CRBP_XB_PIPE_MASK_1_MASK 0x000001ff3fff3fff +#define SH_PI_CRBP_XB_PIPE_MASK_1_INIT 0x0000000000000000 + +/* SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SOURCE_MASK */ +/* Description: Source to compare against */ +#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SOURCE_MASK_SHFT 0 +#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SOURCE_MASK_MASK 0x0000000000003fff + +/* SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SUPPLEMENTAL_MASK */ +/* Description: Supplemental to compare against */ +#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SUPPLEMENTAL_MASK_SHFT 16 +#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SUPPLEMENTAL_MASK_MASK 0x000000003fff0000 + +/* 
SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_ECHO_MASK */ +/* Description: Echo to compare against */ +#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_ECHO_MASK_SHFT 32 +#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_ECHO_MASK_MASK 0x000001ff00000000 + +/* ==================================================================== */ +/* Register "SH_PI_DPC_QUEUE_CONFIG" */ +/* DPC Queue Configuration */ +/* ==================================================================== */ + +#define SH_PI_DPC_QUEUE_CONFIG 0x0000000120050a00 +#define SH_PI_DPC_QUEUE_CONFIG_MASK 0x000000001f1f1f1f +#define SH_PI_DPC_QUEUE_CONFIG_INIT 0x000000000c010c01 + +/* SH_PI_DPC_QUEUE_CONFIG_DWCQ_AE_LEVEL */ +/* Description: DXB WTL Command Queue Almost Empty Level */ +#define SH_PI_DPC_QUEUE_CONFIG_DWCQ_AE_LEVEL_SHFT 0 +#define SH_PI_DPC_QUEUE_CONFIG_DWCQ_AE_LEVEL_MASK 0x000000000000001f + +/* SH_PI_DPC_QUEUE_CONFIG_DWCQ_AF_THRESH */ +/* Description: DXB WTL Command Queue Almost Full Threshold */ +#define SH_PI_DPC_QUEUE_CONFIG_DWCQ_AF_THRESH_SHFT 8 +#define SH_PI_DPC_QUEUE_CONFIG_DWCQ_AF_THRESH_MASK 0x0000000000001f00 + +/* SH_PI_DPC_QUEUE_CONFIG_FWCQ_AE_LEVEL */ +/* Description: FSB WTL Command Queue Almost Empty Level */ +#define SH_PI_DPC_QUEUE_CONFIG_FWCQ_AE_LEVEL_SHFT 16 +#define SH_PI_DPC_QUEUE_CONFIG_FWCQ_AE_LEVEL_MASK 0x00000000001f0000 + +/* SH_PI_DPC_QUEUE_CONFIG_FWCQ_AF_THRESH */ +/* Description: FSB WTL Command Queue Almost Full Threshold */ +#define SH_PI_DPC_QUEUE_CONFIG_FWCQ_AF_THRESH_SHFT 24 +#define SH_PI_DPC_QUEUE_CONFIG_FWCQ_AF_THRESH_MASK 0x000000001f000000 + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_MASK" */ +/* PI Error Mask */ +/* ==================================================================== */ + +#define SH_PI_ERROR_MASK 0x0000000120050a80 +#define SH_PI_ERROR_MASK_MASK 0x00000007ffffffff +#define SH_PI_ERROR_MASK_INIT 0x00000007ffffffff + +/* SH_PI_ERROR_MASK_FSB_PROTO_ERR */ +/* Description: Mask detection of internal protocol table misses */ +#define SH_PI_ERROR_MASK_FSB_PROTO_ERR_SHFT 0 +#define SH_PI_ERROR_MASK_FSB_PROTO_ERR_MASK 0x0000000000000001 + +/* SH_PI_ERROR_MASK_GFX_RP_ERR */ +/* Description: Mask graphic reply error message error detection */ +#define SH_PI_ERROR_MASK_GFX_RP_ERR_SHFT 1 +#define SH_PI_ERROR_MASK_GFX_RP_ERR_MASK 0x0000000000000002 + +/* SH_PI_ERROR_MASK_XB_PROTO_ERR */ +/* Description: Mask detection of external protocol table misses */ +#define SH_PI_ERROR_MASK_XB_PROTO_ERR_SHFT 2 +#define SH_PI_ERROR_MASK_XB_PROTO_ERR_MASK 0x0000000000000004 + +/* SH_PI_ERROR_MASK_MEM_RP_ERR */ +/* Description: Mask memory reply error detection */ +#define SH_PI_ERROR_MASK_MEM_RP_ERR_SHFT 3 +#define SH_PI_ERROR_MASK_MEM_RP_ERR_MASK 0x0000000000000008 + +/* SH_PI_ERROR_MASK_PIO_RP_ERR */ +/* Description: Mask PIO reply error detection */ +#define SH_PI_ERROR_MASK_PIO_RP_ERR_SHFT 4 +#define SH_PI_ERROR_MASK_PIO_RP_ERR_MASK 0x0000000000000010 + +/* SH_PI_ERROR_MASK_MEM_TO_ERR */ +/* Description: Mask CRB time-out errors */ +#define SH_PI_ERROR_MASK_MEM_TO_ERR_SHFT 5 +#define SH_PI_ERROR_MASK_MEM_TO_ERR_MASK 0x0000000000000020 + +/* SH_PI_ERROR_MASK_PIO_TO_ERR */ +/* Description: Mask PIO time-out errors */ +#define SH_PI_ERROR_MASK_PIO_TO_ERR_SHFT 6 +#define SH_PI_ERROR_MASK_PIO_TO_ERR_MASK 0x0000000000000040 + +/* SH_PI_ERROR_MASK_FSB_SHUB_UCE */ +/* Description: Mask un-correctable ECC error detection */ +#define SH_PI_ERROR_MASK_FSB_SHUB_UCE_SHFT 7 +#define SH_PI_ERROR_MASK_FSB_SHUB_UCE_MASK 0x0000000000000080 + +/* 
SH_PI_ERROR_MASK_FSB_SHUB_CE */ +/* Description: Mask correctable ECC error detection */ +#define SH_PI_ERROR_MASK_FSB_SHUB_CE_SHFT 8 +#define SH_PI_ERROR_MASK_FSB_SHUB_CE_MASK 0x0000000000000100 + +/* SH_PI_ERROR_MASK_MSG_COLOR_ERR */ +/* Description: Mask message color error detection */ +#define SH_PI_ERROR_MASK_MSG_COLOR_ERR_SHFT 9 +#define SH_PI_ERROR_MASK_MSG_COLOR_ERR_MASK 0x0000000000000200 + +/* SH_PI_ERROR_MASK_MD_RQ_Q_OFLOW */ +/* Description: Mask MD Request input buffer over flow error */ +#define SH_PI_ERROR_MASK_MD_RQ_Q_OFLOW_SHFT 10 +#define SH_PI_ERROR_MASK_MD_RQ_Q_OFLOW_MASK 0x0000000000000400 + +/* SH_PI_ERROR_MASK_MD_RP_Q_OFLOW */ +/* Description: Mask MD Reply input buffer over flow error */ +#define SH_PI_ERROR_MASK_MD_RP_Q_OFLOW_SHFT 11 +#define SH_PI_ERROR_MASK_MD_RP_Q_OFLOW_MASK 0x0000000000000800 + +/* SH_PI_ERROR_MASK_XN_RQ_Q_OFLOW */ +/* Description: Mask XN Request input buffer over flow error */ +#define SH_PI_ERROR_MASK_XN_RQ_Q_OFLOW_SHFT 12 +#define SH_PI_ERROR_MASK_XN_RQ_Q_OFLOW_MASK 0x0000000000001000 + +/* SH_PI_ERROR_MASK_XN_RP_Q_OFLOW */ +/* Description: Mask XN Reply input buffer over flow error */ +#define SH_PI_ERROR_MASK_XN_RP_Q_OFLOW_SHFT 13 +#define SH_PI_ERROR_MASK_XN_RP_Q_OFLOW_MASK 0x0000000000002000 + +/* SH_PI_ERROR_MASK_NACK_OFLOW */ +/* Description: Mask NACK over flow error */ +#define SH_PI_ERROR_MASK_NACK_OFLOW_SHFT 14 +#define SH_PI_ERROR_MASK_NACK_OFLOW_MASK 0x0000000000004000 + +/* SH_PI_ERROR_MASK_GFX_INT_0 */ +/* Description: Mask GFX transfer interrupt for CPU 0 */ +#define SH_PI_ERROR_MASK_GFX_INT_0_SHFT 15 +#define SH_PI_ERROR_MASK_GFX_INT_0_MASK 0x0000000000008000 + +/* SH_PI_ERROR_MASK_GFX_INT_1 */ +/* Description: Mask GFX transfer interrupt for CPU 1 */ +#define SH_PI_ERROR_MASK_GFX_INT_1_SHFT 16 +#define SH_PI_ERROR_MASK_GFX_INT_1_MASK 0x0000000000010000 + +/* SH_PI_ERROR_MASK_MD_RQ_CRD_OFLOW */ +/* Description: Mask MD Request Credit Overflow Error */ +#define SH_PI_ERROR_MASK_MD_RQ_CRD_OFLOW_SHFT 17 +#define SH_PI_ERROR_MASK_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000 + +/* SH_PI_ERROR_MASK_MD_RP_CRD_OFLOW */ +/* Description: Mask MD Reply Credit Overflow Error */ +#define SH_PI_ERROR_MASK_MD_RP_CRD_OFLOW_SHFT 18 +#define SH_PI_ERROR_MASK_MD_RP_CRD_OFLOW_MASK 0x0000000000040000 + +/* SH_PI_ERROR_MASK_XN_RQ_CRD_OFLOW */ +/* Description: Mask XN Request Credit Overflow Error */ +#define SH_PI_ERROR_MASK_XN_RQ_CRD_OFLOW_SHFT 19 +#define SH_PI_ERROR_MASK_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000 + +/* SH_PI_ERROR_MASK_XN_RP_CRD_OFLOW */ +/* Description: Mask XN Reply Credit Overflow Error */ +#define SH_PI_ERROR_MASK_XN_RP_CRD_OFLOW_SHFT 20 +#define SH_PI_ERROR_MASK_XN_RP_CRD_OFLOW_MASK 0x0000000000100000 + +/* SH_PI_ERROR_MASK_HUNG_BUS */ +/* Description: Mask FSB hung error */ +#define SH_PI_ERROR_MASK_HUNG_BUS_SHFT 21 +#define SH_PI_ERROR_MASK_HUNG_BUS_MASK 0x0000000000200000 + +/* SH_PI_ERROR_MASK_RSP_PARITY */ +/* Description: Parity error detected during response phase */ +#define SH_PI_ERROR_MASK_RSP_PARITY_SHFT 22 +#define SH_PI_ERROR_MASK_RSP_PARITY_MASK 0x0000000000400000 + +/* SH_PI_ERROR_MASK_IOQ_OVERRUN */ +/* Description: Overrun error detected on IOQ */ +#define SH_PI_ERROR_MASK_IOQ_OVERRUN_SHFT 23 +#define SH_PI_ERROR_MASK_IOQ_OVERRUN_MASK 0x0000000000800000 + +/* SH_PI_ERROR_MASK_REQ_FORMAT */ +/* Description: FSB request format not supported */ +#define SH_PI_ERROR_MASK_REQ_FORMAT_SHFT 24 +#define SH_PI_ERROR_MASK_REQ_FORMAT_MASK 0x0000000001000000 + +/* SH_PI_ERROR_MASK_ADDR_ACCESS */ +/* Description: Access to
Address is not supported */ +#define SH_PI_ERROR_MASK_ADDR_ACCESS_SHFT 25 +#define SH_PI_ERROR_MASK_ADDR_ACCESS_MASK 0x0000000002000000 + +/* SH_PI_ERROR_MASK_REQ_PARITY */ +/* Description: Parity error detected during request phase */ +#define SH_PI_ERROR_MASK_REQ_PARITY_SHFT 26 +#define SH_PI_ERROR_MASK_REQ_PARITY_MASK 0x0000000004000000 + +/* SH_PI_ERROR_MASK_ADDR_PARITY */ +/* Description: Parity error detected on address */ +#define SH_PI_ERROR_MASK_ADDR_PARITY_SHFT 27 +#define SH_PI_ERROR_MASK_ADDR_PARITY_MASK 0x0000000008000000 + +/* SH_PI_ERROR_MASK_SHUB_FSB_DQE */ +/* Description: SHUB_FSB_DQE */ +#define SH_PI_ERROR_MASK_SHUB_FSB_DQE_SHFT 28 +#define SH_PI_ERROR_MASK_SHUB_FSB_DQE_MASK 0x0000000010000000 + +/* SH_PI_ERROR_MASK_SHUB_FSB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_ERROR_MASK_SHUB_FSB_UCE_SHFT 29 +#define SH_PI_ERROR_MASK_SHUB_FSB_UCE_MASK 0x0000000020000000 + +/* SH_PI_ERROR_MASK_SHUB_FSB_CE */ +/* Description: A correctable ECC error was detected */ +#define SH_PI_ERROR_MASK_SHUB_FSB_CE_SHFT 30 +#define SH_PI_ERROR_MASK_SHUB_FSB_CE_MASK 0x0000000040000000 + +/* SH_PI_ERROR_MASK_LIVELOCK */ +/* Description: AFI livelock error was detected */ +#define SH_PI_ERROR_MASK_LIVELOCK_SHFT 31 +#define SH_PI_ERROR_MASK_LIVELOCK_MASK 0x0000000080000000 + +/* SH_PI_ERROR_MASK_BAD_SNOOP */ +/* Description: AFI bad snoop error was detected */ +#define SH_PI_ERROR_MASK_BAD_SNOOP_SHFT 32 +#define SH_PI_ERROR_MASK_BAD_SNOOP_MASK 0x0000000100000000 + +/* SH_PI_ERROR_MASK_FSB_TBL_MISS */ +/* Description: AFI FSB request table miss error was detected */ +#define SH_PI_ERROR_MASK_FSB_TBL_MISS_SHFT 33 +#define SH_PI_ERROR_MASK_FSB_TBL_MISS_MASK 0x0000000200000000 + +/* SH_PI_ERROR_MASK_MSG_LENGTH */ +/* Description: Message length error on received message from SIC */ +#define SH_PI_ERROR_MASK_MSG_LENGTH_SHFT 34 +#define SH_PI_ERROR_MASK_MSG_LENGTH_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_PI_EXPRESS_REPLY_CONFIG" */ +/* PI Express Reply Configuration */ +/* ==================================================================== */ + +#define SH_PI_EXPRESS_REPLY_CONFIG 0x0000000120050b00 +#define SH_PI_EXPRESS_REPLY_CONFIG_MASK 0x0000000000000007 +#define SH_PI_EXPRESS_REPLY_CONFIG_INIT 0x0000000000000001 + +/* SH_PI_EXPRESS_REPLY_CONFIG_MODE */ +/* Description: Express Reply Mode */ +#define SH_PI_EXPRESS_REPLY_CONFIG_MODE_SHFT 0 +#define SH_PI_EXPRESS_REPLY_CONFIG_MODE_MASK 0x0000000000000007 + +/* ==================================================================== */ +/* Register "SH_PI_FSB_COMPARE_VALUE" */ +/* FSB Compare Value */ +/* ==================================================================== */ + +#define SH_PI_FSB_COMPARE_VALUE 0x0000000120050c00 +#define SH_PI_FSB_COMPARE_VALUE_MASK 0xffffffffffffffff +#define SH_PI_FSB_COMPARE_VALUE_INIT 0x0000000000000000 + +/* SH_PI_FSB_COMPARE_VALUE_COMPARE_VALUE */ +/* Description: Compare value */ +#define SH_PI_FSB_COMPARE_VALUE_COMPARE_VALUE_SHFT 0 +#define SH_PI_FSB_COMPARE_VALUE_COMPARE_VALUE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_FSB_COMPARE_MASK" */ +/* FSB Compare Mask */ +/* ==================================================================== */ + +#define SH_PI_FSB_COMPARE_MASK 0x0000000120050b80 +#define SH_PI_FSB_COMPARE_MASK_MASK 0xffffffffffffffff +#define SH_PI_FSB_COMPARE_MASK_INIT 0x0000000000000000 + +/*
SH_PI_FSB_COMPARE_MASK_MASK_VALUE */ +/* Description: Mask value */ +#define SH_PI_FSB_COMPARE_MASK_MASK_VALUE_SHFT 0 +#define SH_PI_FSB_COMPARE_MASK_MASK_VALUE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_FSB_ERROR_INJECTION" */ +/* Inject an Error onto the FSB */ +/* ==================================================================== */ + +#define SH_PI_FSB_ERROR_INJECTION 0x0000000120050c80 +#define SH_PI_FSB_ERROR_INJECTION_MASK 0x000000070fff03ff +#define SH_PI_FSB_ERROR_INJECTION_INIT 0x0000000000000000 + +/* SH_PI_FSB_ERROR_INJECTION_RP_PE_TO_FSB */ +/* Description: Inject a RP# Parity Error onto the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_RP_PE_TO_FSB_SHFT 0 +#define SH_PI_FSB_ERROR_INJECTION_RP_PE_TO_FSB_MASK 0x0000000000000001 + +/* SH_PI_FSB_ERROR_INJECTION_AP0_PE_TO_FSB */ +/* Description: Inject an AP[0]# Parity Error onto the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_AP0_PE_TO_FSB_SHFT 1 +#define SH_PI_FSB_ERROR_INJECTION_AP0_PE_TO_FSB_MASK 0x0000000000000002 + +/* SH_PI_FSB_ERROR_INJECTION_AP1_PE_TO_FSB */ +/* Description: Inject an AP[1]# Parity Error onto the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_AP1_PE_TO_FSB_SHFT 2 +#define SH_PI_FSB_ERROR_INJECTION_AP1_PE_TO_FSB_MASK 0x0000000000000004 + +/* SH_PI_FSB_ERROR_INJECTION_RSP_PE_TO_FSB */ +/* Description: Inject a RSP# Parity Error onto the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_RSP_PE_TO_FSB_SHFT 3 +#define SH_PI_FSB_ERROR_INJECTION_RSP_PE_TO_FSB_MASK 0x0000000000000008 + +/* SH_PI_FSB_ERROR_INJECTION_DW0_CE_TO_FSB */ +/* Description: Inject a Correctable Error in Doubleword 0 onto the */ +#define SH_PI_FSB_ERROR_INJECTION_DW0_CE_TO_FSB_SHFT 4 +#define SH_PI_FSB_ERROR_INJECTION_DW0_CE_TO_FSB_MASK 0x0000000000000010 + +/* SH_PI_FSB_ERROR_INJECTION_DW0_UCE_TO_FSB */ +/* Description: Inject an Uncorrectable Error in Doubleword 0 onto */ +/* the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_DW0_UCE_TO_FSB_SHFT 5 +#define SH_PI_FSB_ERROR_INJECTION_DW0_UCE_TO_FSB_MASK 0x0000000000000020 + +/* SH_PI_FSB_ERROR_INJECTION_DW1_CE_TO_FSB */ +/* Description: Inject a Correctable Error in Doubleword 1 onto the */ +#define SH_PI_FSB_ERROR_INJECTION_DW1_CE_TO_FSB_SHFT 6 +#define SH_PI_FSB_ERROR_INJECTION_DW1_CE_TO_FSB_MASK 0x0000000000000040 + +/* SH_PI_FSB_ERROR_INJECTION_DW1_UCE_TO_FSB */ +/* Description: Inject an Uncorrectable Error in Doubleword 1 onto */ +/* the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_DW1_UCE_TO_FSB_SHFT 7 +#define SH_PI_FSB_ERROR_INJECTION_DW1_UCE_TO_FSB_MASK 0x0000000000000080 + +/* SH_PI_FSB_ERROR_INJECTION_IP0_PE_TO_FSB */ +/* Description: Inject an IP[0]# Parity Error onto the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_IP0_PE_TO_FSB_SHFT 8 +#define SH_PI_FSB_ERROR_INJECTION_IP0_PE_TO_FSB_MASK 0x0000000000000100 + +/* SH_PI_FSB_ERROR_INJECTION_IP1_PE_TO_FSB */ +/* Description: Inject an IP[1]# Parity Error onto the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_IP1_PE_TO_FSB_SHFT 9 +#define SH_PI_FSB_ERROR_INJECTION_IP1_PE_TO_FSB_MASK 0x0000000000000200 + +/* SH_PI_FSB_ERROR_INJECTION_RP_PE_FROM_FSB */ +/* Description: Inject a RP# Parity Error When Sampling the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_RP_PE_FROM_FSB_SHFT 16 +#define SH_PI_FSB_ERROR_INJECTION_RP_PE_FROM_FSB_MASK 0x0000000000010000 + +/* SH_PI_FSB_ERROR_INJECTION_AP0_PE_FROM_FSB */ +/* Description: Inject an AP[0]# Parity Error When Sampling the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_AP0_PE_FROM_FSB_SHFT 17 +#define SH_PI_FSB_ERROR_INJECTION_AP0_PE_FROM_FSB_MASK 
0x0000000000020000 + +/* SH_PI_FSB_ERROR_INJECTION_AP1_PE_FROM_FSB */ +/* Description: Inject an AP[1]# Parity Error When Sampling the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_AP1_PE_FROM_FSB_SHFT 18 +#define SH_PI_FSB_ERROR_INJECTION_AP1_PE_FROM_FSB_MASK 0x0000000000040000 + +/* SH_PI_FSB_ERROR_INJECTION_RSP_PE_FROM_FSB */ +/* Description: Inject a RSP# Parity Error When Sampling the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_RSP_PE_FROM_FSB_SHFT 19 +#define SH_PI_FSB_ERROR_INJECTION_RSP_PE_FROM_FSB_MASK 0x0000000000080000 + +/* SH_PI_FSB_ERROR_INJECTION_DW0_CE_FROM_FSB */ +/* Description: Inject a Correctable Error in Doubleword 0 of SIC */ +/* Data Packet 0 */ +#define SH_PI_FSB_ERROR_INJECTION_DW0_CE_FROM_FSB_SHFT 20 +#define SH_PI_FSB_ERROR_INJECTION_DW0_CE_FROM_FSB_MASK 0x0000000000100000 + +/* SH_PI_FSB_ERROR_INJECTION_DW0_UCE_FROM_FSB */ +/* Description: Inject an Uncorrectable Error in Doubleword 0 of SIC */ +/* Data Packet 0 */ +#define SH_PI_FSB_ERROR_INJECTION_DW0_UCE_FROM_FSB_SHFT 21 +#define SH_PI_FSB_ERROR_INJECTION_DW0_UCE_FROM_FSB_MASK 0x0000000000200000 + +/* SH_PI_FSB_ERROR_INJECTION_DW1_CE_FROM_FSB */ +/* Description: Inject a Correctable Error in Doubleword 1 of SIC */ +/* Data Packet 0 */ +#define SH_PI_FSB_ERROR_INJECTION_DW1_CE_FROM_FSB_SHFT 22 +#define SH_PI_FSB_ERROR_INJECTION_DW1_CE_FROM_FSB_MASK 0x0000000000400000 + +/* SH_PI_FSB_ERROR_INJECTION_DW1_UCE_FROM_FSB */ +/* Description: Inject an Uncorrectable Error in Doubleword 1 of SIC */ +/* Data Packet 0 */ +#define SH_PI_FSB_ERROR_INJECTION_DW1_UCE_FROM_FSB_SHFT 23 +#define SH_PI_FSB_ERROR_INJECTION_DW1_UCE_FROM_FSB_MASK 0x0000000000800000 + +/* SH_PI_FSB_ERROR_INJECTION_DW2_CE_FROM_FSB */ +/* Description: Inject a Correctable Error in Doubleword 2 of SIC */ +/* Data Packet 0 */ +#define SH_PI_FSB_ERROR_INJECTION_DW2_CE_FROM_FSB_SHFT 24 +#define SH_PI_FSB_ERROR_INJECTION_DW2_CE_FROM_FSB_MASK 0x0000000001000000 + +/* SH_PI_FSB_ERROR_INJECTION_DW2_UCE_FROM_FSB */ +/* Description: Inject an Uncorrectable Error in Doubleword 2 of SIC */ +/* Data Packet 0 */ +#define SH_PI_FSB_ERROR_INJECTION_DW2_UCE_FROM_FSB_SHFT 25 +#define SH_PI_FSB_ERROR_INJECTION_DW2_UCE_FROM_FSB_MASK 0x0000000002000000 + +/* SH_PI_FSB_ERROR_INJECTION_DW3_CE_FROM_FSB */ +/* Description: Inject a Correctable Error in Doubleword 3 of SIC */ +/* Data Packet 0 */ +#define SH_PI_FSB_ERROR_INJECTION_DW3_CE_FROM_FSB_SHFT 26 +#define SH_PI_FSB_ERROR_INJECTION_DW3_CE_FROM_FSB_MASK 0x0000000004000000 + +/* SH_PI_FSB_ERROR_INJECTION_DW3_UCE_FROM_FSB */ +/* Description: Inject an Uncorrectable Error in Doubleword 3 of SIC */ +/* Data Packet 0 */ +#define SH_PI_FSB_ERROR_INJECTION_DW3_UCE_FROM_FSB_SHFT 27 +#define SH_PI_FSB_ERROR_INJECTION_DW3_UCE_FROM_FSB_MASK 0x0000000008000000 + +/* SH_PI_FSB_ERROR_INJECTION_IOQ_OVERRUN */ +/* Description: Inject an IOQ overrun error on the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_IOQ_OVERRUN_SHFT 32 +#define SH_PI_FSB_ERROR_INJECTION_IOQ_OVERRUN_MASK 0x0000000100000000 + +/* SH_PI_FSB_ERROR_INJECTION_LIVELOCK */ +/* Description: Inject a livelock error on the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_LIVELOCK_SHFT 33 +#define SH_PI_FSB_ERROR_INJECTION_LIVELOCK_MASK 0x0000000200000000 + +/* SH_PI_FSB_ERROR_INJECTION_BUS_HANG */ +/* Description: Inject a bus hang on the FSB */ +#define SH_PI_FSB_ERROR_INJECTION_BUS_HANG_SHFT 34 +#define SH_PI_FSB_ERROR_INJECTION_BUS_HANG_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_PI_MD2PI_REPLY_VC_CONFIG" */ +/*
MD-to-PI Reply Virtual Channel Configuration */ +/* ==================================================================== */ + +#define SH_PI_MD2PI_REPLY_VC_CONFIG 0x0000000120050d00 +#define SH_PI_MD2PI_REPLY_VC_CONFIG_MASK 0xc000000000003fff +#define SH_PI_MD2PI_REPLY_VC_CONFIG_INIT 0x000000000000088c + +/* SH_PI_MD2PI_REPLY_VC_CONFIG_HDR_DEPTH */ +/* Description: Depth of header Buffer */ +#define SH_PI_MD2PI_REPLY_VC_CONFIG_HDR_DEPTH_SHFT 0 +#define SH_PI_MD2PI_REPLY_VC_CONFIG_HDR_DEPTH_MASK 0x000000000000000f + +/* SH_PI_MD2PI_REPLY_VC_CONFIG_DATA_DEPTH */ +/* Description: Number of data buffers Available */ +#define SH_PI_MD2PI_REPLY_VC_CONFIG_DATA_DEPTH_SHFT 4 +#define SH_PI_MD2PI_REPLY_VC_CONFIG_DATA_DEPTH_MASK 0x00000000000000f0 + +/* SH_PI_MD2PI_REPLY_VC_CONFIG_MAX_CREDITS */ +/* Description: Maximum credits from sender */ +#define SH_PI_MD2PI_REPLY_VC_CONFIG_MAX_CREDITS_SHFT 8 +#define SH_PI_MD2PI_REPLY_VC_CONFIG_MAX_CREDITS_MASK 0x0000000000003f00 + +/* SH_PI_MD2PI_REPLY_VC_CONFIG_FORCE_CREDIT */ +/* Description: Send an extra credit to sender */ +#define SH_PI_MD2PI_REPLY_VC_CONFIG_FORCE_CREDIT_SHFT 62 +#define SH_PI_MD2PI_REPLY_VC_CONFIG_FORCE_CREDIT_MASK 0x4000000000000000 + +/* SH_PI_MD2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS */ +/* Description: Capture credit and status information */ +#define SH_PI_MD2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS_SHFT 63 +#define SH_PI_MD2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_MD2PI_REQUEST_VC_CONFIG" */ +/* MD-to-PI Request Virtual Channel Configuration */ +/* ==================================================================== */ + +#define SH_PI_MD2PI_REQUEST_VC_CONFIG 0x0000000120050d80 +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_MASK 0xc000000000003fff +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_INIT 0x000000000000088c + +/* SH_PI_MD2PI_REQUEST_VC_CONFIG_HDR_DEPTH */ +/* Description: Depth of header Buffer */ +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_HDR_DEPTH_SHFT 0 +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_HDR_DEPTH_MASK 0x000000000000000f + +/* SH_PI_MD2PI_REQUEST_VC_CONFIG_DATA_DEPTH */ +/* Description: Number of data buffers Available */ +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_DATA_DEPTH_SHFT 4 +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_DATA_DEPTH_MASK 0x00000000000000f0 + +/* SH_PI_MD2PI_REQUEST_VC_CONFIG_MAX_CREDITS */ +/* Description: Maximum credits from sender */ +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_MAX_CREDITS_SHFT 8 +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_MAX_CREDITS_MASK 0x0000000000003f00 + +/* SH_PI_MD2PI_REQUEST_VC_CONFIG_FORCE_CREDIT */ +/* Description: Send an extra credit to sender */ +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_FORCE_CREDIT_SHFT 62 +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_FORCE_CREDIT_MASK 0x4000000000000000 + +/* SH_PI_MD2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS */ +/* Description: Capture credit and status information */ +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS_SHFT 63 +#define SH_PI_MD2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_QUEUE_ERROR_INJECTION" */ +/* PI Queue Error Injection */ +/* ==================================================================== */ + +#define SH_PI_QUEUE_ERROR_INJECTION 0x0000000120050e00 +#define SH_PI_QUEUE_ERROR_INJECTION_MASK 0x00000000000000ff +#define SH_PI_QUEUE_ERROR_INJECTION_INIT 0x0000000000000000 + +/* 
SH_PI_QUEUE_ERROR_INJECTION_DAT_DFR_Q */ +#define SH_PI_QUEUE_ERROR_INJECTION_DAT_DFR_Q_SHFT 0 +#define SH_PI_QUEUE_ERROR_INJECTION_DAT_DFR_Q_MASK 0x0000000000000001 + +/* SH_PI_QUEUE_ERROR_INJECTION_DXB_WTL_CMND_Q */ +#define SH_PI_QUEUE_ERROR_INJECTION_DXB_WTL_CMND_Q_SHFT 1 +#define SH_PI_QUEUE_ERROR_INJECTION_DXB_WTL_CMND_Q_MASK 0x0000000000000002 + +/* SH_PI_QUEUE_ERROR_INJECTION_FSB_WTL_CMND_Q */ +#define SH_PI_QUEUE_ERROR_INJECTION_FSB_WTL_CMND_Q_SHFT 2 +#define SH_PI_QUEUE_ERROR_INJECTION_FSB_WTL_CMND_Q_MASK 0x0000000000000004 + +/* SH_PI_QUEUE_ERROR_INJECTION_MDPI_RPY_BFR */ +#define SH_PI_QUEUE_ERROR_INJECTION_MDPI_RPY_BFR_SHFT 3 +#define SH_PI_QUEUE_ERROR_INJECTION_MDPI_RPY_BFR_MASK 0x0000000000000008 + +/* SH_PI_QUEUE_ERROR_INJECTION_PTC_INTR */ +#define SH_PI_QUEUE_ERROR_INJECTION_PTC_INTR_SHFT 4 +#define SH_PI_QUEUE_ERROR_INJECTION_PTC_INTR_MASK 0x0000000000000010 + +/* SH_PI_QUEUE_ERROR_INJECTION_RXL_KILL_Q */ +#define SH_PI_QUEUE_ERROR_INJECTION_RXL_KILL_Q_SHFT 5 +#define SH_PI_QUEUE_ERROR_INJECTION_RXL_KILL_Q_MASK 0x0000000000000020 + +/* SH_PI_QUEUE_ERROR_INJECTION_RXL_RDY_Q */ +#define SH_PI_QUEUE_ERROR_INJECTION_RXL_RDY_Q_SHFT 6 +#define SH_PI_QUEUE_ERROR_INJECTION_RXL_RDY_Q_MASK 0x0000000000000040 + +/* SH_PI_QUEUE_ERROR_INJECTION_XNPI_RPY_BFR */ +#define SH_PI_QUEUE_ERROR_INJECTION_XNPI_RPY_BFR_SHFT 7 +#define SH_PI_QUEUE_ERROR_INJECTION_XNPI_RPY_BFR_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_PI_TEST_POINT_COMPARE" */ +/* PI Test Point Compare */ +/* ==================================================================== */ + +#define SH_PI_TEST_POINT_COMPARE 0x0000000120050e80 +#define SH_PI_TEST_POINT_COMPARE_MASK 0xffffffffffffffff +#define SH_PI_TEST_POINT_COMPARE_INIT 0xffffffff00000000 + +/* SH_PI_TEST_POINT_COMPARE_COMPARE_MASK */ +/* Description: Mask to select test point data for trigger generation */ +#define SH_PI_TEST_POINT_COMPARE_COMPARE_MASK_SHFT 0 +#define SH_PI_TEST_POINT_COMPARE_COMPARE_MASK_MASK 0x00000000ffffffff + +/* SH_PI_TEST_POINT_COMPARE_COMPARE_PATTERN */ +/* Description: Pattern of test point data to cause trigger */ +#define SH_PI_TEST_POINT_COMPARE_COMPARE_PATTERN_SHFT 32 +#define SH_PI_TEST_POINT_COMPARE_COMPARE_PATTERN_MASK 0xffffffff00000000 + +/* ==================================================================== */ +/* Register "SH_PI_TEST_POINT_SELECT" */ +/* PI Test Point Select */ +/* ==================================================================== */ + +#define SH_PI_TEST_POINT_SELECT 0x0000000120050f00 +#define SH_PI_TEST_POINT_SELECT_MASK 0xf777777777777777 +#define SH_PI_TEST_POINT_SELECT_INIT 0x0000000000000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL */ +/* Description: Nibble 0 data is from Chiplet X */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 0 +#define SH_PI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble X is routed to Nibble 0 */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_PI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL */ +/* Description: Nibble 1 data is from Chiplet X */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 8 +#define SH_PI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble X is routed to Nibble 1 */ +#define
SH_PI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_PI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL */ +/* Description: Nibble 2 data is from Chiplet X */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 16 +#define SH_PI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble X is routed to Nibble 2 */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_PI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL */ +/* Description: Nibble 3 data is from Chiplet X */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 24 +#define SH_PI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble X is routed to Nibble 3 */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_PI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL */ +/* Description: Nibble 4 data is from Chiplet X */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 32 +#define SH_PI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble X is routed to Nibble 4 */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_PI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL */ +/* Description: Nibble 5 data is from Chiplet X */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 40 +#define SH_PI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble X is routed to Nibble 5 */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_PI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL */ +/* Description: Nibble 6 data is from Chiplet X */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 48 +#define SH_PI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble X is routed to Nibble 6 */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_PI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL */ +/* Description: Nibble 7 data is from Chiplet X */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 56 +#define SH_PI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_PI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble X is routed to Nibble 7 */ +#define SH_PI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_PI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* SH_PI_TEST_POINT_SELECT_TRIGGER_ENABLE */ +/* Description: Enable trigger on bit 32 of Analyzer data */ +#define SH_PI_TEST_POINT_SELECT_TRIGGER_ENABLE_SHFT 63 +#define SH_PI_TEST_POINT_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_TEST_POINT_TRIGGER_SELECT" */ +/* PI Test Point Trigger Select */ +/* ==================================================================== */ + +#define SH_PI_TEST_POINT_TRIGGER_SELECT 
0x0000000120050f80 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_MASK 0x7777777777777777 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_INIT 0x0000000000000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL */ +/* Description: Nibble 0 Chiplet select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_SHFT 0 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_MASK 0x0000000000000007 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_SHFT 4 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL */ +/* Description: Nibble 1 Chiplet select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_SHFT 8 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000700 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_SHFT 12 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL */ +/* Description: Nibble 2 Chiplet select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_SHFT 16 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_MASK 0x0000000000070000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_SHFT 20 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL */ +/* Description: Nibble 3 Chiplet select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_SHFT 24 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_MASK 0x0000000007000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_SHFT 28 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL */ +/* Description: Nibble 4 Chiplet select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_SHFT 32 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_MASK 0x0000000700000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_SHFT 36 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL */ +/* Description: Nibble 5 Chiplet select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_SHFT 40 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_MASK 0x0000070000000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_SHFT 44 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL */ +/* Description: Nibble 6 Chiplet select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_SHFT 48 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_MASK 0x0007000000000000 + +/* 
SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_SHFT 52 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL */ +/* Description: Nibble 7 Chiplet select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_SHFT 56 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_MASK 0x0700000000000000 + +/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_SHFT 60 +#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_XN2PI_REPLY_VC_CONFIG" */ +/* XN-to-PI Reply Virtual Channel Configuration */ +/* ==================================================================== */ + +#define SH_PI_XN2PI_REPLY_VC_CONFIG 0x0000000120051000 +#define SH_PI_XN2PI_REPLY_VC_CONFIG_MASK 0xc000000000003fff +#define SH_PI_XN2PI_REPLY_VC_CONFIG_INIT 0x000000000000068c + +/* SH_PI_XN2PI_REPLY_VC_CONFIG_HDR_DEPTH */ +/* Description: Depth of header Buffer */ +#define SH_PI_XN2PI_REPLY_VC_CONFIG_HDR_DEPTH_SHFT 0 +#define SH_PI_XN2PI_REPLY_VC_CONFIG_HDR_DEPTH_MASK 0x000000000000000f + +/* SH_PI_XN2PI_REPLY_VC_CONFIG_DATA_DEPTH */ +/* Description: Number of data buffers Available */ +#define SH_PI_XN2PI_REPLY_VC_CONFIG_DATA_DEPTH_SHFT 4 +#define SH_PI_XN2PI_REPLY_VC_CONFIG_DATA_DEPTH_MASK 0x00000000000000f0 + +/* SH_PI_XN2PI_REPLY_VC_CONFIG_MAX_CREDITS */ +/* Description: Maximum credits from sender */ +#define SH_PI_XN2PI_REPLY_VC_CONFIG_MAX_CREDITS_SHFT 8 +#define SH_PI_XN2PI_REPLY_VC_CONFIG_MAX_CREDITS_MASK 0x0000000000003f00 + +/* SH_PI_XN2PI_REPLY_VC_CONFIG_FORCE_CREDIT */ +/* Description: Send an extra credit to sender */ +#define SH_PI_XN2PI_REPLY_VC_CONFIG_FORCE_CREDIT_SHFT 62 +#define SH_PI_XN2PI_REPLY_VC_CONFIG_FORCE_CREDIT_MASK 0x4000000000000000 + +/* SH_PI_XN2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS */ +/* Description: Capture credit and status information */ +#define SH_PI_XN2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS_SHFT 63 +#define SH_PI_XN2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_XN2PI_REQUEST_VC_CONFIG" */ +/* XN-to-PI Request Virtual Channel Configuration */ +/* ==================================================================== */ + +#define SH_PI_XN2PI_REQUEST_VC_CONFIG 0x0000000120051080 +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_MASK 0xc000000000003fff +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_INIT 0x000000000000068c + +/* SH_PI_XN2PI_REQUEST_VC_CONFIG_HDR_DEPTH */ +/* Description: Depth of header Buffer */ +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_HDR_DEPTH_SHFT 0 +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_HDR_DEPTH_MASK 0x000000000000000f + +/* SH_PI_XN2PI_REQUEST_VC_CONFIG_DATA_DEPTH */ +/* Description: Number of data buffers Available */ +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_DATA_DEPTH_SHFT 4 +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_DATA_DEPTH_MASK 0x00000000000000f0 + +/* SH_PI_XN2PI_REQUEST_VC_CONFIG_MAX_CREDITS */ +/* Description: Maximum credits from sender */ +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_MAX_CREDITS_SHFT 8 +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_MAX_CREDITS_MASK 0x0000000000003f00 + +/* SH_PI_XN2PI_REQUEST_VC_CONFIG_FORCE_CREDIT */ 
+/* Description: Send an extra credit to sender */ +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_FORCE_CREDIT_SHFT 62 +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_FORCE_CREDIT_MASK 0x4000000000000000 + +/* SH_PI_XN2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS */ +/* Description: Capture credit and status information */ +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS_SHFT 63 +#define SH_PI_XN2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_AEC_STATUS" */ +/* PI Adaptive Error Correction Status */ +/* ==================================================================== */ + +#define SH_PI_AEC_STATUS 0x0000000120060000 +#define SH_PI_AEC_STATUS_MASK 0x0000000000000007 +#define SH_PI_AEC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_AEC_STATUS_STATE */ +/* Description: AEC State */ +#define SH_PI_AEC_STATUS_STATE_SHFT 0 +#define SH_PI_AEC_STATUS_STATE_MASK 0x0000000000000007 + +/* ==================================================================== */ +/* Register "SH_PI_AFI_FIRST_ERROR" */ +/* PI AFI First Error */ +/* ==================================================================== */ + +#define SH_PI_AFI_FIRST_ERROR 0x0000000120060080 +#define SH_PI_AFI_FIRST_ERROR_MASK 0x00000007ffe00180 +#define SH_PI_AFI_FIRST_ERROR_INIT 0x0000000000000000 + +/* SH_PI_AFI_FIRST_ERROR_FSB_SHUB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_AFI_FIRST_ERROR_FSB_SHUB_UCE_SHFT 7 +#define SH_PI_AFI_FIRST_ERROR_FSB_SHUB_UCE_MASK 0x0000000000000080 + +/* SH_PI_AFI_FIRST_ERROR_FSB_SHUB_CE */ +/* Description: A correctable ECC error was detected */ +#define SH_PI_AFI_FIRST_ERROR_FSB_SHUB_CE_SHFT 8 +#define SH_PI_AFI_FIRST_ERROR_FSB_SHUB_CE_MASK 0x0000000000000100 + +/* SH_PI_AFI_FIRST_ERROR_HUNG_BUS */ +/* Description: FSB is hung */ +#define SH_PI_AFI_FIRST_ERROR_HUNG_BUS_SHFT 21 +#define SH_PI_AFI_FIRST_ERROR_HUNG_BUS_MASK 0x0000000000200000 + +/* SH_PI_AFI_FIRST_ERROR_RSP_PARITY */ +/* Description: Parity error detecte during response phase */ +#define SH_PI_AFI_FIRST_ERROR_RSP_PARITY_SHFT 22 +#define SH_PI_AFI_FIRST_ERROR_RSP_PARITY_MASK 0x0000000000400000 + +/* SH_PI_AFI_FIRST_ERROR_IOQ_OVERRUN */ +/* Description: Over run error detected on IOQ */ +#define SH_PI_AFI_FIRST_ERROR_IOQ_OVERRUN_SHFT 23 +#define SH_PI_AFI_FIRST_ERROR_IOQ_OVERRUN_MASK 0x0000000000800000 + +/* SH_PI_AFI_FIRST_ERROR_REQ_FORMAT */ +/* Description: FSB request format not supported */ +#define SH_PI_AFI_FIRST_ERROR_REQ_FORMAT_SHFT 24 +#define SH_PI_AFI_FIRST_ERROR_REQ_FORMAT_MASK 0x0000000001000000 + +/* SH_PI_AFI_FIRST_ERROR_ADDR_ACCESS */ +/* Description: Access to Address is not supported */ +#define SH_PI_AFI_FIRST_ERROR_ADDR_ACCESS_SHFT 25 +#define SH_PI_AFI_FIRST_ERROR_ADDR_ACCESS_MASK 0x0000000002000000 + +/* SH_PI_AFI_FIRST_ERROR_REQ_PARITY */ +/* Description: Parity error detected during request phase */ +#define SH_PI_AFI_FIRST_ERROR_REQ_PARITY_SHFT 26 +#define SH_PI_AFI_FIRST_ERROR_REQ_PARITY_MASK 0x0000000004000000 + +/* SH_PI_AFI_FIRST_ERROR_ADDR_PARITY */ +/* Description: Parity error detected on address */ +#define SH_PI_AFI_FIRST_ERROR_ADDR_PARITY_SHFT 27 +#define SH_PI_AFI_FIRST_ERROR_ADDR_PARITY_MASK 0x0000000008000000 + +/* SH_PI_AFI_FIRST_ERROR_SHUB_FSB_DQE */ +/* Description: SHUB_FSB_DQE */ +#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_DQE_SHFT 28 +#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_DQE_MASK 0x0000000010000000 + +/* SH_PI_AFI_FIRST_ERROR_SHUB_FSB_UCE */ +/* 
Description: An un-correctable ECC error was detected */ +#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_UCE_SHFT 29 +#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_UCE_MASK 0x0000000020000000 + +/* SH_PI_AFI_FIRST_ERROR_SHUB_FSB_CE */ +/* Description: An correctable ECC error was detected */ +#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_CE_SHFT 30 +#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_CE_MASK 0x0000000040000000 + +/* SH_PI_AFI_FIRST_ERROR_LIVELOCK */ +/* Description: AFI livelock error was detected */ +#define SH_PI_AFI_FIRST_ERROR_LIVELOCK_SHFT 31 +#define SH_PI_AFI_FIRST_ERROR_LIVELOCK_MASK 0x0000000080000000 + +/* SH_PI_AFI_FIRST_ERROR_BAD_SNOOP */ +/* Description: AFI bad snoop error was detected */ +#define SH_PI_AFI_FIRST_ERROR_BAD_SNOOP_SHFT 32 +#define SH_PI_AFI_FIRST_ERROR_BAD_SNOOP_MASK 0x0000000100000000 + +/* SH_PI_AFI_FIRST_ERROR_FSB_TBL_MISS */ +/* Description: AFI FSB request table miss error was detected */ +#define SH_PI_AFI_FIRST_ERROR_FSB_TBL_MISS_SHFT 33 +#define SH_PI_AFI_FIRST_ERROR_FSB_TBL_MISS_MASK 0x0000000200000000 + +/* SH_PI_AFI_FIRST_ERROR_MSG_LEN */ +/* Description: Runt or Obese message received from SIC */ +#define SH_PI_AFI_FIRST_ERROR_MSG_LEN_SHFT 34 +#define SH_PI_AFI_FIRST_ERROR_MSG_LEN_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_PI_CAM_ADDRESS_READ_DATA" */ +/* CRB CAM MMR Address Read Data */ +/* ==================================================================== */ + +#define SH_PI_CAM_ADDRESS_READ_DATA 0x0000000120060100 +#define SH_PI_CAM_ADDRESS_READ_DATA_MASK 0x8000ffffffffffff +#define SH_PI_CAM_ADDRESS_READ_DATA_INIT 0x0000000000000000 + +/* SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR */ +/* Description: CRB CAM Address Read Data. */ +#define SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_SHFT 0 +#define SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_MASK 0x0000ffffffffffff + +/* SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_VAL */ +/* Description: CRB CAM Address Read Data Valid. */ +#define SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_VAL_SHFT 63 +#define SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_VAL_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CAM_LPRA_READ_DATA" */ +/* CRB CAM MMR LPRA Read Data */ +/* ==================================================================== */ + +#define SH_PI_CAM_LPRA_READ_DATA 0x0000000120060180 +#define SH_PI_CAM_LPRA_READ_DATA_MASK 0xffffffffffffffff +#define SH_PI_CAM_LPRA_READ_DATA_INIT 0x0000000000000000 + +/* SH_PI_CAM_LPRA_READ_DATA_CAM_LPRA */ +/* Description: CRB CAM LPRA read data. */ +#define SH_PI_CAM_LPRA_READ_DATA_CAM_LPRA_SHFT 0 +#define SH_PI_CAM_LPRA_READ_DATA_CAM_LPRA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_CAM_STATE_READ_DATA" */ +/* CRB CAM MMR State Read Data */ +/* ==================================================================== */ + +#define SH_PI_CAM_STATE_READ_DATA 0x0000000120060200 +#define SH_PI_CAM_STATE_READ_DATA_MASK 0x8003ffff0000003f +#define SH_PI_CAM_STATE_READ_DATA_INIT 0x0000000000000000 + +/* SH_PI_CAM_STATE_READ_DATA_CAM_STATE */ +/* Description: CRB CAM State read data. */ +#define SH_PI_CAM_STATE_READ_DATA_CAM_STATE_SHFT 0 +#define SH_PI_CAM_STATE_READ_DATA_CAM_STATE_MASK 0x000000000000000f + +/* SH_PI_CAM_STATE_READ_DATA_CAM_TO */ +/* Description: CRB CAM Time-out Status. 
*/ +#define SH_PI_CAM_STATE_READ_DATA_CAM_TO_SHFT 4 +#define SH_PI_CAM_STATE_READ_DATA_CAM_TO_MASK 0x0000000000000010 + +/* SH_PI_CAM_STATE_READ_DATA_CAM_STATE_RD_PEND */ +/* Description: CRB CAM State Read Pending. */ +#define SH_PI_CAM_STATE_READ_DATA_CAM_STATE_RD_PEND_SHFT 5 +#define SH_PI_CAM_STATE_READ_DATA_CAM_STATE_RD_PEND_MASK 0x0000000000000020 + +/* SH_PI_CAM_STATE_READ_DATA_CAM_LPRA */ +/* Description: CRB LPRA Overflow Data. */ +#define SH_PI_CAM_STATE_READ_DATA_CAM_LPRA_SHFT 32 +#define SH_PI_CAM_STATE_READ_DATA_CAM_LPRA_MASK 0x0003ffff00000000 + +/* SH_PI_CAM_STATE_READ_DATA_CAM_RD_DATA_VAL */ +/* Description: CRB CAM MMR read data is valid. */ +#define SH_PI_CAM_STATE_READ_DATA_CAM_RD_DATA_VAL_SHFT 63 +#define SH_PI_CAM_STATE_READ_DATA_CAM_RD_DATA_VAL_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CORRECTED_DETAIL_1" */ +/* PI Corrected Error Detail */ +/* ==================================================================== */ + +#define SH_PI_CORRECTED_DETAIL_1 0x0000000120060280 +#define SH_PI_CORRECTED_DETAIL_1_MASK 0xffffffffffffffff +#define SH_PI_CORRECTED_DETAIL_1_INIT 0x0000000000000000 + +/* SH_PI_CORRECTED_DETAIL_1_ADDRESS */ +/* Description: Address of Message that logged Correctable Error */ +#define SH_PI_CORRECTED_DETAIL_1_ADDRESS_SHFT 0 +#define SH_PI_CORRECTED_DETAIL_1_ADDRESS_MASK 0x0000ffffffffffff + +/* SH_PI_CORRECTED_DETAIL_1_SYNDROME */ +/* Description: Syndrome for double word data with Correctable Erro */ +#define SH_PI_CORRECTED_DETAIL_1_SYNDROME_SHFT 48 +#define SH_PI_CORRECTED_DETAIL_1_SYNDROME_MASK 0x00ff000000000000 + +/* SH_PI_CORRECTED_DETAIL_1_DEP */ +/* Description: DEP code for Double word in error */ +#define SH_PI_CORRECTED_DETAIL_1_DEP_SHFT 56 +#define SH_PI_CORRECTED_DETAIL_1_DEP_MASK 0xff00000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_CORRECTED_DETAIL_2" */ +/* PI Corrected Error Detail 2 */ +/* ==================================================================== */ + +#define SH_PI_CORRECTED_DETAIL_2 0x0000000120060300 +#define SH_PI_CORRECTED_DETAIL_2_MASK 0xffffffffffffffff +#define SH_PI_CORRECTED_DETAIL_2_INIT 0x0000000000000000 + +/* SH_PI_CORRECTED_DETAIL_2_DATA */ +/* Description: Double word data in error */ +#define SH_PI_CORRECTED_DETAIL_2_DATA_SHFT 0 +#define SH_PI_CORRECTED_DETAIL_2_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_CORRECTED_DETAIL_3" */ +/* PI Corrected Error Detail 3 */ +/* ==================================================================== */ + +#define SH_PI_CORRECTED_DETAIL_3 0x0000000120060380 +#define SH_PI_CORRECTED_DETAIL_3_MASK 0xffffffffffffffff +#define SH_PI_CORRECTED_DETAIL_3_INIT 0x0000000000000000 + +/* SH_PI_CORRECTED_DETAIL_3_ADDRESS */ +/* Description: Address of Message that logged Correctable Error */ +#define SH_PI_CORRECTED_DETAIL_3_ADDRESS_SHFT 0 +#define SH_PI_CORRECTED_DETAIL_3_ADDRESS_MASK 0x0000ffffffffffff + +/* SH_PI_CORRECTED_DETAIL_3_SYNDROME */ +/* Description: Syndrome for double word data with Correctable Erro */ +#define SH_PI_CORRECTED_DETAIL_3_SYNDROME_SHFT 48 +#define SH_PI_CORRECTED_DETAIL_3_SYNDROME_MASK 0x00ff000000000000 + +/* SH_PI_CORRECTED_DETAIL_3_DEP */ +/* Description: DEP code for Double word in error */ +#define SH_PI_CORRECTED_DETAIL_3_DEP_SHFT 56 +#define SH_PI_CORRECTED_DETAIL_3_DEP_MASK 0xff00000000000000 + +/* 
==================================================================== */ +/* Register "SH_PI_CORRECTED_DETAIL_4" */ +/* PI Corrected Error Detail 4 */ +/* ==================================================================== */ + +#define SH_PI_CORRECTED_DETAIL_4 0x0000000120060400 +#define SH_PI_CORRECTED_DETAIL_4_MASK 0xffffffffffffffff +#define SH_PI_CORRECTED_DETAIL_4_INIT 0x0000000000000000 + +/* SH_PI_CORRECTED_DETAIL_4_DATA */ +/* Description: Double word data in error */ +#define SH_PI_CORRECTED_DETAIL_4_DATA_SHFT 0 +#define SH_PI_CORRECTED_DETAIL_4_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_FIRST_ERROR" */ +/* PI CRBP First Error */ +/* ==================================================================== */ + +#define SH_PI_CRBP_FIRST_ERROR 0x0000000120060480 +#define SH_PI_CRBP_FIRST_ERROR_MASK 0x00000000001fffff +#define SH_PI_CRBP_FIRST_ERROR_INIT 0x0000000000000000 + +/* SH_PI_CRBP_FIRST_ERROR_FSB_PROTO_ERR */ +/* Description: CRB's FSB pipe detected protocol table miss */ +#define SH_PI_CRBP_FIRST_ERROR_FSB_PROTO_ERR_SHFT 0 +#define SH_PI_CRBP_FIRST_ERROR_FSB_PROTO_ERR_MASK 0x0000000000000001 + +/* SH_PI_CRBP_FIRST_ERROR_GFX_RP_ERR */ +/* Description: CRB's XB pipe received a GFX error reply */ +#define SH_PI_CRBP_FIRST_ERROR_GFX_RP_ERR_SHFT 1 +#define SH_PI_CRBP_FIRST_ERROR_GFX_RP_ERR_MASK 0x0000000000000002 + +/* SH_PI_CRBP_FIRST_ERROR_XB_PROTO_ERR */ +/* Description: CRB's XB pipe detected protocol table miss */ +#define SH_PI_CRBP_FIRST_ERROR_XB_PROTO_ERR_SHFT 2 +#define SH_PI_CRBP_FIRST_ERROR_XB_PROTO_ERR_MASK 0x0000000000000004 + +/* SH_PI_CRBP_FIRST_ERROR_MEM_RP_ERR */ +/* Description: CRB's XB pipe received a memory error reply message */ +#define SH_PI_CRBP_FIRST_ERROR_MEM_RP_ERR_SHFT 3 +#define SH_PI_CRBP_FIRST_ERROR_MEM_RP_ERR_MASK 0x0000000000000008 + +/* SH_PI_CRBP_FIRST_ERROR_PIO_RP_ERR */ +/* Description: CRB's XB pipe received a PIO error reply message */ +#define SH_PI_CRBP_FIRST_ERROR_PIO_RP_ERR_SHFT 4 +#define SH_PI_CRBP_FIRST_ERROR_PIO_RP_ERR_MASK 0x0000000000000010 + +/* SH_PI_CRBP_FIRST_ERROR_MEM_TO_ERR */ +/* Description: CRB's XB pipe detected a CRB time-out */ +#define SH_PI_CRBP_FIRST_ERROR_MEM_TO_ERR_SHFT 5 +#define SH_PI_CRBP_FIRST_ERROR_MEM_TO_ERR_MASK 0x0000000000000020 + +/* SH_PI_CRBP_FIRST_ERROR_PIO_TO_ERR */ +/* Description: CRB's XB pipe detected a PIO time-out */ +#define SH_PI_CRBP_FIRST_ERROR_PIO_TO_ERR_SHFT 6 +#define SH_PI_CRBP_FIRST_ERROR_PIO_TO_ERR_MASK 0x0000000000000040 + +/* SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_UCE_SHFT 7 +#define SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_UCE_MASK 0x0000000000000080 + +/* SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_CE */ +/* Description: A correctable ECC error was detected */ +#define SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_CE_SHFT 8 +#define SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_CE_MASK 0x0000000000000100 + +/* SH_PI_CRBP_FIRST_ERROR_MSG_COLOR_ERR */ +/* Description: Message color was wrong */ +#define SH_PI_CRBP_FIRST_ERROR_MSG_COLOR_ERR_SHFT 9 +#define SH_PI_CRBP_FIRST_ERROR_MSG_COLOR_ERR_MASK 0x0000000000000200 + +/* SH_PI_CRBP_FIRST_ERROR_MD_RQ_Q_OFLOW */ +/* Description: MD Request input buffer over flow error */ +#define SH_PI_CRBP_FIRST_ERROR_MD_RQ_Q_OFLOW_SHFT 10 +#define SH_PI_CRBP_FIRST_ERROR_MD_RQ_Q_OFLOW_MASK 0x0000000000000400 + +/* SH_PI_CRBP_FIRST_ERROR_MD_RP_Q_OFLOW */ +/* Description: MD Reply input buffer over flow error */ 
+#define SH_PI_CRBP_FIRST_ERROR_MD_RP_Q_OFLOW_SHFT 11 +#define SH_PI_CRBP_FIRST_ERROR_MD_RP_Q_OFLOW_MASK 0x0000000000000800 + +/* SH_PI_CRBP_FIRST_ERROR_XN_RQ_Q_OFLOW */ +/* Description: XN Request input buffer over flow error */ +#define SH_PI_CRBP_FIRST_ERROR_XN_RQ_Q_OFLOW_SHFT 12 +#define SH_PI_CRBP_FIRST_ERROR_XN_RQ_Q_OFLOW_MASK 0x0000000000001000 + +/* SH_PI_CRBP_FIRST_ERROR_XN_RP_Q_OFLOW */ +/* Description: XN Reply input buffer over flow error */ +#define SH_PI_CRBP_FIRST_ERROR_XN_RP_Q_OFLOW_SHFT 13 +#define SH_PI_CRBP_FIRST_ERROR_XN_RP_Q_OFLOW_MASK 0x0000000000002000 + +/* SH_PI_CRBP_FIRST_ERROR_NACK_OFLOW */ +/* Description: NACK over flow error */ +#define SH_PI_CRBP_FIRST_ERROR_NACK_OFLOW_SHFT 14 +#define SH_PI_CRBP_FIRST_ERROR_NACK_OFLOW_MASK 0x0000000000004000 + +/* SH_PI_CRBP_FIRST_ERROR_GFX_INT_0 */ +/* Description: GFX transfer interrupt for CPU 0 */ +#define SH_PI_CRBP_FIRST_ERROR_GFX_INT_0_SHFT 15 +#define SH_PI_CRBP_FIRST_ERROR_GFX_INT_0_MASK 0x0000000000008000 + +/* SH_PI_CRBP_FIRST_ERROR_GFX_INT_1 */ +/* Description: GFX transfer interrupt for CPU 1 */ +#define SH_PI_CRBP_FIRST_ERROR_GFX_INT_1_SHFT 16 +#define SH_PI_CRBP_FIRST_ERROR_GFX_INT_1_MASK 0x0000000000010000 + +/* SH_PI_CRBP_FIRST_ERROR_MD_RQ_CRD_OFLOW */ +/* Description: MD Request Credit Overflow Error */ +#define SH_PI_CRBP_FIRST_ERROR_MD_RQ_CRD_OFLOW_SHFT 17 +#define SH_PI_CRBP_FIRST_ERROR_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000 + +/* SH_PI_CRBP_FIRST_ERROR_MD_RP_CRD_OFLOW */ +/* Description: MD Reply Credit Overflow Error */ +#define SH_PI_CRBP_FIRST_ERROR_MD_RP_CRD_OFLOW_SHFT 18 +#define SH_PI_CRBP_FIRST_ERROR_MD_RP_CRD_OFLOW_MASK 0x0000000000040000 + +/* SH_PI_CRBP_FIRST_ERROR_XN_RQ_CRD_OFLOW */ +/* Description: XN Request Credit Overflow Error */ +#define SH_PI_CRBP_FIRST_ERROR_XN_RQ_CRD_OFLOW_SHFT 19 +#define SH_PI_CRBP_FIRST_ERROR_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000 + +/* SH_PI_CRBP_FIRST_ERROR_XN_RP_CRD_OFLOW */ +/* Description: XN Reply Credit Overflow Error */ +#define SH_PI_CRBP_FIRST_ERROR_XN_RP_CRD_OFLOW_SHFT 20 +#define SH_PI_CRBP_FIRST_ERROR_XN_RP_CRD_OFLOW_MASK 0x0000000000100000 + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_DETAIL_1" */ +/* PI Error Detail 1 */ +/* ==================================================================== */ + +#define SH_PI_ERROR_DETAIL_1 0x0000000120060500 +#define SH_PI_ERROR_DETAIL_1_MASK 0xffffffffffffffff +#define SH_PI_ERROR_DETAIL_1_INIT 0x0000000000000000 + +/* SH_PI_ERROR_DETAIL_1_STATUS */ +/* Description: Error Detail 1 */ +#define SH_PI_ERROR_DETAIL_1_STATUS_SHFT 0 +#define SH_PI_ERROR_DETAIL_1_STATUS_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_DETAIL_2" */ +/* PI Error Detail 2 */ +/* ==================================================================== */ + +#define SH_PI_ERROR_DETAIL_2 0x0000000120060580 +#define SH_PI_ERROR_DETAIL_2_MASK 0xffffffffffffffff +#define SH_PI_ERROR_DETAIL_2_INIT 0x0000000000000000 + +/* SH_PI_ERROR_DETAIL_2_STATUS */ +/* Description: Error Status */ +#define SH_PI_ERROR_DETAIL_2_STATUS_SHFT 0 +#define SH_PI_ERROR_DETAIL_2_STATUS_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_OVERFLOW" */ +/* PI Error Overflow */ +/* ==================================================================== */ + +#define SH_PI_ERROR_OVERFLOW 0x0000000120060600 +#define SH_PI_ERROR_OVERFLOW_MASK 0x00000007ffffffff 
+#define SH_PI_ERROR_OVERFLOW_INIT 0x0000000000000000 + +/* SH_PI_ERROR_OVERFLOW_FSB_PROTO_ERR */ +/* Description: CRB's FSB pipe detected protocol table miss */ +#define SH_PI_ERROR_OVERFLOW_FSB_PROTO_ERR_SHFT 0 +#define SH_PI_ERROR_OVERFLOW_FSB_PROTO_ERR_MASK 0x0000000000000001 + +/* SH_PI_ERROR_OVERFLOW_GFX_RP_ERR */ +/* Description: CRB's XB pipe received another GFX reply error mess */ +#define SH_PI_ERROR_OVERFLOW_GFX_RP_ERR_SHFT 1 +#define SH_PI_ERROR_OVERFLOW_GFX_RP_ERR_MASK 0x0000000000000002 + +/* SH_PI_ERROR_OVERFLOW_XB_PROTO_ERR */ +/* Description: CRB's XB pipe detected another protocol table miss */ +#define SH_PI_ERROR_OVERFLOW_XB_PROTO_ERR_SHFT 2 +#define SH_PI_ERROR_OVERFLOW_XB_PROTO_ERR_MASK 0x0000000000000004 + +/* SH_PI_ERROR_OVERFLOW_MEM_RP_ERR */ +/* Description: CRB's XB pipe received another memory reply error m */ +#define SH_PI_ERROR_OVERFLOW_MEM_RP_ERR_SHFT 3 +#define SH_PI_ERROR_OVERFLOW_MEM_RP_ERR_MASK 0x0000000000000008 + +/* SH_PI_ERROR_OVERFLOW_PIO_RP_ERR */ +/* Description: CRB's XB pipe received another PIO reply error mess */ +#define SH_PI_ERROR_OVERFLOW_PIO_RP_ERR_SHFT 4 +#define SH_PI_ERROR_OVERFLOW_PIO_RP_ERR_MASK 0x0000000000000010 + +/* SH_PI_ERROR_OVERFLOW_MEM_TO_ERR */ +/* Description: CRB's XB pipe detected a CRB time-out */ +#define SH_PI_ERROR_OVERFLOW_MEM_TO_ERR_SHFT 5 +#define SH_PI_ERROR_OVERFLOW_MEM_TO_ERR_MASK 0x0000000000000020 + +/* SH_PI_ERROR_OVERFLOW_PIO_TO_ERR */ +/* Description: CRB's XB pipe detected a PIO time-out */ +#define SH_PI_ERROR_OVERFLOW_PIO_TO_ERR_SHFT 6 +#define SH_PI_ERROR_OVERFLOW_PIO_TO_ERR_MASK 0x0000000000000040 + +/* SH_PI_ERROR_OVERFLOW_FSB_SHUB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_ERROR_OVERFLOW_FSB_SHUB_UCE_SHFT 7 +#define SH_PI_ERROR_OVERFLOW_FSB_SHUB_UCE_MASK 0x0000000000000080 + +/* SH_PI_ERROR_OVERFLOW_FSB_SHUB_CE */ +/* Description: An correctable ECC error was detected */ +#define SH_PI_ERROR_OVERFLOW_FSB_SHUB_CE_SHFT 8 +#define SH_PI_ERROR_OVERFLOW_FSB_SHUB_CE_MASK 0x0000000000000100 + +/* SH_PI_ERROR_OVERFLOW_MSG_COLOR_ERR */ +/* Description: Message color was not correct */ +#define SH_PI_ERROR_OVERFLOW_MSG_COLOR_ERR_SHFT 9 +#define SH_PI_ERROR_OVERFLOW_MSG_COLOR_ERR_MASK 0x0000000000000200 + +/* SH_PI_ERROR_OVERFLOW_MD_RQ_Q_OFLOW */ +/* Description: MD Request input buffer over flow error */ +#define SH_PI_ERROR_OVERFLOW_MD_RQ_Q_OFLOW_SHFT 10 +#define SH_PI_ERROR_OVERFLOW_MD_RQ_Q_OFLOW_MASK 0x0000000000000400 + +/* SH_PI_ERROR_OVERFLOW_MD_RP_Q_OFLOW */ +/* Description: MD Reply input buffer over flow error */ +#define SH_PI_ERROR_OVERFLOW_MD_RP_Q_OFLOW_SHFT 11 +#define SH_PI_ERROR_OVERFLOW_MD_RP_Q_OFLOW_MASK 0x0000000000000800 + +/* SH_PI_ERROR_OVERFLOW_XN_RQ_Q_OFLOW */ +/* Description: XN Request input buffer over flow error */ +#define SH_PI_ERROR_OVERFLOW_XN_RQ_Q_OFLOW_SHFT 12 +#define SH_PI_ERROR_OVERFLOW_XN_RQ_Q_OFLOW_MASK 0x0000000000001000 + +/* SH_PI_ERROR_OVERFLOW_XN_RP_Q_OFLOW */ +/* Description: XN Reply input buffer over flow error */ +#define SH_PI_ERROR_OVERFLOW_XN_RP_Q_OFLOW_SHFT 13 +#define SH_PI_ERROR_OVERFLOW_XN_RP_Q_OFLOW_MASK 0x0000000000002000 + +/* SH_PI_ERROR_OVERFLOW_NACK_OFLOW */ +/* Description: NACK over flow error */ +#define SH_PI_ERROR_OVERFLOW_NACK_OFLOW_SHFT 14 +#define SH_PI_ERROR_OVERFLOW_NACK_OFLOW_MASK 0x0000000000004000 + +/* SH_PI_ERROR_OVERFLOW_GFX_INT_0 */ +/* Description: GFX transfer interrupt for CPU 0 */ +#define SH_PI_ERROR_OVERFLOW_GFX_INT_0_SHFT 15 +#define SH_PI_ERROR_OVERFLOW_GFX_INT_0_MASK 
0x0000000000008000 + +/* SH_PI_ERROR_OVERFLOW_GFX_INT_1 */ +/* Description: GFX transfer interrupt for CPU 1 */ +#define SH_PI_ERROR_OVERFLOW_GFX_INT_1_SHFT 16 +#define SH_PI_ERROR_OVERFLOW_GFX_INT_1_MASK 0x0000000000010000 + +/* SH_PI_ERROR_OVERFLOW_MD_RQ_CRD_OFLOW */ +/* Description: MD Request Credit Overflow Error */ +#define SH_PI_ERROR_OVERFLOW_MD_RQ_CRD_OFLOW_SHFT 17 +#define SH_PI_ERROR_OVERFLOW_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000 + +/* SH_PI_ERROR_OVERFLOW_MD_RP_CRD_OFLOW */ +/* Description: MD Reply Credit Overflow Error */ +#define SH_PI_ERROR_OVERFLOW_MD_RP_CRD_OFLOW_SHFT 18 +#define SH_PI_ERROR_OVERFLOW_MD_RP_CRD_OFLOW_MASK 0x0000000000040000 + +/* SH_PI_ERROR_OVERFLOW_XN_RQ_CRD_OFLOW */ +/* Description: XN Request Credit Overflow Error */ +#define SH_PI_ERROR_OVERFLOW_XN_RQ_CRD_OFLOW_SHFT 19 +#define SH_PI_ERROR_OVERFLOW_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000 + +/* SH_PI_ERROR_OVERFLOW_XN_RP_CRD_OFLOW */ +/* Description: XN Reply Credit Overflow Error */ +#define SH_PI_ERROR_OVERFLOW_XN_RP_CRD_OFLOW_SHFT 20 +#define SH_PI_ERROR_OVERFLOW_XN_RP_CRD_OFLOW_MASK 0x0000000000100000 + +/* SH_PI_ERROR_OVERFLOW_HUNG_BUS */ +/* Description: FSB is hung */ +#define SH_PI_ERROR_OVERFLOW_HUNG_BUS_SHFT 21 +#define SH_PI_ERROR_OVERFLOW_HUNG_BUS_MASK 0x0000000000200000 + +/* SH_PI_ERROR_OVERFLOW_RSP_PARITY */ +/* Description: Parity error detecte during response phase */ +#define SH_PI_ERROR_OVERFLOW_RSP_PARITY_SHFT 22 +#define SH_PI_ERROR_OVERFLOW_RSP_PARITY_MASK 0x0000000000400000 + +/* SH_PI_ERROR_OVERFLOW_IOQ_OVERRUN */ +/* Description: Over run error detected on IOQ */ +#define SH_PI_ERROR_OVERFLOW_IOQ_OVERRUN_SHFT 23 +#define SH_PI_ERROR_OVERFLOW_IOQ_OVERRUN_MASK 0x0000000000800000 + +/* SH_PI_ERROR_OVERFLOW_REQ_FORMAT */ +/* Description: FSB request format not supported */ +#define SH_PI_ERROR_OVERFLOW_REQ_FORMAT_SHFT 24 +#define SH_PI_ERROR_OVERFLOW_REQ_FORMAT_MASK 0x0000000001000000 + +/* SH_PI_ERROR_OVERFLOW_ADDR_ACCESS */ +/* Description: Access to Address is not supported */ +#define SH_PI_ERROR_OVERFLOW_ADDR_ACCESS_SHFT 25 +#define SH_PI_ERROR_OVERFLOW_ADDR_ACCESS_MASK 0x0000000002000000 + +/* SH_PI_ERROR_OVERFLOW_REQ_PARITY */ +/* Description: Parity error detected during request phase */ +#define SH_PI_ERROR_OVERFLOW_REQ_PARITY_SHFT 26 +#define SH_PI_ERROR_OVERFLOW_REQ_PARITY_MASK 0x0000000004000000 + +/* SH_PI_ERROR_OVERFLOW_ADDR_PARITY */ +/* Description: Parity error detected on address */ +#define SH_PI_ERROR_OVERFLOW_ADDR_PARITY_SHFT 27 +#define SH_PI_ERROR_OVERFLOW_ADDR_PARITY_MASK 0x0000000008000000 + +/* SH_PI_ERROR_OVERFLOW_SHUB_FSB_DQE */ +/* Description: SHUB_FSB_DQE */ +#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_DQE_SHFT 28 +#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_DQE_MASK 0x0000000010000000 + +/* SH_PI_ERROR_OVERFLOW_SHUB_FSB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_UCE_SHFT 29 +#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_UCE_MASK 0x0000000020000000 + +/* SH_PI_ERROR_OVERFLOW_SHUB_FSB_CE */ +/* Description: An correctable ECC error was detected */ +#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_CE_SHFT 30 +#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_CE_MASK 0x0000000040000000 + +/* SH_PI_ERROR_OVERFLOW_LIVELOCK */ +/* Description: AFI livelock error was detected */ +#define SH_PI_ERROR_OVERFLOW_LIVELOCK_SHFT 31 +#define SH_PI_ERROR_OVERFLOW_LIVELOCK_MASK 0x0000000080000000 + +/* SH_PI_ERROR_OVERFLOW_BAD_SNOOP */ +/* Description: AFI bad snoop error was detected */ +#define SH_PI_ERROR_OVERFLOW_BAD_SNOOP_SHFT 32 +#define 
SH_PI_ERROR_OVERFLOW_BAD_SNOOP_MASK 0x0000000100000000 + +/* SH_PI_ERROR_OVERFLOW_FSB_TBL_MISS */ +/* Description: AFI FSB request table miss error was detected */ +#define SH_PI_ERROR_OVERFLOW_FSB_TBL_MISS_SHFT 33 +#define SH_PI_ERROR_OVERFLOW_FSB_TBL_MISS_MASK 0x0000000200000000 + +/* SH_PI_ERROR_OVERFLOW_MSG_LENGTH */ +/* Description: Message length error on received message from SIC */ +#define SH_PI_ERROR_OVERFLOW_MSG_LENGTH_SHFT 34 +#define SH_PI_ERROR_OVERFLOW_MSG_LENGTH_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_OVERFLOW_ALIAS" */ +/* PI Error Overflow Alias */ +/* ==================================================================== */ + +#define SH_PI_ERROR_OVERFLOW_ALIAS 0x0000000120060608 + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_SUMMARY" */ +/* PI Error Summary */ +/* ==================================================================== */ + +#define SH_PI_ERROR_SUMMARY 0x0000000120060680 +#define SH_PI_ERROR_SUMMARY_MASK 0x00000007ffffffff +#define SH_PI_ERROR_SUMMARY_INIT 0x0000000000000000 + +/* SH_PI_ERROR_SUMMARY_FSB_PROTO_ERR */ +/* Description: CRB's FSB pipe detected protocol table miss */ +#define SH_PI_ERROR_SUMMARY_FSB_PROTO_ERR_SHFT 0 +#define SH_PI_ERROR_SUMMARY_FSB_PROTO_ERR_MASK 0x0000000000000001 + +/* SH_PI_ERROR_SUMMARY_GFX_RP_ERR */ +/* Description: Graphic reply error message received */ +#define SH_PI_ERROR_SUMMARY_GFX_RP_ERR_SHFT 1 +#define SH_PI_ERROR_SUMMARY_GFX_RP_ERR_MASK 0x0000000000000002 + +/* SH_PI_ERROR_SUMMARY_XB_PROTO_ERR */ +/* Description: CRB's XB pipe detected protocol table miss */ +#define SH_PI_ERROR_SUMMARY_XB_PROTO_ERR_SHFT 2 +#define SH_PI_ERROR_SUMMARY_XB_PROTO_ERR_MASK 0x0000000000000004 + +/* SH_PI_ERROR_SUMMARY_MEM_RP_ERR */ +/* Description: Memory reply error message received */ +#define SH_PI_ERROR_SUMMARY_MEM_RP_ERR_SHFT 3 +#define SH_PI_ERROR_SUMMARY_MEM_RP_ERR_MASK 0x0000000000000008 + +/* SH_PI_ERROR_SUMMARY_PIO_RP_ERR */ +/* Description: PIO error reply message received */ +#define SH_PI_ERROR_SUMMARY_PIO_RP_ERR_SHFT 4 +#define SH_PI_ERROR_SUMMARY_PIO_RP_ERR_MASK 0x0000000000000010 + +/* SH_PI_ERROR_SUMMARY_MEM_TO_ERR */ +/* Description: CRB's XB pipe detected a CRB time-out */ +#define SH_PI_ERROR_SUMMARY_MEM_TO_ERR_SHFT 5 +#define SH_PI_ERROR_SUMMARY_MEM_TO_ERR_MASK 0x0000000000000020 + +/* SH_PI_ERROR_SUMMARY_PIO_TO_ERR */ +/* Description: CRB's XB pipe detected a PIO time-out */ +#define SH_PI_ERROR_SUMMARY_PIO_TO_ERR_SHFT 6 +#define SH_PI_ERROR_SUMMARY_PIO_TO_ERR_MASK 0x0000000000000040 + +/* SH_PI_ERROR_SUMMARY_FSB_SHUB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_ERROR_SUMMARY_FSB_SHUB_UCE_SHFT 7 +#define SH_PI_ERROR_SUMMARY_FSB_SHUB_UCE_MASK 0x0000000000000080 + +/* SH_PI_ERROR_SUMMARY_FSB_SHUB_CE */ +/* Description: An correctable ECC error was detected */ +#define SH_PI_ERROR_SUMMARY_FSB_SHUB_CE_SHFT 8 +#define SH_PI_ERROR_SUMMARY_FSB_SHUB_CE_MASK 0x0000000000000100 + +/* SH_PI_ERROR_SUMMARY_MSG_COLOR_ERR */ +/* Description: Message color was wrong */ +#define SH_PI_ERROR_SUMMARY_MSG_COLOR_ERR_SHFT 9 +#define SH_PI_ERROR_SUMMARY_MSG_COLOR_ERR_MASK 0x0000000000000200 + +/* SH_PI_ERROR_SUMMARY_MD_RQ_Q_OFLOW */ +/* Description: MD Request input buffer over flow error */ +#define SH_PI_ERROR_SUMMARY_MD_RQ_Q_OFLOW_SHFT 10 +#define SH_PI_ERROR_SUMMARY_MD_RQ_Q_OFLOW_MASK 0x0000000000000400 + +/* SH_PI_ERROR_SUMMARY_MD_RP_Q_OFLOW */ +/* 
Description: MD Reply input buffer over flow error */ +#define SH_PI_ERROR_SUMMARY_MD_RP_Q_OFLOW_SHFT 11 +#define SH_PI_ERROR_SUMMARY_MD_RP_Q_OFLOW_MASK 0x0000000000000800 + +/* SH_PI_ERROR_SUMMARY_XN_RQ_Q_OFLOW */ +/* Description: XN Request input buffer over flow error */ +#define SH_PI_ERROR_SUMMARY_XN_RQ_Q_OFLOW_SHFT 12 +#define SH_PI_ERROR_SUMMARY_XN_RQ_Q_OFLOW_MASK 0x0000000000001000 + +/* SH_PI_ERROR_SUMMARY_XN_RP_Q_OFLOW */ +/* Description: XN Reply input buffer over flow error */ +#define SH_PI_ERROR_SUMMARY_XN_RP_Q_OFLOW_SHFT 13 +#define SH_PI_ERROR_SUMMARY_XN_RP_Q_OFLOW_MASK 0x0000000000002000 + +/* SH_PI_ERROR_SUMMARY_NACK_OFLOW */ +/* Description: NACK over flow error */ +#define SH_PI_ERROR_SUMMARY_NACK_OFLOW_SHFT 14 +#define SH_PI_ERROR_SUMMARY_NACK_OFLOW_MASK 0x0000000000004000 + +/* SH_PI_ERROR_SUMMARY_GFX_INT_0 */ +/* Description: GFX transfer interrupt for CPU 0 */ +#define SH_PI_ERROR_SUMMARY_GFX_INT_0_SHFT 15 +#define SH_PI_ERROR_SUMMARY_GFX_INT_0_MASK 0x0000000000008000 + +/* SH_PI_ERROR_SUMMARY_GFX_INT_1 */ +/* Description: GFX transfer interrupt for CPU 1 */ +#define SH_PI_ERROR_SUMMARY_GFX_INT_1_SHFT 16 +#define SH_PI_ERROR_SUMMARY_GFX_INT_1_MASK 0x0000000000010000 + +/* SH_PI_ERROR_SUMMARY_MD_RQ_CRD_OFLOW */ +/* Description: MD Request Credit Overflow Error */ +#define SH_PI_ERROR_SUMMARY_MD_RQ_CRD_OFLOW_SHFT 17 +#define SH_PI_ERROR_SUMMARY_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000 + +/* SH_PI_ERROR_SUMMARY_MD_RP_CRD_OFLOW */ +/* Description: MD Reply Credit Overflow Error */ +#define SH_PI_ERROR_SUMMARY_MD_RP_CRD_OFLOW_SHFT 18 +#define SH_PI_ERROR_SUMMARY_MD_RP_CRD_OFLOW_MASK 0x0000000000040000 + +/* SH_PI_ERROR_SUMMARY_XN_RQ_CRD_OFLOW */ +/* Description: XN Request Credit Overflow Error */ +#define SH_PI_ERROR_SUMMARY_XN_RQ_CRD_OFLOW_SHFT 19 +#define SH_PI_ERROR_SUMMARY_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000 + +/* SH_PI_ERROR_SUMMARY_XN_RP_CRD_OFLOW */ +/* Description: XN Reply Credit Overflow Error */ +#define SH_PI_ERROR_SUMMARY_XN_RP_CRD_OFLOW_SHFT 20 +#define SH_PI_ERROR_SUMMARY_XN_RP_CRD_OFLOW_MASK 0x0000000000100000 + +/* SH_PI_ERROR_SUMMARY_HUNG_BUS */ +/* Description: FSB is hung */ +#define SH_PI_ERROR_SUMMARY_HUNG_BUS_SHFT 21 +#define SH_PI_ERROR_SUMMARY_HUNG_BUS_MASK 0x0000000000200000 + +/* SH_PI_ERROR_SUMMARY_RSP_PARITY */ +/* Description: Parity error detecte during response phase */ +#define SH_PI_ERROR_SUMMARY_RSP_PARITY_SHFT 22 +#define SH_PI_ERROR_SUMMARY_RSP_PARITY_MASK 0x0000000000400000 + +/* SH_PI_ERROR_SUMMARY_IOQ_OVERRUN */ +/* Description: Over run error detected on IOQ */ +#define SH_PI_ERROR_SUMMARY_IOQ_OVERRUN_SHFT 23 +#define SH_PI_ERROR_SUMMARY_IOQ_OVERRUN_MASK 0x0000000000800000 + +/* SH_PI_ERROR_SUMMARY_REQ_FORMAT */ +/* Description: FSB request format not supported */ +#define SH_PI_ERROR_SUMMARY_REQ_FORMAT_SHFT 24 +#define SH_PI_ERROR_SUMMARY_REQ_FORMAT_MASK 0x0000000001000000 + +/* SH_PI_ERROR_SUMMARY_ADDR_ACCESS */ +/* Description: Access to Address is not supported */ +#define SH_PI_ERROR_SUMMARY_ADDR_ACCESS_SHFT 25 +#define SH_PI_ERROR_SUMMARY_ADDR_ACCESS_MASK 0x0000000002000000 + +/* SH_PI_ERROR_SUMMARY_REQ_PARITY */ +/* Description: Parity error detected during request phase */ +#define SH_PI_ERROR_SUMMARY_REQ_PARITY_SHFT 26 +#define SH_PI_ERROR_SUMMARY_REQ_PARITY_MASK 0x0000000004000000 + +/* SH_PI_ERROR_SUMMARY_ADDR_PARITY */ +/* Description: Parity error detected on address */ +#define SH_PI_ERROR_SUMMARY_ADDR_PARITY_SHFT 27 +#define SH_PI_ERROR_SUMMARY_ADDR_PARITY_MASK 0x0000000008000000 + +/* 
SH_PI_ERROR_SUMMARY_SHUB_FSB_DQE */ +/* Description: SHUB_FSB_DQE error */ +#define SH_PI_ERROR_SUMMARY_SHUB_FSB_DQE_SHFT 28 +#define SH_PI_ERROR_SUMMARY_SHUB_FSB_DQE_MASK 0x0000000010000000 + +/* SH_PI_ERROR_SUMMARY_SHUB_FSB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_ERROR_SUMMARY_SHUB_FSB_UCE_SHFT 29 +#define SH_PI_ERROR_SUMMARY_SHUB_FSB_UCE_MASK 0x0000000020000000 + +/* SH_PI_ERROR_SUMMARY_SHUB_FSB_CE */ +/* Description: An correctable ECC error was detected */ +#define SH_PI_ERROR_SUMMARY_SHUB_FSB_CE_SHFT 30 +#define SH_PI_ERROR_SUMMARY_SHUB_FSB_CE_MASK 0x0000000040000000 + +/* SH_PI_ERROR_SUMMARY_LIVELOCK */ +/* Description: AFI livelock error was detected */ +#define SH_PI_ERROR_SUMMARY_LIVELOCK_SHFT 31 +#define SH_PI_ERROR_SUMMARY_LIVELOCK_MASK 0x0000000080000000 + +/* SH_PI_ERROR_SUMMARY_BAD_SNOOP */ +/* Description: AFI bad snoop error was detected */ +#define SH_PI_ERROR_SUMMARY_BAD_SNOOP_SHFT 32 +#define SH_PI_ERROR_SUMMARY_BAD_SNOOP_MASK 0x0000000100000000 + +/* SH_PI_ERROR_SUMMARY_FSB_TBL_MISS */ +/* Description: AFI FSB request table miss error was detected */ +#define SH_PI_ERROR_SUMMARY_FSB_TBL_MISS_SHFT 33 +#define SH_PI_ERROR_SUMMARY_FSB_TBL_MISS_MASK 0x0000000200000000 + +/* SH_PI_ERROR_SUMMARY_MSG_LENGTH */ +/* Description: Message length error on received message from SIC */ +#define SH_PI_ERROR_SUMMARY_MSG_LENGTH_SHFT 34 +#define SH_PI_ERROR_SUMMARY_MSG_LENGTH_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_SUMMARY_ALIAS" */ +/* PI Error Summary Alias */ +/* ==================================================================== */ + +#define SH_PI_ERROR_SUMMARY_ALIAS 0x0000000120060688 + +/* ==================================================================== */ +/* Register "SH_PI_EXPRESS_REPLY_STATUS" */ +/* PI Express Reply Status */ +/* ==================================================================== */ + +#define SH_PI_EXPRESS_REPLY_STATUS 0x0000000120060700 +#define SH_PI_EXPRESS_REPLY_STATUS_MASK 0x0000000000000007 +#define SH_PI_EXPRESS_REPLY_STATUS_INIT 0x0000000000000000 + +/* SH_PI_EXPRESS_REPLY_STATUS_STATE */ +/* Description: Express Reply State */ +#define SH_PI_EXPRESS_REPLY_STATUS_STATE_SHFT 0 +#define SH_PI_EXPRESS_REPLY_STATUS_STATE_MASK 0x0000000000000007 + +/* ==================================================================== */ +/* Register "SH_PI_FIRST_ERROR" */ +/* PI First Error */ +/* ==================================================================== */ + +#define SH_PI_FIRST_ERROR 0x0000000120060780 +#define SH_PI_FIRST_ERROR_MASK 0x00000007ffffffff +#define SH_PI_FIRST_ERROR_INIT 0x0000000000000000 + +/* SH_PI_FIRST_ERROR_FSB_PROTO_ERR */ +/* Description: CRB's FSB pipe detected protocol table miss */ +#define SH_PI_FIRST_ERROR_FSB_PROTO_ERR_SHFT 0 +#define SH_PI_FIRST_ERROR_FSB_PROTO_ERR_MASK 0x0000000000000001 + +/* SH_PI_FIRST_ERROR_GFX_RP_ERR */ +/* Description: Graphics error reply message received */ +#define SH_PI_FIRST_ERROR_GFX_RP_ERR_SHFT 1 +#define SH_PI_FIRST_ERROR_GFX_RP_ERR_MASK 0x0000000000000002 + +/* SH_PI_FIRST_ERROR_XB_PROTO_ERR */ +/* Description: CRB's XB pipe detected protocol table miss */ +#define SH_PI_FIRST_ERROR_XB_PROTO_ERR_SHFT 2 +#define SH_PI_FIRST_ERROR_XB_PROTO_ERR_MASK 0x0000000000000004 + +/* SH_PI_FIRST_ERROR_MEM_RP_ERR */ +/* Description: Memory reply error message received */ +#define SH_PI_FIRST_ERROR_MEM_RP_ERR_SHFT 3 +#define SH_PI_FIRST_ERROR_MEM_RP_ERR_MASK 0x0000000000000008 
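+
+/* A minimal usage sketch: every field in these MMRs is described by a   */
+/* *_SHFT/*_MASK pair, so a field is read by ANDing the raw 64-bit MMR   */
+/* value with the mask and shifting the result down to bit zero. The     */
+/* helper below is a hypothetical illustration of that pattern using the */
+/* field defined just above; the function name and the choice to wrap    */
+/* the extraction in an inline helper are assumptions, not part of the   */
+/* generated register description.                                       */
+static inline unsigned long
+sh_pi_first_error_mem_rp_err(unsigned long mmr_value)
+{
+        return (mmr_value & SH_PI_FIRST_ERROR_MEM_RP_ERR_MASK) >>
+                SH_PI_FIRST_ERROR_MEM_RP_ERR_SHFT;
+}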
+ +/* SH_PI_FIRST_ERROR_PIO_RP_ERR */ +/* Description: PIO reply error message received */ +#define SH_PI_FIRST_ERROR_PIO_RP_ERR_SHFT 4 +#define SH_PI_FIRST_ERROR_PIO_RP_ERR_MASK 0x0000000000000010 + +/* SH_PI_FIRST_ERROR_MEM_TO_ERR */ +/* Description: CRB's XB pipe detected a CRB time-out */ +#define SH_PI_FIRST_ERROR_MEM_TO_ERR_SHFT 5 +#define SH_PI_FIRST_ERROR_MEM_TO_ERR_MASK 0x0000000000000020 + +/* SH_PI_FIRST_ERROR_PIO_TO_ERR */ +/* Description: CRB's XB pipe detected a PIO time-out */ +#define SH_PI_FIRST_ERROR_PIO_TO_ERR_SHFT 6 +#define SH_PI_FIRST_ERROR_PIO_TO_ERR_MASK 0x0000000000000040 + +/* SH_PI_FIRST_ERROR_FSB_SHUB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_FIRST_ERROR_FSB_SHUB_UCE_SHFT 7 +#define SH_PI_FIRST_ERROR_FSB_SHUB_UCE_MASK 0x0000000000000080 + +/* SH_PI_FIRST_ERROR_FSB_SHUB_CE */ +/* Description: A correctable ECC error was detected */ +#define SH_PI_FIRST_ERROR_FSB_SHUB_CE_SHFT 8 +#define SH_PI_FIRST_ERROR_FSB_SHUB_CE_MASK 0x0000000000000100 + +/* SH_PI_FIRST_ERROR_MSG_COLOR_ERR */ +/* Description: Message color was wrong */ +#define SH_PI_FIRST_ERROR_MSG_COLOR_ERR_SHFT 9 +#define SH_PI_FIRST_ERROR_MSG_COLOR_ERR_MASK 0x0000000000000200 + +/* SH_PI_FIRST_ERROR_MD_RQ_Q_OFLOW */ +/* Description: MD Request input buffer over flow error */ +#define SH_PI_FIRST_ERROR_MD_RQ_Q_OFLOW_SHFT 10 +#define SH_PI_FIRST_ERROR_MD_RQ_Q_OFLOW_MASK 0x0000000000000400 + +/* SH_PI_FIRST_ERROR_MD_RP_Q_OFLOW */ +/* Description: MD Reply input buffer over flow error */ +#define SH_PI_FIRST_ERROR_MD_RP_Q_OFLOW_SHFT 11 +#define SH_PI_FIRST_ERROR_MD_RP_Q_OFLOW_MASK 0x0000000000000800 + +/* SH_PI_FIRST_ERROR_XN_RQ_Q_OFLOW */ +/* Description: XN Request input buffer over flow error */ +#define SH_PI_FIRST_ERROR_XN_RQ_Q_OFLOW_SHFT 12 +#define SH_PI_FIRST_ERROR_XN_RQ_Q_OFLOW_MASK 0x0000000000001000 + +/* SH_PI_FIRST_ERROR_XN_RP_Q_OFLOW */ +/* Description: XN Reply input buffer over flow error */ +#define SH_PI_FIRST_ERROR_XN_RP_Q_OFLOW_SHFT 13 +#define SH_PI_FIRST_ERROR_XN_RP_Q_OFLOW_MASK 0x0000000000002000 + +/* SH_PI_FIRST_ERROR_NACK_OFLOW */ +/* Description: NACK over flow error */ +#define SH_PI_FIRST_ERROR_NACK_OFLOW_SHFT 14 +#define SH_PI_FIRST_ERROR_NACK_OFLOW_MASK 0x0000000000004000 + +/* SH_PI_FIRST_ERROR_GFX_INT_0 */ +/* Description: GFX transfer interrupt for CPU 0 */ +#define SH_PI_FIRST_ERROR_GFX_INT_0_SHFT 15 +#define SH_PI_FIRST_ERROR_GFX_INT_0_MASK 0x0000000000008000 + +/* SH_PI_FIRST_ERROR_GFX_INT_1 */ +/* Description: GFX transfer interrupt for CPU 1 */ +#define SH_PI_FIRST_ERROR_GFX_INT_1_SHFT 16 +#define SH_PI_FIRST_ERROR_GFX_INT_1_MASK 0x0000000000010000 + +/* SH_PI_FIRST_ERROR_MD_RQ_CRD_OFLOW */ +/* Description: MD Request Credit Overflow Error */ +#define SH_PI_FIRST_ERROR_MD_RQ_CRD_OFLOW_SHFT 17 +#define SH_PI_FIRST_ERROR_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000 + +/* SH_PI_FIRST_ERROR_MD_RP_CRD_OFLOW */ +/* Description: MD Reply Credit Overflow Error */ +#define SH_PI_FIRST_ERROR_MD_RP_CRD_OFLOW_SHFT 18 +#define SH_PI_FIRST_ERROR_MD_RP_CRD_OFLOW_MASK 0x0000000000040000 + +/* SH_PI_FIRST_ERROR_XN_RQ_CRD_OFLOW */ +/* Description: XN Request Credit Overflow Error */ +#define SH_PI_FIRST_ERROR_XN_RQ_CRD_OFLOW_SHFT 19 +#define SH_PI_FIRST_ERROR_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000 + +/* SH_PI_FIRST_ERROR_XN_RP_CRD_OFLOW */ +/* Description: XN Reply Credit Overflow Error */ +#define SH_PI_FIRST_ERROR_XN_RP_CRD_OFLOW_SHFT 20 +#define SH_PI_FIRST_ERROR_XN_RP_CRD_OFLOW_MASK 0x0000000000100000 + +/* SH_PI_FIRST_ERROR_HUNG_BUS */ +/* 
Description: FSB is hung */ +#define SH_PI_FIRST_ERROR_HUNG_BUS_SHFT 21 +#define SH_PI_FIRST_ERROR_HUNG_BUS_MASK 0x0000000000200000 + +/* SH_PI_FIRST_ERROR_RSP_PARITY */ +/* Description: Parity error detecte during response phase */ +#define SH_PI_FIRST_ERROR_RSP_PARITY_SHFT 22 +#define SH_PI_FIRST_ERROR_RSP_PARITY_MASK 0x0000000000400000 + +/* SH_PI_FIRST_ERROR_IOQ_OVERRUN */ +/* Description: Over run error detected on IOQ */ +#define SH_PI_FIRST_ERROR_IOQ_OVERRUN_SHFT 23 +#define SH_PI_FIRST_ERROR_IOQ_OVERRUN_MASK 0x0000000000800000 + +/* SH_PI_FIRST_ERROR_REQ_FORMAT */ +/* Description: FSB request format not supported */ +#define SH_PI_FIRST_ERROR_REQ_FORMAT_SHFT 24 +#define SH_PI_FIRST_ERROR_REQ_FORMAT_MASK 0x0000000001000000 + +/* SH_PI_FIRST_ERROR_ADDR_ACCESS */ +/* Description: Access to Address is not supported */ +#define SH_PI_FIRST_ERROR_ADDR_ACCESS_SHFT 25 +#define SH_PI_FIRST_ERROR_ADDR_ACCESS_MASK 0x0000000002000000 + +/* SH_PI_FIRST_ERROR_REQ_PARITY */ +/* Description: Parity error detected during request phase */ +#define SH_PI_FIRST_ERROR_REQ_PARITY_SHFT 26 +#define SH_PI_FIRST_ERROR_REQ_PARITY_MASK 0x0000000004000000 + +/* SH_PI_FIRST_ERROR_ADDR_PARITY */ +/* Description: Parity error detected on address */ +#define SH_PI_FIRST_ERROR_ADDR_PARITY_SHFT 27 +#define SH_PI_FIRST_ERROR_ADDR_PARITY_MASK 0x0000000008000000 + +/* SH_PI_FIRST_ERROR_SHUB_FSB_DQE */ +/* Description: SHUB_FSB_DQE */ +#define SH_PI_FIRST_ERROR_SHUB_FSB_DQE_SHFT 28 +#define SH_PI_FIRST_ERROR_SHUB_FSB_DQE_MASK 0x0000000010000000 + +/* SH_PI_FIRST_ERROR_SHUB_FSB_UCE */ +/* Description: An un-correctable ECC error was detected */ +#define SH_PI_FIRST_ERROR_SHUB_FSB_UCE_SHFT 29 +#define SH_PI_FIRST_ERROR_SHUB_FSB_UCE_MASK 0x0000000020000000 + +/* SH_PI_FIRST_ERROR_SHUB_FSB_CE */ +/* Description: An correctable ECC error was detected */ +#define SH_PI_FIRST_ERROR_SHUB_FSB_CE_SHFT 30 +#define SH_PI_FIRST_ERROR_SHUB_FSB_CE_MASK 0x0000000040000000 + +/* SH_PI_FIRST_ERROR_LIVELOCK */ +/* Description: AFI livelock error was detected */ +#define SH_PI_FIRST_ERROR_LIVELOCK_SHFT 31 +#define SH_PI_FIRST_ERROR_LIVELOCK_MASK 0x0000000080000000 + +/* SH_PI_FIRST_ERROR_BAD_SNOOP */ +/* Description: AFI bad snoop error was detected */ +#define SH_PI_FIRST_ERROR_BAD_SNOOP_SHFT 32 +#define SH_PI_FIRST_ERROR_BAD_SNOOP_MASK 0x0000000100000000 + +/* SH_PI_FIRST_ERROR_FSB_TBL_MISS */ +/* Description: AFI FSB request table miss error was detected */ +#define SH_PI_FIRST_ERROR_FSB_TBL_MISS_SHFT 33 +#define SH_PI_FIRST_ERROR_FSB_TBL_MISS_MASK 0x0000000200000000 + +/* SH_PI_FIRST_ERROR_MSG_LENGTH */ +/* Description: Message length error on received message from SIC */ +#define SH_PI_FIRST_ERROR_MSG_LENGTH_SHFT 34 +#define SH_PI_FIRST_ERROR_MSG_LENGTH_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_PI_FIRST_ERROR_ALIAS" */ +/* PI First Error Alias */ +/* ==================================================================== */ + +#define SH_PI_FIRST_ERROR_ALIAS 0x0000000120060788 + +/* ==================================================================== */ +/* Register "SH_PI_PI2MD_REPLY_VC_STATUS" */ +/* PI-to-MD Reply Virtual Channel Status */ +/* ==================================================================== */ + +#define SH_PI_PI2MD_REPLY_VC_STATUS 0x0000000120060900 +#define SH_PI_PI2MD_REPLY_VC_STATUS_MASK 0x000000000000003f +#define SH_PI_PI2MD_REPLY_VC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_PI2MD_REPLY_VC_STATUS_OUTPUT_CRD_STAT */ +/* Description: 
Status of output credits */ +#define SH_PI_PI2MD_REPLY_VC_STATUS_OUTPUT_CRD_STAT_SHFT 0 +#define SH_PI_PI2MD_REPLY_VC_STATUS_OUTPUT_CRD_STAT_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_PI_PI2MD_REQUEST_VC_STATUS" */ +/* PI-to-MD Request Virtual Channel Status */ +/* ==================================================================== */ + +#define SH_PI_PI2MD_REQUEST_VC_STATUS 0x0000000120060980 +#define SH_PI_PI2MD_REQUEST_VC_STATUS_MASK 0x000000000000003f +#define SH_PI_PI2MD_REQUEST_VC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_PI2MD_REQUEST_VC_STATUS_OUTPUT_CRD_STAT */ +/* Description: Status of output credits */ +#define SH_PI_PI2MD_REQUEST_VC_STATUS_OUTPUT_CRD_STAT_SHFT 0 +#define SH_PI_PI2MD_REQUEST_VC_STATUS_OUTPUT_CRD_STAT_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_PI_PI2XN_REPLY_VC_STATUS" */ +/* PI-to-XN Reply Virtual Channel Status */ +/* ==================================================================== */ + +#define SH_PI_PI2XN_REPLY_VC_STATUS 0x0000000120060a00 +#define SH_PI_PI2XN_REPLY_VC_STATUS_MASK 0x000000000000003f +#define SH_PI_PI2XN_REPLY_VC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_PI2XN_REPLY_VC_STATUS_OUTPUT_CRD_STAT */ +/* Description: Status of output credits */ +#define SH_PI_PI2XN_REPLY_VC_STATUS_OUTPUT_CRD_STAT_SHFT 0 +#define SH_PI_PI2XN_REPLY_VC_STATUS_OUTPUT_CRD_STAT_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_PI_PI2XN_REQUEST_VC_STATUS" */ +/* PI-to-XN Request Virtual Channel Status */ +/* ==================================================================== */ + +#define SH_PI_PI2XN_REQUEST_VC_STATUS 0x0000000120060a80 +#define SH_PI_PI2XN_REQUEST_VC_STATUS_MASK 0x000000000000003f +#define SH_PI_PI2XN_REQUEST_VC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_PI2XN_REQUEST_VC_STATUS_OUTPUT_CRD_STAT */ +/* Description: Status of output credits */ +#define SH_PI_PI2XN_REQUEST_VC_STATUS_OUTPUT_CRD_STAT_SHFT 0 +#define SH_PI_PI2XN_REQUEST_VC_STATUS_OUTPUT_CRD_STAT_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_PI_UNCORRECTED_DETAIL_1" */ +/* PI Uncorrected Error Detail 1 */ +/* ==================================================================== */ + +#define SH_PI_UNCORRECTED_DETAIL_1 0x0000000120060b00 +#define SH_PI_UNCORRECTED_DETAIL_1_MASK 0xffffffffffffffff +#define SH_PI_UNCORRECTED_DETAIL_1_INIT 0x0000000000000000 + +/* SH_PI_UNCORRECTED_DETAIL_1_ADDRESS */ +/* Description: Address of Message that logged Uncorrectable Error */ +#define SH_PI_UNCORRECTED_DETAIL_1_ADDRESS_SHFT 0 +#define SH_PI_UNCORRECTED_DETAIL_1_ADDRESS_MASK 0x0000ffffffffffff + +/* SH_PI_UNCORRECTED_DETAIL_1_SYNDROME */ +/* Description: Syndrome for double word data with Uncorrectable Er */ +#define SH_PI_UNCORRECTED_DETAIL_1_SYNDROME_SHFT 48 +#define SH_PI_UNCORRECTED_DETAIL_1_SYNDROME_MASK 0x00ff000000000000 + +/* SH_PI_UNCORRECTED_DETAIL_1_DEP */ +/* Description: DEP for Double word in error */ +#define SH_PI_UNCORRECTED_DETAIL_1_DEP_SHFT 56 +#define SH_PI_UNCORRECTED_DETAIL_1_DEP_MASK 0xff00000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_UNCORRECTED_DETAIL_2" */ +/* PI Uncorrected Error Detail 2 */ +/* ==================================================================== */ + +#define SH_PI_UNCORRECTED_DETAIL_2 
0x0000000120060b80 +#define SH_PI_UNCORRECTED_DETAIL_2_MASK 0xffffffffffffffff +#define SH_PI_UNCORRECTED_DETAIL_2_INIT 0x0000000000000000 + +/* SH_PI_UNCORRECTED_DETAIL_2_DATA */ +/* Description: Double word data in error */ +#define SH_PI_UNCORRECTED_DETAIL_2_DATA_SHFT 0 +#define SH_PI_UNCORRECTED_DETAIL_2_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_UNCORRECTED_DETAIL_3" */ +/* PI Uncorrected Error Detail 3 */ +/* ==================================================================== */ + +#define SH_PI_UNCORRECTED_DETAIL_3 0x0000000120060c00 +#define SH_PI_UNCORRECTED_DETAIL_3_MASK 0xffffffffffffffff +#define SH_PI_UNCORRECTED_DETAIL_3_INIT 0x0000000000000000 + +/* SH_PI_UNCORRECTED_DETAIL_3_ADDRESS */ +/* Description: Address of Message that logged Uncorrectable Error */ +#define SH_PI_UNCORRECTED_DETAIL_3_ADDRESS_SHFT 0 +#define SH_PI_UNCORRECTED_DETAIL_3_ADDRESS_MASK 0x0000ffffffffffff + +/* SH_PI_UNCORRECTED_DETAIL_3_SYNDROME */ +/* Description: Syndrome for double word data with Uncorrectable Er */ +#define SH_PI_UNCORRECTED_DETAIL_3_SYNDROME_SHFT 48 +#define SH_PI_UNCORRECTED_DETAIL_3_SYNDROME_MASK 0x00ff000000000000 + +/* SH_PI_UNCORRECTED_DETAIL_3_DEP */ +/* Description: DCP for Double word in error */ +#define SH_PI_UNCORRECTED_DETAIL_3_DEP_SHFT 56 +#define SH_PI_UNCORRECTED_DETAIL_3_DEP_MASK 0xff00000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_UNCORRECTED_DETAIL_4" */ +/* PI Uncorrected Error Detail 4 */ +/* ==================================================================== */ + +#define SH_PI_UNCORRECTED_DETAIL_4 0x0000000120060c80 +#define SH_PI_UNCORRECTED_DETAIL_4_MASK 0xffffffffffffffff +#define SH_PI_UNCORRECTED_DETAIL_4_INIT 0x0000000000000000 + +/* SH_PI_UNCORRECTED_DETAIL_4_DATA */ +/* Description: Double word data in error */ +#define SH_PI_UNCORRECTED_DETAIL_4_DATA_SHFT 0 +#define SH_PI_UNCORRECTED_DETAIL_4_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_PI_MD2PI_REPLY_VC_STATUS" */ +/* MD-to-PI Reply Virtual Channel Status */ +/* ==================================================================== */ + +#define SH_PI_MD2PI_REPLY_VC_STATUS 0x0000000120060800 +#define SH_PI_MD2PI_REPLY_VC_STATUS_MASK 0x0000000000000fff +#define SH_PI_MD2PI_REPLY_VC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT */ +/* Description: Status of input header credits */ +#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT_SHFT 0 +#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT_MASK 0x000000000000000f + +/* SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT */ +/* Description: Status of data credits */ +#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT_SHFT 4 +#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT_MASK 0x00000000000000f0 + +/* SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT */ +/* Description: Status of MD Reply Input Queue */ +#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT_SHFT 8 +#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT_MASK 0x0000000000000f00 + +/* ==================================================================== */ +/* Register "SH_PI_MD2PI_REQUEST_VC_STATUS" */ +/* MD-to-PI Request Virtual Channel Status */ +/* ==================================================================== */ + +#define SH_PI_MD2PI_REQUEST_VC_STATUS 0x0000000120060880 +#define SH_PI_MD2PI_REQUEST_VC_STATUS_MASK 
0x0000000000000fff +#define SH_PI_MD2PI_REQUEST_VC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT */ +/* Description: Status of input header credits */ +#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT_SHFT 0 +#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT_MASK 0x000000000000000f + +/* SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT */ +/* Description: Status of input data credits */ +#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT_SHFT 4 +#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT_MASK 0x00000000000000f0 + +/* SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT */ +/* Description: Status of MD Request Input Queue */ +#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT_SHFT 8 +#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT_MASK 0x0000000000000f00 + +/* ==================================================================== */ +/* Register "SH_PI_XN2PI_REPLY_VC_STATUS" */ +/* XN-to-PI Reply Virtual Channel Status */ +/* ==================================================================== */ + +#define SH_PI_XN2PI_REPLY_VC_STATUS 0x0000000120060d00 +#define SH_PI_XN2PI_REPLY_VC_STATUS_MASK 0x0000000000000fff +#define SH_PI_XN2PI_REPLY_VC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT */ +/* Description: Status of input header credits */ +#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT_SHFT 0 +#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT_MASK 0x000000000000000f + +/* SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT */ +/* Description: Status of input data credits */ +#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT_SHFT 4 +#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT_MASK 0x00000000000000f0 + +/* SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT */ +/* Description: Status of XN Reply Input Queue */ +#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT_SHFT 8 +#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT_MASK 0x0000000000000f00 + +/* ==================================================================== */ +/* Register "SH_PI_XN2PI_REQUEST_VC_STATUS" */ +/* XN-to-PI Request Virtual Channel Status */ +/* ==================================================================== */ + +#define SH_PI_XN2PI_REQUEST_VC_STATUS 0x0000000120060d80 +#define SH_PI_XN2PI_REQUEST_VC_STATUS_MASK 0x0000000000000fff +#define SH_PI_XN2PI_REQUEST_VC_STATUS_INIT 0x0000000000000000 + +/* SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT */ +/* Description: Status of input header credits */ +#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT_SHFT 0 +#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT_MASK 0x000000000000000f + +/* SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT */ +/* Description: Status of input data credits */ +#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT_SHFT 4 +#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT_MASK 0x00000000000000f0 + +/* SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT */ +/* Description: Status of XN Request Input Queue */ +#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT_SHFT 8 +#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT_MASK 0x0000000000000f00 + +/* ==================================================================== */ +/* Register "SH_XNPI_SIC_FLOW" */ +/* ==================================================================== */ + +#define SH_XNPI_SIC_FLOW 0x0000000150030000 +#define SH_XNPI_SIC_FLOW_MASK 0x9f1f1f1f1f1f9f9f +#define SH_XNPI_SIC_FLOW_INIT 0x0000080000080000 + +/* 
SH_XNPI_SIC_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNPI_SIC_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNPI_SIC_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000001f + +/* SH_XNPI_SIC_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNPI_SIC_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNPI_SIC_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNPI_SIC_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNPI_SIC_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNPI_SIC_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000001f00 + +/* SH_XNPI_SIC_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNPI_SIC_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNPI_SIC_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNPI_SIC_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNPI_SIC_FLOW_CREDIT_VC0_TEST_SHFT 16 +#define SH_XNPI_SIC_FLOW_CREDIT_VC0_TEST_MASK 0x00000000001f0000 + +/* SH_XNPI_SIC_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNPI_SIC_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNPI_SIC_FLOW_CREDIT_VC0_DYN_MASK 0x000000001f000000 + +/* SH_XNPI_SIC_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNPI_SIC_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNPI_SIC_FLOW_CREDIT_VC0_CAP_MASK 0x0000001f00000000 + +/* SH_XNPI_SIC_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNPI_SIC_FLOW_CREDIT_VC2_TEST_SHFT 40 +#define SH_XNPI_SIC_FLOW_CREDIT_VC2_TEST_MASK 0x00001f0000000000 + +/* SH_XNPI_SIC_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNPI_SIC_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNPI_SIC_FLOW_CREDIT_VC2_DYN_MASK 0x001f000000000000 + +/* SH_XNPI_SIC_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNPI_SIC_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNPI_SIC_FLOW_CREDIT_VC2_CAP_MASK 0x1f00000000000000 + +/* SH_XNPI_SIC_FLOW_DISABLE_BYPASS_OUT */ +#define SH_XNPI_SIC_FLOW_DISABLE_BYPASS_OUT_SHFT 63 +#define SH_XNPI_SIC_FLOW_DISABLE_BYPASS_OUT_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_TO_NI0_PORT_FLOW" */ +/* ==================================================================== */ + +#define SH_XNPI_TO_NI0_PORT_FLOW 0x0000000150030010 +#define SH_XNPI_TO_NI0_PORT_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNPI_TO_NI0_PORT_FLOW_INIT 0x0000000000000000 + +/* SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* 
SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_TO_NI1_PORT_FLOW" */ +/* ==================================================================== */ + +#define SH_XNPI_TO_NI1_PORT_FLOW 0x0000000150030020 +#define SH_XNPI_TO_NI1_PORT_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNPI_TO_NI1_PORT_FLOW_INIT 0x0000000000000000 + +/* SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_TO_IILB_PORT_FLOW" */ +/* ==================================================================== */ + +#define SH_XNPI_TO_IILB_PORT_FLOW 0x0000000150030030 +#define SH_XNPI_TO_IILB_PORT_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNPI_TO_IILB_PORT_FLOW_INIT 0x0000000000000000 + +/* SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: 
vc0 withhold */ +#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_FR_NI0_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO 0x0000000150030040 +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_INIT 0x00000c0c00000000 + +/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN */ +/* Description: vc0 fifo entry dynamic value */ +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0 +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f + +/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP */ +/* Description: vc0 fifo entry captured value */ +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8 +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00 + +/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN */ +/* Description: vc2 fifo entry dynamic value */ +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16 +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000 + +/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP */ +/* Description: vc2 fifo entry captured value */ +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24 +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000 + +/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST */ +/* Description: vc0 test credits limit */ +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32 +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000 + +/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST */ +/* Description: vc2 test credits limit */ +#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40 +#define 
SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_FR_NI1_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO 0x0000000150030050 +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_INIT 0x00000c0c00000000 + +/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN */ +/* Description: vc0 fifo entry dynamic value */ +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0 +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f + +/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP */ +/* Description: vc0 fifo entry captured value */ +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8 +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00 + +/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN */ +/* Description: vc2 fifo entry dynamic value */ +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16 +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000 + +/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP */ +/* Description: vc2 fifo entry captured value */ +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24 +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000 + +/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST */ +/* Description: vc0 test credits limit */ +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32 +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000 + +/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST */ +/* Description: vc2 test credits limit */ +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40 +#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_FR_IILB_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO 0x0000000150030060 +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_INIT 0x00000c0c00000000 + +/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN */ +/* Description: vc0 fifo entry dynamic value */ +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0 +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f + +/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP */ +/* Description: vc0 fifo entry captured value */ +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8 +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00 + +/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN */ +/* Description: vc2 fifo entry dynamic value */ +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16 +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000 + +/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP */ +/* Description: vc2 fifo entry captured value */ +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24 +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000 + +/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST */ +/* Description: vc0 test credits limit */ +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32 +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000 + +/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST */ +/* 
Description: vc2 test credits limit */ +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40 +#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_SIC_FLOW" */ +/* ==================================================================== */ + +#define SH_XNMD_SIC_FLOW 0x0000000150030100 +#define SH_XNMD_SIC_FLOW_MASK 0x9f1f1f1f1f1f9f9f +#define SH_XNMD_SIC_FLOW_INIT 0x0000090000090000 + +/* SH_XNMD_SIC_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNMD_SIC_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNMD_SIC_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000001f + +/* SH_XNMD_SIC_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNMD_SIC_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNMD_SIC_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNMD_SIC_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNMD_SIC_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNMD_SIC_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000001f00 + +/* SH_XNMD_SIC_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNMD_SIC_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNMD_SIC_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNMD_SIC_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNMD_SIC_FLOW_CREDIT_VC0_TEST_SHFT 16 +#define SH_XNMD_SIC_FLOW_CREDIT_VC0_TEST_MASK 0x00000000001f0000 + +/* SH_XNMD_SIC_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNMD_SIC_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNMD_SIC_FLOW_CREDIT_VC0_DYN_MASK 0x000000001f000000 + +/* SH_XNMD_SIC_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNMD_SIC_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNMD_SIC_FLOW_CREDIT_VC0_CAP_MASK 0x0000001f00000000 + +/* SH_XNMD_SIC_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNMD_SIC_FLOW_CREDIT_VC2_TEST_SHFT 40 +#define SH_XNMD_SIC_FLOW_CREDIT_VC2_TEST_MASK 0x00001f0000000000 + +/* SH_XNMD_SIC_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNMD_SIC_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNMD_SIC_FLOW_CREDIT_VC2_DYN_MASK 0x001f000000000000 + +/* SH_XNMD_SIC_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNMD_SIC_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNMD_SIC_FLOW_CREDIT_VC2_CAP_MASK 0x1f00000000000000 + +/* SH_XNMD_SIC_FLOW_DISABLE_BYPASS_OUT */ +#define SH_XNMD_SIC_FLOW_DISABLE_BYPASS_OUT_SHFT 63 +#define SH_XNMD_SIC_FLOW_DISABLE_BYPASS_OUT_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_TO_NI0_PORT_FLOW" */ +/* ==================================================================== */ + +#define SH_XNMD_TO_NI0_PORT_FLOW 0x0000000150030110 +#define SH_XNMD_TO_NI0_PORT_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNMD_TO_NI0_PORT_FLOW_INIT 0x0000000000000000 + +/* SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 
0x0000000000000080 + +/* SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_TO_NI1_PORT_FLOW" */ +/* ==================================================================== */ + +#define SH_XNMD_TO_NI1_PORT_FLOW 0x0000000150030120 +#define SH_XNMD_TO_NI1_PORT_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNMD_TO_NI1_PORT_FLOW_INIT 0x0000000000000000 + +/* SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define 
SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_TO_IILB_PORT_FLOW" */ +/* ==================================================================== */ + +#define SH_XNMD_TO_IILB_PORT_FLOW 0x0000000150030130 +#define SH_XNMD_TO_IILB_PORT_FLOW_MASK 0x3f3f003f3f00bfbf +#define SH_XNMD_TO_IILB_PORT_FLOW_INIT 0x0000000000000000 + +/* SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000 + +/* SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000 + +/* SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000 + +/* SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_FR_NI0_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO 0x0000000150030140 +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_INIT 0x00000c0c00000000 + +/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN */ +/* Description: vc0 fifo entry dynamic value */ +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0 +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f + +/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP */ +/* Description: vc0 fifo entry captured value */ +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8 +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00 + +/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN */ +/* Description: vc2 fifo entry dynamic value */ +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16 +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000 + +/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP */ +/* Description: vc2 fifo entry captured value */ +#define 
SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24 +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000 + +/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST */ +/* Description: vc0 test credits limit */ +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32 +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000 + +/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST */ +/* Description: vc2 test credits limit */ +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40 +#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_FR_NI1_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO 0x0000000150030150 +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_INIT 0x00000c0c00000000 + +/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN */ +/* Description: vc0 fifo entry dynamic value */ +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0 +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f + +/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP */ +/* Description: vc0 fifo entry captured value */ +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8 +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00 + +/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN */ +/* Description: vc2 fifo entry dynamic value */ +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16 +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000 + +/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP */ +/* Description: vc2 fifo entry captured value */ +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24 +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000 + +/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST */ +/* Description: vc0 test credits limit */ +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32 +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000 + +/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST */ +/* Description: vc2 test credits limit */ +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40 +#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_FR_IILB_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO 0x0000000150030160 +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_INIT 0x00000c0c00000000 + +/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN */ +/* Description: vc0 fifo entry dynamic value */ +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0 +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f + +/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP */ +/* Description: vc0 fifo entry captured value */ +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8 +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00 + +/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN */ +/* Description: vc2 fifo entry dynamic value */ +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16 +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000 + +/* 
SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP */ +/* Description: vc2 fifo entry captured value */ +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24 +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000 + +/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST */ +/* Description: vc0 test credits limit */ +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32 +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000 + +/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST */ +/* Description: vc2 test credits limit */ +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40 +#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNII_INTRA_FLOW" */ +/* ==================================================================== */ + +#define SH_XNII_INTRA_FLOW 0x0000000150030200 +#define SH_XNII_INTRA_FLOW_MASK 0x7f7f7f7f7f7fbfbf +#define SH_XNII_INTRA_FLOW_INIT 0x00003f00003f0000 + +/* SH_XNII_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNII_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNII_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNII_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNII_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNII_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNII_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNII_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNII_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNII_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNII_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNII_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNII_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNII_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 16 +#define SH_XNII_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x00000000007f0000 + +/* SH_XNII_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNII_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNII_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNII_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNII_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNII_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNII_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNII_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 40 +#define SH_XNII_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x00007f0000000000 + +/* SH_XNII_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNII_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNII_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNII_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNII_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNII_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNLB_INTRA_FLOW" */ +/* ==================================================================== */ + +#define SH_XNLB_INTRA_FLOW 0x0000000150030210 +#define SH_XNLB_INTRA_FLOW_MASK 0xff7f7f7f7f7fbfbf +#define SH_XNLB_INTRA_FLOW_INIT 0x0000080000100000 + +/* SH_XNLB_INTRA_FLOW_DEBIT_VC0_WITHHOLD 
*/ +/* Description: vc0 withhold */ +#define SH_XNLB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNLB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNLB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNLB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNLB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNLB_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNLB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNLB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNLB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNLB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNLB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNLB_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 16 +#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x00000000007f0000 + +/* SH_XNLB_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 24 +#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNLB_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 32 +#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNLB_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 40 +#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x00007f0000000000 + +/* SH_XNLB_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 48 +#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNLB_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 56 +#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x7f00000000000000 + +/* SH_XNLB_INTRA_FLOW_DISABLE_BYPASS_IN */ +#define SH_XNLB_INTRA_FLOW_DISABLE_BYPASS_IN_SHFT 63 +#define SH_XNLB_INTRA_FLOW_DISABLE_BYPASS_IN_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT 0x0000000150030220 +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define 
SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT 0x0000000150030230 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT 
0x0000000150030240 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT 0x0000000150030250 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define 
SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT 0x0000000150030260 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT 0x0000000150030270 +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* 
SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT 0x0000000150030280 +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT 0x0000000150030290 +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define 
SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT 0x00000001500302a0 +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT 0x00000001500302b0 +#define 
SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT 0x0000000150030300 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* 
SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT 0x0000000150030310 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT 0x0000000150030320 +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define 
SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT 0x0000000150030330 +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT 0x0000000150030340 +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define 
SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT 0x0000000150030350 +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_0_INTRANI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_0_INTRANI_FLOW 0x0000000150030360 +#define SH_XNNI0_0_INTRANI_FLOW_MASK 0x00000000000000bf +#define SH_XNNI0_0_INTRANI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD_MASK 
0x000000000000003f + +/* SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_XNNI0_1_INTRANI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_1_INTRANI_FLOW 0x0000000150030370 +#define SH_XNNI0_1_INTRANI_FLOW_MASK 0x00000000000000bf +#define SH_XNNI0_1_INTRANI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD */ +/* Description: vc1 withhold */ +#define SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD_SHFT 0 +#define SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED */ +/* Description: Force Credit on VC1 from debit cntr */ +#define SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED_SHFT 7 +#define SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_XNNI0_2_INTRANI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_2_INTRANI_FLOW 0x0000000150030380 +#define SH_XNNI0_2_INTRANI_FLOW_MASK 0x00000000000000bf +#define SH_XNNI0_2_INTRANI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD_SHFT 0 +#define SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 7 +#define SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_XNNI0_3_INTRANI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_3_INTRANI_FLOW 0x0000000150030390 +#define SH_XNNI0_3_INTRANI_FLOW_MASK 0x00000000000000bf +#define SH_XNNI0_3_INTRANI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD */ +/* Description: vc3 withhold */ +#define SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD_SHFT 0 +#define SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED */ +/* Description: Force Credit on VC3 from debit cntr */ +#define SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED_SHFT 7 +#define SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_XNNI0_VCSWITCH_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_VCSWITCH_FLOW 0x00000001500303a0 +#define SH_XNNI0_VCSWITCH_FLOW_MASK 0x0000000701010101 +#define SH_XNNI0_VCSWITCH_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH */ +/* Description: Swap VC0/2 with VC1/3 */ +#define SH_XNNI0_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH_SHFT 0 +#define SH_XNNI0_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH_MASK 0x0000000000000001 + +/* SH_XNNI0_VCSWITCH_FLOW_PI_VCFIFO_SWITCH */ +/* Description: Swap VC0/2 with VC1/3 */ +#define SH_XNNI0_VCSWITCH_FLOW_PI_VCFIFO_SWITCH_SHFT 8 +#define 
SH_XNNI0_VCSWITCH_FLOW_PI_VCFIFO_SWITCH_MASK 0x0000000000000100 + +/* SH_XNNI0_VCSWITCH_FLOW_MD_VCFIFO_SWITCH */ +/* Description: Swap VC0/2 with VC1/3 */ +#define SH_XNNI0_VCSWITCH_FLOW_MD_VCFIFO_SWITCH_SHFT 16 +#define SH_XNNI0_VCSWITCH_FLOW_MD_VCFIFO_SWITCH_MASK 0x0000000000010000 + +/* SH_XNNI0_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH */ +/* Description: Swap VC0/2 with VC1/3 */ +#define SH_XNNI0_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH_SHFT 24 +#define SH_XNNI0_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH_MASK 0x0000000001000000 + +/* SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN */ +#define SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN_SHFT 32 +#define SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN_MASK 0x0000000100000000 + +/* SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT */ +#define SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT_SHFT 33 +#define SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT_MASK 0x0000000200000000 + +/* SH_XNNI0_VCSWITCH_FLOW_ASYNC_FIFOES */ +#define SH_XNNI0_VCSWITCH_FLOW_ASYNC_FIFOES_SHFT 34 +#define SH_XNNI0_VCSWITCH_FLOW_ASYNC_FIFOES_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_TIMER_REG" */ +/* ==================================================================== */ + +#define SH_XNNI0_TIMER_REG 0x00000001500303b0 +#define SH_XNNI0_TIMER_REG_MASK 0x0000000100ffffff +#define SH_XNNI0_TIMER_REG_INIT 0x0000000000ffffff + +/* SH_XNNI0_TIMER_REG_TIMEOUT_REG */ +/* Description: Master Timeout Counter */ +#define SH_XNNI0_TIMER_REG_TIMEOUT_REG_SHFT 0 +#define SH_XNNI0_TIMER_REG_TIMEOUT_REG_MASK 0x0000000000ffffff + +/* SH_XNNI0_TIMER_REG_LINKCLEANUP_REG */ +/* Description: Link Clean Up */ +#define SH_XNNI0_TIMER_REG_LINKCLEANUP_REG_SHFT 32 +#define SH_XNNI0_TIMER_REG_LINKCLEANUP_REG_MASK 0x0000000100000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_FIFO02_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_FIFO02_FLOW 0x00000001500303c0 +#define SH_XNNI0_FIFO02_FLOW_MASK 0x00000f0f0f0f0f0f +#define SH_XNNI0_FIFO02_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_FIFO02_FLOW_COUNT_VC0_LIMIT */ +/* Description: limit reg zero disables functionality */ +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_LIMIT_SHFT 0 +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_LIMIT_MASK 0x000000000000000f + +/* SH_XNNI0_FIFO02_FLOW_COUNT_VC0_DYN */ +/* Description: dynamic counter value */ +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_DYN_SHFT 8 +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_DYN_MASK 0x0000000000000f00 + +/* SH_XNNI0_FIFO02_FLOW_COUNT_VC0_CAP */ +/* Description: captured counter value */ +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_CAP_SHFT 16 +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_CAP_MASK 0x00000000000f0000 + +/* SH_XNNI0_FIFO02_FLOW_COUNT_VC2_LIMIT */ +/* Description: limit reg zero disables functionality */ +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_LIMIT_SHFT 24 +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_LIMIT_MASK 0x000000000f000000 + +/* SH_XNNI0_FIFO02_FLOW_COUNT_VC2_DYN */ +/* Description: counter dynamic value */ +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_DYN_SHFT 32 +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_DYN_MASK 0x0000000f00000000 + +/* SH_XNNI0_FIFO02_FLOW_COUNT_VC2_CAP */ +/* Description: captured counter value */ +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_CAP_SHFT 40 +#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_CAP_MASK 0x00000f0000000000 + +/* ==================================================================== */ +/* Register 
"SH_XNNI0_FIFO13_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_FIFO13_FLOW 0x00000001500303d0 +#define SH_XNNI0_FIFO13_FLOW_MASK 0x00000f0f0f0f0f0f +#define SH_XNNI0_FIFO13_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_FIFO13_FLOW_COUNT_VC1_LIMIT */ +/* Description: limit reg zero disables functionality */ +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_LIMIT_SHFT 0 +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_LIMIT_MASK 0x000000000000000f + +/* SH_XNNI0_FIFO13_FLOW_COUNT_VC1_DYN */ +/* Description: dynamic counter value */ +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_DYN_SHFT 8 +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_DYN_MASK 0x0000000000000f00 + +/* SH_XNNI0_FIFO13_FLOW_COUNT_VC1_CAP */ +/* Description: captured counter value */ +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_CAP_SHFT 16 +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_CAP_MASK 0x00000000000f0000 + +/* SH_XNNI0_FIFO13_FLOW_COUNT_VC3_LIMIT */ +/* Description: limit reg zero disables functionality */ +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_LIMIT_SHFT 24 +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_LIMIT_MASK 0x000000000f000000 + +/* SH_XNNI0_FIFO13_FLOW_COUNT_VC3_DYN */ +/* Description: counter dynamic value */ +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_DYN_SHFT 32 +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_DYN_MASK 0x0000000f00000000 + +/* SH_XNNI0_FIFO13_FLOW_COUNT_VC3_CAP */ +/* Description: captured counter value */ +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_CAP_SHFT 40 +#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_CAP_MASK 0x00000f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_NI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_NI_FLOW 0x00000001500303e0 +#define SH_XNNI0_NI_FLOW_MASK 0xff0fff0fff0fff0f +#define SH_XNNI0_NI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_NI_FLOW_VC0_LIMIT */ +/* Description: vc0 limit reg, zero disables functionality */ +#define SH_XNNI0_NI_FLOW_VC0_LIMIT_SHFT 0 +#define SH_XNNI0_NI_FLOW_VC0_LIMIT_MASK 0x000000000000000f + +/* SH_XNNI0_NI_FLOW_VC0_DYN */ +/* Description: vc0 counter dynamic value */ +#define SH_XNNI0_NI_FLOW_VC0_DYN_SHFT 8 +#define SH_XNNI0_NI_FLOW_VC0_DYN_MASK 0x0000000000000f00 + +/* SH_XNNI0_NI_FLOW_VC0_CAP */ +/* Description: vc0 counter captured value */ +#define SH_XNNI0_NI_FLOW_VC0_CAP_SHFT 12 +#define SH_XNNI0_NI_FLOW_VC0_CAP_MASK 0x000000000000f000 + +/* SH_XNNI0_NI_FLOW_VC1_LIMIT */ +/* Description: vc1 limit reg, zero disables functionality */ +#define SH_XNNI0_NI_FLOW_VC1_LIMIT_SHFT 16 +#define SH_XNNI0_NI_FLOW_VC1_LIMIT_MASK 0x00000000000f0000 + +/* SH_XNNI0_NI_FLOW_VC1_DYN */ +/* Description: vc1 counter dynamic value */ +#define SH_XNNI0_NI_FLOW_VC1_DYN_SHFT 24 +#define SH_XNNI0_NI_FLOW_VC1_DYN_MASK 0x000000000f000000 + +/* SH_XNNI0_NI_FLOW_VC1_CAP */ +/* Description: vc1 counter captured value */ +#define SH_XNNI0_NI_FLOW_VC1_CAP_SHFT 28 +#define SH_XNNI0_NI_FLOW_VC1_CAP_MASK 0x00000000f0000000 + +/* SH_XNNI0_NI_FLOW_VC2_LIMIT */ +/* Description: vc2 limit reg, zero disables functionality */ +#define SH_XNNI0_NI_FLOW_VC2_LIMIT_SHFT 32 +#define SH_XNNI0_NI_FLOW_VC2_LIMIT_MASK 0x0000000f00000000 + +/* SH_XNNI0_NI_FLOW_VC2_DYN */ +/* Description: vc2 counter dynamic value */ +#define SH_XNNI0_NI_FLOW_VC2_DYN_SHFT 40 +#define SH_XNNI0_NI_FLOW_VC2_DYN_MASK 0x00000f0000000000 + +/* SH_XNNI0_NI_FLOW_VC2_CAP */ +/* Description: vc2 counter captured value */ +#define SH_XNNI0_NI_FLOW_VC2_CAP_SHFT 44 +#define SH_XNNI0_NI_FLOW_VC2_CAP_MASK 
0x0000f00000000000 + +/* SH_XNNI0_NI_FLOW_VC3_LIMIT */ +/* Description: vc3 limit reg, zero disables functionality */ +#define SH_XNNI0_NI_FLOW_VC3_LIMIT_SHFT 48 +#define SH_XNNI0_NI_FLOW_VC3_LIMIT_MASK 0x000f000000000000 + +/* SH_XNNI0_NI_FLOW_VC3_DYN */ +/* Description: vc3 counter dynamic value */ +#define SH_XNNI0_NI_FLOW_VC3_DYN_SHFT 56 +#define SH_XNNI0_NI_FLOW_VC3_DYN_MASK 0x0f00000000000000 + +/* SH_XNNI0_NI_FLOW_VC3_CAP */ +/* Description: vc3 counter captured value */ +#define SH_XNNI0_NI_FLOW_VC3_CAP_SHFT 60 +#define SH_XNNI0_NI_FLOW_VC3_CAP_MASK 0xf000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_DEAD_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI0_DEAD_FLOW 0x00000001500303f0 +#define SH_XNNI0_DEAD_FLOW_MASK 0xff0fff0fff0fff0f +#define SH_XNNI0_DEAD_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI0_DEAD_FLOW_VC0_LIMIT */ +/* Description: vc0 limit reg, zero disables functionality */ +#define SH_XNNI0_DEAD_FLOW_VC0_LIMIT_SHFT 0 +#define SH_XNNI0_DEAD_FLOW_VC0_LIMIT_MASK 0x000000000000000f + +/* SH_XNNI0_DEAD_FLOW_VC0_DYN */ +/* Description: vc0 counter dynamic value */ +#define SH_XNNI0_DEAD_FLOW_VC0_DYN_SHFT 8 +#define SH_XNNI0_DEAD_FLOW_VC0_DYN_MASK 0x0000000000000f00 + +/* SH_XNNI0_DEAD_FLOW_VC0_CAP */ +/* Description: vc0 counter captured value */ +#define SH_XNNI0_DEAD_FLOW_VC0_CAP_SHFT 12 +#define SH_XNNI0_DEAD_FLOW_VC0_CAP_MASK 0x000000000000f000 + +/* SH_XNNI0_DEAD_FLOW_VC1_LIMIT */ +/* Description: vc1 limit reg, zero disables functionality */ +#define SH_XNNI0_DEAD_FLOW_VC1_LIMIT_SHFT 16 +#define SH_XNNI0_DEAD_FLOW_VC1_LIMIT_MASK 0x00000000000f0000 + +/* SH_XNNI0_DEAD_FLOW_VC1_DYN */ +/* Description: vc1 counter dynamic value */ +#define SH_XNNI0_DEAD_FLOW_VC1_DYN_SHFT 24 +#define SH_XNNI0_DEAD_FLOW_VC1_DYN_MASK 0x000000000f000000 + +/* SH_XNNI0_DEAD_FLOW_VC1_CAP */ +/* Description: vc1 counter captured value */ +#define SH_XNNI0_DEAD_FLOW_VC1_CAP_SHFT 28 +#define SH_XNNI0_DEAD_FLOW_VC1_CAP_MASK 0x00000000f0000000 + +/* SH_XNNI0_DEAD_FLOW_VC2_LIMIT */ +/* Description: vc2 limit reg, zero disables functionality */ +#define SH_XNNI0_DEAD_FLOW_VC2_LIMIT_SHFT 32 +#define SH_XNNI0_DEAD_FLOW_VC2_LIMIT_MASK 0x0000000f00000000 + +/* SH_XNNI0_DEAD_FLOW_VC2_DYN */ +/* Description: vc2 counter dynamic value */ +#define SH_XNNI0_DEAD_FLOW_VC2_DYN_SHFT 40 +#define SH_XNNI0_DEAD_FLOW_VC2_DYN_MASK 0x00000f0000000000 + +/* SH_XNNI0_DEAD_FLOW_VC2_CAP */ +/* Description: vc2 counter captured value */ +#define SH_XNNI0_DEAD_FLOW_VC2_CAP_SHFT 44 +#define SH_XNNI0_DEAD_FLOW_VC2_CAP_MASK 0x0000f00000000000 + +/* SH_XNNI0_DEAD_FLOW_VC3_LIMIT */ +/* Description: vc3 limit reg, zero disables functionality */ +#define SH_XNNI0_DEAD_FLOW_VC3_LIMIT_SHFT 48 +#define SH_XNNI0_DEAD_FLOW_VC3_LIMIT_MASK 0x000f000000000000 + +/* SH_XNNI0_DEAD_FLOW_VC3_DYN */ +/* Description: vc3 counter dynamic value */ +#define SH_XNNI0_DEAD_FLOW_VC3_DYN_SHFT 56 +#define SH_XNNI0_DEAD_FLOW_VC3_DYN_MASK 0x0f00000000000000 + +/* SH_XNNI0_DEAD_FLOW_VC3_CAP */ +/* Description: vc3 counter captured value */ +#define SH_XNNI0_DEAD_FLOW_VC3_CAP_SHFT 60 +#define SH_XNNI0_DEAD_FLOW_VC3_CAP_MASK 0xf000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI0_INJECT_AGE" */ +/* ==================================================================== */ + +#define SH_XNNI0_INJECT_AGE 0x0000000150030400 +#define SH_XNNI0_INJECT_AGE_MASK 0x000000000000ffff 
+#define SH_XNNI0_INJECT_AGE_INIT 0x0000000000000000 + +/* SH_XNNI0_INJECT_AGE_REQUEST_INJECT */ +/* Description: Value of AGE field for outgoing requests */ +#define SH_XNNI0_INJECT_AGE_REQUEST_INJECT_SHFT 0 +#define SH_XNNI0_INJECT_AGE_REQUEST_INJECT_MASK 0x00000000000000ff + +/* SH_XNNI0_INJECT_AGE_REPLY_INJECT */ +/* Description: Value of AGE field for outgoing replies */ +#define SH_XNNI0_INJECT_AGE_REPLY_INJECT_SHFT 8 +#define SH_XNNI0_INJECT_AGE_REPLY_INJECT_MASK 0x000000000000ff00 + +/* ==================================================================== */ +/* Register "SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT 0x0000000150030500 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT 0x0000000150030510 +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define 
SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT 0x0000000150030520 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_INIT 0x0000000000000000 + +/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00 + +/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000 + +/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN */ +/* Description: vc0 debit dynamic value */ +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000 + +/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP */ +/* Description: vc0 debit captured value */ +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000 + +/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN */ +/* Description: vc2 debit dynamic value */ +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000 + +/* 
SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP */ +/* Description: vc2 debit captured value */ +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56 +#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT 0x0000000150030530 +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT 0x0000000150030540 +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP */ +/* 
Description: vc2 credit captured value */ +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT 0x0000000150030550 +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c + +/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST */ +/* Description: vc0 credit_test */ +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0 +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f + +/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN */ +/* Description: vc0 credit dynamic value */ +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8 +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00 + +/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP */ +/* Description: vc0 credit captured value */ +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16 +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000 + +/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST */ +/* Description: vc2 credit_test */ +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24 +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000 + +/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN */ +/* Description: vc2 credit dynamic value */ +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32 +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000 + +/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP */ +/* Description: vc2 credit captured value */ +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40 +#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_0_INTRANI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_0_INTRANI_FLOW 0x0000000150030560 +#define SH_XNNI1_0_INTRANI_FLOW_MASK 0x00000000000000bf +#define SH_XNNI1_0_INTRANI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD */ +/* Description: vc0 withhold */ +#define SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0 +#define SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED */ +/* Description: Force Credit on VC0 from debit cntr */ +#define SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7 +#define SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_XNNI1_1_INTRANI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_1_INTRANI_FLOW 0x0000000150030570 +#define SH_XNNI1_1_INTRANI_FLOW_MASK 0x00000000000000bf +#define SH_XNNI1_1_INTRANI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD */ +/* Description: vc1 withhold */ +#define SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD_SHFT 0 +#define SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED */ +/* Description: Force Credit on VC1 from debit cntr */ +#define SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED_SHFT 
7 +#define SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_XNNI1_2_INTRANI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_2_INTRANI_FLOW 0x0000000150030580 +#define SH_XNNI1_2_INTRANI_FLOW_MASK 0x00000000000000bf +#define SH_XNNI1_2_INTRANI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD */ +/* Description: vc2 withhold */ +#define SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD_SHFT 0 +#define SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED */ +/* Description: Force Credit on VC2 from debit cntr */ +#define SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 7 +#define SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_XNNI1_3_INTRANI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_3_INTRANI_FLOW 0x0000000150030590 +#define SH_XNNI1_3_INTRANI_FLOW_MASK 0x00000000000000bf +#define SH_XNNI1_3_INTRANI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD */ +/* Description: vc3 withhold */ +#define SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD_SHFT 0 +#define SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD_MASK 0x000000000000003f + +/* SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED */ +/* Description: Force Credit on VC3 from debit cntr */ +#define SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED_SHFT 7 +#define SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_XNNI1_VCSWITCH_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_VCSWITCH_FLOW 0x00000001500305a0 +#define SH_XNNI1_VCSWITCH_FLOW_MASK 0x0000000701010101 +#define SH_XNNI1_VCSWITCH_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH */ +/* Description: Swap VC0/2 with VC1/3 */ +#define SH_XNNI1_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH_SHFT 0 +#define SH_XNNI1_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH_MASK 0x0000000000000001 + +/* SH_XNNI1_VCSWITCH_FLOW_PI_VCFIFO_SWITCH */ +/* Description: Swap VC0/2 with VC1/3 */ +#define SH_XNNI1_VCSWITCH_FLOW_PI_VCFIFO_SWITCH_SHFT 8 +#define SH_XNNI1_VCSWITCH_FLOW_PI_VCFIFO_SWITCH_MASK 0x0000000000000100 + +/* SH_XNNI1_VCSWITCH_FLOW_MD_VCFIFO_SWITCH */ +/* Description: Swap VC0/2 with VC1/3 */ +#define SH_XNNI1_VCSWITCH_FLOW_MD_VCFIFO_SWITCH_SHFT 16 +#define SH_XNNI1_VCSWITCH_FLOW_MD_VCFIFO_SWITCH_MASK 0x0000000000010000 + +/* SH_XNNI1_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH */ +/* Description: Swap VC0/2 with VC1/3 */ +#define SH_XNNI1_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH_SHFT 24 +#define SH_XNNI1_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH_MASK 0x0000000001000000 + +/* SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN */ +#define SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN_SHFT 32 +#define SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN_MASK 0x0000000100000000 + +/* SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT */ +#define SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT_SHFT 33 +#define SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT_MASK 0x0000000200000000 + +/* SH_XNNI1_VCSWITCH_FLOW_ASYNC_FIFOES */ +#define SH_XNNI1_VCSWITCH_FLOW_ASYNC_FIFOES_SHFT 34 +#define 
SH_XNNI1_VCSWITCH_FLOW_ASYNC_FIFOES_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_TIMER_REG" */ +/* ==================================================================== */ + +#define SH_XNNI1_TIMER_REG 0x00000001500305b0 +#define SH_XNNI1_TIMER_REG_MASK 0x0000000100ffffff +#define SH_XNNI1_TIMER_REG_INIT 0x0000000000ffffff + +/* SH_XNNI1_TIMER_REG_TIMEOUT_REG */ +/* Description: Master Timeout Counter */ +#define SH_XNNI1_TIMER_REG_TIMEOUT_REG_SHFT 0 +#define SH_XNNI1_TIMER_REG_TIMEOUT_REG_MASK 0x0000000000ffffff + +/* SH_XNNI1_TIMER_REG_LINKCLEANUP_REG */ +/* Description: Link Clean Up */ +#define SH_XNNI1_TIMER_REG_LINKCLEANUP_REG_SHFT 32 +#define SH_XNNI1_TIMER_REG_LINKCLEANUP_REG_MASK 0x0000000100000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_FIFO02_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_FIFO02_FLOW 0x00000001500305c0 +#define SH_XNNI1_FIFO02_FLOW_MASK 0x00000f0f0f0f0f0f +#define SH_XNNI1_FIFO02_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_FIFO02_FLOW_COUNT_VC0_LIMIT */ +/* Description: limit reg zero disables functionality */ +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_LIMIT_SHFT 0 +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_LIMIT_MASK 0x000000000000000f + +/* SH_XNNI1_FIFO02_FLOW_COUNT_VC0_DYN */ +/* Description: dynamic counter value */ +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_DYN_SHFT 8 +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_DYN_MASK 0x0000000000000f00 + +/* SH_XNNI1_FIFO02_FLOW_COUNT_VC0_CAP */ +/* Description: captured counter value */ +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_CAP_SHFT 16 +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_CAP_MASK 0x00000000000f0000 + +/* SH_XNNI1_FIFO02_FLOW_COUNT_VC2_LIMIT */ +/* Description: limit reg zero disables functionality */ +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_LIMIT_SHFT 24 +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_LIMIT_MASK 0x000000000f000000 + +/* SH_XNNI1_FIFO02_FLOW_COUNT_VC2_DYN */ +/* Description: counter dynamic value */ +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_DYN_SHFT 32 +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_DYN_MASK 0x0000000f00000000 + +/* SH_XNNI1_FIFO02_FLOW_COUNT_VC2_CAP */ +/* Description: captured counter value */ +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_CAP_SHFT 40 +#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_CAP_MASK 0x00000f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_FIFO13_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_FIFO13_FLOW 0x00000001500305d0 +#define SH_XNNI1_FIFO13_FLOW_MASK 0x00000f0f0f0f0f0f +#define SH_XNNI1_FIFO13_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_FIFO13_FLOW_COUNT_VC1_LIMIT */ +/* Description: limit reg zero disables functionality */ +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_LIMIT_SHFT 0 +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_LIMIT_MASK 0x000000000000000f + +/* SH_XNNI1_FIFO13_FLOW_COUNT_VC1_DYN */ +/* Description: dynamic counter value */ +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_DYN_SHFT 8 +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_DYN_MASK 0x0000000000000f00 + +/* SH_XNNI1_FIFO13_FLOW_COUNT_VC1_CAP */ +/* Description: captured counter value */ +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_CAP_SHFT 16 +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_CAP_MASK 0x00000000000f0000 + +/* SH_XNNI1_FIFO13_FLOW_COUNT_VC3_LIMIT */ +/* Description: limit reg zero disables functionality */ +#define 
SH_XNNI1_FIFO13_FLOW_COUNT_VC3_LIMIT_SHFT 24 +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_LIMIT_MASK 0x000000000f000000 + +/* SH_XNNI1_FIFO13_FLOW_COUNT_VC3_DYN */ +/* Description: counter dynamic value */ +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_DYN_SHFT 32 +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_DYN_MASK 0x0000000f00000000 + +/* SH_XNNI1_FIFO13_FLOW_COUNT_VC3_CAP */ +/* Description: captured counter value */ +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_CAP_SHFT 40 +#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_CAP_MASK 0x00000f0000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_NI_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_NI_FLOW 0x00000001500305e0 +#define SH_XNNI1_NI_FLOW_MASK 0xff0fff0fff0fff0f +#define SH_XNNI1_NI_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_NI_FLOW_VC0_LIMIT */ +/* Description: vc0 limit reg, zero disables functionality */ +#define SH_XNNI1_NI_FLOW_VC0_LIMIT_SHFT 0 +#define SH_XNNI1_NI_FLOW_VC0_LIMIT_MASK 0x000000000000000f + +/* SH_XNNI1_NI_FLOW_VC0_DYN */ +/* Description: vc0 counter dynamic value */ +#define SH_XNNI1_NI_FLOW_VC0_DYN_SHFT 8 +#define SH_XNNI1_NI_FLOW_VC0_DYN_MASK 0x0000000000000f00 + +/* SH_XNNI1_NI_FLOW_VC0_CAP */ +/* Description: vc0 counter captured value */ +#define SH_XNNI1_NI_FLOW_VC0_CAP_SHFT 12 +#define SH_XNNI1_NI_FLOW_VC0_CAP_MASK 0x000000000000f000 + +/* SH_XNNI1_NI_FLOW_VC1_LIMIT */ +/* Description: vc1 limit reg, zero disables functionality */ +#define SH_XNNI1_NI_FLOW_VC1_LIMIT_SHFT 16 +#define SH_XNNI1_NI_FLOW_VC1_LIMIT_MASK 0x00000000000f0000 + +/* SH_XNNI1_NI_FLOW_VC1_DYN */ +/* Description: vc1 counter dynamic value */ +#define SH_XNNI1_NI_FLOW_VC1_DYN_SHFT 24 +#define SH_XNNI1_NI_FLOW_VC1_DYN_MASK 0x000000000f000000 + +/* SH_XNNI1_NI_FLOW_VC1_CAP */ +/* Description: vc1 counter captured value */ +#define SH_XNNI1_NI_FLOW_VC1_CAP_SHFT 28 +#define SH_XNNI1_NI_FLOW_VC1_CAP_MASK 0x00000000f0000000 + +/* SH_XNNI1_NI_FLOW_VC2_LIMIT */ +/* Description: vc2 limit reg, zero disables functionality */ +#define SH_XNNI1_NI_FLOW_VC2_LIMIT_SHFT 32 +#define SH_XNNI1_NI_FLOW_VC2_LIMIT_MASK 0x0000000f00000000 + +/* SH_XNNI1_NI_FLOW_VC2_DYN */ +/* Description: vc2 counter dynamic value */ +#define SH_XNNI1_NI_FLOW_VC2_DYN_SHFT 40 +#define SH_XNNI1_NI_FLOW_VC2_DYN_MASK 0x00000f0000000000 + +/* SH_XNNI1_NI_FLOW_VC2_CAP */ +/* Description: vc2 counter captured value */ +#define SH_XNNI1_NI_FLOW_VC2_CAP_SHFT 44 +#define SH_XNNI1_NI_FLOW_VC2_CAP_MASK 0x0000f00000000000 + +/* SH_XNNI1_NI_FLOW_VC3_LIMIT */ +/* Description: vc3 limit reg, zero disables functionality */ +#define SH_XNNI1_NI_FLOW_VC3_LIMIT_SHFT 48 +#define SH_XNNI1_NI_FLOW_VC3_LIMIT_MASK 0x000f000000000000 + +/* SH_XNNI1_NI_FLOW_VC3_DYN */ +/* Description: vc3 counter dynamic value */ +#define SH_XNNI1_NI_FLOW_VC3_DYN_SHFT 56 +#define SH_XNNI1_NI_FLOW_VC3_DYN_MASK 0x0f00000000000000 + +/* SH_XNNI1_NI_FLOW_VC3_CAP */ +/* Description: vc3 counter captured value */ +#define SH_XNNI1_NI_FLOW_VC3_CAP_SHFT 60 +#define SH_XNNI1_NI_FLOW_VC3_CAP_MASK 0xf000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_DEAD_FLOW" */ +/* ==================================================================== */ + +#define SH_XNNI1_DEAD_FLOW 0x00000001500305f0 +#define SH_XNNI1_DEAD_FLOW_MASK 0xff0fff0fff0fff0f +#define SH_XNNI1_DEAD_FLOW_INIT 0x0000000000000000 + +/* SH_XNNI1_DEAD_FLOW_VC0_LIMIT */ +/* Description: vc0 limit reg, zero disables 
functionality */ +#define SH_XNNI1_DEAD_FLOW_VC0_LIMIT_SHFT 0 +#define SH_XNNI1_DEAD_FLOW_VC0_LIMIT_MASK 0x000000000000000f + +/* SH_XNNI1_DEAD_FLOW_VC0_DYN */ +/* Description: vc0 counter dynamic value */ +#define SH_XNNI1_DEAD_FLOW_VC0_DYN_SHFT 8 +#define SH_XNNI1_DEAD_FLOW_VC0_DYN_MASK 0x0000000000000f00 + +/* SH_XNNI1_DEAD_FLOW_VC0_CAP */ +/* Description: vc0 counter captured value */ +#define SH_XNNI1_DEAD_FLOW_VC0_CAP_SHFT 12 +#define SH_XNNI1_DEAD_FLOW_VC0_CAP_MASK 0x000000000000f000 + +/* SH_XNNI1_DEAD_FLOW_VC1_LIMIT */ +/* Description: vc1 limit reg, zero disables functionality */ +#define SH_XNNI1_DEAD_FLOW_VC1_LIMIT_SHFT 16 +#define SH_XNNI1_DEAD_FLOW_VC1_LIMIT_MASK 0x00000000000f0000 + +/* SH_XNNI1_DEAD_FLOW_VC1_DYN */ +/* Description: vc1 counter dynamic value */ +#define SH_XNNI1_DEAD_FLOW_VC1_DYN_SHFT 24 +#define SH_XNNI1_DEAD_FLOW_VC1_DYN_MASK 0x000000000f000000 + +/* SH_XNNI1_DEAD_FLOW_VC1_CAP */ +/* Description: vc1 counter captured value */ +#define SH_XNNI1_DEAD_FLOW_VC1_CAP_SHFT 28 +#define SH_XNNI1_DEAD_FLOW_VC1_CAP_MASK 0x00000000f0000000 + +/* SH_XNNI1_DEAD_FLOW_VC2_LIMIT */ +/* Description: vc2 limit reg, zero disables functionality */ +#define SH_XNNI1_DEAD_FLOW_VC2_LIMIT_SHFT 32 +#define SH_XNNI1_DEAD_FLOW_VC2_LIMIT_MASK 0x0000000f00000000 + +/* SH_XNNI1_DEAD_FLOW_VC2_DYN */ +/* Description: vc2 counter dynamic value */ +#define SH_XNNI1_DEAD_FLOW_VC2_DYN_SHFT 40 +#define SH_XNNI1_DEAD_FLOW_VC2_DYN_MASK 0x00000f0000000000 + +/* SH_XNNI1_DEAD_FLOW_VC2_CAP */ +/* Description: vc2 counter captured value */ +#define SH_XNNI1_DEAD_FLOW_VC2_CAP_SHFT 44 +#define SH_XNNI1_DEAD_FLOW_VC2_CAP_MASK 0x0000f00000000000 + +/* SH_XNNI1_DEAD_FLOW_VC3_LIMIT */ +/* Description: vc3 limit reg, zero disables functionality */ +#define SH_XNNI1_DEAD_FLOW_VC3_LIMIT_SHFT 48 +#define SH_XNNI1_DEAD_FLOW_VC3_LIMIT_MASK 0x000f000000000000 + +/* SH_XNNI1_DEAD_FLOW_VC3_DYN */ +/* Description: vc3 counter dynamic value */ +#define SH_XNNI1_DEAD_FLOW_VC3_DYN_SHFT 56 +#define SH_XNNI1_DEAD_FLOW_VC3_DYN_MASK 0x0f00000000000000 + +/* SH_XNNI1_DEAD_FLOW_VC3_CAP */ +/* Description: vc3 counter captured value */ +#define SH_XNNI1_DEAD_FLOW_VC3_CAP_SHFT 60 +#define SH_XNNI1_DEAD_FLOW_VC3_CAP_MASK 0xf000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNNI1_INJECT_AGE" */ +/* ==================================================================== */ + +#define SH_XNNI1_INJECT_AGE 0x0000000150030600 +#define SH_XNNI1_INJECT_AGE_MASK 0x000000000000ffff +#define SH_XNNI1_INJECT_AGE_INIT 0x0000000000000000 + +/* SH_XNNI1_INJECT_AGE_REQUEST_INJECT */ +/* Description: Value of AGE field for outgoing requests */ +#define SH_XNNI1_INJECT_AGE_REQUEST_INJECT_SHFT 0 +#define SH_XNNI1_INJECT_AGE_REQUEST_INJECT_MASK 0x00000000000000ff + +/* SH_XNNI1_INJECT_AGE_REPLY_INJECT */ +/* Description: Value of AGE field for outgoing replies */ +#define SH_XNNI1_INJECT_AGE_REPLY_INJECT_SHFT 8 +#define SH_XNNI1_INJECT_AGE_REPLY_INJECT_MASK 0x000000000000ff00 + +/* ==================================================================== */ +/* Register "SH_XN_DEBUG_SEL" */ +/* XN Debug Port Select */ +/* ==================================================================== */ + +#define SH_XN_DEBUG_SEL 0x0000000150031000 +#define SH_XN_DEBUG_SEL_MASK 0xf777777777777777 +#define SH_XN_DEBUG_SEL_INIT 0x0000000000000000 + +/* SH_XN_DEBUG_SEL_NIBBLE0_RLM_SEL */ +/* Description: Nibble 0 RLM select */ +#define SH_XN_DEBUG_SEL_NIBBLE0_RLM_SEL_SHFT 0 +#define 
SH_XN_DEBUG_SEL_NIBBLE0_RLM_SEL_MASK 0x0000000000000007 + +/* SH_XN_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_XN_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_XN_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_XN_DEBUG_SEL_NIBBLE1_RLM_SEL */ +/* Description: Nibble 1 RLM select */ +#define SH_XN_DEBUG_SEL_NIBBLE1_RLM_SEL_SHFT 8 +#define SH_XN_DEBUG_SEL_NIBBLE1_RLM_SEL_MASK 0x0000000000000700 + +/* SH_XN_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_XN_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_XN_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_XN_DEBUG_SEL_NIBBLE2_RLM_SEL */ +/* Description: Nibble 2 RLM select */ +#define SH_XN_DEBUG_SEL_NIBBLE2_RLM_SEL_SHFT 16 +#define SH_XN_DEBUG_SEL_NIBBLE2_RLM_SEL_MASK 0x0000000000070000 + +/* SH_XN_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_XN_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_XN_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_XN_DEBUG_SEL_NIBBLE3_RLM_SEL */ +/* Description: Nibble 3 RLM select */ +#define SH_XN_DEBUG_SEL_NIBBLE3_RLM_SEL_SHFT 24 +#define SH_XN_DEBUG_SEL_NIBBLE3_RLM_SEL_MASK 0x0000000007000000 + +/* SH_XN_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_XN_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_XN_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_XN_DEBUG_SEL_NIBBLE4_RLM_SEL */ +/* Description: Nibble 4 RLM select */ +#define SH_XN_DEBUG_SEL_NIBBLE4_RLM_SEL_SHFT 32 +#define SH_XN_DEBUG_SEL_NIBBLE4_RLM_SEL_MASK 0x0000000700000000 + +/* SH_XN_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_XN_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_XN_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_XN_DEBUG_SEL_NIBBLE5_RLM_SEL */ +/* Description: Nibble 5 RLM select */ +#define SH_XN_DEBUG_SEL_NIBBLE5_RLM_SEL_SHFT 40 +#define SH_XN_DEBUG_SEL_NIBBLE5_RLM_SEL_MASK 0x0000070000000000 + +/* SH_XN_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_XN_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_XN_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_XN_DEBUG_SEL_NIBBLE6_RLM_SEL */ +/* Description: Nibble 6 RLM select */ +#define SH_XN_DEBUG_SEL_NIBBLE6_RLM_SEL_SHFT 48 +#define SH_XN_DEBUG_SEL_NIBBLE6_RLM_SEL_MASK 0x0007000000000000 + +/* SH_XN_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_XN_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_XN_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_XN_DEBUG_SEL_NIBBLE7_RLM_SEL */ +/* Description: Nibble 7 RLM select */ +#define SH_XN_DEBUG_SEL_NIBBLE7_RLM_SEL_SHFT 56 +#define SH_XN_DEBUG_SEL_NIBBLE7_RLM_SEL_MASK 0x0700000000000000 + +/* SH_XN_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_XN_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_XN_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* SH_XN_DEBUG_SEL_TRIGGER_ENABLE */ +/* Description: Enable trigger on bit 32 of Analyzer data */ +#define SH_XN_DEBUG_SEL_TRIGGER_ENABLE_SHFT 63 +#define SH_XN_DEBUG_SEL_TRIGGER_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_DEBUG_TRIG_SEL" */ +/* XN Debug trigger Select */ +/* ==================================================================== */ + +#define SH_XN_DEBUG_TRIG_SEL 
0x0000000150031020 +#define SH_XN_DEBUG_TRIG_SEL_MASK 0x7777777777777777 +#define SH_XN_DEBUG_TRIG_SEL_INIT 0x0000000000000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER0_RLM_SEL */ +/* Description: Nibble 0 RLM select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER0_RLM_SEL_SHFT 0 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER0_RLM_SEL_MASK 0x0000000000000007 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL_SHFT 4 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER1_RLM_SEL */ +/* Description: Nibble 1 RLM select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER1_RLM_SEL_SHFT 8 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER1_RLM_SEL_MASK 0x0000000000000700 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL_SHFT 12 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER2_RLM_SEL */ +/* Description: Nibble 2 RLM select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER2_RLM_SEL_SHFT 16 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER2_RLM_SEL_MASK 0x0000000000070000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL_SHFT 20 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER3_RLM_SEL */ +/* Description: Nibble 3 RLM select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER3_RLM_SEL_SHFT 24 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER3_RLM_SEL_MASK 0x0000000007000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL_SHFT 28 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER4_RLM_SEL */ +/* Description: Nibble 4 RLM select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER4_RLM_SEL_SHFT 32 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER4_RLM_SEL_MASK 0x0000000700000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL_SHFT 36 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER5_RLM_SEL */ +/* Description: Nibble 5 RLM select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER5_RLM_SEL_SHFT 40 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER5_RLM_SEL_MASK 0x0000070000000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL_SHFT 44 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER6_RLM_SEL */ +/* Description: Nibble 6 RLM select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER6_RLM_SEL_SHFT 48 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER6_RLM_SEL_MASK 0x0007000000000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL_SHFT 52 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER7_RLM_SEL */ +/* Description: Nibble 7 RLM select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER7_RLM_SEL_SHFT 56 +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER7_RLM_SEL_MASK 0x0700000000000000 + +/* SH_XN_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_XN_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL_SHFT 60 
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_TRIGGER_COMPARE" */ +/* XN Debug Compare */ +/* ==================================================================== */ + +#define SH_XN_TRIGGER_COMPARE 0x0000000150031040 +#define SH_XN_TRIGGER_COMPARE_MASK 0x00000000ffffffff +#define SH_XN_TRIGGER_COMPARE_INIT 0x0000000000000000 + +/* SH_XN_TRIGGER_COMPARE_MASK */ +/* Description: Mask to select Debug bits for trigger generation */ +#define SH_XN_TRIGGER_COMPARE_MASK_SHFT 0 +#define SH_XN_TRIGGER_COMPARE_MASK_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_XN_TRIGGER_DATA" */ +/* XN Debug Compare Data */ +/* ==================================================================== */ + +#define SH_XN_TRIGGER_DATA 0x0000000150031050 +#define SH_XN_TRIGGER_DATA_MASK 0x00000000ffffffff +#define SH_XN_TRIGGER_DATA_INIT 0x00000000ffffffff + +/* SH_XN_TRIGGER_DATA_COMPARE_PATTERN */ +/* Description: debug bit pattern for trigger generation */ +#define SH_XN_TRIGGER_DATA_COMPARE_PATTERN_SHFT 0 +#define SH_XN_TRIGGER_DATA_COMPARE_PATTERN_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_DEBUG_SEL" */ +/* XN IILB Debug Port Select */ +/* ==================================================================== */ + +#define SH_XN_IILB_DEBUG_SEL 0x0000000150031060 +#define SH_XN_IILB_DEBUG_SEL_MASK 0x7777777777777777 +#define SH_XN_IILB_DEBUG_SEL_INIT 0x0000000000000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE0_INPUT_SEL */ +/* Description: Nibble 0 input select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE1_INPUT_SEL */ +/* Description: Nibble 1 input select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE2_INPUT_SEL */ +/* Description: Nibble 2 input select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE3_INPUT_SEL */ +/* Description: Nibble 3 input select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE4_INPUT_SEL */ +/* Description: Nibble 4 input select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32 
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE5_INPUT_SEL */ +/* Description: Nibble 5 input select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE6_INPUT_SEL */ +/* Description: Nibble 6 input select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE7_INPUT_SEL */ +/* Description: Nibble 7 input select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000 + +/* SH_XN_IILB_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_XN_IILB_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_XN_IILB_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_PI_DEBUG_SEL" */ +/* XN PI Debug Port Select */ +/* ==================================================================== */ + +#define SH_XN_PI_DEBUG_SEL 0x00000001500310a0 +#define SH_XN_PI_DEBUG_SEL_MASK 0x7777777777777777 +#define SH_XN_PI_DEBUG_SEL_INIT 0x0000000000000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE0_INPUT_SEL */ +/* Description: Nibble 0 input select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0 +#define SH_XN_PI_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_XN_PI_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE1_INPUT_SEL */ +/* Description: Nibble 1 input select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8 +#define SH_XN_PI_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_XN_PI_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE2_INPUT_SEL */ +/* Description: Nibble 2 input select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16 +#define SH_XN_PI_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_XN_PI_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE3_INPUT_SEL */ +/* Description: Nibble 3 input select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24 +#define SH_XN_PI_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 
Nibble select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_XN_PI_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE4_INPUT_SEL */ +/* Description: Nibble 4 input select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32 +#define SH_XN_PI_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_XN_PI_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE5_INPUT_SEL */ +/* Description: Nibble 5 input select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40 +#define SH_XN_PI_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_XN_PI_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE6_INPUT_SEL */ +/* Description: Nibble 6 input select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48 +#define SH_XN_PI_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_XN_PI_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE7_INPUT_SEL */ +/* Description: Nibble 7 input select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56 +#define SH_XN_PI_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000 + +/* SH_XN_PI_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_XN_PI_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_XN_PI_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_MD_DEBUG_SEL" */ +/* XN MD Debug Port Select */ +/* ==================================================================== */ + +#define SH_XN_MD_DEBUG_SEL 0x0000000150031080 +#define SH_XN_MD_DEBUG_SEL_MASK 0x7777777777777777 +#define SH_XN_MD_DEBUG_SEL_INIT 0x0000000000000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE0_INPUT_SEL */ +/* Description: Nibble 0 input select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0 +#define SH_XN_MD_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_XN_MD_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE1_INPUT_SEL */ +/* Description: Nibble 1 input select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8 +#define SH_XN_MD_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_XN_MD_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE2_INPUT_SEL */ +/* Description: Nibble 2 input select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16 +#define SH_XN_MD_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_XN_MD_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE3_INPUT_SEL 
*/ +/* Description: Nibble 3 input select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24 +#define SH_XN_MD_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_XN_MD_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE4_INPUT_SEL */ +/* Description: Nibble 4 input select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32 +#define SH_XN_MD_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_XN_MD_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE5_INPUT_SEL */ +/* Description: Nibble 5 input select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40 +#define SH_XN_MD_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_XN_MD_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE6_INPUT_SEL */ +/* Description: Nibble 6 input select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48 +#define SH_XN_MD_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_XN_MD_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE7_INPUT_SEL */ +/* Description: Nibble 7 input select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56 +#define SH_XN_MD_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000 + +/* SH_XN_MD_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_XN_MD_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_XN_MD_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_NI0_DEBUG_SEL" */ +/* XN NI0 Debug Port Select */ +/* ==================================================================== */ + +#define SH_XN_NI0_DEBUG_SEL 0x00000001500310c0 +#define SH_XN_NI0_DEBUG_SEL_MASK 0x7777777777777777 +#define SH_XN_NI0_DEBUG_SEL_INIT 0x0000000000000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE0_INPUT_SEL */ +/* Description: Nibble 0 input select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE1_INPUT_SEL */ +/* Description: Nibble 1 input select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE2_INPUT_SEL */ +/* Description: Nibble 2 input select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 
0x0000000000070000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE3_INPUT_SEL */ +/* Description: Nibble 3 input select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE4_INPUT_SEL */ +/* Description: Nibble 4 input select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE5_INPUT_SEL */ +/* Description: Nibble 5 input select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE6_INPUT_SEL */ +/* Description: Nibble 6 input select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE7_INPUT_SEL */ +/* Description: Nibble 7 input select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000 + +/* SH_XN_NI0_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_XN_NI0_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_XN_NI0_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_NI1_DEBUG_SEL" */ +/* XN NI1 Debug Port Select */ +/* ==================================================================== */ + +#define SH_XN_NI1_DEBUG_SEL 0x00000001500310e0 +#define SH_XN_NI1_DEBUG_SEL_MASK 0x7777777777777777 +#define SH_XN_NI1_DEBUG_SEL_INIT 0x0000000000000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE0_INPUT_SEL */ +/* Description: Nibble 0 input select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE1_INPUT_SEL */ +/* Description: Nibble 1 input select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define 
SH_XN_NI1_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE2_INPUT_SEL */ +/* Description: Nibble 2 input select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE3_INPUT_SEL */ +/* Description: Nibble 3 input select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE4_INPUT_SEL */ +/* Description: Nibble 4 input select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE5_INPUT_SEL */ +/* Description: Nibble 5 input select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE6_INPUT_SEL */ +/* Description: Nibble 6 input select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE7_INPUT_SEL */ +/* Description: Nibble 7 input select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000 + +/* SH_XN_NI1_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_XN_NI1_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_XN_NI1_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_IILB_LB_CMP_EXP_DATA0" */ +/* IILB compare LB input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_LB_CMP_EXP_DATA0 0x0000000150031100 +#define SH_XN_IILB_LB_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_IILB_LB_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_IILB_LB_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_IILB_LB_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_IILB_LB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_LB_CMP_EXP_DATA1" */ +/* IILB compare LB input expected data1 */ +/* 
==================================================================== */ + +#define SH_XN_IILB_LB_CMP_EXP_DATA1 0x0000000150031110 +#define SH_XN_IILB_LB_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_IILB_LB_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_IILB_LB_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_IILB_LB_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_IILB_LB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_LB_CMP_ENABLE0" */ +/* IILB compare LB input enable0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_LB_CMP_ENABLE0 0x0000000150031120 +#define SH_XN_IILB_LB_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_IILB_LB_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_IILB_LB_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_IILB_LB_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_IILB_LB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_LB_CMP_ENABLE1" */ +/* IILB compare LB input enable1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_LB_CMP_ENABLE1 0x0000000150031130 +#define SH_XN_IILB_LB_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_IILB_LB_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_IILB_LB_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_IILB_LB_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_IILB_LB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_II_CMP_EXP_DATA0" */ +/* IILB compare II input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_II_CMP_EXP_DATA0 0x0000000150031140 +#define SH_XN_IILB_II_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_IILB_II_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_IILB_II_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_IILB_II_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_IILB_II_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_II_CMP_EXP_DATA1" */ +/* IILB compare II input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_II_CMP_EXP_DATA1 0x0000000150031150 +#define SH_XN_IILB_II_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_IILB_II_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_IILB_II_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_IILB_II_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_IILB_II_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_II_CMP_ENABLE0" */ +/* IILB compare II input enable0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_II_CMP_ENABLE0 0x0000000150031160 +#define SH_XN_IILB_II_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_IILB_II_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_IILB_II_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_IILB_II_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_IILB_II_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register 
"SH_XN_IILB_II_CMP_ENABLE1" */ +/* IILB compare II input enable1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_II_CMP_ENABLE1 0x0000000150031170 +#define SH_XN_IILB_II_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_IILB_II_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_IILB_II_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_IILB_II_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_IILB_II_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_MD_CMP_EXP_DATA0" */ +/* IILB compare MD input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_MD_CMP_EXP_DATA0 0x0000000150031180 +#define SH_XN_IILB_MD_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_IILB_MD_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_IILB_MD_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_IILB_MD_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_IILB_MD_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_MD_CMP_EXP_DATA1" */ +/* IILB compare MD input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_MD_CMP_EXP_DATA1 0x0000000150031190 +#define SH_XN_IILB_MD_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_IILB_MD_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_IILB_MD_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_IILB_MD_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_IILB_MD_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_MD_CMP_ENABLE0" */ +/* IILB compare MD input enable0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_MD_CMP_ENABLE0 0x00000001500311a0 +#define SH_XN_IILB_MD_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_IILB_MD_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_IILB_MD_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_IILB_MD_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_IILB_MD_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_MD_CMP_ENABLE1" */ +/* IILB compare MD input enable1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_MD_CMP_ENABLE1 0x00000001500311b0 +#define SH_XN_IILB_MD_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_IILB_MD_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_IILB_MD_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_IILB_MD_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_IILB_MD_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_PI_CMP_EXP_DATA0" */ +/* IILB compare PI input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_PI_CMP_EXP_DATA0 0x00000001500311c0 +#define SH_XN_IILB_PI_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_IILB_PI_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_IILB_PI_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_IILB_PI_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_IILB_PI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* 
==================================================================== */ +/* Register "SH_XN_IILB_PI_CMP_EXP_DATA1" */ +/* IILB compare PI input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_PI_CMP_EXP_DATA1 0x00000001500311d0 +#define SH_XN_IILB_PI_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_IILB_PI_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_IILB_PI_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_IILB_PI_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_IILB_PI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_PI_CMP_ENABLE0" */ +/* IILB compare PI input enable0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_PI_CMP_ENABLE0 0x00000001500311e0 +#define SH_XN_IILB_PI_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_IILB_PI_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_IILB_PI_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_IILB_PI_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_IILB_PI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_PI_CMP_ENABLE1" */ +/* IILB compare PI input enable1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_PI_CMP_ENABLE1 0x00000001500311f0 +#define SH_XN_IILB_PI_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_IILB_PI_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_IILB_PI_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_IILB_PI_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_IILB_PI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI0_CMP_EXP_DATA0" */ +/* IILB compare NI0 input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_NI0_CMP_EXP_DATA0 0x0000000150031200 +#define SH_XN_IILB_NI0_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_IILB_NI0_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_IILB_NI0_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_IILB_NI0_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_IILB_NI0_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI0_CMP_EXP_DATA1" */ +/* IILB compare NI0 input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_NI0_CMP_EXP_DATA1 0x0000000150031210 +#define SH_XN_IILB_NI0_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_IILB_NI0_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_IILB_NI0_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_IILB_NI0_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_IILB_NI0_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI0_CMP_ENABLE0" */ +/* IILB compare NI0 input enable0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_NI0_CMP_ENABLE0 0x0000000150031220 +#define SH_XN_IILB_NI0_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_IILB_NI0_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_IILB_NI0_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define 
SH_XN_IILB_NI0_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_IILB_NI0_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI0_CMP_ENABLE1" */ +/* IILB compare NI0 input enable1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_NI0_CMP_ENABLE1 0x0000000150031230 +#define SH_XN_IILB_NI0_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_IILB_NI0_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_IILB_NI0_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_IILB_NI0_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_IILB_NI0_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI1_CMP_EXP_DATA0" */ +/* IILB compare NI1 input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_NI1_CMP_EXP_DATA0 0x0000000150031240 +#define SH_XN_IILB_NI1_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_IILB_NI1_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_IILB_NI1_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_IILB_NI1_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_IILB_NI1_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI1_CMP_EXP_DATA1" */ +/* IILB compare NI1 input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_NI1_CMP_EXP_DATA1 0x0000000150031250 +#define SH_XN_IILB_NI1_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_IILB_NI1_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_IILB_NI1_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_IILB_NI1_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_IILB_NI1_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI1_CMP_ENABLE0" */ +/* IILB compare NI1 input enable0 */ +/* ==================================================================== */ + +#define SH_XN_IILB_NI1_CMP_ENABLE0 0x0000000150031260 +#define SH_XN_IILB_NI1_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_IILB_NI1_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_IILB_NI1_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_IILB_NI1_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_IILB_NI1_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI1_CMP_ENABLE1" */ +/* IILB compare NI1 input enable1 */ +/* ==================================================================== */ + +#define SH_XN_IILB_NI1_CMP_ENABLE1 0x0000000150031270 +#define SH_XN_IILB_NI1_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_IILB_NI1_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_IILB_NI1_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_IILB_NI1_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_IILB_NI1_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_IILB_CMP_EXP_DATA0" */ +/* MD compare IILB input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_MD_IILB_CMP_EXP_DATA0 0x0000000150031500 +#define SH_XN_MD_IILB_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define 
SH_XN_MD_IILB_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_MD_IILB_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_MD_IILB_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_MD_IILB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_IILB_CMP_EXP_DATA1" */ +/* MD compare IILB input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_MD_IILB_CMP_EXP_DATA1 0x0000000150031510 +#define SH_XN_MD_IILB_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_MD_IILB_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_MD_IILB_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_MD_IILB_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_MD_IILB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_IILB_CMP_ENABLE0" */ +/* MD compare IILB input enable0 */ +/* ==================================================================== */ + +#define SH_XN_MD_IILB_CMP_ENABLE0 0x0000000150031520 +#define SH_XN_MD_IILB_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_MD_IILB_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_MD_IILB_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_MD_IILB_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_MD_IILB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_IILB_CMP_ENABLE1" */ +/* MD compare IILB input enable1 */ +/* ==================================================================== */ + +#define SH_XN_MD_IILB_CMP_ENABLE1 0x0000000150031530 +#define SH_XN_MD_IILB_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_MD_IILB_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_MD_IILB_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_MD_IILB_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_MD_IILB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI0_CMP_EXP_DATA0" */ +/* MD compare NI0 input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_MD_NI0_CMP_EXP_DATA0 0x0000000150031540 +#define SH_XN_MD_NI0_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_MD_NI0_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_MD_NI0_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_MD_NI0_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_MD_NI0_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI0_CMP_EXP_DATA1" */ +/* MD compare NI0 input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_MD_NI0_CMP_EXP_DATA1 0x0000000150031550 +#define SH_XN_MD_NI0_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_MD_NI0_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_MD_NI0_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_MD_NI0_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_MD_NI0_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI0_CMP_ENABLE0" */ +/* MD compare NI0 input enable0 */ +/* ==================================================================== */ + +#define SH_XN_MD_NI0_CMP_ENABLE0 
0x0000000150031560 +#define SH_XN_MD_NI0_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_MD_NI0_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_MD_NI0_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_MD_NI0_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_MD_NI0_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI0_CMP_ENABLE1" */ +/* MD compare NI0 input enable1 */ +/* ==================================================================== */ + +#define SH_XN_MD_NI0_CMP_ENABLE1 0x0000000150031570 +#define SH_XN_MD_NI0_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_MD_NI0_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_MD_NI0_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_MD_NI0_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_MD_NI0_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI1_CMP_EXP_DATA0" */ +/* MD compare NI1 input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_MD_NI1_CMP_EXP_DATA0 0x0000000150031580 +#define SH_XN_MD_NI1_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_MD_NI1_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_MD_NI1_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_MD_NI1_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_MD_NI1_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI1_CMP_EXP_DATA1" */ +/* MD compare NI1 input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_MD_NI1_CMP_EXP_DATA1 0x0000000150031590 +#define SH_XN_MD_NI1_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_MD_NI1_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_MD_NI1_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_MD_NI1_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_MD_NI1_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI1_CMP_ENABLE0" */ +/* MD compare NI1 input enable0 */ +/* ==================================================================== */ + +#define SH_XN_MD_NI1_CMP_ENABLE0 0x00000001500315a0 +#define SH_XN_MD_NI1_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_MD_NI1_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_MD_NI1_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_MD_NI1_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_MD_NI1_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI1_CMP_ENABLE1" */ +/* MD compare NI1 input enable1 */ +/* ==================================================================== */ + +#define SH_XN_MD_NI1_CMP_ENABLE1 0x00000001500315b0 +#define SH_XN_MD_NI1_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_MD_NI1_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_MD_NI1_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_MD_NI1_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_MD_NI1_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_EXP_HDR0" */ +/* MD compare SIC input expected header0 */ +/* ==================================================================== */ + +#define 
SH_XN_MD_SIC_CMP_EXP_HDR0 0x00000001500315c0 +#define SH_XN_MD_SIC_CMP_EXP_HDR0_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_EXP_HDR0_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_EXP_HDR0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_MD_SIC_CMP_EXP_HDR0_DATA_SHFT 0 +#define SH_XN_MD_SIC_CMP_EXP_HDR0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_EXP_HDR1" */ +/* MD compare SIC input expected header1 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_EXP_HDR1 0x00000001500315d0 +#define SH_XN_MD_SIC_CMP_EXP_HDR1_MASK 0x000003ffffffffff +#define SH_XN_MD_SIC_CMP_EXP_HDR1_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_EXP_HDR1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_MD_SIC_CMP_EXP_HDR1_DATA_SHFT 0 +#define SH_XN_MD_SIC_CMP_EXP_HDR1_DATA_MASK 0x000003ffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_HDR_ENABLE0" */ +/* MD compare SIC header enable0 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_HDR_ENABLE0 0x00000001500315e0 +#define SH_XN_MD_SIC_CMP_HDR_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_HDR_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_HDR_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_MD_SIC_CMP_HDR_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_MD_SIC_CMP_HDR_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_HDR_ENABLE1" */ +/* MD compare SIC header enable1 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_HDR_ENABLE1 0x00000001500315f0 +#define SH_XN_MD_SIC_CMP_HDR_ENABLE1_MASK 0x000003ffffffffff +#define SH_XN_MD_SIC_CMP_HDR_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_HDR_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_MD_SIC_CMP_HDR_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_MD_SIC_CMP_HDR_ENABLE1_ENABLE_MASK 0x000003ffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA0" */ +/* MD compare SIC data0 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_DATA0 0x0000000150031600 +#define SH_XN_MD_SIC_CMP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_DATA0_DATA0 */ +/* Description: Data0 */ +#define SH_XN_MD_SIC_CMP_DATA0_DATA0_SHFT 0 +#define SH_XN_MD_SIC_CMP_DATA0_DATA0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA1" */ +/* MD compare SIC data1 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_DATA1 0x0000000150031610 +#define SH_XN_MD_SIC_CMP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_DATA1_DATA1 */ +/* Description: Data1 */ +#define SH_XN_MD_SIC_CMP_DATA1_DATA1_SHFT 0 +#define SH_XN_MD_SIC_CMP_DATA1_DATA1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA2" */ +/* MD compare SIC data2 */ +/* ==================================================================== */ + +#define 
SH_XN_MD_SIC_CMP_DATA2 0x0000000150031620 +#define SH_XN_MD_SIC_CMP_DATA2_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_DATA2_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_DATA2_DATA2 */ +/* Description: Data2 */ +#define SH_XN_MD_SIC_CMP_DATA2_DATA2_SHFT 0 +#define SH_XN_MD_SIC_CMP_DATA2_DATA2_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA3" */ +/* MD compare SIC data3 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_DATA3 0x0000000150031630 +#define SH_XN_MD_SIC_CMP_DATA3_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_DATA3_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_DATA3_DATA3 */ +/* Description: Data3 */ +#define SH_XN_MD_SIC_CMP_DATA3_DATA3_SHFT 0 +#define SH_XN_MD_SIC_CMP_DATA3_DATA3_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE0" */ +/* MD enable compare SIC data0 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_DATA_ENABLE0 0x0000000150031640 +#define SH_XN_MD_SIC_CMP_DATA_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_DATA_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0 */ +/* Description: Data0 */ +#define SH_XN_MD_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0_SHFT 0 +#define SH_XN_MD_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE1" */ +/* MD enable compare SIC data1 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_DATA_ENABLE1 0x0000000150031650 +#define SH_XN_MD_SIC_CMP_DATA_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_DATA_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1 */ +/* Description: Data1 */ +#define SH_XN_MD_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1_SHFT 0 +#define SH_XN_MD_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE2" */ +/* MD enable compare SIC data2 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_DATA_ENABLE2 0x0000000150031660 +#define SH_XN_MD_SIC_CMP_DATA_ENABLE2_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_DATA_ENABLE2_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2 */ +/* Description: Data2 */ +#define SH_XN_MD_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2_SHFT 0 +#define SH_XN_MD_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE3" */ +/* MD enable compare SIC data3 */ +/* ==================================================================== */ + +#define SH_XN_MD_SIC_CMP_DATA_ENABLE3 0x0000000150031670 +#define SH_XN_MD_SIC_CMP_DATA_ENABLE3_MASK 0xffffffffffffffff +#define SH_XN_MD_SIC_CMP_DATA_ENABLE3_INIT 0x0000000000000000 + +/* SH_XN_MD_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3 */ +/* Description: Data3 */ +#define SH_XN_MD_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3_SHFT 0 +#define SH_XN_MD_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_IILB_CMP_EXP_DATA0" */ 
+/* PI compare IILB input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_PI_IILB_CMP_EXP_DATA0 0x0000000150031300 +#define SH_XN_PI_IILB_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_PI_IILB_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_PI_IILB_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_PI_IILB_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_PI_IILB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_IILB_CMP_EXP_DATA1" */ +/* PI compare IILB input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_PI_IILB_CMP_EXP_DATA1 0x0000000150031310 +#define SH_XN_PI_IILB_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_PI_IILB_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_PI_IILB_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_PI_IILB_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_PI_IILB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_IILB_CMP_ENABLE0" */ +/* PI compare IILB input enable0 */ +/* ==================================================================== */ + +#define SH_XN_PI_IILB_CMP_ENABLE0 0x0000000150031320 +#define SH_XN_PI_IILB_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_PI_IILB_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_PI_IILB_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_PI_IILB_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_PI_IILB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_IILB_CMP_ENABLE1" */ +/* PI compare IILB input enable1 */ +/* ==================================================================== */ + +#define SH_XN_PI_IILB_CMP_ENABLE1 0x0000000150031330 +#define SH_XN_PI_IILB_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_PI_IILB_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_PI_IILB_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_PI_IILB_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_PI_IILB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI0_CMP_EXP_DATA0" */ +/* PI compare NI0 input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_PI_NI0_CMP_EXP_DATA0 0x0000000150031340 +#define SH_XN_PI_NI0_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_PI_NI0_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_PI_NI0_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_PI_NI0_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_PI_NI0_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI0_CMP_EXP_DATA1" */ +/* PI compare NI0 input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_PI_NI0_CMP_EXP_DATA1 0x0000000150031350 +#define SH_XN_PI_NI0_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_PI_NI0_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_PI_NI0_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_PI_NI0_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_PI_NI0_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* 
==================================================================== */ +/* Register "SH_XN_PI_NI0_CMP_ENABLE0" */ +/* PI compare NI0 input enable0 */ +/* ==================================================================== */ + +#define SH_XN_PI_NI0_CMP_ENABLE0 0x0000000150031360 +#define SH_XN_PI_NI0_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_PI_NI0_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_PI_NI0_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_PI_NI0_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_PI_NI0_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI0_CMP_ENABLE1" */ +/* PI compare NI0 input enable1 */ +/* ==================================================================== */ + +#define SH_XN_PI_NI0_CMP_ENABLE1 0x0000000150031370 +#define SH_XN_PI_NI0_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_PI_NI0_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_PI_NI0_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_PI_NI0_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_PI_NI0_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI1_CMP_EXP_DATA0" */ +/* PI compare NI1 input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_PI_NI1_CMP_EXP_DATA0 0x0000000150031380 +#define SH_XN_PI_NI1_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_PI_NI1_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_PI_NI1_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_PI_NI1_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_PI_NI1_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI1_CMP_EXP_DATA1" */ +/* PI compare NI1 input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_PI_NI1_CMP_EXP_DATA1 0x0000000150031390 +#define SH_XN_PI_NI1_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_PI_NI1_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_PI_NI1_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_PI_NI1_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_PI_NI1_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI1_CMP_ENABLE0" */ +/* PI compare NI1 input enable0 */ +/* ==================================================================== */ + +#define SH_XN_PI_NI1_CMP_ENABLE0 0x00000001500313a0 +#define SH_XN_PI_NI1_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_PI_NI1_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_PI_NI1_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_PI_NI1_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_PI_NI1_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI1_CMP_ENABLE1" */ +/* PI compare NI1 input enable1 */ +/* ==================================================================== */ + +#define SH_XN_PI_NI1_CMP_ENABLE1 0x00000001500313b0 +#define SH_XN_PI_NI1_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_PI_NI1_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_PI_NI1_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_PI_NI1_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_PI_NI1_CMP_ENABLE1_ENABLE_MASK 
0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_EXP_HDR0" */ +/* PI compare SIC input expected header0 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_EXP_HDR0 0x00000001500313c0 +#define SH_XN_PI_SIC_CMP_EXP_HDR0_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_EXP_HDR0_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_EXP_HDR0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_PI_SIC_CMP_EXP_HDR0_DATA_SHFT 0 +#define SH_XN_PI_SIC_CMP_EXP_HDR0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_EXP_HDR1" */ +/* PI compare SIC input expected header1 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_EXP_HDR1 0x00000001500313d0 +#define SH_XN_PI_SIC_CMP_EXP_HDR1_MASK 0x000003ffffffffff +#define SH_XN_PI_SIC_CMP_EXP_HDR1_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_EXP_HDR1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_PI_SIC_CMP_EXP_HDR1_DATA_SHFT 0 +#define SH_XN_PI_SIC_CMP_EXP_HDR1_DATA_MASK 0x000003ffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_HDR_ENABLE0" */ +/* PI compare SIC header enable0 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_HDR_ENABLE0 0x00000001500313e0 +#define SH_XN_PI_SIC_CMP_HDR_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_HDR_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_HDR_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_PI_SIC_CMP_HDR_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_PI_SIC_CMP_HDR_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_HDR_ENABLE1" */ +/* PI compare SIC header enable1 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_HDR_ENABLE1 0x00000001500313f0 +#define SH_XN_PI_SIC_CMP_HDR_ENABLE1_MASK 0x000003ffffffffff +#define SH_XN_PI_SIC_CMP_HDR_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_HDR_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_PI_SIC_CMP_HDR_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_PI_SIC_CMP_HDR_ENABLE1_ENABLE_MASK 0x000003ffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA0" */ +/* PI compare SIC data0 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_DATA0 0x0000000150031400 +#define SH_XN_PI_SIC_CMP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_DATA0_DATA0 */ +/* Description: Data0 */ +#define SH_XN_PI_SIC_CMP_DATA0_DATA0_SHFT 0 +#define SH_XN_PI_SIC_CMP_DATA0_DATA0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA1" */ +/* PI compare SIC data1 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_DATA1 0x0000000150031410 +#define SH_XN_PI_SIC_CMP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_DATA1_DATA1 */ +/* Description: Data1 */ +#define SH_XN_PI_SIC_CMP_DATA1_DATA1_SHFT 0 +#define 
SH_XN_PI_SIC_CMP_DATA1_DATA1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA2" */ +/* PI compare SIC data2 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_DATA2 0x0000000150031420 +#define SH_XN_PI_SIC_CMP_DATA2_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_DATA2_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_DATA2_DATA2 */ +/* Description: Data2 */ +#define SH_XN_PI_SIC_CMP_DATA2_DATA2_SHFT 0 +#define SH_XN_PI_SIC_CMP_DATA2_DATA2_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA3" */ +/* PI compare SIC data3 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_DATA3 0x0000000150031430 +#define SH_XN_PI_SIC_CMP_DATA3_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_DATA3_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_DATA3_DATA3 */ +/* Description: Data3 */ +#define SH_XN_PI_SIC_CMP_DATA3_DATA3_SHFT 0 +#define SH_XN_PI_SIC_CMP_DATA3_DATA3_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE0" */ +/* PI enable compare SIC data0 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_DATA_ENABLE0 0x0000000150031440 +#define SH_XN_PI_SIC_CMP_DATA_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_DATA_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0 */ +/* Description: Data0 */ +#define SH_XN_PI_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0_SHFT 0 +#define SH_XN_PI_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE1" */ +/* PI enable compare SIC data1 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_DATA_ENABLE1 0x0000000150031450 +#define SH_XN_PI_SIC_CMP_DATA_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_DATA_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1 */ +/* Description: Data1 */ +#define SH_XN_PI_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1_SHFT 0 +#define SH_XN_PI_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE2" */ +/* PI enable compare SIC data2 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_DATA_ENABLE2 0x0000000150031460 +#define SH_XN_PI_SIC_CMP_DATA_ENABLE2_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_DATA_ENABLE2_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2 */ +/* Description: Data2 */ +#define SH_XN_PI_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2_SHFT 0 +#define SH_XN_PI_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE3" */ +/* PI enable compare SIC data3 */ +/* ==================================================================== */ + +#define SH_XN_PI_SIC_CMP_DATA_ENABLE3 0x0000000150031470 +#define SH_XN_PI_SIC_CMP_DATA_ENABLE3_MASK 0xffffffffffffffff +#define SH_XN_PI_SIC_CMP_DATA_ENABLE3_INIT 0x0000000000000000 + +/* SH_XN_PI_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3 */ 
+/* Description: Data3 */ +#define SH_XN_PI_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3_SHFT 0 +#define SH_XN_PI_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_IILB_CMP_EXP_DATA0" */ +/* NI0 compare IILB input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_IILB_CMP_EXP_DATA0 0x0000000150031700 +#define SH_XN_NI0_IILB_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI0_IILB_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI0_IILB_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI0_IILB_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI0_IILB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_IILB_CMP_EXP_DATA1" */ +/* NI0 compare IILB input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_IILB_CMP_EXP_DATA1 0x0000000150031710 +#define SH_XN_NI0_IILB_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI0_IILB_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI0_IILB_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI0_IILB_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI0_IILB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_IILB_CMP_ENABLE0" */ +/* NI0 compare IILB input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_IILB_CMP_ENABLE0 0x0000000150031720 +#define SH_XN_NI0_IILB_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI0_IILB_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI0_IILB_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI0_IILB_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI0_IILB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_IILB_CMP_ENABLE1" */ +/* NI0 compare IILB input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_IILB_CMP_ENABLE1 0x0000000150031730 +#define SH_XN_NI0_IILB_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI0_IILB_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI0_IILB_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI0_IILB_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI0_IILB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_PI_CMP_EXP_DATA0" */ +/* NI0 compare PI input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_PI_CMP_EXP_DATA0 0x0000000150031740 +#define SH_XN_NI0_PI_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI0_PI_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI0_PI_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI0_PI_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI0_PI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_PI_CMP_EXP_DATA1" */ +/* NI0 compare PI input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_PI_CMP_EXP_DATA1 0x0000000150031750 +#define 
SH_XN_NI0_PI_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI0_PI_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI0_PI_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI0_PI_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI0_PI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_PI_CMP_ENABLE0" */ +/* NI0 compare PI input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_PI_CMP_ENABLE0 0x0000000150031760 +#define SH_XN_NI0_PI_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI0_PI_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI0_PI_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI0_PI_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI0_PI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_PI_CMP_ENABLE1" */ +/* NI0 compare PI input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_PI_CMP_ENABLE1 0x0000000150031770 +#define SH_XN_NI0_PI_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI0_PI_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI0_PI_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI0_PI_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI0_PI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_MD_CMP_EXP_DATA0" */ +/* NI0 compare MD input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_MD_CMP_EXP_DATA0 0x0000000150031780 +#define SH_XN_NI0_MD_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI0_MD_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI0_MD_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI0_MD_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI0_MD_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_MD_CMP_EXP_DATA1" */ +/* NI0 compare MD input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_MD_CMP_EXP_DATA1 0x0000000150031790 +#define SH_XN_NI0_MD_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI0_MD_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI0_MD_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI0_MD_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI0_MD_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_MD_CMP_ENABLE0" */ +/* NI0 compare MD input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_MD_CMP_ENABLE0 0x00000001500317a0 +#define SH_XN_NI0_MD_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI0_MD_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI0_MD_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI0_MD_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI0_MD_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_MD_CMP_ENABLE1" */ +/* NI0 compare MD input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_MD_CMP_ENABLE1 
0x00000001500317b0 +#define SH_XN_NI0_MD_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI0_MD_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI0_MD_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI0_MD_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI0_MD_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_NI_CMP_EXP_DATA0" */ +/* NI0 compare NI input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_NI_CMP_EXP_DATA0 0x00000001500317c0 +#define SH_XN_NI0_NI_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI0_NI_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI0_NI_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI0_NI_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI0_NI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_NI_CMP_EXP_DATA1" */ +/* NI0 compare NI input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_NI_CMP_EXP_DATA1 0x00000001500317d0 +#define SH_XN_NI0_NI_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI0_NI_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI0_NI_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI0_NI_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI0_NI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_NI_CMP_ENABLE0" */ +/* NI0 compare NI input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_NI_CMP_ENABLE0 0x00000001500317e0 +#define SH_XN_NI0_NI_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI0_NI_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI0_NI_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI0_NI_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI0_NI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_NI_CMP_ENABLE1" */ +/* NI0 compare NI input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_NI_CMP_ENABLE1 0x00000001500317f0 +#define SH_XN_NI0_NI_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI0_NI_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI0_NI_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI0_NI_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI0_NI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_LLP_CMP_EXP_DATA0" */ +/* NI0 compare LLP input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_LLP_CMP_EXP_DATA0 0x0000000150031800 +#define SH_XN_NI0_LLP_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI0_LLP_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI0_LLP_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI0_LLP_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI0_LLP_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_LLP_CMP_EXP_DATA1" */ +/* NI0 compare LLP input expected data1 */ +/* 
==================================================================== */ + +#define SH_XN_NI0_LLP_CMP_EXP_DATA1 0x0000000150031810 +#define SH_XN_NI0_LLP_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI0_LLP_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI0_LLP_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI0_LLP_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI0_LLP_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_LLP_CMP_ENABLE0" */ +/* NI0 compare LLP input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI0_LLP_CMP_ENABLE0 0x0000000150031820 +#define SH_XN_NI0_LLP_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI0_LLP_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI0_LLP_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI0_LLP_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI0_LLP_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI0_LLP_CMP_ENABLE1" */ +/* NI0 compare LLP input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI0_LLP_CMP_ENABLE1 0x0000000150031830 +#define SH_XN_NI0_LLP_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI0_LLP_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI0_LLP_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI0_LLP_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI0_LLP_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_IILB_CMP_EXP_DATA0" */ +/* NI1 compare IILB input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_IILB_CMP_EXP_DATA0 0x0000000150031900 +#define SH_XN_NI1_IILB_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI1_IILB_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI1_IILB_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI1_IILB_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI1_IILB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_IILB_CMP_EXP_DATA1" */ +/* NI1 compare IILB input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_IILB_CMP_EXP_DATA1 0x0000000150031910 +#define SH_XN_NI1_IILB_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI1_IILB_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI1_IILB_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI1_IILB_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI1_IILB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_IILB_CMP_ENABLE0" */ +/* NI1 compare IILB input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_IILB_CMP_ENABLE0 0x0000000150031920 +#define SH_XN_NI1_IILB_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI1_IILB_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI1_IILB_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI1_IILB_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI1_IILB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* 
==================================================================== */ +/* Register "SH_XN_NI1_IILB_CMP_ENABLE1" */ +/* NI1 compare IILB input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_IILB_CMP_ENABLE1 0x0000000150031930 +#define SH_XN_NI1_IILB_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI1_IILB_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI1_IILB_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI1_IILB_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI1_IILB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_PI_CMP_EXP_DATA0" */ +/* NI1 compare PI input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_PI_CMP_EXP_DATA0 0x0000000150031940 +#define SH_XN_NI1_PI_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI1_PI_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI1_PI_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI1_PI_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI1_PI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_PI_CMP_EXP_DATA1" */ +/* NI1 compare PI input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_PI_CMP_EXP_DATA1 0x0000000150031950 +#define SH_XN_NI1_PI_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI1_PI_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI1_PI_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI1_PI_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI1_PI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_PI_CMP_ENABLE0" */ +/* NI1 compare PI input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_PI_CMP_ENABLE0 0x0000000150031960 +#define SH_XN_NI1_PI_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI1_PI_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI1_PI_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI1_PI_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI1_PI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_PI_CMP_ENABLE1" */ +/* NI1 compare PI input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_PI_CMP_ENABLE1 0x0000000150031970 +#define SH_XN_NI1_PI_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI1_PI_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI1_PI_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI1_PI_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI1_PI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_MD_CMP_EXP_DATA0" */ +/* NI1 compare MD input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_MD_CMP_EXP_DATA0 0x0000000150031980 +#define SH_XN_NI1_MD_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI1_MD_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI1_MD_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI1_MD_CMP_EXP_DATA0_DATA_SHFT 0 +#define 
SH_XN_NI1_MD_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_MD_CMP_EXP_DATA1" */ +/* NI1 compare MD input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_MD_CMP_EXP_DATA1 0x0000000150031990 +#define SH_XN_NI1_MD_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI1_MD_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI1_MD_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI1_MD_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI1_MD_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_MD_CMP_ENABLE0" */ +/* NI1 compare MD input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_MD_CMP_ENABLE0 0x00000001500319a0 +#define SH_XN_NI1_MD_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI1_MD_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI1_MD_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI1_MD_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI1_MD_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_MD_CMP_ENABLE1" */ +/* NI1 compare MD input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_MD_CMP_ENABLE1 0x00000001500319b0 +#define SH_XN_NI1_MD_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI1_MD_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI1_MD_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI1_MD_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI1_MD_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_NI_CMP_EXP_DATA0" */ +/* NI1 compare NI input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_NI_CMP_EXP_DATA0 0x00000001500319c0 +#define SH_XN_NI1_NI_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI1_NI_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI1_NI_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI1_NI_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI1_NI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_NI_CMP_EXP_DATA1" */ +/* NI1 compare NI input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_NI_CMP_EXP_DATA1 0x00000001500319d0 +#define SH_XN_NI1_NI_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI1_NI_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI1_NI_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI1_NI_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI1_NI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_NI_CMP_ENABLE0" */ +/* NI1 compare NI input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_NI_CMP_ENABLE0 0x00000001500319e0 +#define SH_XN_NI1_NI_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI1_NI_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI1_NI_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define 
SH_XN_NI1_NI_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI1_NI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_NI_CMP_ENABLE1" */ +/* NI1 compare NI input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_NI_CMP_ENABLE1 0x00000001500319f0 +#define SH_XN_NI1_NI_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI1_NI_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI1_NI_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI1_NI_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI1_NI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_LLP_CMP_EXP_DATA0" */ +/* NI1 compare LLP input expected data0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_LLP_CMP_EXP_DATA0 0x0000000150031a00 +#define SH_XN_NI1_LLP_CMP_EXP_DATA0_MASK 0xffffffffffffffff +#define SH_XN_NI1_LLP_CMP_EXP_DATA0_INIT 0x0000000000000000 + +/* SH_XN_NI1_LLP_CMP_EXP_DATA0_DATA */ +/* Description: Expected data 0 */ +#define SH_XN_NI1_LLP_CMP_EXP_DATA0_DATA_SHFT 0 +#define SH_XN_NI1_LLP_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_LLP_CMP_EXP_DATA1" */ +/* NI1 compare LLP input expected data1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_LLP_CMP_EXP_DATA1 0x0000000150031a10 +#define SH_XN_NI1_LLP_CMP_EXP_DATA1_MASK 0xffffffffffffffff +#define SH_XN_NI1_LLP_CMP_EXP_DATA1_INIT 0x0000000000000000 + +/* SH_XN_NI1_LLP_CMP_EXP_DATA1_DATA */ +/* Description: Expected data 1 */ +#define SH_XN_NI1_LLP_CMP_EXP_DATA1_DATA_SHFT 0 +#define SH_XN_NI1_LLP_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_LLP_CMP_ENABLE0" */ +/* NI1 compare LLP input enable0 */ +/* ==================================================================== */ + +#define SH_XN_NI1_LLP_CMP_ENABLE0 0x0000000150031a20 +#define SH_XN_NI1_LLP_CMP_ENABLE0_MASK 0xffffffffffffffff +#define SH_XN_NI1_LLP_CMP_ENABLE0_INIT 0x0000000000000000 + +/* SH_XN_NI1_LLP_CMP_ENABLE0_ENABLE */ +/* Description: Enable0 */ +#define SH_XN_NI1_LLP_CMP_ENABLE0_ENABLE_SHFT 0 +#define SH_XN_NI1_LLP_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_NI1_LLP_CMP_ENABLE1" */ +/* NI1 compare LLP input enable1 */ +/* ==================================================================== */ + +#define SH_XN_NI1_LLP_CMP_ENABLE1 0x0000000150031a30 +#define SH_XN_NI1_LLP_CMP_ENABLE1_MASK 0xffffffffffffffff +#define SH_XN_NI1_LLP_CMP_ENABLE1_INIT 0x0000000000000000 + +/* SH_XN_NI1_LLP_CMP_ENABLE1_ENABLE */ +/* Description: Enable1 */ +#define SH_XN_NI1_LLP_CMP_ENABLE1_ENABLE_SHFT 0 +#define SH_XN_NI1_LLP_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC_INJ_REG" */ +/* ==================================================================== */ + +#define SH_XNPI_ECC_INJ_REG 0x0000000150032000 +#define SH_XNPI_ECC_INJ_REG_MASK 0xf0fff0fff0fff0ff +#define SH_XNPI_ECC_INJ_REG_INIT 0x0000000000000000 + +/* SH_XNPI_ECC_INJ_REG_BYTE0 */ +/* Description: Replacement Checkbyte */ +#define 
SH_XNPI_ECC_INJ_REG_BYTE0_SHFT 0 +#define SH_XNPI_ECC_INJ_REG_BYTE0_MASK 0x00000000000000ff + +/* SH_XNPI_ECC_INJ_REG_DATA_1SHOT0 */ +/* Description: 1 shot mask data */ +#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT0_SHFT 12 +#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT0_MASK 0x0000000000001000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CONT0 */ +/* Description: toggle mask data */ +#define SH_XNPI_ECC_INJ_REG_DATA_CONT0_SHFT 13 +#define SH_XNPI_ECC_INJ_REG_DATA_CONT0_MASK 0x0000000000002000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT0 */ +/* Description: Replace Checkbyte One Shot */ +#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT0_SHFT 14 +#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT0_MASK 0x0000000000004000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CB_CONT0 */ +/* Description: Replace Checkbyte Continuous */ +#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT0_SHFT 15 +#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT0_MASK 0x0000000000008000 + +/* SH_XNPI_ECC_INJ_REG_BYTE1 */ +/* Description: Replacement Checkbyte */ +#define SH_XNPI_ECC_INJ_REG_BYTE1_SHFT 16 +#define SH_XNPI_ECC_INJ_REG_BYTE1_MASK 0x0000000000ff0000 + +/* SH_XNPI_ECC_INJ_REG_DATA_1SHOT1 */ +/* Description: 1 shot mask data */ +#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT1_SHFT 28 +#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT1_MASK 0x0000000010000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CONT1 */ +/* Description: toggle mask data */ +#define SH_XNPI_ECC_INJ_REG_DATA_CONT1_SHFT 29 +#define SH_XNPI_ECC_INJ_REG_DATA_CONT1_MASK 0x0000000020000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT1 */ +/* Description: Replace Checkbyte One Shot */ +#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT1_SHFT 30 +#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT1_MASK 0x0000000040000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CB_CONT1 */ +/* Description: Replace Checkbyte Continuous */ +#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT1_SHFT 31 +#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT1_MASK 0x0000000080000000 + +/* SH_XNPI_ECC_INJ_REG_BYTE2 */ +/* Description: Replacement Checkbyte */ +#define SH_XNPI_ECC_INJ_REG_BYTE2_SHFT 32 +#define SH_XNPI_ECC_INJ_REG_BYTE2_MASK 0x000000ff00000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_1SHOT2 */ +/* Description: 1 shot mask data */ +#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT2_SHFT 44 +#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT2_MASK 0x0000100000000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CONT2 */ +/* Description: toggle mask data */ +#define SH_XNPI_ECC_INJ_REG_DATA_CONT2_SHFT 45 +#define SH_XNPI_ECC_INJ_REG_DATA_CONT2_MASK 0x0000200000000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT2 */ +/* Description: Replace Checkbyte One Shot */ +#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT2_SHFT 46 +#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT2_MASK 0x0000400000000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CB_CONT2 */ +/* Description: Replace Checkbyte Continuous */ +#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT2_SHFT 47 +#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT2_MASK 0x0000800000000000 + +/* SH_XNPI_ECC_INJ_REG_BYTE3 */ +/* Description: Replacement Checkbyte */ +#define SH_XNPI_ECC_INJ_REG_BYTE3_SHFT 48 +#define SH_XNPI_ECC_INJ_REG_BYTE3_MASK 0x00ff000000000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_1SHOT3 */ +/* Description: 1 shot mask data */ +#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT3_SHFT 60 +#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT3_MASK 0x1000000000000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CONT3 */ +/* Description: toggle mask data */ +#define SH_XNPI_ECC_INJ_REG_DATA_CONT3_SHFT 61 +#define SH_XNPI_ECC_INJ_REG_DATA_CONT3_MASK 0x2000000000000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT3 */ +/* Description: Replace Checkbyte One Shot */ +#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT3_SHFT
62 +#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT3_MASK 0x4000000000000000 + +/* SH_XNPI_ECC_INJ_REG_DATA_CB_CONT3 */ +/* Description: Replace Checkbyte Continuous */ +#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT3_SHFT 63 +#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC0_INJ_MASK_REG" */ +/* ==================================================================== */ + +#define SH_XNPI_ECC0_INJ_MASK_REG 0x0000000150032008 +#define SH_XNPI_ECC0_INJ_MASK_REG_MASK 0xffffffffffffffff +#define SH_XNPI_ECC0_INJ_MASK_REG_INIT 0x0000000000000000 + +/* SH_XNPI_ECC0_INJ_MASK_REG_MASK_ECC0 */ +/* Description: Replacement Data */ +#define SH_XNPI_ECC0_INJ_MASK_REG_MASK_ECC0_SHFT 0 +#define SH_XNPI_ECC0_INJ_MASK_REG_MASK_ECC0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC1_INJ_MASK_REG" */ +/* ==================================================================== */ + +#define SH_XNPI_ECC1_INJ_MASK_REG 0x0000000150032010 +#define SH_XNPI_ECC1_INJ_MASK_REG_MASK 0xffffffffffffffff +#define SH_XNPI_ECC1_INJ_MASK_REG_INIT 0x0000000000000000 + +/* SH_XNPI_ECC1_INJ_MASK_REG_MASK_ECC1 */ +/* Description: Replacement Data */ +#define SH_XNPI_ECC1_INJ_MASK_REG_MASK_ECC1_SHFT 0 +#define SH_XNPI_ECC1_INJ_MASK_REG_MASK_ECC1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC2_INJ_MASK_REG" */ +/* ==================================================================== */ + +#define SH_XNPI_ECC2_INJ_MASK_REG 0x0000000150032018 +#define SH_XNPI_ECC2_INJ_MASK_REG_MASK 0xffffffffffffffff +#define SH_XNPI_ECC2_INJ_MASK_REG_INIT 0x0000000000000000 + +/* SH_XNPI_ECC2_INJ_MASK_REG_MASK_ECC2 */ +/* Description: Replacement Data */ +#define SH_XNPI_ECC2_INJ_MASK_REG_MASK_ECC2_SHFT 0 +#define SH_XNPI_ECC2_INJ_MASK_REG_MASK_ECC2_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC3_INJ_MASK_REG" */ +/* ==================================================================== */ + +#define SH_XNPI_ECC3_INJ_MASK_REG 0x0000000150032020 +#define SH_XNPI_ECC3_INJ_MASK_REG_MASK 0xffffffffffffffff +#define SH_XNPI_ECC3_INJ_MASK_REG_INIT 0x0000000000000000 + +/* SH_XNPI_ECC3_INJ_MASK_REG_MASK_ECC3 */ +/* Description: Replacement Data */ +#define SH_XNPI_ECC3_INJ_MASK_REG_MASK_ECC3_SHFT 0 +#define SH_XNPI_ECC3_INJ_MASK_REG_MASK_ECC3_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC_INJ_REG" */ +/* ==================================================================== */ + +#define SH_XNMD_ECC_INJ_REG 0x0000000150032030 +#define SH_XNMD_ECC_INJ_REG_MASK 0xf0fff0fff0fff0ff +#define SH_XNMD_ECC_INJ_REG_INIT 0x0000000000000000 + +/* SH_XNMD_ECC_INJ_REG_BYTE0 */ +/* Description: Replacement Checkbyte */ +#define SH_XNMD_ECC_INJ_REG_BYTE0_SHFT 0 +#define SH_XNMD_ECC_INJ_REG_BYTE0_MASK 0x00000000000000ff + +/* SH_XNMD_ECC_INJ_REG_DATA_1SHOT0 */ +/* Description: 1 shot mask data */ +#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT0_SHFT 12 +#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT0_MASK 0x0000000000001000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CONT0 */ +/* Description: toggle mask data */ +#define SH_XNMD_ECC_INJ_REG_DATA_CONT0_SHFT 13 +#define SH_XNMD_ECC_INJ_REG_DATA_CONT0_MASK 0x0000000000002000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT0 */ +/* Description:
Replace Checkbyte One Shot */ +#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT0_SHFT 14 +#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT0_MASK 0x0000000000004000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CB_CONT0 */ +/* Description: Replace Checkbyte Continuous */ +#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT0_SHFT 15 +#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT0_MASK 0x0000000000008000 + +/* SH_XNMD_ECC_INJ_REG_BYTE1 */ +/* Description: Replacement Checkbyte */ +#define SH_XNMD_ECC_INJ_REG_BYTE1_SHFT 16 +#define SH_XNMD_ECC_INJ_REG_BYTE1_MASK 0x0000000000ff0000 + +/* SH_XNMD_ECC_INJ_REG_DATA_1SHOT1 */ +/* Description: 1 shot mask data */ +#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT1_SHFT 28 +#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT1_MASK 0x0000000010000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CONT1 */ +/* Description: toggle mask data */ +#define SH_XNMD_ECC_INJ_REG_DATA_CONT1_SHFT 29 +#define SH_XNMD_ECC_INJ_REG_DATA_CONT1_MASK 0x0000000020000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT1 */ +/* Description: Replace Checkbyte One Shot */ +#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT1_SHFT 30 +#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT1_MASK 0x0000000040000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CB_CONT1 */ +/* Description: Replace Checkbyte Continuous */ +#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT1_SHFT 31 +#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT1_MASK 0x0000000080000000 + +/* SH_XNMD_ECC_INJ_REG_BYTE2 */ +/* Description: Replacement Checkbyte */ +#define SH_XNMD_ECC_INJ_REG_BYTE2_SHFT 32 +#define SH_XNMD_ECC_INJ_REG_BYTE2_MASK 0x000000ff00000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_1SHOT2 */ +/* Description: 1 shot mask data */ +#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT2_SHFT 44 +#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT2_MASK 0x0000100000000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CONT2 */ +/* Description: toggle mask data */ +#define SH_XNMD_ECC_INJ_REG_DATA_CONT2_SHFT 45 +#define SH_XNMD_ECC_INJ_REG_DATA_CONT2_MASK 0x0000200000000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT2 */ +/* Description: Replace Checkbyte One Shot */ +#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT2_SHFT 46 +#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT2_MASK 0x0000400000000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CB_CONT2 */ +/* Description: Replace Checkbyte Continuous */ +#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT2_SHFT 47 +#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT2_MASK 0x0000800000000000 + +/* SH_XNMD_ECC_INJ_REG_BYTE3 */ +/* Description: Replacement Checkbyte */ +#define SH_XNMD_ECC_INJ_REG_BYTE3_SHFT 48 +#define SH_XNMD_ECC_INJ_REG_BYTE3_MASK 0x00ff000000000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_1SHOT3 */ +/* Description: 1 shot mask data */ +#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT3_SHFT 60 +#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT3_MASK 0x1000000000000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CONT3 */ +/* Description: toggle mask data */ +#define SH_XNMD_ECC_INJ_REG_DATA_CONT3_SHFT 61 +#define SH_XNMD_ECC_INJ_REG_DATA_CONT3_MASK 0x2000000000000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT3 */ +/* Description: Replace Checkbyte One Shot */ +#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT3_SHFT 62 +#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT3_MASK 0x4000000000000000 + +/* SH_XNMD_ECC_INJ_REG_DATA_CB_CONT3 */ +/* Description: Replace Checkbyte Continuous */ +#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT3_SHFT 63 +#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC0_INJ_MASK_REG" */ +/* ==================================================================== */ + +#define SH_XNMD_ECC0_INJ_MASK_REG 0x0000000150032038
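/*
 * A minimal usage sketch, assuming the conventional _SHFT/_MASK idiom used
 * throughout this header: each register field above is described by a shift
 * and a mask, so a value is placed into a field by shifting it into position
 * and masking off the stray bits.  The helper below is a hypothetical
 * illustration built on the SH_XNMD_ECC_INJ_REG_BYTE0 field defined above;
 * the function and variable names are assumptions made for this example only.
 */
static inline unsigned long
sh_xnmd_ecc_inj_set_byte0(unsigned long reg_val, unsigned long checkbyte)
{
	/* Clear the BYTE0 field, then insert the replacement checkbyte. */
	reg_val &= ~SH_XNMD_ECC_INJ_REG_BYTE0_MASK;
	reg_val |= (checkbyte << SH_XNMD_ECC_INJ_REG_BYTE0_SHFT) &
		   SH_XNMD_ECC_INJ_REG_BYTE0_MASK;
	return reg_val;
}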
+#define SH_XNMD_ECC0_INJ_MASK_REG_MASK 0xffffffffffffffff +#define SH_XNMD_ECC0_INJ_MASK_REG_INIT 0x0000000000000000 + +/* SH_XNMD_ECC0_INJ_MASK_REG_MASK_ECC0 */ +/* Description: Replacement Data */ +#define SH_XNMD_ECC0_INJ_MASK_REG_MASK_ECC0_SHFT 0 +#define SH_XNMD_ECC0_INJ_MASK_REG_MASK_ECC0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC1_INJ_MASK_REG" */ +/* ==================================================================== */ + +#define SH_XNMD_ECC1_INJ_MASK_REG 0x0000000150032040 +#define SH_XNMD_ECC1_INJ_MASK_REG_MASK 0xffffffffffffffff +#define SH_XNMD_ECC1_INJ_MASK_REG_INIT 0x0000000000000000 + +/* SH_XNMD_ECC1_INJ_MASK_REG_MASK_ECC1 */ +/* Description: Replacement Data */ +#define SH_XNMD_ECC1_INJ_MASK_REG_MASK_ECC1_SHFT 0 +#define SH_XNMD_ECC1_INJ_MASK_REG_MASK_ECC1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC2_INJ_MASK_REG" */ +/* ==================================================================== */ + +#define SH_XNMD_ECC2_INJ_MASK_REG 0x0000000150032048 +#define SH_XNMD_ECC2_INJ_MASK_REG_MASK 0xffffffffffffffff +#define SH_XNMD_ECC2_INJ_MASK_REG_INIT 0x0000000000000000 + +/* SH_XNMD_ECC2_INJ_MASK_REG_MASK_ECC2 */ +/* Description: Replacement Data */ +#define SH_XNMD_ECC2_INJ_MASK_REG_MASK_ECC2_SHFT 0 +#define SH_XNMD_ECC2_INJ_MASK_REG_MASK_ECC2_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC3_INJ_MASK_REG" */ +/* ==================================================================== */ + +#define SH_XNMD_ECC3_INJ_MASK_REG 0x0000000150032050 +#define SH_XNMD_ECC3_INJ_MASK_REG_MASK 0xffffffffffffffff +#define SH_XNMD_ECC3_INJ_MASK_REG_INIT 0x0000000000000000 + +/* SH_XNMD_ECC3_INJ_MASK_REG_MASK_ECC3 */ +/* Description: Replacement Data */ +#define SH_XNMD_ECC3_INJ_MASK_REG_MASK_ECC3_SHFT 0 +#define SH_XNMD_ECC3_INJ_MASK_REG_MASK_ECC3_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC_ERR_REPORT" */ +/* ==================================================================== */ + +#define SH_XNMD_ECC_ERR_REPORT 0x0000000150032058 +#define SH_XNMD_ECC_ERR_REPORT_MASK 0x0001000100010001 +#define SH_XNMD_ECC_ERR_REPORT_INIT 0x0000000000000000 + +/* SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE0 */ +/* Description: Disable Error Correction */ +#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE0_SHFT 0 +#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE0_MASK 0x0000000000000001 + +/* SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE1 */ +/* Description: Disable Error Correction */ +#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE1_SHFT 16 +#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE1_MASK 0x0000000000010000 + +/* SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE2 */ +/* Description: Disable Error Correction */ +#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE2_SHFT 32 +#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE2_MASK 0x0000000100000000 + +/* SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE3 */ +/* Description: Disable Error Correction */ +#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE3_SHFT 48 +#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE3_MASK 0x0001000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_SUMMARY_1" */ +/* ni0 Error Summary Bits */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_SUMMARY_1 0x0000000150040500 +#define 
SH_NI0_ERROR_SUMMARY_1_MASK 0xffffffffffffffff +#define SH_NI0_ERROR_SUMMARY_1_INIT 0xffffffffffffffff + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0 */ +/* Description: Fifo 02 debit0 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2 */ +/* Description: Fifo 02 debit2 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0 */ +/* Description: Fifo 13 debit0 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2 */ +/* Description: Fifo 13 debit2 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* 
SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit overflow 0 */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit overflow 1 */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit overflow 2 */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit overflow 0 */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit overflow 1 */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit overflow 2 */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0 */ +/* Description: PI Fifo debit0 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2 */ +/* Description: PI Fifo debit2 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0 */ +/* Description: IILB Fifo debit0 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2 */ +/* Description: IILB Fifo debit2 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0 */ +/* Description: MD Fifo debit0 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2 */ +/* Description: MD Fifo debit2 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0 */ +/* Description: NI Fifo debit0 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28 
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1 */ +/* Description: NI Fifo debit1 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2 */ +/* Description: NI Fifo debit2 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3 */ +/* Description: NI Fifo debit3 overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define 
SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit overflow */ +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0 */ +/* Description: Fifo02 vc0 tail timeout */ 
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56 +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2 */ +/* Description: Fifo02 vc2 tail timeout */ +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57 +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1 */ +/* Description: Fifo13 vc1 tail timeout */ +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58 +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3 */ +/* Description: Fifo13 vc3 tail timeout */ +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59 +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0 */ +/* Description: NI vc0 tail timeout */ +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0_SHFT 60 +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1 */ +/* Description: NI vc1 tail timeout */ +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1_SHFT 61 +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2 */ +/* Description: NI vc2 tail timeout */ +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2_SHFT 62 +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000 + +/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3 */ +/* Description: NI vc3 tail timeout */ +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_SHFT 63 +#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_SUMMARY_1_ALIAS" */ +/* ni0 Error Summary Bits Alias */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_SUMMARY_1_ALIAS 0x0000000150040508 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_SUMMARY_2" */ +/* ni0 Error Summary Bits */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_SUMMARY_2 0x0000000150040510 +#define SH_NI0_ERROR_SUMMARY_2_MASK 0x7fffffff003fffff +#define SH_NI0_ERROR_SUMMARY_2_INIT 0x7fffffff003fffff + +/* SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCNI */ +/* Description: Illegal VC NI */ +#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCNI_SHFT 0 +#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCNI_MASK 0x0000000000000001 + +/* SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCPI */ +/* Description: Illegal VC PI */ +#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCPI_SHFT 1 +#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCPI_MASK 0x0000000000000002 + +/* SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCMD */ +/* Description: Illegal VC MD */ +#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCMD_SHFT 2 +#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCMD_MASK 0x0000000000000004 + +/* SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCIILB */ +/* Description: Illegal VC IILB */ +#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCIILB_SHFT 3 +#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCIILB_MASK 0x0000000000000008 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* 
SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit underflow 0 */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit underflow 1 */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit underflow 2 */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT 
*/ +/* Description: VC2 credit underflow 0 */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit underflow 1 */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit underflow 2 */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB 
Fifo vc2 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit underflow */ +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0 */ +/* Description: llp deadlock vc0 */ +#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0_SHFT 56 +#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0_MASK 
0x0100000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1 */ +/* Description: llp deadlock vc1 */ +#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1_SHFT 57 +#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2 */ +/* Description: llp deadlock vc2 */ +#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2_SHFT 58 +#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3 */ +/* Description: llp deadlock vc3 */ +#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3_SHFT 59 +#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_NI0_ERROR_SUMMARY_2_CHIPLET_NOMATCH_SHFT 60 +#define SH_NI0_ERROR_SUMMARY_2_CHIPLET_NOMATCH_MASK 0x1000000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_NI0_ERROR_SUMMARY_2_LUT_READ_ERROR_SHFT 61 +#define SH_NI0_ERROR_SUMMARY_2_LUT_READ_ERROR_MASK 0x2000000000000000 + +/* SH_NI0_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR */ +/* Description: Retry Timeout Error */ +#define SH_NI0_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR_SHFT 62 +#define SH_NI0_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_SUMMARY_2_ALIAS" */ +/* ni0 Error Summary Bits Alias */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_SUMMARY_2_ALIAS 0x0000000150040518 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_OVERFLOW_1" */ +/* ni0 Error Overflow Bits */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_OVERFLOW_1 0x0000000150040520 +#define SH_NI0_ERROR_OVERFLOW_1_MASK 0xffffffffffffffff +#define SH_NI0_ERROR_OVERFLOW_1_INIT 0xffffffffffffffff + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0 */ +/* Description: Fifo 02 debit0 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2 */ +/* Description: Fifo 02 debit2 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0 */ +/* Description: Fifo 13 debit0 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2 */ +/* Description: Fifo 13 debit2 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 
vc1 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit overflow 0 */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit overflow 1 */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit overflow 2 */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit overflow 0 */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit overflow 1 */ +#define 
SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit overflow 2 */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0 */ +/* Description: PI Fifo debit0 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2 */ +/* Description: PI Fifo debit2 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0 */ +/* Description: IILB Fifo debit0 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2 */ +/* Description: IILB Fifo debit2 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0 */ +/* Description: MD Fifo debit0 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2 */ +/* Description: MD Fifo debit2 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0 */ +/* Description: NI Fifo debit0 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1 */ +/* Description: NI Fifo debit1 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2 */ +/* Description: NI Fifo debit2 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3 */ +/* Description: NI Fifo debit3 overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define 
SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit overflow */ +#define 
SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit overflow */ +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0 */ +/* Description: Fifo02 vc0 tail timeout */ +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56 +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2 */ +/* Description: Fifo02 vc2 tail timeout */ +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57 +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1 */ +/* Description: Fifo13 vc1 tail timeout */ +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58 +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3 */ +/* Description: Fifo13 vc3 tail timeout */ +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59 +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0 */ +/* Description: NI vc0 tail timeout */ +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_SHFT 60 +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1 */ +/* Description: NI vc1 tail timeout */ +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1_SHFT 61 +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2 */ +/* Description: NI vc2 tail timeout */ +#define 
SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2_SHFT 62 +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000 + +/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3 */ +/* Description: NI vc3 tail timeout */ +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3_SHFT 63 +#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_OVERFLOW_1_ALIAS" */ +/* ni0 Error Overflow Bits Alias */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_OVERFLOW_1_ALIAS 0x0000000150040528 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_OVERFLOW_2" */ +/* ni0 Error Overflow Bits */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_OVERFLOW_2 0x0000000150040530 +#define SH_NI0_ERROR_OVERFLOW_2_MASK 0x7fffffff003fffff +#define SH_NI0_ERROR_OVERFLOW_2_INIT 0x7fffffff003fffff + +/* SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCNI */ +/* Description: Illegal VC NI */ +#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCNI_SHFT 0 +#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCNI_MASK 0x0000000000000001 + +/* SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCPI */ +/* Description: Illegal VC PI */ +#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCPI_SHFT 1 +#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCPI_MASK 0x0000000000000002 + +/* SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCMD */ +/* Description: Illegal VC MD */ +#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCMD_SHFT 2 +#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCMD_MASK 0x0000000000000004 + +/* SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCIILB */ +/* Description: Illegal VC IILB */ +#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCIILB_SHFT 3 +#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCIILB_MASK 0x0000000000000008 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push underflow */ +#define 
SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit underflow 0 */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit underflow 1 */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit underflow 2 */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit underflow 0 */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit underflow 1 */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit underflow 2 */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop underflow */ +#define 
SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* 
SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit underflow */ +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0 */ +/* Description: llp deadlock vc0 */ +#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0_SHFT 56 +#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1 */ +/* Description: llp deadlock vc1 */ +#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1_SHFT 57 +#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2 */ +/* Description: llp deadlock vc2 */ +#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2_SHFT 58 +#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3 */ +/* Description: llp deadlock vc3 */ +#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3_SHFT 59 +#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_NI0_ERROR_OVERFLOW_2_CHIPLET_NOMATCH_SHFT 60 +#define SH_NI0_ERROR_OVERFLOW_2_CHIPLET_NOMATCH_MASK 0x1000000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_NI0_ERROR_OVERFLOW_2_LUT_READ_ERROR_SHFT 61 +#define SH_NI0_ERROR_OVERFLOW_2_LUT_READ_ERROR_MASK 0x2000000000000000 + +/* SH_NI0_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR */ +/* Description: Retry Timeout Error */ +#define 
SH_NI0_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR_SHFT 62 +#define SH_NI0_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_OVERFLOW_2_ALIAS" */ +/* ni0 Error Overflow Bits Alias */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_OVERFLOW_2_ALIAS 0x0000000150040538 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_MASK_1" */ +/* ni0 Error Mask Bits */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_MASK_1 0x0000000150040540 +#define SH_NI0_ERROR_MASK_1_MASK 0xffffffffffffffff +#define SH_NI0_ERROR_MASK_1_INIT 0xffffffffffffffff + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0 */ +/* Description: Fifo 02 debit0 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2 */ +/* Description: Fifo 02 debit2 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0 */ +/* Description: Fifo 13 debit0 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2 */ +/* Description: Fifo 13 debit2 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11 
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit overflow 0 */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI0_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit overflow 1 */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI0_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit overflow 2 */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI0_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit overflow 0 */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI0_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit overflow 1 */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI0_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit overflow 2 */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI0_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0 */ +/* Description: PI Fifo debit0 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2 */ +/* Description: PI Fifo debit2 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0 */ +/* Description: IILB Fifo debit0 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2 */ +/* Description: IILB Fifo debit2 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0 */ +/* Description: MD Fifo debit0 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26 +#define 
SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2 */ +/* Description: MD Fifo debit2 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0 */ +/* Description: NI Fifo debit0 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1 */ +/* Description: NI Fifo debit1 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2 */ +/* Description: NI Fifo debit2 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3 */ +/* Description: NI Fifo debit3 overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push overflow */ +#define 
SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit overflow */ +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit overflow */ +#define 
SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0 */ +/* Description: Fifo02 vc0 tail timeout */ +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56 +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000 + +/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2 */ +/* Description: Fifo02 vc2 tail timeout */ +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57 +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000 + +/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1 */ +/* Description: Fifo13 vc1 tail timeout */ +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58 +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000 + +/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3 */ +/* Description: Fifo13 vc3 tail timeout */ +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59 +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000 + +/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0 */ +/* Description: NI vc0 tail timeout */ +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0_SHFT 60 +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000 + +/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1 */ +/* Description: NI vc1 tail timeout */ +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1_SHFT 61 +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000 + +/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2 */ +/* Description: NI vc2 tail timeout */ +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2_SHFT 62 +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000 + +/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3 */ +/* Description: NI vc3 tail timeout */ +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3_SHFT 63 +#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_MASK_2" */ +/* ni0 Error Mask Bits */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_MASK_2 0x0000000150040550 +#define SH_NI0_ERROR_MASK_2_MASK 0x7fffffff003fffff +#define SH_NI0_ERROR_MASK_2_INIT 0x7fffffff003fffff + +/* SH_NI0_ERROR_MASK_2_ILLEGAL_VCNI */ +/* Description: Illegal VC NI */ +#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCNI_SHFT 0 +#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCNI_MASK 0x0000000000000001 + +/* SH_NI0_ERROR_MASK_2_ILLEGAL_VCPI */ +/* Description: Illegal VC PI */ +#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCPI_SHFT 1 +#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCPI_MASK 0x0000000000000002 + +/* SH_NI0_ERROR_MASK_2_ILLEGAL_VCMD */ +/* Description: Illegal VC MD */ +#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCMD_SHFT 2 +#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCMD_MASK 0x0000000000000004 + +/* SH_NI0_ERROR_MASK_2_ILLEGAL_VCIILB */ +/* Description: Illegal VC IILB */ +#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCIILB_SHFT 3 +#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCIILB_MASK 0x0000000000000008 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5 +#define 
SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit underflow 0 */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit underflow 1 */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit underflow 2 */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit underflow 0 */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit underflow 1 */ +#define 
SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit underflow 2 */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 
0x0000100000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit underflow */ +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC0 */ +/* Description: llp deadlock vc0 */ +#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC0_SHFT 56 +#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000 + +/* SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC1 */ +/* Description: llp deadlock vc1 */ +#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC1_SHFT 57 +#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000 + +/* SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC2 */ +/* Description: llp deadlock vc2 */ +#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC2_SHFT 58 +#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000 + +/* SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC3 */ +/* Description: llp deadlock vc3 */ +#define 
SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC3_SHFT 59 +#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000 + +/* SH_NI0_ERROR_MASK_2_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_NI0_ERROR_MASK_2_CHIPLET_NOMATCH_SHFT 60 +#define SH_NI0_ERROR_MASK_2_CHIPLET_NOMATCH_MASK 0x1000000000000000 + +/* SH_NI0_ERROR_MASK_2_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_NI0_ERROR_MASK_2_LUT_READ_ERROR_SHFT 61 +#define SH_NI0_ERROR_MASK_2_LUT_READ_ERROR_MASK 0x2000000000000000 + +/* SH_NI0_ERROR_MASK_2_RETRY_TIMEOUT_ERROR */ +/* Description: Retry Timeout Error */ +#define SH_NI0_ERROR_MASK_2_RETRY_TIMEOUT_ERROR_SHFT 62 +#define SH_NI0_ERROR_MASK_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_FIRST_ERROR_1" */ +/* ni0 First Error Bits */ +/* ==================================================================== */ + +#define SH_NI0_FIRST_ERROR_1 0x0000000150040560 +#define SH_NI0_FIRST_ERROR_1_MASK 0xffffffffffffffff +#define SH_NI0_FIRST_ERROR_1_INIT 0xffffffffffffffff + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0 */ +/* Description: Fifo 02 debit0 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2 */ +/* Description: Fifo 02 debit2 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0 */ +/* Description: Fifo 13 debit0 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2 */ +/* Description: Fifo 13 debit2 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH */ +/* 
Description: Fifo 13 vc1 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit overflow 0 */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit overflow 1 */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit overflow 2 */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit overflow 0 */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit overflow 1 */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit overflow 2 */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0 */ +/* Description: PI Fifo debit0 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2 */ +/* Description: PI Fifo debit2 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0 */ +/* Description: IILB Fifo debit0 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000 + +/* 
SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2 */ +/* Description: IILB Fifo debit2 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0 */ +/* Description: MD Fifo debit0 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2 */ +/* Description: MD Fifo debit2 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0 */ +/* Description: NI Fifo debit0 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1 */ +/* Description: NI Fifo debit1 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2 */ +/* Description: NI Fifo debit2 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3 */ +/* Description: NI Fifo debit3 overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define 
SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define 
SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit overflow */ +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0 */ +/* Description: Fifo02 vc0 tail timeout */ +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56 +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000 + +/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2 */ +/* Description: Fifo02 vc2 tail timeout */ +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57 +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000 + +/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1 */ +/* Description: Fifo13 vc1 tail timeout */ +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58 +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000 + +/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3 */ +/* Description: Fifo13 vc3 tail timeout */ +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59 +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000 + +/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0 */ +/* Description: NI vc0 tail timeout */ +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0_SHFT 60 +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000 + +/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1 */ +/* Description: NI vc1 tail timeout */ +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1_SHFT 61 +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000 + +/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2 */ +/* Description: NI vc2 tail timeout */ +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2_SHFT 62 +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000 + +/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3 */ +/* Description: NI vc3 tail timeout */ +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3_SHFT 63 +#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_FIRST_ERROR_2" */ +/* ni0 First Error Bits */ +/* ==================================================================== */ + +#define SH_NI0_FIRST_ERROR_2 0x0000000150040570 +#define SH_NI0_FIRST_ERROR_2_MASK 0x7fffffff003fffff +#define SH_NI0_FIRST_ERROR_2_INIT 0x7fffffff003fffff + +/* SH_NI0_FIRST_ERROR_2_ILLEGAL_VCNI */ +/* Description: Illegal VC NI */ +#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCNI_SHFT 0 +#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCNI_MASK 0x0000000000000001 + +/* SH_NI0_FIRST_ERROR_2_ILLEGAL_VCPI */ +/* Description: Illegal VC PI */ +#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCPI_SHFT 1 +#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCPI_MASK 0x0000000000000002 + +/* SH_NI0_FIRST_ERROR_2_ILLEGAL_VCMD */ +/* Description: Illegal VC MD */ +#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCMD_SHFT 2 +#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCMD_MASK 0x0000000000000004 + +/* SH_NI0_FIRST_ERROR_2_ILLEGAL_VCIILB */ +/* Description: Illegal VC IILB */ +#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCIILB_SHFT 3 +#define 
SH_NI0_FIRST_ERROR_2_ILLEGAL_VCIILB_MASK 0x0000000000000008 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit underflow 0 */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit underflow 1 */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT */ +/* 
Description: VC0 credit underflow 2 */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit underflow 0 */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit underflow 1 */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit underflow 2 */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define 
SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit underflow */ +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC0 */ +/* Description: llp deadlock vc0 */ +#define 
SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC0_SHFT 56 +#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000 + +/* SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC1 */ +/* Description: llp deadlock vc1 */ +#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC1_SHFT 57 +#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000 + +/* SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC2 */ +/* Description: llp deadlock vc2 */ +#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC2_SHFT 58 +#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000 + +/* SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC3 */ +/* Description: llp deadlock vc3 */ +#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC3_SHFT 59 +#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000 + +/* SH_NI0_FIRST_ERROR_2_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_NI0_FIRST_ERROR_2_CHIPLET_NOMATCH_SHFT 60 +#define SH_NI0_FIRST_ERROR_2_CHIPLET_NOMATCH_MASK 0x1000000000000000 + +/* SH_NI0_FIRST_ERROR_2_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_NI0_FIRST_ERROR_2_LUT_READ_ERROR_SHFT 61 +#define SH_NI0_FIRST_ERROR_2_LUT_READ_ERROR_MASK 0x2000000000000000 + +/* SH_NI0_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR */ +/* Description: Retry Timeout Error */ +#define SH_NI0_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR_SHFT 62 +#define SH_NI0_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_DETAIL_1" */ +/* ni0 Chiplet no match header bits 63:0 */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_DETAIL_1 0x0000000150040580 +#define SH_NI0_ERROR_DETAIL_1_MASK 0xffffffffffffffff +#define SH_NI0_ERROR_DETAIL_1_INIT 0x0000000000000000 + +/* SH_NI0_ERROR_DETAIL_1_HEADER */ +/* Description: Header bits 63:0 */ +#define SH_NI0_ERROR_DETAIL_1_HEADER_SHFT 0 +#define SH_NI0_ERROR_DETAIL_1_HEADER_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_DETAIL_2" */ +/* ni0 Chiplet no match header bits 127:64 */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_DETAIL_2 0x0000000150040590 +#define SH_NI0_ERROR_DETAIL_2_MASK 0xffffffffffffffff +#define SH_NI0_ERROR_DETAIL_2_INIT 0x0000000000000000 + +/* SH_NI0_ERROR_DETAIL_2_HEADER */ +/* Description: Header bits 127:64 */ +#define SH_NI0_ERROR_DETAIL_2_HEADER_SHFT 0 +#define SH_NI0_ERROR_DETAIL_2_HEADER_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_SUMMARY_1" */ +/* ni1 Error Summary Bits */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_SUMMARY_1 0x0000000150040600 +#define SH_NI1_ERROR_SUMMARY_1_MASK 0xffffffffffffffff +#define SH_NI1_ERROR_SUMMARY_1_INIT 0xffffffffffffffff + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0 */ +/* Description: Fifo 02 debit0 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2 */ +/* Description: Fifo 02 debit2 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0 */ +/* Description: Fifo 13 debit0 overflow */ +#define 
SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2 */ +/* Description: Fifo 13 debit2 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit overflow 0 */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT_SHFT 16 +#define 
SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit overflow 1 */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit overflow 2 */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit overflow 0 */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit overflow 1 */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit overflow 2 */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0 */ +/* Description: PI Fifo debit0 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2 */ +/* Description: PI Fifo debit2 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0 */ +/* Description: IILB Fifo debit0 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2 */ +/* Description: IILB Fifo debit2 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0 */ +/* Description: MD Fifo debit0 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2 */ +/* Description: MD Fifo debit2 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0 */ +/* Description: NI Fifo debit0 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1 */ +/* Description: NI Fifo debit1 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2 */ +/* Description: NI Fifo debit2 overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3 */ +/* Description: NI Fifo debit3 overflow */ +#define 
SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push overflow */ +#define 
SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit overflow */ +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0 */ +/* Description: Fifo02 vc0 tail timeout */ +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56 +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2 */ +/* Description: Fifo02 vc2 tail timeout */ +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57 +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1 */ +/* Description: Fifo13 vc1 tail timeout */ +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58 +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3 */ +/* 
Description: Fifo13 vc3 tail timeout */ +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59 +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0 */ +/* Description: NI vc0 tail timeout */ +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0_SHFT 60 +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1 */ +/* Description: NI vc1 tail timeout */ +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1_SHFT 61 +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2 */ +/* Description: NI vc2 tail timeout */ +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2_SHFT 62 +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000 + +/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3 */ +/* Description: NI vc3 tail timeout */ +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_SHFT 63 +#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_SUMMARY_1_ALIAS" */ +/* ni1 Error Summary Bits Alias */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_SUMMARY_1_ALIAS 0x0000000150040608 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_SUMMARY_2" */ +/* ni1 Error Summary Bits */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_SUMMARY_2 0x0000000150040610 +#define SH_NI1_ERROR_SUMMARY_2_MASK 0x7fffffff003fffff +#define SH_NI1_ERROR_SUMMARY_2_INIT 0x7fffffff003fffff + +/* SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCNI */ +/* Description: Illegal VC NI */ +#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCNI_SHFT 0 +#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCNI_MASK 0x0000000000000001 + +/* SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCPI */ +/* Description: Illegal VC PI */ +#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCPI_SHFT 1 +#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCPI_MASK 0x0000000000000002 + +/* SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCMD */ +/* Description: Illegal VC MD */ +#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCMD_SHFT 2 +#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCMD_MASK 0x0000000000000004 + +/* SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCIILB */ +/* Description: Illegal VC IILB */ +#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCIILB_SHFT 3 +#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCIILB_MASK 0x0000000000000008 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7 +#define 
SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit underflow 0 */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit underflow 1 */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit underflow 2 */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit underflow 0 */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit underflow 1 */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit underflow 2 */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* 
SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define 
SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit underflow */ +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0 */ +/* Description: llp deadlock vc0 */ +#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0_SHFT 56 +#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1 */ +/* Description: llp deadlock vc1 */ +#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1_SHFT 57 +#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2 */ +/* Description: llp deadlock vc2 */ +#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2_SHFT 58 +#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3 */ +/* Description: llp deadlock vc3 */ +#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3_SHFT 59 +#define 
SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_NI1_ERROR_SUMMARY_2_CHIPLET_NOMATCH_SHFT 60 +#define SH_NI1_ERROR_SUMMARY_2_CHIPLET_NOMATCH_MASK 0x1000000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_NI1_ERROR_SUMMARY_2_LUT_READ_ERROR_SHFT 61 +#define SH_NI1_ERROR_SUMMARY_2_LUT_READ_ERROR_MASK 0x2000000000000000 + +/* SH_NI1_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR */ +/* Description: Retry Timeout Error */ +#define SH_NI1_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR_SHFT 62 +#define SH_NI1_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_SUMMARY_2_ALIAS" */ +/* ni1 Error Summary Bits Alias */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_SUMMARY_2_ALIAS 0x0000000150040618 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_OVERFLOW_1" */ +/* ni1 Error Overflow Bits */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_OVERFLOW_1 0x0000000150040620 +#define SH_NI1_ERROR_OVERFLOW_1_MASK 0xffffffffffffffff +#define SH_NI1_ERROR_OVERFLOW_1_INIT 0xffffffffffffffff + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0 */ +/* Description: Fifo 02 debit0 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2 */ +/* Description: Fifo 02 debit2 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0 */ +/* Description: Fifo 13 debit0 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2 */ +/* Description: Fifo 13 debit2 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define 
SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit overflow 0 */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit overflow 1 */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit overflow 2 */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit overflow 0 */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit overflow 1 */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit overflow 2 */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0 */ +/* Description: PI Fifo debit0 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000 + +/* 
SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2 */ +/* Description: PI Fifo debit2 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0 */ +/* Description: IILB Fifo debit0 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2 */ +/* Description: IILB Fifo debit2 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0 */ +/* Description: MD Fifo debit0 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2 */ +/* Description: MD Fifo debit2 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0 */ +/* Description: NI Fifo debit0 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1 */ +/* Description: NI Fifo debit1 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2 */ +/* Description: NI Fifo debit2 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3 */ +/* Description: NI Fifo debit3 overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* 
SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define 
SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit overflow */ +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0 */ +/* Description: Fifo02 vc0 tail timeout */ +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56 +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2 */ +/* Description: Fifo02 vc2 tail timeout */ +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57 +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1 */ +/* Description: Fifo13 vc1 tail timeout */ +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58 +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3 */ +/* Description: Fifo13 vc3 tail timeout */ +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59 +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0 */ +/* Description: NI vc0 tail timeout */ +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_SHFT 60 +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1 */ +/* Description: NI vc1 tail timeout */ +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1_SHFT 61 +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2 */ +/* Description: NI vc2 tail timeout */ +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2_SHFT 62 +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000 + +/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3 */ +/* Description: NI vc3 tail timeout */ +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3_SHFT 63 +#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_OVERFLOW_1_ALIAS" */ +/* ni1 Error Overflow Bits Alias */ +/* ==================================================================== */ + +#define 
SH_NI1_ERROR_OVERFLOW_1_ALIAS 0x0000000150040628 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_OVERFLOW_2" */ +/* ni1 Error Overflow Bits */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_OVERFLOW_2 0x0000000150040630 +#define SH_NI1_ERROR_OVERFLOW_2_MASK 0x7fffffff003fffff +#define SH_NI1_ERROR_OVERFLOW_2_INIT 0x7fffffff003fffff + +/* SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCNI */ +/* Description: Illegal VC NI */ +#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCNI_SHFT 0 +#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCNI_MASK 0x0000000000000001 + +/* SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCPI */ +/* Description: Illegal VC PI */ +#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCPI_SHFT 1 +#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCPI_MASK 0x0000000000000002 + +/* SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCMD */ +/* Description: Illegal VC MD */ +#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCMD_SHFT 2 +#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCMD_MASK 0x0000000000000004 + +/* SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCIILB */ +/* Description: Illegal VC IILB */ +#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCIILB_SHFT 3 +#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCIILB_MASK 0x0000000000000008 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 
+ +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit underflow 0 */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit underflow 1 */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit underflow 2 */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit underflow 0 */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit underflow 1 */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit underflow 2 */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* 
SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit underflow */ +#define 
SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit underflow */ +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0 */ +/* Description: llp deadlock vc0 */ +#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0_SHFT 56 +#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1 */ +/* Description: llp deadlock vc1 */ +#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1_SHFT 57 +#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2 */ +/* Description: llp deadlock vc2 */ +#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2_SHFT 58 +#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3 */ +/* Description: llp deadlock vc3 */ +#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3_SHFT 59 +#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_NI1_ERROR_OVERFLOW_2_CHIPLET_NOMATCH_SHFT 60 +#define SH_NI1_ERROR_OVERFLOW_2_CHIPLET_NOMATCH_MASK 0x1000000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_NI1_ERROR_OVERFLOW_2_LUT_READ_ERROR_SHFT 61 +#define SH_NI1_ERROR_OVERFLOW_2_LUT_READ_ERROR_MASK 0x2000000000000000 + +/* SH_NI1_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR */ +/* Description: Retry Timeout Error */ +#define SH_NI1_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR_SHFT 62 +#define SH_NI1_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_OVERFLOW_2_ALIAS" */ +/* ni1 Error Overflow Bits Alias */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_OVERFLOW_2_ALIAS 0x0000000150040638 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_MASK_1" */ +/* ni1 Error Mask Bits */ +/* ==================================================================== */ + +#define 
SH_NI1_ERROR_MASK_1 0x0000000150040640 +#define SH_NI1_ERROR_MASK_1_MASK 0xffffffffffffffff +#define SH_NI1_ERROR_MASK_1_INIT 0xffffffffffffffff + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0 */ +/* Description: Fifo 02 debit0 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2 */ +/* Description: Fifo 02 debit2 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0 */ +/* Description: Fifo 13 debit0 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2 */ +/* Description: Fifo 13 debit2 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit 
overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit overflow 0 */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI1_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit overflow 1 */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI1_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit overflow 2 */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI1_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit overflow 0 */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI1_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit overflow 1 */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI1_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit overflow 2 */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI1_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0 */ +/* Description: PI Fifo debit0 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2 */ +/* Description: PI Fifo debit2 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0 */ +/* Description: IILB Fifo debit0 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2 */ +/* Description: IILB Fifo debit2 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0 */ +/* Description: MD Fifo debit0 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2 */ +/* Description: MD Fifo debit2 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0 */ +/* Description: NI Fifo debit0 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1 */ +/* Description: NI Fifo debit1 overflow */ +#define 
SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2 */ +/* Description: NI Fifo debit2 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3 */ +/* Description: NI Fifo debit3 overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* 
SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit overflow */ +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0 */ +/* Description: Fifo02 vc0 tail timeout */ +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56 +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000 + +/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2 */ +/* Description: Fifo02 vc2 tail timeout */ +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57 +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000 + +/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1 */ +/* Description: Fifo13 
vc1 tail timeout */ +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58 +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000 + +/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3 */ +/* Description: Fifo13 vc3 tail timeout */ +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59 +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000 + +/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0 */ +/* Description: NI vc0 tail timeout */ +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0_SHFT 60 +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000 + +/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1 */ +/* Description: NI vc1 tail timeout */ +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1_SHFT 61 +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000 + +/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2 */ +/* Description: NI vc2 tail timeout */ +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2_SHFT 62 +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000 + +/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3 */ +/* Description: NI vc3 tail timeout */ +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3_SHFT 63 +#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_MASK_2" */ +/* ni1 Error Mask Bits */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_MASK_2 0x0000000150040650 +#define SH_NI1_ERROR_MASK_2_MASK 0x7fffffff003fffff +#define SH_NI1_ERROR_MASK_2_INIT 0x7fffffff003fffff + +/* SH_NI1_ERROR_MASK_2_ILLEGAL_VCNI */ +/* Description: Illegal VC NI */ +#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCNI_SHFT 0 +#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCNI_MASK 0x0000000000000001 + +/* SH_NI1_ERROR_MASK_2_ILLEGAL_VCPI */ +/* Description: Illegal VC PI */ +#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCPI_SHFT 1 +#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCPI_MASK 0x0000000000000002 + +/* SH_NI1_ERROR_MASK_2_ILLEGAL_VCMD */ +/* Description: Illegal VC MD */ +#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCMD_SHFT 2 +#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCMD_MASK 0x0000000000000004 + +/* SH_NI1_ERROR_MASK_2_ILLEGAL_VCIILB */ +/* Description: Illegal VC IILB */ +#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCIILB_SHFT 3 +#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCIILB_MASK 0x0000000000000008 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define 
SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit underflow 0 */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit underflow 1 */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit underflow 2 */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit underflow 0 */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit underflow 1 */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit underflow 2 */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop underflow */ +#define 
SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define 
SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit underflow */ +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC0 */ +/* Description: llp deadlock vc0 */ +#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC0_SHFT 56 +#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000 + +/* SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC1 */ +/* Description: llp deadlock vc1 */ +#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC1_SHFT 57 +#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000 + +/* SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC2 */ +/* Description: llp deadlock vc2 */ +#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC2_SHFT 58 +#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000 + +/* SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC3 */ +/* Description: llp deadlock vc3 */ +#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC3_SHFT 59 +#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000 + +/* SH_NI1_ERROR_MASK_2_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_NI1_ERROR_MASK_2_CHIPLET_NOMATCH_SHFT 60 +#define SH_NI1_ERROR_MASK_2_CHIPLET_NOMATCH_MASK 0x1000000000000000 + +/* SH_NI1_ERROR_MASK_2_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_NI1_ERROR_MASK_2_LUT_READ_ERROR_SHFT 61 +#define SH_NI1_ERROR_MASK_2_LUT_READ_ERROR_MASK 0x2000000000000000 + +/* SH_NI1_ERROR_MASK_2_RETRY_TIMEOUT_ERROR */ +/* Description: Retry Timeout Error */ +#define SH_NI1_ERROR_MASK_2_RETRY_TIMEOUT_ERROR_SHFT 62 +#define 
SH_NI1_ERROR_MASK_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_FIRST_ERROR_1" */ +/* ni1 First Error Bits */ +/* ==================================================================== */ + +#define SH_NI1_FIRST_ERROR_1 0x0000000150040660 +#define SH_NI1_FIRST_ERROR_1_MASK 0xffffffffffffffff +#define SH_NI1_FIRST_ERROR_1_INIT 0xffffffffffffffff + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0 */ +/* Description: Fifo 02 debit0 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2 */ +/* Description: Fifo 02 debit2 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0 */ +/* Description: Fifo 13 debit0 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2 */ +/* Description: Fifo 13 debit2 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + 
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit overflow 0 */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit overflow 1 */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit overflow 2 */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit overflow 0 */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit overflow 1 */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit overflow 2 */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0 */ +/* Description: PI Fifo debit0 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2 */ +/* Description: PI Fifo debit2 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0 */ +/* Description: IILB Fifo debit0 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2 */ +/* Description: IILB Fifo debit2 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0 */ +/* Description: MD Fifo debit0 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2 */ +/* Description: MD Fifo debit2 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 
0x0000000008000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0 */ +/* Description: NI Fifo debit0 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1 */ +/* Description: NI Fifo debit1 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2 */ +/* Description: NI Fifo debit2 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3 */ +/* Description: NI Fifo debit3 overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push overflow */ +#define 
SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit overflow */ +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0 */ +/* Description: Fifo02 vc0 tail timeout */ +#define 
SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56 +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000 + +/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2 */ +/* Description: Fifo02 vc2 tail timeout */ +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57 +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000 + +/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1 */ +/* Description: Fifo13 vc1 tail timeout */ +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58 +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000 + +/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3 */ +/* Description: Fifo13 vc3 tail timeout */ +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59 +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000 + +/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0 */ +/* Description: NI vc0 tail timeout */ +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0_SHFT 60 +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000 + +/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1 */ +/* Description: NI vc1 tail timeout */ +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1_SHFT 61 +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000 + +/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2 */ +/* Description: NI vc2 tail timeout */ +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2_SHFT 62 +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000 + +/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3 */ +/* Description: NI vc3 tail timeout */ +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3_SHFT 63 +#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_FIRST_ERROR_2" */ +/* ni1 First Error Bits */ +/* ==================================================================== */ + +#define SH_NI1_FIRST_ERROR_2 0x0000000150040670 +#define SH_NI1_FIRST_ERROR_2_MASK 0x7fffffff003fffff +#define SH_NI1_FIRST_ERROR_2_INIT 0x7fffffff003fffff + +/* SH_NI1_FIRST_ERROR_2_ILLEGAL_VCNI */ +/* Description: Illegal VC NI */ +#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCNI_SHFT 0 +#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCNI_MASK 0x0000000000000001 + +/* SH_NI1_FIRST_ERROR_2_ILLEGAL_VCPI */ +/* Description: Illegal VC PI */ +#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCPI_SHFT 1 +#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCPI_MASK 0x0000000000000002 + +/* SH_NI1_FIRST_ERROR_2_ILLEGAL_VCMD */ +/* Description: Illegal VC MD */ +#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCMD_SHFT 2 +#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCMD_MASK 0x0000000000000004 + +/* SH_NI1_FIRST_ERROR_2_ILLEGAL_VCIILB */ +/* Description: Illegal VC IILB */ +#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCIILB_SHFT 3 +#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCIILB_MASK 0x0000000000000008 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP */ +/* Description: Fifo 02 vc0 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP */ +/* Description: Fifo 02 vc2 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP */ +/* Description: Fifo 13 vc1 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6 +#define 
SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP */ +/* Description: Fifo 13 vc3 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH */ +/* Description: Fifo 02 vc0 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH */ +/* Description: Fifo 02 vc2 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH */ +/* Description: Fifo 13 vc1 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH */ +/* Description: Fifo 13 vc3 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT */ +/* Description: Fifo 02 vc0 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT */ +/* Description: Fifo 02 vc2 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT */ +/* Description: Fifo 13 vc0 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT */ +/* Description: Fifo 13 vc2 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT */ +/* Description: VC0 credit underflow 0 */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT_SHFT 16 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT */ +/* Description: VC0 credit underflow 1 */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT_SHFT 17 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT */ +/* Description: VC0 credit underflow 2 */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT_SHFT 18 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT */ +/* Description: VC2 credit underflow 0 */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT_SHFT 19 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT */ +/* Description: VC2 credit underflow 1 */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT_SHFT 20 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT */ +/* Description: VC2 credit 
underflow 2 */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT_SHFT 21 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP */ +/* Description: PI Fifo vc0 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP */ +/* Description: PI Fifo vc2 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP */ +/* Description: IILB Fifo vc0 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP */ +/* Description: IILB Fifo vc2 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP */ +/* Description: MD Fifo vc0 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP */ +/* Description: MD Fifo vc2 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP */ +/* Description: NI Fifo vc0 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP */ +/* Description: NI Fifo vc2 pop underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH */ +/* Description: PI Fifo vc0 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH */ +/* Description: PI Fifo vc2 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */ +/* Description: IILB Fifo vc0 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */ +/* Description: IILB Fifo vc2 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH */ +/* Description: MD Fifo vc0 push underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH */ +/* Description: MD Fifo vc2 push underflow */ +#define 
SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */ +/* Description: PI Fifo vc0 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */ +/* Description: PI Fifo vc2 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */ +/* Description: IILB Fifo vc0 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */ +/* Description: IILB Fifo vc2 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */ +/* Description: MD Fifo vc0 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */ +/* Description: MD Fifo vc2 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */ +/* Description: NI Fifo vc0 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */ +/* Description: NI Fifo vc1 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */ +/* Description: NI Fifo vc2 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000 + +/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */ +/* Description: NI Fifo vc3 credit underflow */ +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55 +#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000 + +/* SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC0 */ +/* Description: llp deadlock vc0 */ +#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC0_SHFT 56 +#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000 + +/* SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC1 */ +/* Description: llp deadlock vc1 */ +#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC1_SHFT 57 +#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000 + +/* SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC2 */ +/* Description: llp deadlock vc2 */ +#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC2_SHFT 58 +#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000 + +/* SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC3 */ +/* Description: llp deadlock vc3 */ +#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC3_SHFT 59 +#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC3_MASK 
0x0800000000000000 + +/* SH_NI1_FIRST_ERROR_2_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_NI1_FIRST_ERROR_2_CHIPLET_NOMATCH_SHFT 60 +#define SH_NI1_FIRST_ERROR_2_CHIPLET_NOMATCH_MASK 0x1000000000000000 + +/* SH_NI1_FIRST_ERROR_2_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_NI1_FIRST_ERROR_2_LUT_READ_ERROR_SHFT 61 +#define SH_NI1_FIRST_ERROR_2_LUT_READ_ERROR_MASK 0x2000000000000000 + +/* SH_NI1_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR */ +/* Description: Retry Timeout Error */ +#define SH_NI1_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR_SHFT 62 +#define SH_NI1_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_DETAIL_1" */ +/* ni1 Chiplet no match header bits 63:0 */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_DETAIL_1 0x0000000150040680 +#define SH_NI1_ERROR_DETAIL_1_MASK 0xffffffffffffffff +#define SH_NI1_ERROR_DETAIL_1_INIT 0x0000000000000000 + +/* SH_NI1_ERROR_DETAIL_1_HEADER */ +/* Description: Header bits 63:0 */ +#define SH_NI1_ERROR_DETAIL_1_HEADER_SHFT 0 +#define SH_NI1_ERROR_DETAIL_1_HEADER_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_DETAIL_2" */ +/* ni1 Chiplet no match header bits 127:64 */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_DETAIL_2 0x0000000150040690 +#define SH_NI1_ERROR_DETAIL_2_MASK 0xffffffffffffffff +#define SH_NI1_ERROR_DETAIL_2_INIT 0x0000000000000000 + +/* SH_NI1_ERROR_DETAIL_2_HEADER */ +/* Description: Header bits 127:64 */ +#define SH_NI1_ERROR_DETAIL_2_HEADER_SHFT 0 +#define SH_NI1_ERROR_DETAIL_2_HEADER_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_CORRECTED_DETAIL_1" */ +/* Corrected error details */ +/* ==================================================================== */ + +#define SH_XN_CORRECTED_DETAIL_1 0x0000000150040070 +#define SH_XN_CORRECTED_DETAIL_1_MASK 0x0fff0fff0fff0fff +#define SH_XN_CORRECTED_DETAIL_1_INIT 0x0000000000000000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC0_SYNDROME */ +/* Description: ECC0 Syndrome */ +#define SH_XN_CORRECTED_DETAIL_1_ECC0_SYNDROME_SHFT 0 +#define SH_XN_CORRECTED_DETAIL_1_ECC0_SYNDROME_MASK 0x00000000000000ff + +/* SH_XN_CORRECTED_DETAIL_1_ECC0_WC */ +/* Description: ECC0 Word Count */ +#define SH_XN_CORRECTED_DETAIL_1_ECC0_WC_SHFT 8 +#define SH_XN_CORRECTED_DETAIL_1_ECC0_WC_MASK 0x0000000000000300 + +/* SH_XN_CORRECTED_DETAIL_1_ECC0_VC */ +/* Description: ECC0 Virtual Channel */ +#define SH_XN_CORRECTED_DETAIL_1_ECC0_VC_SHFT 10 +#define SH_XN_CORRECTED_DETAIL_1_ECC0_VC_MASK 0x0000000000000c00 + +/* SH_XN_CORRECTED_DETAIL_1_ECC1_SYNDROME */ +/* Description: ECC1 Syndrome */ +#define SH_XN_CORRECTED_DETAIL_1_ECC1_SYNDROME_SHFT 16 +#define SH_XN_CORRECTED_DETAIL_1_ECC1_SYNDROME_MASK 0x0000000000ff0000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC1_WC */ +/* Description: ECC1 Word Count */ +#define SH_XN_CORRECTED_DETAIL_1_ECC1_WC_SHFT 24 +#define SH_XN_CORRECTED_DETAIL_1_ECC1_WC_MASK 0x0000000003000000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC1_VC */ +/* Description: ECC1 Virtual Channel */ +#define SH_XN_CORRECTED_DETAIL_1_ECC1_VC_SHFT 26 +#define SH_XN_CORRECTED_DETAIL_1_ECC1_VC_MASK 0x000000000c000000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC2_SYNDROME */ +/* Description: ECC2 Syndrome */ +#define 
SH_XN_CORRECTED_DETAIL_1_ECC2_SYNDROME_SHFT 32 +#define SH_XN_CORRECTED_DETAIL_1_ECC2_SYNDROME_MASK 0x000000ff00000000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC2_WC */ +/* Description: ECC2 Word Count */ +#define SH_XN_CORRECTED_DETAIL_1_ECC2_WC_SHFT 40 +#define SH_XN_CORRECTED_DETAIL_1_ECC2_WC_MASK 0x0000030000000000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC2_VC */ +/* Description: ECC2 Virtual Channel */ +#define SH_XN_CORRECTED_DETAIL_1_ECC2_VC_SHFT 42 +#define SH_XN_CORRECTED_DETAIL_1_ECC2_VC_MASK 0x00000c0000000000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC3_SYNDROME */ +/* Description: ECC3 Syndrome */ +#define SH_XN_CORRECTED_DETAIL_1_ECC3_SYNDROME_SHFT 48 +#define SH_XN_CORRECTED_DETAIL_1_ECC3_SYNDROME_MASK 0x00ff000000000000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC3_WC */ +/* Description: ECC3 Word Count */ +#define SH_XN_CORRECTED_DETAIL_1_ECC3_WC_SHFT 56 +#define SH_XN_CORRECTED_DETAIL_1_ECC3_WC_MASK 0x0300000000000000 + +/* SH_XN_CORRECTED_DETAIL_1_ECC3_VC */ +/* Description: ECC3 Virtual Channel */ +#define SH_XN_CORRECTED_DETAIL_1_ECC3_VC_SHFT 58 +#define SH_XN_CORRECTED_DETAIL_1_ECC3_VC_MASK 0x0c00000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_CORRECTED_DETAIL_2" */ +/* Corrected error data */ +/* ==================================================================== */ + +#define SH_XN_CORRECTED_DETAIL_2 0x0000000150040080 +#define SH_XN_CORRECTED_DETAIL_2_MASK 0xffffffffffffffff +#define SH_XN_CORRECTED_DETAIL_2_INIT 0x0000000000000000 + +/* SH_XN_CORRECTED_DETAIL_2_DATA */ +/* Description: ECC data */ +#define SH_XN_CORRECTED_DETAIL_2_DATA_SHFT 0 +#define SH_XN_CORRECTED_DETAIL_2_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_CORRECTED_DETAIL_3" */ +/* Corrected error header0 */ +/* ==================================================================== */ + +#define SH_XN_CORRECTED_DETAIL_3 0x0000000150040090 +#define SH_XN_CORRECTED_DETAIL_3_MASK 0xffffffffffffffff +#define SH_XN_CORRECTED_DETAIL_3_INIT 0x0000000000000000 + +/* SH_XN_CORRECTED_DETAIL_3_HEADER0 */ +/* Description: ECC header0 (bits 63 - 0) */ +#define SH_XN_CORRECTED_DETAIL_3_HEADER0_SHFT 0 +#define SH_XN_CORRECTED_DETAIL_3_HEADER0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_CORRECTED_DETAIL_4" */ +/* Corrected error header1 */ +/* ==================================================================== */ + +#define SH_XN_CORRECTED_DETAIL_4 0x00000001500400a0 +#define SH_XN_CORRECTED_DETAIL_4_MASK 0xc00003ffffffffff +#define SH_XN_CORRECTED_DETAIL_4_INIT 0x0000000000000000 + +/* SH_XN_CORRECTED_DETAIL_4_HEADER1 */ +/* Description: ECC header1 (bits 104 - 64) */ +#define SH_XN_CORRECTED_DETAIL_4_HEADER1_SHFT 0 +#define SH_XN_CORRECTED_DETAIL_4_HEADER1_MASK 0x000003ffffffffff + +/* SH_XN_CORRECTED_DETAIL_4_ERR_GROUP */ +/* Description: Error group */ +#define SH_XN_CORRECTED_DETAIL_4_ERR_GROUP_SHFT 62 +#define SH_XN_CORRECTED_DETAIL_4_ERR_GROUP_MASK 0xc000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_UNCORRECTED_DETAIL_1" */ +/* Uncorrected error details */ +/* ==================================================================== */ + +#define SH_XN_UNCORRECTED_DETAIL_1 0x00000001500400b0 +#define SH_XN_UNCORRECTED_DETAIL_1_MASK 0x0fff0fff0fff0fff +#define SH_XN_UNCORRECTED_DETAIL_1_INIT 0x0000000000000000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC0_SYNDROME */ +/* 
Description: ECC0 Syndrome */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_SYNDROME_SHFT 0 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_SYNDROME_MASK 0x00000000000000ff + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC0_WC */ +/* Description: ECC0 Word Count */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_WC_SHFT 8 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_WC_MASK 0x0000000000000300 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC0_VC */ +/* Description: ECC0 Virtual Channel */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_VC_SHFT 10 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_VC_MASK 0x0000000000000c00 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC1_SYNDROME */ +/* Description: ECC1 Syndrome */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_SYNDROME_SHFT 16 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_SYNDROME_MASK 0x0000000000ff0000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC1_WC */ +/* Description: ECC1 Word Count */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_WC_SHFT 24 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_WC_MASK 0x0000000003000000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC1_VC */ +/* Description: ECC1 Virtual Channel */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_VC_SHFT 26 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_VC_MASK 0x000000000c000000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC2_SYNDROME */ +/* Description: ECC2 Syndrome */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_SYNDROME_SHFT 32 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_SYNDROME_MASK 0x000000ff00000000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC2_WC */ +/* Description: ECC2 Word Count */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_WC_SHFT 40 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_WC_MASK 0x0000030000000000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC2_VC */ +/* Description: ECC2 Virtual Channel */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_VC_SHFT 42 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_VC_MASK 0x00000c0000000000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC3_SYNDROME */ +/* Description: ECC3 Syndrome */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_SYNDROME_SHFT 48 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_SYNDROME_MASK 0x00ff000000000000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC3_WC */ +/* Description: ECC3 Word Count */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_WC_SHFT 56 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_WC_MASK 0x0300000000000000 + +/* SH_XN_UNCORRECTED_DETAIL_1_ECC3_VC */ +/* Description: ECC3 Virtual Channel */ +#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_VC_SHFT 58 +#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_VC_MASK 0x0c00000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_UNCORRECTED_DETAIL_2" */ +/* Uncorrected error data */ +/* ==================================================================== */ + +#define SH_XN_UNCORRECTED_DETAIL_2 0x00000001500400c0 +#define SH_XN_UNCORRECTED_DETAIL_2_MASK 0xffffffffffffffff +#define SH_XN_UNCORRECTED_DETAIL_2_INIT 0x0000000000000000 + +/* SH_XN_UNCORRECTED_DETAIL_2_DATA */ +/* Description: ECC data */ +#define SH_XN_UNCORRECTED_DETAIL_2_DATA_SHFT 0 +#define SH_XN_UNCORRECTED_DETAIL_2_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_UNCORRECTED_DETAIL_3" */ +/* Uncorrected error header0 */ +/* ==================================================================== */ + +#define SH_XN_UNCORRECTED_DETAIL_3 0x00000001500400d0 +#define SH_XN_UNCORRECTED_DETAIL_3_MASK 0xffffffffffffffff +#define SH_XN_UNCORRECTED_DETAIL_3_INIT 0x0000000000000000 + +/* SH_XN_UNCORRECTED_DETAIL_3_HEADER0 */ +/* Description: ECC header0 (bits 63 - 0) */ +#define 
SH_XN_UNCORRECTED_DETAIL_3_HEADER0_SHFT 0 +#define SH_XN_UNCORRECTED_DETAIL_3_HEADER0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XN_UNCORRECTED_DETAIL_4" */ +/* Uncorrected error header1 */ +/* ==================================================================== */ + +#define SH_XN_UNCORRECTED_DETAIL_4 0x00000001500400e0 +#define SH_XN_UNCORRECTED_DETAIL_4_MASK 0xc00003ffffffffff +#define SH_XN_UNCORRECTED_DETAIL_4_INIT 0x0000000000000000 + +/* SH_XN_UNCORRECTED_DETAIL_4_HEADER1 */ +/* Description: ECC header1 (bits 104 - 64) */ +#define SH_XN_UNCORRECTED_DETAIL_4_HEADER1_SHFT 0 +#define SH_XN_UNCORRECTED_DETAIL_4_HEADER1_MASK 0x000003ffffffffff + +/* SH_XN_UNCORRECTED_DETAIL_4_ERR_GROUP */ +/* Description: Error group */ +#define SH_XN_UNCORRECTED_DETAIL_4_ERR_GROUP_SHFT 62 +#define SH_XN_UNCORRECTED_DETAIL_4_ERR_GROUP_MASK 0xc000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_ERROR_DETAIL_1" */ +/* Look Up Table Address (md) */ +/* ==================================================================== */ + +#define SH_XNMD_ERROR_DETAIL_1 0x00000001500400f0 +#define SH_XNMD_ERROR_DETAIL_1_MASK 0x00000000000007ff +#define SH_XNMD_ERROR_DETAIL_1_INIT 0x0000000000000000 + +/* SH_XNMD_ERROR_DETAIL_1_LUT_ADDR */ +/* Description: Look Up Table Read Address */ +#define SH_XNMD_ERROR_DETAIL_1_LUT_ADDR_SHFT 0 +#define SH_XNMD_ERROR_DETAIL_1_LUT_ADDR_MASK 0x00000000000007ff + +/* ==================================================================== */ +/* Register "SH_XNPI_ERROR_DETAIL_1" */ +/* Look Up Table Address (pi) */ +/* ==================================================================== */ + +#define SH_XNPI_ERROR_DETAIL_1 0x0000000150040100 +#define SH_XNPI_ERROR_DETAIL_1_MASK 0x00000000000007ff +#define SH_XNPI_ERROR_DETAIL_1_INIT 0x0000000000000000 + +/* SH_XNPI_ERROR_DETAIL_1_LUT_ADDR */ +/* Description: Look Up Table Read Address */ +#define SH_XNPI_ERROR_DETAIL_1_LUT_ADDR_SHFT 0 +#define SH_XNPI_ERROR_DETAIL_1_LUT_ADDR_MASK 0x00000000000007ff + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_DETAIL_1" */ +/* Chiplet NoMatch header [63:0] */ +/* ==================================================================== */ + +#define SH_XNIILB_ERROR_DETAIL_1 0x0000000150040110 +#define SH_XNIILB_ERROR_DETAIL_1_MASK 0xffffffffffffffff +#define SH_XNIILB_ERROR_DETAIL_1_INIT 0x0000000000000000 + +/* SH_XNIILB_ERROR_DETAIL_1_HEADER */ +/* Description: header bits [63:0] */ +#define SH_XNIILB_ERROR_DETAIL_1_HEADER_SHFT 0 +#define SH_XNIILB_ERROR_DETAIL_1_HEADER_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_DETAIL_2" */ +/* Chiplet NoMatch header [127:64] */ +/* ==================================================================== */ + +#define SH_XNIILB_ERROR_DETAIL_2 0x0000000150040120 +#define SH_XNIILB_ERROR_DETAIL_2_MASK 0xffffffffffffffff +#define SH_XNIILB_ERROR_DETAIL_2_INIT 0x0000000000000000 + +/* SH_XNIILB_ERROR_DETAIL_2_HEADER */ +/* Description: header bits [127:64] */ +#define SH_XNIILB_ERROR_DETAIL_2_HEADER_SHFT 0 +#define SH_XNIILB_ERROR_DETAIL_2_HEADER_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_DETAIL_3" */ +/* Look Up Table Address (iilb) */ +/* ==================================================================== */ 
+ +#define SH_XNIILB_ERROR_DETAIL_3 0x0000000150040130 +#define SH_XNIILB_ERROR_DETAIL_3_MASK 0x00000000000007ff +#define SH_XNIILB_ERROR_DETAIL_3_INIT 0x0000000000000000 + +/* SH_XNIILB_ERROR_DETAIL_3_LUT_ADDR */ +/* Description: Look Up Table Read Address */ +#define SH_XNIILB_ERROR_DETAIL_3_LUT_ADDR_SHFT 0 +#define SH_XNIILB_ERROR_DETAIL_3_LUT_ADDR_MASK 0x00000000000007ff + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_DETAIL_3" */ +/* Look Up Table Address (ni0) */ +/* ==================================================================== */ + +#define SH_NI0_ERROR_DETAIL_3 0x0000000150040140 +#define SH_NI0_ERROR_DETAIL_3_MASK 0x00000000000007ff +#define SH_NI0_ERROR_DETAIL_3_INIT 0x0000000000000000 + +/* SH_NI0_ERROR_DETAIL_3_LUT_ADDR */ +/* Description: Look Up Table Read Address */ +#define SH_NI0_ERROR_DETAIL_3_LUT_ADDR_SHFT 0 +#define SH_NI0_ERROR_DETAIL_3_LUT_ADDR_MASK 0x00000000000007ff + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_DETAIL_3" */ +/* Look Up Table Address (ni1) */ +/* ==================================================================== */ + +#define SH_NI1_ERROR_DETAIL_3 0x0000000150040150 +#define SH_NI1_ERROR_DETAIL_3_MASK 0x00000000000007ff +#define SH_NI1_ERROR_DETAIL_3_INIT 0x0000000000000000 + +/* SH_NI1_ERROR_DETAIL_3_LUT_ADDR */ +/* Description: Look Up Table Read Address */ +#define SH_NI1_ERROR_DETAIL_3_LUT_ADDR_SHFT 0 +#define SH_NI1_ERROR_DETAIL_3_LUT_ADDR_MASK 0x00000000000007ff + +/* ==================================================================== */ +/* Register "SH_XN_ERROR_SUMMARY" */ +/* ==================================================================== */ + +#define SH_XN_ERROR_SUMMARY 0x0000000150040000 +#define SH_XN_ERROR_SUMMARY_MASK 0x0000003fffffffff +#define SH_XN_ERROR_SUMMARY_INIT 0x0000003fffffffff + +/* SH_XN_ERROR_SUMMARY_NI0_POP_OVERFLOW */ +/* Description: NI0 pop overflow */ +#define SH_XN_ERROR_SUMMARY_NI0_POP_OVERFLOW_SHFT 0 +#define SH_XN_ERROR_SUMMARY_NI0_POP_OVERFLOW_MASK 0x0000000000000001 + +/* SH_XN_ERROR_SUMMARY_NI0_PUSH_OVERFLOW */ +/* Description: NI0 push overflow */ +#define SH_XN_ERROR_SUMMARY_NI0_PUSH_OVERFLOW_SHFT 1 +#define SH_XN_ERROR_SUMMARY_NI0_PUSH_OVERFLOW_MASK 0x0000000000000002 + +/* SH_XN_ERROR_SUMMARY_NI0_CREDIT_OVERFLOW */ +/* Description: NI0 credit overflow */ +#define SH_XN_ERROR_SUMMARY_NI0_CREDIT_OVERFLOW_SHFT 2 +#define SH_XN_ERROR_SUMMARY_NI0_CREDIT_OVERFLOW_MASK 0x0000000000000004 + +/* SH_XN_ERROR_SUMMARY_NI0_DEBIT_OVERFLOW */ +/* Description: NI0 debit overflow */ +#define SH_XN_ERROR_SUMMARY_NI0_DEBIT_OVERFLOW_SHFT 3 +#define SH_XN_ERROR_SUMMARY_NI0_DEBIT_OVERFLOW_MASK 0x0000000000000008 + +/* SH_XN_ERROR_SUMMARY_NI0_POP_UNDERFLOW */ +/* Description: NI0 pop underflow */ +#define SH_XN_ERROR_SUMMARY_NI0_POP_UNDERFLOW_SHFT 4 +#define SH_XN_ERROR_SUMMARY_NI0_POP_UNDERFLOW_MASK 0x0000000000000010 + +/* SH_XN_ERROR_SUMMARY_NI0_PUSH_UNDERFLOW */ +/* Description: NI0 push underflow */ +#define SH_XN_ERROR_SUMMARY_NI0_PUSH_UNDERFLOW_SHFT 5 +#define SH_XN_ERROR_SUMMARY_NI0_PUSH_UNDERFLOW_MASK 0x0000000000000020 + +/* SH_XN_ERROR_SUMMARY_NI0_CREDIT_UNDERFLOW */ +/* Description: NI0 credit underflow */ +#define SH_XN_ERROR_SUMMARY_NI0_CREDIT_UNDERFLOW_SHFT 6 +#define SH_XN_ERROR_SUMMARY_NI0_CREDIT_UNDERFLOW_MASK 0x0000000000000040 + +/* SH_XN_ERROR_SUMMARY_NI0_LLP_ERROR */ +/* Description: NI0 llp error */ +#define SH_XN_ERROR_SUMMARY_NI0_LLP_ERROR_SHFT 7 +#define 
SH_XN_ERROR_SUMMARY_NI0_LLP_ERROR_MASK 0x0000000000000080 + +/* SH_XN_ERROR_SUMMARY_NI0_PIPE_ERROR */ +/* Description: NI0 Pipe in/out errors */ +#define SH_XN_ERROR_SUMMARY_NI0_PIPE_ERROR_SHFT 8 +#define SH_XN_ERROR_SUMMARY_NI0_PIPE_ERROR_MASK 0x0000000000000100 + +/* SH_XN_ERROR_SUMMARY_NI1_POP_OVERFLOW */ +/* Description: NI1 pop overflow */ +#define SH_XN_ERROR_SUMMARY_NI1_POP_OVERFLOW_SHFT 9 +#define SH_XN_ERROR_SUMMARY_NI1_POP_OVERFLOW_MASK 0x0000000000000200 + +/* SH_XN_ERROR_SUMMARY_NI1_PUSH_OVERFLOW */ +/* Description: NI1 push overflow */ +#define SH_XN_ERROR_SUMMARY_NI1_PUSH_OVERFLOW_SHFT 10 +#define SH_XN_ERROR_SUMMARY_NI1_PUSH_OVERFLOW_MASK 0x0000000000000400 + +/* SH_XN_ERROR_SUMMARY_NI1_CREDIT_OVERFLOW */ +/* Description: NI1 credit overflow */ +#define SH_XN_ERROR_SUMMARY_NI1_CREDIT_OVERFLOW_SHFT 11 +#define SH_XN_ERROR_SUMMARY_NI1_CREDIT_OVERFLOW_MASK 0x0000000000000800 + +/* SH_XN_ERROR_SUMMARY_NI1_DEBIT_OVERFLOW */ +/* Description: NI1 debit overflow */ +#define SH_XN_ERROR_SUMMARY_NI1_DEBIT_OVERFLOW_SHFT 12 +#define SH_XN_ERROR_SUMMARY_NI1_DEBIT_OVERFLOW_MASK 0x0000000000001000 + +/* SH_XN_ERROR_SUMMARY_NI1_POP_UNDERFLOW */ +/* Description: NI1 pop underflow */ +#define SH_XN_ERROR_SUMMARY_NI1_POP_UNDERFLOW_SHFT 13 +#define SH_XN_ERROR_SUMMARY_NI1_POP_UNDERFLOW_MASK 0x0000000000002000 + +/* SH_XN_ERROR_SUMMARY_NI1_PUSH_UNDERFLOW */ +/* Description: NI1 push underflow */ +#define SH_XN_ERROR_SUMMARY_NI1_PUSH_UNDERFLOW_SHFT 14 +#define SH_XN_ERROR_SUMMARY_NI1_PUSH_UNDERFLOW_MASK 0x0000000000004000 + +/* SH_XN_ERROR_SUMMARY_NI1_CREDIT_UNDERFLOW */ +/* Description: NI1 credit underflow */ +#define SH_XN_ERROR_SUMMARY_NI1_CREDIT_UNDERFLOW_SHFT 15 +#define SH_XN_ERROR_SUMMARY_NI1_CREDIT_UNDERFLOW_MASK 0x0000000000008000 + +/* SH_XN_ERROR_SUMMARY_NI1_LLP_ERROR */ +/* Description: NI1 llp error */ +#define SH_XN_ERROR_SUMMARY_NI1_LLP_ERROR_SHFT 16 +#define SH_XN_ERROR_SUMMARY_NI1_LLP_ERROR_MASK 0x0000000000010000 + +/* SH_XN_ERROR_SUMMARY_NI1_PIPE_ERROR */ +/* Description: NI1 pipe in/out error */ +#define SH_XN_ERROR_SUMMARY_NI1_PIPE_ERROR_SHFT 17 +#define SH_XN_ERROR_SUMMARY_NI1_PIPE_ERROR_MASK 0x0000000000020000 + +/* SH_XN_ERROR_SUMMARY_XNMD_CREDIT_OVERFLOW */ +/* Description: XNMD credit overflow */ +#define SH_XN_ERROR_SUMMARY_XNMD_CREDIT_OVERFLOW_SHFT 18 +#define SH_XN_ERROR_SUMMARY_XNMD_CREDIT_OVERFLOW_MASK 0x0000000000040000 + +/* SH_XN_ERROR_SUMMARY_XNMD_DEBIT_OVERFLOW */ +/* Description: XNMD debit overflow */ +#define SH_XN_ERROR_SUMMARY_XNMD_DEBIT_OVERFLOW_SHFT 19 +#define SH_XN_ERROR_SUMMARY_XNMD_DEBIT_OVERFLOW_MASK 0x0000000000080000 + +/* SH_XN_ERROR_SUMMARY_XNMD_DATA_BUFF_OVERFLOW */ +/* Description: XNMD data buffer overflow */ +#define SH_XN_ERROR_SUMMARY_XNMD_DATA_BUFF_OVERFLOW_SHFT 20 +#define SH_XN_ERROR_SUMMARY_XNMD_DATA_BUFF_OVERFLOW_MASK 0x0000000000100000 + +/* SH_XN_ERROR_SUMMARY_XNMD_CREDIT_UNDERFLOW */ +/* Description: XNMD credit underflow */ +#define SH_XN_ERROR_SUMMARY_XNMD_CREDIT_UNDERFLOW_SHFT 21 +#define SH_XN_ERROR_SUMMARY_XNMD_CREDIT_UNDERFLOW_MASK 0x0000000000200000 + +/* SH_XN_ERROR_SUMMARY_XNMD_SBE_ERROR */ +/* Description: XNMD single bit error */ +#define SH_XN_ERROR_SUMMARY_XNMD_SBE_ERROR_SHFT 22 +#define SH_XN_ERROR_SUMMARY_XNMD_SBE_ERROR_MASK 0x0000000000400000 + +/* SH_XN_ERROR_SUMMARY_XNMD_UCE_ERROR */ +/* Description: XNMD uncorrectable error */ +#define SH_XN_ERROR_SUMMARY_XNMD_UCE_ERROR_SHFT 23 +#define SH_XN_ERROR_SUMMARY_XNMD_UCE_ERROR_MASK 0x0000000000800000 + +/* SH_XN_ERROR_SUMMARY_XNMD_LUT_ERROR */ +/* Description: XNMD 
look up table error */
+#define SH_XN_ERROR_SUMMARY_XNMD_LUT_ERROR_SHFT          24
+#define SH_XN_ERROR_SUMMARY_XNMD_LUT_ERROR_MASK          0x0000000001000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_CREDIT_OVERFLOW */
+/* Description: XNPI credit overflow */
+#define SH_XN_ERROR_SUMMARY_XNPI_CREDIT_OVERFLOW_SHFT    25
+#define SH_XN_ERROR_SUMMARY_XNPI_CREDIT_OVERFLOW_MASK    0x0000000002000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_DEBIT_OVERFLOW */
+/* Description: XNPI debit overflow */
+#define SH_XN_ERROR_SUMMARY_XNPI_DEBIT_OVERFLOW_SHFT     26
+#define SH_XN_ERROR_SUMMARY_XNPI_DEBIT_OVERFLOW_MASK     0x0000000004000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_DATA_BUFF_OVERFLOW */
+/* Description: XNPI data buffer overflow */
+#define SH_XN_ERROR_SUMMARY_XNPI_DATA_BUFF_OVERFLOW_SHFT 27
+#define SH_XN_ERROR_SUMMARY_XNPI_DATA_BUFF_OVERFLOW_MASK 0x0000000008000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_CREDIT_UNDERFLOW */
+/* Description: XNPI credit underflow */
+#define SH_XN_ERROR_SUMMARY_XNPI_CREDIT_UNDERFLOW_SHFT   28
+#define SH_XN_ERROR_SUMMARY_XNPI_CREDIT_UNDERFLOW_MASK   0x0000000010000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_SBE_ERROR */
+/* Description: XNPI single bit error */
+#define SH_XN_ERROR_SUMMARY_XNPI_SBE_ERROR_SHFT          29
+#define SH_XN_ERROR_SUMMARY_XNPI_SBE_ERROR_MASK          0x0000000020000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_UCE_ERROR */
+/* Description: XNPI uncorrectable error */
+#define SH_XN_ERROR_SUMMARY_XNPI_UCE_ERROR_SHFT          30
+#define SH_XN_ERROR_SUMMARY_XNPI_UCE_ERROR_MASK          0x0000000040000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_LUT_ERROR */
+/* Description: XNPI look up table error */
+#define SH_XN_ERROR_SUMMARY_XNPI_LUT_ERROR_SHFT          31
+#define SH_XN_ERROR_SUMMARY_XNPI_LUT_ERROR_MASK          0x0000000080000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_DEBIT_OVERFLOW */
+/* Description: IILB debit overflow */
+#define SH_XN_ERROR_SUMMARY_IILB_DEBIT_OVERFLOW_SHFT     32
+#define SH_XN_ERROR_SUMMARY_IILB_DEBIT_OVERFLOW_MASK     0x0000000100000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_CREDIT_OVERFLOW */
+/* Description: IILB credit overflow */
+#define SH_XN_ERROR_SUMMARY_IILB_CREDIT_OVERFLOW_SHFT    33
+#define SH_XN_ERROR_SUMMARY_IILB_CREDIT_OVERFLOW_MASK    0x0000000200000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_FIFO_OVERFLOW */
+/* Description: IILB fifo overflow */
+#define SH_XN_ERROR_SUMMARY_IILB_FIFO_OVERFLOW_SHFT      34
+#define SH_XN_ERROR_SUMMARY_IILB_FIFO_OVERFLOW_MASK      0x0000000400000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_CREDIT_UNDERFLOW */
+/* Description: IILB credit underflow */
+#define SH_XN_ERROR_SUMMARY_IILB_CREDIT_UNDERFLOW_SHFT   35
+#define SH_XN_ERROR_SUMMARY_IILB_CREDIT_UNDERFLOW_MASK   0x0000000800000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_FIFO_UNDERFLOW */
+/* Description: IILB fifo underflow */
+#define SH_XN_ERROR_SUMMARY_IILB_FIFO_UNDERFLOW_SHFT     36
+#define SH_XN_ERROR_SUMMARY_IILB_FIFO_UNDERFLOW_MASK     0x0000001000000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_CHIPLET_OR_LUT */
+/* Description: IILB chiplet nomatch or lut read error */
+#define SH_XN_ERROR_SUMMARY_IILB_CHIPLET_OR_LUT_SHFT     37
+#define SH_XN_ERROR_SUMMARY_IILB_CHIPLET_OR_LUT_MASK     0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_ERRORS_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XN_ERRORS_ALIAS 0x0000000150040008
+
+/* ==================================================================== */
+/* Register "SH_XN_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+#define SH_XN_ERROR_OVERFLOW 0x0000000150040020
+#define SH_XN_ERROR_OVERFLOW_MASK 0x0000003fffffffff
+#define SH_XN_ERROR_OVERFLOW_INIT 0x0000003fffffffff + +/* SH_XN_ERROR_OVERFLOW_NI0_POP_OVERFLOW */ +/* Description: NI0 pop overflow */ +#define SH_XN_ERROR_OVERFLOW_NI0_POP_OVERFLOW_SHFT 0 +#define SH_XN_ERROR_OVERFLOW_NI0_POP_OVERFLOW_MASK 0x0000000000000001 + +/* SH_XN_ERROR_OVERFLOW_NI0_PUSH_OVERFLOW */ +/* Description: NI0 push overflow */ +#define SH_XN_ERROR_OVERFLOW_NI0_PUSH_OVERFLOW_SHFT 1 +#define SH_XN_ERROR_OVERFLOW_NI0_PUSH_OVERFLOW_MASK 0x0000000000000002 + +/* SH_XN_ERROR_OVERFLOW_NI0_CREDIT_OVERFLOW */ +/* Description: NI0 credit overflow */ +#define SH_XN_ERROR_OVERFLOW_NI0_CREDIT_OVERFLOW_SHFT 2 +#define SH_XN_ERROR_OVERFLOW_NI0_CREDIT_OVERFLOW_MASK 0x0000000000000004 + +/* SH_XN_ERROR_OVERFLOW_NI0_DEBIT_OVERFLOW */ +/* Description: NI0 debit overflow */ +#define SH_XN_ERROR_OVERFLOW_NI0_DEBIT_OVERFLOW_SHFT 3 +#define SH_XN_ERROR_OVERFLOW_NI0_DEBIT_OVERFLOW_MASK 0x0000000000000008 + +/* SH_XN_ERROR_OVERFLOW_NI0_POP_UNDERFLOW */ +/* Description: NI0 pop underflow */ +#define SH_XN_ERROR_OVERFLOW_NI0_POP_UNDERFLOW_SHFT 4 +#define SH_XN_ERROR_OVERFLOW_NI0_POP_UNDERFLOW_MASK 0x0000000000000010 + +/* SH_XN_ERROR_OVERFLOW_NI0_PUSH_UNDERFLOW */ +/* Description: NI0 push underflow */ +#define SH_XN_ERROR_OVERFLOW_NI0_PUSH_UNDERFLOW_SHFT 5 +#define SH_XN_ERROR_OVERFLOW_NI0_PUSH_UNDERFLOW_MASK 0x0000000000000020 + +/* SH_XN_ERROR_OVERFLOW_NI0_CREDIT_UNDERFLOW */ +/* Description: NI0 credit underflow */ +#define SH_XN_ERROR_OVERFLOW_NI0_CREDIT_UNDERFLOW_SHFT 6 +#define SH_XN_ERROR_OVERFLOW_NI0_CREDIT_UNDERFLOW_MASK 0x0000000000000040 + +/* SH_XN_ERROR_OVERFLOW_NI0_LLP_ERROR */ +/* Description: NI0 llp error */ +#define SH_XN_ERROR_OVERFLOW_NI0_LLP_ERROR_SHFT 7 +#define SH_XN_ERROR_OVERFLOW_NI0_LLP_ERROR_MASK 0x0000000000000080 + +/* SH_XN_ERROR_OVERFLOW_NI0_PIPE_ERROR */ +/* Description: NI0 Pipe in/out errors */ +#define SH_XN_ERROR_OVERFLOW_NI0_PIPE_ERROR_SHFT 8 +#define SH_XN_ERROR_OVERFLOW_NI0_PIPE_ERROR_MASK 0x0000000000000100 + +/* SH_XN_ERROR_OVERFLOW_NI1_POP_OVERFLOW */ +/* Description: NI1 pop overflow */ +#define SH_XN_ERROR_OVERFLOW_NI1_POP_OVERFLOW_SHFT 9 +#define SH_XN_ERROR_OVERFLOW_NI1_POP_OVERFLOW_MASK 0x0000000000000200 + +/* SH_XN_ERROR_OVERFLOW_NI1_PUSH_OVERFLOW */ +/* Description: NI1 push overflow */ +#define SH_XN_ERROR_OVERFLOW_NI1_PUSH_OVERFLOW_SHFT 10 +#define SH_XN_ERROR_OVERFLOW_NI1_PUSH_OVERFLOW_MASK 0x0000000000000400 + +/* SH_XN_ERROR_OVERFLOW_NI1_CREDIT_OVERFLOW */ +/* Description: NI1 credit overflow */ +#define SH_XN_ERROR_OVERFLOW_NI1_CREDIT_OVERFLOW_SHFT 11 +#define SH_XN_ERROR_OVERFLOW_NI1_CREDIT_OVERFLOW_MASK 0x0000000000000800 + +/* SH_XN_ERROR_OVERFLOW_NI1_DEBIT_OVERFLOW */ +/* Description: NI1 debit overflow */ +#define SH_XN_ERROR_OVERFLOW_NI1_DEBIT_OVERFLOW_SHFT 12 +#define SH_XN_ERROR_OVERFLOW_NI1_DEBIT_OVERFLOW_MASK 0x0000000000001000 + +/* SH_XN_ERROR_OVERFLOW_NI1_POP_UNDERFLOW */ +/* Description: NI1 pop underflow */ +#define SH_XN_ERROR_OVERFLOW_NI1_POP_UNDERFLOW_SHFT 13 +#define SH_XN_ERROR_OVERFLOW_NI1_POP_UNDERFLOW_MASK 0x0000000000002000 + +/* SH_XN_ERROR_OVERFLOW_NI1_PUSH_UNDERFLOW */ +/* Description: NI1 push underflow */ +#define SH_XN_ERROR_OVERFLOW_NI1_PUSH_UNDERFLOW_SHFT 14 +#define SH_XN_ERROR_OVERFLOW_NI1_PUSH_UNDERFLOW_MASK 0x0000000000004000 + +/* SH_XN_ERROR_OVERFLOW_NI1_CREDIT_UNDERFLOW */ +/* Description: NI1 credit underflow */ +#define SH_XN_ERROR_OVERFLOW_NI1_CREDIT_UNDERFLOW_SHFT 15 +#define SH_XN_ERROR_OVERFLOW_NI1_CREDIT_UNDERFLOW_MASK 0x0000000000008000 + +/* SH_XN_ERROR_OVERFLOW_NI1_LLP_ERROR */ +/* 
Description: NI1 llp error */ +#define SH_XN_ERROR_OVERFLOW_NI1_LLP_ERROR_SHFT 16 +#define SH_XN_ERROR_OVERFLOW_NI1_LLP_ERROR_MASK 0x0000000000010000 + +/* SH_XN_ERROR_OVERFLOW_NI1_PIPE_ERROR */ +/* Description: NI1 pipe in/out error */ +#define SH_XN_ERROR_OVERFLOW_NI1_PIPE_ERROR_SHFT 17 +#define SH_XN_ERROR_OVERFLOW_NI1_PIPE_ERROR_MASK 0x0000000000020000 + +/* SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_OVERFLOW */ +/* Description: XNMD credit overflow */ +#define SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_OVERFLOW_SHFT 18 +#define SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_OVERFLOW_MASK 0x0000000000040000 + +/* SH_XN_ERROR_OVERFLOW_XNMD_DEBIT_OVERFLOW */ +/* Description: XNMD debit overflow */ +#define SH_XN_ERROR_OVERFLOW_XNMD_DEBIT_OVERFLOW_SHFT 19 +#define SH_XN_ERROR_OVERFLOW_XNMD_DEBIT_OVERFLOW_MASK 0x0000000000080000 + +/* SH_XN_ERROR_OVERFLOW_XNMD_DATA_BUFF_OVERFLOW */ +/* Description: XNMD data buffer overflow */ +#define SH_XN_ERROR_OVERFLOW_XNMD_DATA_BUFF_OVERFLOW_SHFT 20 +#define SH_XN_ERROR_OVERFLOW_XNMD_DATA_BUFF_OVERFLOW_MASK 0x0000000000100000 + +/* SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_UNDERFLOW */ +/* Description: XNMD credit underflow */ +#define SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_UNDERFLOW_SHFT 21 +#define SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_UNDERFLOW_MASK 0x0000000000200000 + +/* SH_XN_ERROR_OVERFLOW_XNMD_SBE_ERROR */ +/* Description: XNMD single bit error */ +#define SH_XN_ERROR_OVERFLOW_XNMD_SBE_ERROR_SHFT 22 +#define SH_XN_ERROR_OVERFLOW_XNMD_SBE_ERROR_MASK 0x0000000000400000 + +/* SH_XN_ERROR_OVERFLOW_XNMD_UCE_ERROR */ +/* Description: XNMD uncorrectable error */ +#define SH_XN_ERROR_OVERFLOW_XNMD_UCE_ERROR_SHFT 23 +#define SH_XN_ERROR_OVERFLOW_XNMD_UCE_ERROR_MASK 0x0000000000800000 + +/* SH_XN_ERROR_OVERFLOW_XNMD_LUT_ERROR */ +/* Description: XNMD look up table error */ +#define SH_XN_ERROR_OVERFLOW_XNMD_LUT_ERROR_SHFT 24 +#define SH_XN_ERROR_OVERFLOW_XNMD_LUT_ERROR_MASK 0x0000000001000000 + +/* SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_OVERFLOW */ +/* Description: XNPI credit overflow */ +#define SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_OVERFLOW_SHFT 25 +#define SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_OVERFLOW_MASK 0x0000000002000000 + +/* SH_XN_ERROR_OVERFLOW_XNPI_DEBIT_OVERFLOW */ +/* Description: XNPI debit overflow */ +#define SH_XN_ERROR_OVERFLOW_XNPI_DEBIT_OVERFLOW_SHFT 26 +#define SH_XN_ERROR_OVERFLOW_XNPI_DEBIT_OVERFLOW_MASK 0x0000000004000000 + +/* SH_XN_ERROR_OVERFLOW_XNPI_DATA_BUFF_OVERFLOW */ +/* Description: XNPI data buffer overflow */ +#define SH_XN_ERROR_OVERFLOW_XNPI_DATA_BUFF_OVERFLOW_SHFT 27 +#define SH_XN_ERROR_OVERFLOW_XNPI_DATA_BUFF_OVERFLOW_MASK 0x0000000008000000 + +/* SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_UNDERFLOW */ +/* Description: XNPI credit underflow */ +#define SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_UNDERFLOW_SHFT 28 +#define SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_UNDERFLOW_MASK 0x0000000010000000 + +/* SH_XN_ERROR_OVERFLOW_XNPI_SBE_ERROR */ +/* Description: XNPI single bit error */ +#define SH_XN_ERROR_OVERFLOW_XNPI_SBE_ERROR_SHFT 29 +#define SH_XN_ERROR_OVERFLOW_XNPI_SBE_ERROR_MASK 0x0000000020000000 + +/* SH_XN_ERROR_OVERFLOW_XNPI_UCE_ERROR */ +/* Description: XNPI uncorrectable error */ +#define SH_XN_ERROR_OVERFLOW_XNPI_UCE_ERROR_SHFT 30 +#define SH_XN_ERROR_OVERFLOW_XNPI_UCE_ERROR_MASK 0x0000000040000000 + +/* SH_XN_ERROR_OVERFLOW_XNPI_LUT_ERROR */ +/* Description: XNPI look up table error */ +#define SH_XN_ERROR_OVERFLOW_XNPI_LUT_ERROR_SHFT 31 +#define SH_XN_ERROR_OVERFLOW_XNPI_LUT_ERROR_MASK 0x0000000080000000 + +/* SH_XN_ERROR_OVERFLOW_IILB_DEBIT_OVERFLOW */ +/* Description: IILB debit overflow */
+#define SH_XN_ERROR_OVERFLOW_IILB_DEBIT_OVERFLOW_SHFT 32 +#define SH_XN_ERROR_OVERFLOW_IILB_DEBIT_OVERFLOW_MASK 0x0000000100000000 + +/* SH_XN_ERROR_OVERFLOW_IILB_CREDIT_OVERFLOW */ +/* Description: IILB credit overflow */ +#define SH_XN_ERROR_OVERFLOW_IILB_CREDIT_OVERFLOW_SHFT 33 +#define SH_XN_ERROR_OVERFLOW_IILB_CREDIT_OVERFLOW_MASK 0x0000000200000000 + +/* SH_XN_ERROR_OVERFLOW_IILB_FIFO_OVERFLOW */ +/* Description: IILB fifo overflow */ +#define SH_XN_ERROR_OVERFLOW_IILB_FIFO_OVERFLOW_SHFT 34 +#define SH_XN_ERROR_OVERFLOW_IILB_FIFO_OVERFLOW_MASK 0x0000000400000000 + +/* SH_XN_ERROR_OVERFLOW_IILB_CREDIT_UNDERFLOW */ +/* Description: IILB credit underflow */ +#define SH_XN_ERROR_OVERFLOW_IILB_CREDIT_UNDERFLOW_SHFT 35 +#define SH_XN_ERROR_OVERFLOW_IILB_CREDIT_UNDERFLOW_MASK 0x0000000800000000 + +/* SH_XN_ERROR_OVERFLOW_IILB_FIFO_UNDERFLOW */ +/* Description: IILB fifo underflow */ +#define SH_XN_ERROR_OVERFLOW_IILB_FIFO_UNDERFLOW_SHFT 36 +#define SH_XN_ERROR_OVERFLOW_IILB_FIFO_UNDERFLOW_MASK 0x0000001000000000 + +/* SH_XN_ERROR_OVERFLOW_IILB_CHIPLET_OR_LUT */ +/* Description: IILB chiplet nomatch or lut read error */ +#define SH_XN_ERROR_OVERFLOW_IILB_CHIPLET_OR_LUT_SHFT 37 +#define SH_XN_ERROR_OVERFLOW_IILB_CHIPLET_OR_LUT_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_XN_ERROR_OVERFLOW_ALIAS" */ +/* ==================================================================== */ + +#define SH_XN_ERROR_OVERFLOW_ALIAS 0x0000000150040028 + +/* ==================================================================== */ +/* Register "SH_XN_ERROR_MASK" */ +/* ==================================================================== */ + +#define SH_XN_ERROR_MASK 0x0000000150040040 +#define SH_XN_ERROR_MASK_MASK 0x0000003fffffffff +#define SH_XN_ERROR_MASK_INIT 0x0000003fffffffff + +/* SH_XN_ERROR_MASK_NI0_POP_OVERFLOW */ +/* Description: NI0 pop overflow */ +#define SH_XN_ERROR_MASK_NI0_POP_OVERFLOW_SHFT 0 +#define SH_XN_ERROR_MASK_NI0_POP_OVERFLOW_MASK 0x0000000000000001 + +/* SH_XN_ERROR_MASK_NI0_PUSH_OVERFLOW */ +/* Description: NI0 push overflow */ +#define SH_XN_ERROR_MASK_NI0_PUSH_OVERFLOW_SHFT 1 +#define SH_XN_ERROR_MASK_NI0_PUSH_OVERFLOW_MASK 0x0000000000000002 + +/* SH_XN_ERROR_MASK_NI0_CREDIT_OVERFLOW */ +/* Description: NI0 credit overflow */ +#define SH_XN_ERROR_MASK_NI0_CREDIT_OVERFLOW_SHFT 2 +#define SH_XN_ERROR_MASK_NI0_CREDIT_OVERFLOW_MASK 0x0000000000000004 + +/* SH_XN_ERROR_MASK_NI0_DEBIT_OVERFLOW */ +/* Description: NI0 debit overflow */ +#define SH_XN_ERROR_MASK_NI0_DEBIT_OVERFLOW_SHFT 3 +#define SH_XN_ERROR_MASK_NI0_DEBIT_OVERFLOW_MASK 0x0000000000000008 + +/* SH_XN_ERROR_MASK_NI0_POP_UNDERFLOW */ +/* Description: NI0 pop underflow */ +#define SH_XN_ERROR_MASK_NI0_POP_UNDERFLOW_SHFT 4 +#define SH_XN_ERROR_MASK_NI0_POP_UNDERFLOW_MASK 0x0000000000000010 + +/* SH_XN_ERROR_MASK_NI0_PUSH_UNDERFLOW */ +/* Description: NI0 push underflow */ +#define SH_XN_ERROR_MASK_NI0_PUSH_UNDERFLOW_SHFT 5 +#define SH_XN_ERROR_MASK_NI0_PUSH_UNDERFLOW_MASK 0x0000000000000020 + +/* SH_XN_ERROR_MASK_NI0_CREDIT_UNDERFLOW */ +/* Description: NI0 credit underflow */ +#define SH_XN_ERROR_MASK_NI0_CREDIT_UNDERFLOW_SHFT 6 +#define SH_XN_ERROR_MASK_NI0_CREDIT_UNDERFLOW_MASK 0x0000000000000040 + +/* SH_XN_ERROR_MASK_NI0_LLP_ERROR */ +/* Description: NI0 llp error */ +#define SH_XN_ERROR_MASK_NI0_LLP_ERROR_SHFT 7 +#define SH_XN_ERROR_MASK_NI0_LLP_ERROR_MASK 0x0000000000000080 + +/* SH_XN_ERROR_MASK_NI0_PIPE_ERROR */ +/* Description: NI0 Pipe in/out 
errors */ +#define SH_XN_ERROR_MASK_NI0_PIPE_ERROR_SHFT 8 +#define SH_XN_ERROR_MASK_NI0_PIPE_ERROR_MASK 0x0000000000000100 + +/* SH_XN_ERROR_MASK_NI1_POP_OVERFLOW */ +/* Description: NI1 pop overflow */ +#define SH_XN_ERROR_MASK_NI1_POP_OVERFLOW_SHFT 9 +#define SH_XN_ERROR_MASK_NI1_POP_OVERFLOW_MASK 0x0000000000000200 + +/* SH_XN_ERROR_MASK_NI1_PUSH_OVERFLOW */ +/* Description: NI1 push overflow */ +#define SH_XN_ERROR_MASK_NI1_PUSH_OVERFLOW_SHFT 10 +#define SH_XN_ERROR_MASK_NI1_PUSH_OVERFLOW_MASK 0x0000000000000400 + +/* SH_XN_ERROR_MASK_NI1_CREDIT_OVERFLOW */ +/* Description: NI1 credit overflow */ +#define SH_XN_ERROR_MASK_NI1_CREDIT_OVERFLOW_SHFT 11 +#define SH_XN_ERROR_MASK_NI1_CREDIT_OVERFLOW_MASK 0x0000000000000800 + +/* SH_XN_ERROR_MASK_NI1_DEBIT_OVERFLOW */ +/* Description: NI1 debit overflow */ +#define SH_XN_ERROR_MASK_NI1_DEBIT_OVERFLOW_SHFT 12 +#define SH_XN_ERROR_MASK_NI1_DEBIT_OVERFLOW_MASK 0x0000000000001000 + +/* SH_XN_ERROR_MASK_NI1_POP_UNDERFLOW */ +/* Description: NI1 pop underflow */ +#define SH_XN_ERROR_MASK_NI1_POP_UNDERFLOW_SHFT 13 +#define SH_XN_ERROR_MASK_NI1_POP_UNDERFLOW_MASK 0x0000000000002000 + +/* SH_XN_ERROR_MASK_NI1_PUSH_UNDERFLOW */ +/* Description: NI1 push underflow */ +#define SH_XN_ERROR_MASK_NI1_PUSH_UNDERFLOW_SHFT 14 +#define SH_XN_ERROR_MASK_NI1_PUSH_UNDERFLOW_MASK 0x0000000000004000 + +/* SH_XN_ERROR_MASK_NI1_CREDIT_UNDERFLOW */ +/* Description: NI1 credit underflow */ +#define SH_XN_ERROR_MASK_NI1_CREDIT_UNDERFLOW_SHFT 15 +#define SH_XN_ERROR_MASK_NI1_CREDIT_UNDERFLOW_MASK 0x0000000000008000 + +/* SH_XN_ERROR_MASK_NI1_LLP_ERROR */ +/* Description: NI1 llp error */ +#define SH_XN_ERROR_MASK_NI1_LLP_ERROR_SHFT 16 +#define SH_XN_ERROR_MASK_NI1_LLP_ERROR_MASK 0x0000000000010000 + +/* SH_XN_ERROR_MASK_NI1_PIPE_ERROR */ +/* Description: NI1 pipe in/out error */ +#define SH_XN_ERROR_MASK_NI1_PIPE_ERROR_SHFT 17 +#define SH_XN_ERROR_MASK_NI1_PIPE_ERROR_MASK 0x0000000000020000 + +/* SH_XN_ERROR_MASK_XNMD_CREDIT_OVERFLOW */ +/* Description: XNMD credit overflow */ +#define SH_XN_ERROR_MASK_XNMD_CREDIT_OVERFLOW_SHFT 18 +#define SH_XN_ERROR_MASK_XNMD_CREDIT_OVERFLOW_MASK 0x0000000000040000 + +/* SH_XN_ERROR_MASK_XNMD_DEBIT_OVERFLOW */ +/* Description: XNMD debit overflow */ +#define SH_XN_ERROR_MASK_XNMD_DEBIT_OVERFLOW_SHFT 19 +#define SH_XN_ERROR_MASK_XNMD_DEBIT_OVERFLOW_MASK 0x0000000000080000 + +/* SH_XN_ERROR_MASK_XNMD_DATA_BUFF_OVERFLOW */ +/* Description: XNMD data buffer overflow */ +#define SH_XN_ERROR_MASK_XNMD_DATA_BUFF_OVERFLOW_SHFT 20 +#define SH_XN_ERROR_MASK_XNMD_DATA_BUFF_OVERFLOW_MASK 0x0000000000100000 + +/* SH_XN_ERROR_MASK_XNMD_CREDIT_UNDERFLOW */ +/* Description: XNMD credit underflow */ +#define SH_XN_ERROR_MASK_XNMD_CREDIT_UNDERFLOW_SHFT 21 +#define SH_XN_ERROR_MASK_XNMD_CREDIT_UNDERFLOW_MASK 0x0000000000200000 + +/* SH_XN_ERROR_MASK_XNMD_SBE_ERROR */ +/* Description: XNMD single bit error */ +#define SH_XN_ERROR_MASK_XNMD_SBE_ERROR_SHFT 22 +#define SH_XN_ERROR_MASK_XNMD_SBE_ERROR_MASK 0x0000000000400000 + +/* SH_XN_ERROR_MASK_XNMD_UCE_ERROR */ +/* Description: XNMD uncorrectable error */ +#define SH_XN_ERROR_MASK_XNMD_UCE_ERROR_SHFT 23 +#define SH_XN_ERROR_MASK_XNMD_UCE_ERROR_MASK 0x0000000000800000 + +/* SH_XN_ERROR_MASK_XNMD_LUT_ERROR */ +/* Description: XNMD look up table error */ +#define SH_XN_ERROR_MASK_XNMD_LUT_ERROR_SHFT 24 +#define SH_XN_ERROR_MASK_XNMD_LUT_ERROR_MASK 0x0000000001000000 + +/* SH_XN_ERROR_MASK_XNPI_CREDIT_OVERFLOW */ +/* Description: XNPI credit overflow */ +#define SH_XN_ERROR_MASK_XNPI_CREDIT_OVERFLOW_SHFT 25
+#define SH_XN_ERROR_MASK_XNPI_CREDIT_OVERFLOW_MASK 0x0000000002000000 + +/* SH_XN_ERROR_MASK_XNPI_DEBIT_OVERFLOW */ +/* Description: XNPI debit overflow */ +#define SH_XN_ERROR_MASK_XNPI_DEBIT_OVERFLOW_SHFT 26 +#define SH_XN_ERROR_MASK_XNPI_DEBIT_OVERFLOW_MASK 0x0000000004000000 + +/* SH_XN_ERROR_MASK_XNPI_DATA_BUFF_OVERFLOW */ +/* Description: XNPI data buffer overflow */ +#define SH_XN_ERROR_MASK_XNPI_DATA_BUFF_OVERFLOW_SHFT 27 +#define SH_XN_ERROR_MASK_XNPI_DATA_BUFF_OVERFLOW_MASK 0x0000000008000000 + +/* SH_XN_ERROR_MASK_XNPI_CREDIT_UNDERFLOW */ +/* Description: XNPI credit underflow */ +#define SH_XN_ERROR_MASK_XNPI_CREDIT_UNDERFLOW_SHFT 28 +#define SH_XN_ERROR_MASK_XNPI_CREDIT_UNDERFLOW_MASK 0x0000000010000000 + +/* SH_XN_ERROR_MASK_XNPI_SBE_ERROR */ +/* Description: XNPI single bit error */ +#define SH_XN_ERROR_MASK_XNPI_SBE_ERROR_SHFT 29 +#define SH_XN_ERROR_MASK_XNPI_SBE_ERROR_MASK 0x0000000020000000 + +/* SH_XN_ERROR_MASK_XNPI_UCE_ERROR */ +/* Description: XNPI uncorrectable error */ +#define SH_XN_ERROR_MASK_XNPI_UCE_ERROR_SHFT 30 +#define SH_XN_ERROR_MASK_XNPI_UCE_ERROR_MASK 0x0000000040000000 + +/* SH_XN_ERROR_MASK_XNPI_LUT_ERROR */ +/* Description: XNPI look up table error */ +#define SH_XN_ERROR_MASK_XNPI_LUT_ERROR_SHFT 31 +#define SH_XN_ERROR_MASK_XNPI_LUT_ERROR_MASK 0x0000000080000000 + +/* SH_XN_ERROR_MASK_IILB_DEBIT_OVERFLOW */ +/* Description: IILB debit overflow */ +#define SH_XN_ERROR_MASK_IILB_DEBIT_OVERFLOW_SHFT 32 +#define SH_XN_ERROR_MASK_IILB_DEBIT_OVERFLOW_MASK 0x0000000100000000 + +/* SH_XN_ERROR_MASK_IILB_CREDIT_OVERFLOW */ +/* Description: IILB credit overflow */ +#define SH_XN_ERROR_MASK_IILB_CREDIT_OVERFLOW_SHFT 33 +#define SH_XN_ERROR_MASK_IILB_CREDIT_OVERFLOW_MASK 0x0000000200000000 + +/* SH_XN_ERROR_MASK_IILB_FIFO_OVERFLOW */ +/* Description: IILB fifo overflow */ +#define SH_XN_ERROR_MASK_IILB_FIFO_OVERFLOW_SHFT 34 +#define SH_XN_ERROR_MASK_IILB_FIFO_OVERFLOW_MASK 0x0000000400000000 + +/* SH_XN_ERROR_MASK_IILB_CREDIT_UNDERFLOW */ +/* Description: IILB credit underflow */ +#define SH_XN_ERROR_MASK_IILB_CREDIT_UNDERFLOW_SHFT 35 +#define SH_XN_ERROR_MASK_IILB_CREDIT_UNDERFLOW_MASK 0x0000000800000000 + +/* SH_XN_ERROR_MASK_IILB_FIFO_UNDERFLOW */ +/* Description: IILB fifo underflow */ +#define SH_XN_ERROR_MASK_IILB_FIFO_UNDERFLOW_SHFT 36 +#define SH_XN_ERROR_MASK_IILB_FIFO_UNDERFLOW_MASK 0x0000001000000000 + +/* SH_XN_ERROR_MASK_IILB_CHIPLET_OR_LUT */ +/* Description: IILB chiplet nomatch or lut read error */ +#define SH_XN_ERROR_MASK_IILB_CHIPLET_OR_LUT_SHFT 37 +#define SH_XN_ERROR_MASK_IILB_CHIPLET_OR_LUT_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_XN_FIRST_ERROR" */ +/* ==================================================================== */ + +#define SH_XN_FIRST_ERROR 0x0000000150040060 +#define SH_XN_FIRST_ERROR_MASK 0x0000003fffffffff +#define SH_XN_FIRST_ERROR_INIT 0x0000003fffffffff + +/* SH_XN_FIRST_ERROR_NI0_POP_OVERFLOW */ +/* Description: NI0 pop overflow */ +#define SH_XN_FIRST_ERROR_NI0_POP_OVERFLOW_SHFT 0 +#define SH_XN_FIRST_ERROR_NI0_POP_OVERFLOW_MASK 0x0000000000000001 + +/* SH_XN_FIRST_ERROR_NI0_PUSH_OVERFLOW */ +/* Description: NI0 push overflow */ +#define SH_XN_FIRST_ERROR_NI0_PUSH_OVERFLOW_SHFT 1 +#define SH_XN_FIRST_ERROR_NI0_PUSH_OVERFLOW_MASK 0x0000000000000002 + +/* SH_XN_FIRST_ERROR_NI0_CREDIT_OVERFLOW */ +/* Description: NI0 credit overflow */ +#define SH_XN_FIRST_ERROR_NI0_CREDIT_OVERFLOW_SHFT 2 +#define SH_XN_FIRST_ERROR_NI0_CREDIT_OVERFLOW_MASK 
0x0000000000000004 + +/* SH_XN_FIRST_ERROR_NI0_DEBIT_OVERFLOW */ +/* Description: NI0 debit overflow */ +#define SH_XN_FIRST_ERROR_NI0_DEBIT_OVERFLOW_SHFT 3 +#define SH_XN_FIRST_ERROR_NI0_DEBIT_OVERFLOW_MASK 0x0000000000000008 + +/* SH_XN_FIRST_ERROR_NI0_POP_UNDERFLOW */ +/* Description: NI0 pop underflow */ +#define SH_XN_FIRST_ERROR_NI0_POP_UNDERFLOW_SHFT 4 +#define SH_XN_FIRST_ERROR_NI0_POP_UNDERFLOW_MASK 0x0000000000000010 + +/* SH_XN_FIRST_ERROR_NI0_PUSH_UNDERFLOW */ +/* Description: NI0 push underflow */ +#define SH_XN_FIRST_ERROR_NI0_PUSH_UNDERFLOW_SHFT 5 +#define SH_XN_FIRST_ERROR_NI0_PUSH_UNDERFLOW_MASK 0x0000000000000020 + +/* SH_XN_FIRST_ERROR_NI0_CREDIT_UNDERFLOW */ +/* Description: NI0 credit underflow */ +#define SH_XN_FIRST_ERROR_NI0_CREDIT_UNDERFLOW_SHFT 6 +#define SH_XN_FIRST_ERROR_NI0_CREDIT_UNDERFLOW_MASK 0x0000000000000040 + +/* SH_XN_FIRST_ERROR_NI0_LLP_ERROR */ +/* Description: NI0 llp error */ +#define SH_XN_FIRST_ERROR_NI0_LLP_ERROR_SHFT 7 +#define SH_XN_FIRST_ERROR_NI0_LLP_ERROR_MASK 0x0000000000000080 + +/* SH_XN_FIRST_ERROR_NI0_PIPE_ERROR */ +/* Description: NI0 Pipe in/out errors */ +#define SH_XN_FIRST_ERROR_NI0_PIPE_ERROR_SHFT 8 +#define SH_XN_FIRST_ERROR_NI0_PIPE_ERROR_MASK 0x0000000000000100 + +/* SH_XN_FIRST_ERROR_NI1_POP_OVERFLOW */ +/* Description: NI1 pop overflow */ +#define SH_XN_FIRST_ERROR_NI1_POP_OVERFLOW_SHFT 9 +#define SH_XN_FIRST_ERROR_NI1_POP_OVERFLOW_MASK 0x0000000000000200 + +/* SH_XN_FIRST_ERROR_NI1_PUSH_OVERFLOW */ +/* Description: NI1 push overflow */ +#define SH_XN_FIRST_ERROR_NI1_PUSH_OVERFLOW_SHFT 10 +#define SH_XN_FIRST_ERROR_NI1_PUSH_OVERFLOW_MASK 0x0000000000000400 + +/* SH_XN_FIRST_ERROR_NI1_CREDIT_OVERFLOW */ +/* Description: NI1 credit overflow */ +#define SH_XN_FIRST_ERROR_NI1_CREDIT_OVERFLOW_SHFT 11 +#define SH_XN_FIRST_ERROR_NI1_CREDIT_OVERFLOW_MASK 0x0000000000000800 + +/* SH_XN_FIRST_ERROR_NI1_DEBIT_OVERFLOW */ +/* Description: NI1 debit overflow */ +#define SH_XN_FIRST_ERROR_NI1_DEBIT_OVERFLOW_SHFT 12 +#define SH_XN_FIRST_ERROR_NI1_DEBIT_OVERFLOW_MASK 0x0000000000001000 + +/* SH_XN_FIRST_ERROR_NI1_POP_UNDERFLOW */ +/* Description: NI1 pop underflow */ +#define SH_XN_FIRST_ERROR_NI1_POP_UNDERFLOW_SHFT 13 +#define SH_XN_FIRST_ERROR_NI1_POP_UNDERFLOW_MASK 0x0000000000002000 + +/* SH_XN_FIRST_ERROR_NI1_PUSH_UNDERFLOW */ +/* Description: NI1 push underflow */ +#define SH_XN_FIRST_ERROR_NI1_PUSH_UNDERFLOW_SHFT 14 +#define SH_XN_FIRST_ERROR_NI1_PUSH_UNDERFLOW_MASK 0x0000000000004000 + +/* SH_XN_FIRST_ERROR_NI1_CREDIT_UNDERFLOW */ +/* Description: NI1 credit underflow */ +#define SH_XN_FIRST_ERROR_NI1_CREDIT_UNDERFLOW_SHFT 15 +#define SH_XN_FIRST_ERROR_NI1_CREDIT_UNDERFLOW_MASK 0x0000000000008000 + +/* SH_XN_FIRST_ERROR_NI1_LLP_ERROR */ +/* Description: NI1 llp error */ +#define SH_XN_FIRST_ERROR_NI1_LLP_ERROR_SHFT 16 +#define SH_XN_FIRST_ERROR_NI1_LLP_ERROR_MASK 0x0000000000010000 + +/* SH_XN_FIRST_ERROR_NI1_PIPE_ERROR */ +/* Description: NI1 pipe in/out error */ +#define SH_XN_FIRST_ERROR_NI1_PIPE_ERROR_SHFT 17 +#define SH_XN_FIRST_ERROR_NI1_PIPE_ERROR_MASK 0x0000000000020000 + +/* SH_XN_FIRST_ERROR_XNMD_CREDIT_OVERFLOW */ +/* Description: XNMD credit overflow */ +#define SH_XN_FIRST_ERROR_XNMD_CREDIT_OVERFLOW_SHFT 18 +#define SH_XN_FIRST_ERROR_XNMD_CREDIT_OVERFLOW_MASK 0x0000000000040000 + +/* SH_XN_FIRST_ERROR_XNMD_DEBIT_OVERFLOW */ +/* Description: XNMD debit overflow */ +#define SH_XN_FIRST_ERROR_XNMD_DEBIT_OVERFLOW_SHFT 19 +#define SH_XN_FIRST_ERROR_XNMD_DEBIT_OVERFLOW_MASK 0x0000000000080000 + +/* 
SH_XN_FIRST_ERROR_XNMD_DATA_BUFF_OVERFLOW */ +/* Description: XNMD data buffer overflow */ +#define SH_XN_FIRST_ERROR_XNMD_DATA_BUFF_OVERFLOW_SHFT 20 +#define SH_XN_FIRST_ERROR_XNMD_DATA_BUFF_OVERFLOW_MASK 0x0000000000100000 + +/* SH_XN_FIRST_ERROR_XNMD_CREDIT_UNDERFLOW */ +/* Description: XNMD credit underflow */ +#define SH_XN_FIRST_ERROR_XNMD_CREDIT_UNDERFLOW_SHFT 21 +#define SH_XN_FIRST_ERROR_XNMD_CREDIT_UNDERFLOW_MASK 0x0000000000200000 + +/* SH_XN_FIRST_ERROR_XNMD_SBE_ERROR */ +/* Description: XNMD single bit error */ +#define SH_XN_FIRST_ERROR_XNMD_SBE_ERROR_SHFT 22 +#define SH_XN_FIRST_ERROR_XNMD_SBE_ERROR_MASK 0x0000000000400000 + +/* SH_XN_FIRST_ERROR_XNMD_UCE_ERROR */ +/* Description: XNMD uncorrectable error */ +#define SH_XN_FIRST_ERROR_XNMD_UCE_ERROR_SHFT 23 +#define SH_XN_FIRST_ERROR_XNMD_UCE_ERROR_MASK 0x0000000000800000 + +/* SH_XN_FIRST_ERROR_XNMD_LUT_ERROR */ +/* Description: XNMD look up table error */ +#define SH_XN_FIRST_ERROR_XNMD_LUT_ERROR_SHFT 24 +#define SH_XN_FIRST_ERROR_XNMD_LUT_ERROR_MASK 0x0000000001000000 + +/* SH_XN_FIRST_ERROR_XNPI_CREDIT_OVERFLOW */ +/* Description: XNPI credit overflow */ +#define SH_XN_FIRST_ERROR_XNPI_CREDIT_OVERFLOW_SHFT 25 +#define SH_XN_FIRST_ERROR_XNPI_CREDIT_OVERFLOW_MASK 0x0000000002000000 + +/* SH_XN_FIRST_ERROR_XNPI_DEBIT_OVERFLOW */ +/* Description: XNPI debit overflow */ +#define SH_XN_FIRST_ERROR_XNPI_DEBIT_OVERFLOW_SHFT 26 +#define SH_XN_FIRST_ERROR_XNPI_DEBIT_OVERFLOW_MASK 0x0000000004000000 + +/* SH_XN_FIRST_ERROR_XNPI_DATA_BUFF_OVERFLOW */ +/* Description: XNPI data buffer overflow */ +#define SH_XN_FIRST_ERROR_XNPI_DATA_BUFF_OVERFLOW_SHFT 27 +#define SH_XN_FIRST_ERROR_XNPI_DATA_BUFF_OVERFLOW_MASK 0x0000000008000000 + +/* SH_XN_FIRST_ERROR_XNPI_CREDIT_UNDERFLOW */ +/* Description: XNPI credit underflow */ +#define SH_XN_FIRST_ERROR_XNPI_CREDIT_UNDERFLOW_SHFT 28 +#define SH_XN_FIRST_ERROR_XNPI_CREDIT_UNDERFLOW_MASK 0x0000000010000000 + +/* SH_XN_FIRST_ERROR_XNPI_SBE_ERROR */ +/* Description: XNPI single bit error */ +#define SH_XN_FIRST_ERROR_XNPI_SBE_ERROR_SHFT 29 +#define SH_XN_FIRST_ERROR_XNPI_SBE_ERROR_MASK 0x0000000020000000 + +/* SH_XN_FIRST_ERROR_XNPI_UCE_ERROR */ +/* Description: XNPI uncorrectable error */ +#define SH_XN_FIRST_ERROR_XNPI_UCE_ERROR_SHFT 30 +#define SH_XN_FIRST_ERROR_XNPI_UCE_ERROR_MASK 0x0000000040000000 + +/* SH_XN_FIRST_ERROR_XNPI_LUT_ERROR */ +/* Description: XNPI look up table error */ +#define SH_XN_FIRST_ERROR_XNPI_LUT_ERROR_SHFT 31 +#define SH_XN_FIRST_ERROR_XNPI_LUT_ERROR_MASK 0x0000000080000000 + +/* SH_XN_FIRST_ERROR_IILB_DEBIT_OVERFLOW */ +/* Description: IILB debit overflow */ +#define SH_XN_FIRST_ERROR_IILB_DEBIT_OVERFLOW_SHFT 32 +#define SH_XN_FIRST_ERROR_IILB_DEBIT_OVERFLOW_MASK 0x0000000100000000 + +/* SH_XN_FIRST_ERROR_IILB_CREDIT_OVERFLOW */ +/* Description: IILB credit overflow */ +#define SH_XN_FIRST_ERROR_IILB_CREDIT_OVERFLOW_SHFT 33 +#define SH_XN_FIRST_ERROR_IILB_CREDIT_OVERFLOW_MASK 0x0000000200000000 + +/* SH_XN_FIRST_ERROR_IILB_FIFO_OVERFLOW */ +/* Description: IILB fifo overflow */ +#define SH_XN_FIRST_ERROR_IILB_FIFO_OVERFLOW_SHFT 34 +#define SH_XN_FIRST_ERROR_IILB_FIFO_OVERFLOW_MASK 0x0000000400000000 + +/* SH_XN_FIRST_ERROR_IILB_CREDIT_UNDERFLOW */ +/* Description: IILB credit underflow */ +#define SH_XN_FIRST_ERROR_IILB_CREDIT_UNDERFLOW_SHFT 35 +#define SH_XN_FIRST_ERROR_IILB_CREDIT_UNDERFLOW_MASK 0x0000000800000000 + +/* SH_XN_FIRST_ERROR_IILB_FIFO_UNDERFLOW */ +/* Description: IILB fifo underflow */ +#define SH_XN_FIRST_ERROR_IILB_FIFO_UNDERFLOW_SHFT 36 +#define
SH_XN_FIRST_ERROR_IILB_FIFO_UNDERFLOW_MASK 0x0000001000000000 + +/* SH_XN_FIRST_ERROR_IILB_CHIPLET_OR_LUT */ +/* Description: IILB chiplet nomatch or lut read error */ +#define SH_XN_FIRST_ERROR_IILB_CHIPLET_OR_LUT_SHFT 37 +#define SH_XN_FIRST_ERROR_IILB_CHIPLET_OR_LUT_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_SUMMARY" */ +/* ==================================================================== */ + +#define SH_XNIILB_ERROR_SUMMARY 0x0000000150040200 +#define SH_XNIILB_ERROR_SUMMARY_MASK 0xffffffffffffffff +#define SH_XNIILB_ERROR_SUMMARY_INIT 0xffffffffffffffff + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT0 */ +/* Description: II debit0 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT0_SHFT 0 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT0_MASK 0x0000000000000001 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT2 */ +/* Description: II debit2 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT2_SHFT 1 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT2_MASK 0x0000000000000002 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT0 */ +/* Description: LB debit0 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT0_SHFT 2 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT0_MASK 0x0000000000000004 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT2 */ +/* Description: LB debit2 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT2_SHFT 3 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT2_MASK 0x0000000000000008 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC0 */ +/* Description: II VC0 fifo overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC0_SHFT 4 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC0_MASK 0x0000000000000010 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC2 */ +/* Description: II VC2 fifo overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC2_SHFT 5 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC2_MASK 0x0000000000000020 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC0 */ +/* Description: II VC0 fifo underflow */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC0_SHFT 6 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC0_MASK 0x0000000000000040 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC2 */ +/* Description: II VC2 fifo underflow */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC2_SHFT 7 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC2_MASK 0x0000000000000080 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC0 */ +/* Description: LB VC0 fifo overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC0_SHFT 8 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC0_MASK 0x0000000000000100 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC2 */ +/* Description: LB VC2 fifo overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC2_SHFT 9 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC2_MASK 0x0000000000000200 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC0 */ +/* Description: LB VC0 fifo underflow */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC0_SHFT 10 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC0_MASK 0x0000000000000400 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC2 */ +/* Description: LB VC2 fifo underflow */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC2_SHFT 11 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC2_MASK 0x0000000000000800 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_IN */ +/* Description: PI VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_IN_SHFT 12 +#define 
SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000001000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_IN */ +/* Description: IILB VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_IN_SHFT 13 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000002000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_IN */ +/* Description: MD VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_IN_SHFT 14 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000000004000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_IN */ +/* Description: NI0 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_IN_SHFT 15 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000000008000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_IN */ +/* Description: NI1 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_IN_SHFT 16 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000000010000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_IN */ +/* Description: PI VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_IN_SHFT 17 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000000020000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_IN */ +/* Description: IILB VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_IN_SHFT 18 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000000040000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_IN */ +/* Description: MD VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_IN_SHFT 19 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000000080000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_IN */ +/* Description: NI0 VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_IN_SHFT 20 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000000100000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_IN */ +/* Description: NI1 VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_IN_SHFT 21 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000000200000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_IN */ +/* Description: PI VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_IN_SHFT 22 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000400000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_IN */ +/* Description: IILB VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_IN_SHFT 23 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000800000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_IN */ +/* Description: MD VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_IN_SHFT 24 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000001000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_IN */ +/* Description: NI0 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_IN_SHFT 25 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000002000000 + +/* 
SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_IN */ +/* Description: NI1 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_IN_SHFT 26 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000004000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_IN */ +/* Description: PI VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_IN_SHFT 27 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000008000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_IN */ +/* Description: IILB VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_IN_SHFT 28 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000010000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_IN */ +/* Description: MD VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_IN_SHFT 29 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000020000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_IN */ +/* Description: NI0 VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_IN_SHFT 30 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000040000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_IN */ +/* Description: NI1 VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_IN_SHFT 31 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000080000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT0 */ +/* Description: PI Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT0_SHFT 32 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT0_MASK 0x0000000100000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT2 */ +/* Description: PI Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT2_SHFT 33 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT2_MASK 0x0000000200000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_SHFT 34 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_MASK 0x0000000400000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_SHFT 35 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_MASK 0x0000000800000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT0 */ +/* Description: MD Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT0_SHFT 36 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT0_MASK 0x0000001000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT2 */ +/* Description: MD Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT2_SHFT 37 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT2_MASK 0x0000002000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_SHFT 38 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_MASK 0x0000004000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_SHFT 39 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_MASK 0x0000008000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Fifo 
Debit0 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_SHFT 40 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_MASK 0x0000010000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_SHFT 41 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_MASK 0x0000020000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_OUT_SHFT 42 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0000040000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_OUT_SHFT 43 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0000080000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_OUT_SHFT 44 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0000100000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_OUT_SHFT 45 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0000200000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_OUT_SHFT 46 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0000400000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_OUT_SHFT 47 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0000800000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_OUT_SHFT 48 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0001000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_OUT_SHFT 49 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0002000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_OUT_SHFT 50 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x0004000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_OUT_SHFT 51 +#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x0008000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_OUT_SHFT 52 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0010000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_OUT_SHFT 53 +#define 
SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0020000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_OUT_SHFT 54 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0040000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_OUT_SHFT 55 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0080000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_OUT_SHFT 56 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0100000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_OUT_SHFT 57 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0200000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_OUT_SHFT 58 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0400000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_OUT_SHFT 59 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0800000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_OUT_SHFT 60 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x1000000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_OUT_SHFT 61 +#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x2000000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_XNIILB_ERROR_SUMMARY_CHIPLET_NOMATCH_SHFT 62 +#define SH_XNIILB_ERROR_SUMMARY_CHIPLET_NOMATCH_MASK 0x4000000000000000 + +/* SH_XNIILB_ERROR_SUMMARY_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNIILB_ERROR_SUMMARY_LUT_READ_ERROR_SHFT 63 +#define SH_XNIILB_ERROR_SUMMARY_LUT_READ_ERROR_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERRORS_ALIAS" */ +/* ==================================================================== */ + +#define SH_XNIILB_ERRORS_ALIAS 0x0000000150040208 + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_OVERFLOW" */ +/* ==================================================================== */ + +#define SH_XNIILB_ERROR_OVERFLOW 0x0000000150040220 +#define SH_XNIILB_ERROR_OVERFLOW_MASK 0xffffffffffffffff +#define SH_XNIILB_ERROR_OVERFLOW_INIT 0xffffffffffffffff + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT0 */ +/* Description: II debit0 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT0_SHFT 0 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT0_MASK 0x0000000000000001 + +/* 
SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT2 */ +/* Description: II debit2 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT2_SHFT 1 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT2_MASK 0x0000000000000002 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT0 */ +/* Description: LB debit0 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT0_SHFT 2 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT0_MASK 0x0000000000000004 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT2 */ +/* Description: LB debit2 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT2_SHFT 3 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT2_MASK 0x0000000000000008 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC0 */ +/* Description: II VC0 fifo overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC0_SHFT 4 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC0_MASK 0x0000000000000010 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC2 */ +/* Description: II VC2 fifo overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC2_SHFT 5 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC2_MASK 0x0000000000000020 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC0 */ +/* Description: II VC0 fifo underflow */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC0_SHFT 6 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC0_MASK 0x0000000000000040 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC2 */ +/* Description: II VC2 fifo underflow */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC2_SHFT 7 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC2_MASK 0x0000000000000080 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC0 */ +/* Description: LB VC0 fifo overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC0_SHFT 8 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC0_MASK 0x0000000000000100 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC2 */ +/* Description: LB VC2 fifo overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC2_SHFT 9 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC2_MASK 0x0000000000000200 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC0 */ +/* Description: LB VC0 fifo underflow */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC0_SHFT 10 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC0_MASK 0x0000000000000400 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC2 */ +/* Description: LB VC2 fifo underflow */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC2_SHFT 11 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC2_MASK 0x0000000000000800 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_IN */ +/* Description: PI VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_IN_SHFT 12 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000001000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_IN */ +/* Description: IILB VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_IN_SHFT 13 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000002000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_IN */ +/* Description: MD VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_IN_SHFT 14 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000000004000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_IN */ +/* Description: NI0 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_IN_SHFT 15 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000000008000 + +/* 
SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_IN */ +/* Description: NI1 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_IN_SHFT 16 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000000010000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_IN */ +/* Description: PI VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_IN_SHFT 17 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000000020000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_IN */ +/* Description: IILB VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_IN_SHFT 18 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000000040000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_IN */ +/* Description: MD VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_IN_SHFT 19 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000000080000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_IN */ +/* Description: NI0 VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_IN_SHFT 20 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000000100000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_IN */ +/* Description: NI1 VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_IN_SHFT 21 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000000200000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_IN */ +/* Description: PI VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_IN_SHFT 22 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000400000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_IN */ +/* Description: IILB VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_IN_SHFT 23 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000800000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_IN */ +/* Description: MD VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_IN_SHFT 24 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000001000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_IN */ +/* Description: NI0 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_IN_SHFT 25 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000002000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_IN */ +/* Description: NI1 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_IN_SHFT 26 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000004000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_IN */ +/* Description: PI VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_IN_SHFT 27 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000008000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_IN */ +/* Description: IILB VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_IN_SHFT 28 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000010000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_IN */ +/* 
Description: MD VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_IN_SHFT 29 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000020000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_IN */ +/* Description: NI0 VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_IN_SHFT 30 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000040000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_IN */ +/* Description: NI1 VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_IN_SHFT 31 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000080000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT0 */ +/* Description: PI Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT0_SHFT 32 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT0_MASK 0x0000000100000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT2 */ +/* Description: PI Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT2_SHFT 33 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT2_MASK 0x0000000200000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_SHFT 34 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_MASK 0x0000000400000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_SHFT 35 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_MASK 0x0000000800000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT0 */ +/* Description: MD Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT0_SHFT 36 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT0_MASK 0x0000001000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT2 */ +/* Description: MD Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT2_SHFT 37 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT2_MASK 0x0000002000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_SHFT 38 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_MASK 0x0000004000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_SHFT 39 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_MASK 0x0000008000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Fifo Debit0 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_SHFT 40 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_MASK 0x0000010000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Fifo Debit2 overflow */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_SHFT 41 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_MASK 0x0000020000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_OUT_SHFT 42 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0000040000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define 
SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_OUT_SHFT 43 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0000080000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_OUT_SHFT 44 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0000100000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_OUT_SHFT 45 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0000200000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_OUT_SHFT 46 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0000400000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_OUT_SHFT 47 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0000800000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_OUT_SHFT 48 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0001000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_OUT_SHFT 49 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0002000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_OUT_SHFT 50 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x0004000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_OUT_SHFT 51 +#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x0008000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_OUT_SHFT 52 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0010000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_OUT_SHFT 53 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0020000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_OUT_SHFT 54 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0040000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_OUT_SHFT 55 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0080000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define 
SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_OUT_SHFT 56 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0100000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_OUT_SHFT 57 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0200000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_OUT_SHFT 58 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0400000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_OUT_SHFT 59 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0800000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_OUT_SHFT 60 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x1000000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_OUT_SHFT 61 +#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x2000000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_XNIILB_ERROR_OVERFLOW_CHIPLET_NOMATCH_SHFT 62 +#define SH_XNIILB_ERROR_OVERFLOW_CHIPLET_NOMATCH_MASK 0x4000000000000000 + +/* SH_XNIILB_ERROR_OVERFLOW_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNIILB_ERROR_OVERFLOW_LUT_READ_ERROR_SHFT 63 +#define SH_XNIILB_ERROR_OVERFLOW_LUT_READ_ERROR_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_OVERFLOW_ALIAS" */ +/* ==================================================================== */ + +#define SH_XNIILB_ERROR_OVERFLOW_ALIAS 0x0000000150040228 + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_MASK" */ +/* ==================================================================== */ + +#define SH_XNIILB_ERROR_MASK 0x0000000150040240 +#define SH_XNIILB_ERROR_MASK_MASK 0xffffffffffffffff +#define SH_XNIILB_ERROR_MASK_INIT 0xffffffffffffffff + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT0 */ +/* Description: II debit0 overflow */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT0_SHFT 0 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT0_MASK 0x0000000000000001 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT2 */ +/* Description: II debit2 overflow */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT2_SHFT 1 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT2_MASK 0x0000000000000002 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT0 */ +/* Description: LB debit0 overflow */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT0_SHFT 2 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT0_MASK 0x0000000000000004 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT2 */ +/* Description: LB debit2 overflow */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT2_SHFT 3 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT2_MASK 0x0000000000000008 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC0 */ +/* Description: II VC0 fifo overflow */ +#define 
SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC0_SHFT 4 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC0_MASK 0x0000000000000010 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC2 */ +/* Description: II VC2 fifo overflow */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC2_SHFT 5 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC2_MASK 0x0000000000000020 + +/* SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC0 */ +/* Description: II VC0 fifo underflow */ +#define SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC0_SHFT 6 +#define SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC0_MASK 0x0000000000000040 + +/* SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC2 */ +/* Description: II VC2 fifo underflow */ +#define SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC2_SHFT 7 +#define SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC2_MASK 0x0000000000000080 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC0 */ +/* Description: LB VC0 fifo overflow */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC0_SHFT 8 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC0_MASK 0x0000000000000100 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC2 */ +/* Description: LB VC2 fifo overflow */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC2_SHFT 9 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC2_MASK 0x0000000000000200 + +/* SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC0 */ +/* Description: LB VC0 fifo underflow */ +#define SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC0_SHFT 10 +#define SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC0_MASK 0x0000000000000400 + +/* SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC2 */ +/* Description: LB VC2 fifo underflow */ +#define SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC2_SHFT 11 +#define SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC2_MASK 0x0000000000000800 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_IN */ +/* Description: PI VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_IN_SHFT 12 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000001000 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_IN */ +/* Description: IILB VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_IN_SHFT 13 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000002000 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_IN */ +/* Description: MD VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_IN_SHFT 14 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000000004000 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_IN */ +/* Description: NI0 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_IN_SHFT 15 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000000008000 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_IN */ +/* Description: NI1 VC0 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_IN_SHFT 16 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000000010000 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_IN */ +/* Description: PI VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_IN_SHFT 17 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000000020000 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_IN */ +/* Description: IILB VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_IN_SHFT 18 +#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000000040000 + +/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_IN */ +/* Description: MD VC2 credit overflow Pipe In */ +#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_IN_SHFT 19 
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000000080000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_IN_SHFT 20
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000000100000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_IN_SHFT 21
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000000200000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_IN_SHFT 22
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000400000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_IN_SHFT 23
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000800000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_IN_SHFT 24
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000001000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_IN_SHFT 25
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000002000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_IN_SHFT 26
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000004000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_IN_SHFT 27
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000008000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_IN_SHFT 28
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000010000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_IN_SHFT 29
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000020000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_IN_SHFT 30
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000040000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_IN_SHFT 31
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000080000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT0 */
+/* Description: PI Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT0_SHFT 32
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT0_MASK 0x0000000100000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT2 */
+/* Description: PI Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT2_SHFT 33
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT2_MASK 0x0000000200000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT0_SHFT 34
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT0_MASK 0x0000000400000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT2_SHFT 35
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT2_MASK 0x0000000800000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT0 */
+/* Description: MD Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT0_SHFT 36
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT0_MASK 0x0000001000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT2 */
+/* Description: MD Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT2_SHFT 37
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT2_MASK 0x0000002000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT0_SHFT 38
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT0_MASK 0x0000004000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT2_SHFT 39
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT2_MASK 0x0000008000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT0_SHFT 40
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT0_MASK 0x0000010000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT2_SHFT 41
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT2_MASK 0x0000020000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_OUT_SHFT 42
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0000040000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_OUT_SHFT 43
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0000080000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_OUT_SHFT 44
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0000100000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_OUT_SHFT 45
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0000200000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_OUT_SHFT 46
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0000400000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_OUT_SHFT 47
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0000800000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_OUT_SHFT 48
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0001000000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_OUT_SHFT 49
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0002000000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_OUT_SHFT 50
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x0004000000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_OUT_SHFT 51
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x0008000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_OUT_SHFT 52
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0010000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_OUT_SHFT 53
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0020000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_OUT_SHFT 54
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0040000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_OUT_SHFT 55
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0080000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_OUT_SHFT 56
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0100000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_OUT_SHFT 57
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0200000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_OUT_SHFT 58
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0400000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_OUT_SHFT 59
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0800000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_OUT_SHFT 60
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x1000000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_OUT_SHFT 61
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x2000000000000000
+
+/* SH_XNIILB_ERROR_MASK_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_XNIILB_ERROR_MASK_CHIPLET_NOMATCH_SHFT 62
+#define SH_XNIILB_ERROR_MASK_CHIPLET_NOMATCH_MASK 0x4000000000000000
+
+/* SH_XNIILB_ERROR_MASK_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNIILB_ERROR_MASK_LUT_READ_ERROR_SHFT 63
+#define SH_XNIILB_ERROR_MASK_LUT_READ_ERROR_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FIRST_ERROR" */
+/* ==================================================================== */
+
+#define SH_XNIILB_FIRST_ERROR 0x0000000150040260
+#define SH_XNIILB_FIRST_ERROR_MASK 0xffffffffffffffff
+#define SH_XNIILB_FIRST_ERROR_INIT 0xffffffffffffffff
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT0 */
+/* Description: II debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT0_SHFT 0
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT0_MASK 0x0000000000000001
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT2 */
+/* Description: II debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT2_SHFT 1
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT2_MASK 0x0000000000000002
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT0 */
+/* Description: LB debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT0_SHFT 2
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT0_MASK 0x0000000000000004
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT2 */
+/* Description: LB debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT2_SHFT 3
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT2_MASK 0x0000000000000008
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC0 */
+/* Description: II VC0 fifo overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC0_SHFT 4
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC0_MASK 0x0000000000000010
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC2 */
+/* Description: II VC2 fifo overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC2_SHFT 5
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC2_MASK 0x0000000000000020
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC0 */
+/* Description: II VC0 fifo underflow */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC0_SHFT 6
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC0_MASK 0x0000000000000040
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC2 */
+/* Description: II VC2 fifo underflow */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC2_SHFT 7
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC2_MASK 0x0000000000000080
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC0_SHFT 8
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC0_MASK 0x0000000000000100
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC2_SHFT 9
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC2_MASK 0x0000000000000200
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo underflow */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC0_SHFT 10
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC0_MASK 0x0000000000000400
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo underflow */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC2_SHFT 11
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC2_MASK 0x0000000000000800
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_IN_SHFT 12
+#define
SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000001000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_IN */ +/* Description: IILB VC0 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_IN_SHFT 13 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000002000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_IN */ +/* Description: MD VC0 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_IN_SHFT 14 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000000004000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_IN */ +/* Description: NI0 VC0 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_IN_SHFT 15 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000000008000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_IN */ +/* Description: NI1 VC0 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_IN_SHFT 16 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000000010000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_IN */ +/* Description: PI VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_IN_SHFT 17 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000000020000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_IN */ +/* Description: IILB VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_IN_SHFT 18 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000000040000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_IN */ +/* Description: MD VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_IN_SHFT 19 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000000080000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_IN */ +/* Description: NI0 VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_IN_SHFT 20 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000000100000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_IN */ +/* Description: NI1 VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_IN_SHFT 21 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000000200000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_IN */ +/* Description: PI VC0 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_IN_SHFT 22 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000400000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_IN */ +/* Description: IILB VC0 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_IN_SHFT 23 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000800000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_IN */ +/* Description: MD VC0 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_IN_SHFT 24 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000001000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_IN */ +/* Description: NI0 VC0 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_IN_SHFT 25 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000002000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_IN */ +/* Description: NI1 VC0 credit overflow Pipe In */ +#define 
SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_IN_SHFT 26 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000004000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_IN */ +/* Description: PI VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_IN_SHFT 27 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000008000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_IN */ +/* Description: IILB VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_IN_SHFT 28 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000010000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_IN */ +/* Description: MD VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_IN_SHFT 29 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000020000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_IN */ +/* Description: NI0 VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_IN_SHFT 30 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000040000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_IN */ +/* Description: NI1 VC2 credit overflow Pipe In */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_IN_SHFT 31 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000080000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT0 */ +/* Description: PI Fifo Debit0 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT0_SHFT 32 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT0_MASK 0x0000000100000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT2 */ +/* Description: PI Fifo Debit2 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT2_SHFT 33 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT2_MASK 0x0000000200000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Fifo Debit0 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_SHFT 34 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_MASK 0x0000000400000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Fifo Debit2 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_SHFT 35 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_MASK 0x0000000800000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT0 */ +/* Description: MD Fifo Debit0 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT0_SHFT 36 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT0_MASK 0x0000001000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT2 */ +/* Description: MD Fifo Debit2 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT2_SHFT 37 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT2_MASK 0x0000002000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Fifo Debit0 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_SHFT 38 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_MASK 0x0000004000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Fifo Debit2 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_SHFT 39 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_MASK 0x0000008000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Fifo Debit0 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_SHFT 40 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_MASK 0x0000010000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT2 */ 
+/* Description: NI1 Fifo Debit2 overflow */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_SHFT 41 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_MASK 0x0000020000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_OUT_SHFT 42 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0000040000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_OUT_SHFT 43 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0000080000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_OUT_SHFT 44 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0000100000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_OUT_SHFT 45 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0000200000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_OUT_SHFT 46 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0000400000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_OUT_SHFT 47 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0000800000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_OUT_SHFT 48 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0001000000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_OUT_SHFT 49 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0002000000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_OUT_SHFT 50 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x0004000000000000 + +/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_OUT_SHFT 51 +#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x0008000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_OUT_SHFT 52 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0010000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_OUT */ +/* Description: PI VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_OUT_SHFT 53 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0020000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_OUT_SHFT 54 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_OUT_MASK 
0x0040000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_OUT */ +/* Description: MD VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_OUT_SHFT 55 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0080000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_OUT_SHFT 56 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0100000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_OUT */ +/* Description: IILB VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_OUT_SHFT 57 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0200000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_OUT_SHFT 58 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0400000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_OUT */ +/* Description: NI0 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_OUT_SHFT 59 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0800000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_OUT_SHFT 60 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x1000000000000000 + +/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_OUT */ +/* Description: NI1 VC0 Credit overflow Pipe Out */ +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_OUT_SHFT 61 +#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x2000000000000000 + +/* SH_XNIILB_FIRST_ERROR_CHIPLET_NOMATCH */ +/* Description: chiplet nomatch */ +#define SH_XNIILB_FIRST_ERROR_CHIPLET_NOMATCH_SHFT 62 +#define SH_XNIILB_FIRST_ERROR_CHIPLET_NOMATCH_MASK 0x4000000000000000 + +/* SH_XNIILB_FIRST_ERROR_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNIILB_FIRST_ERROR_LUT_READ_ERROR_SHFT 63 +#define SH_XNIILB_FIRST_ERROR_LUT_READ_ERROR_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_ERROR_SUMMARY" */ +/* ==================================================================== */ + +#define SH_XNPI_ERROR_SUMMARY 0x0000000150040300 +#define SH_XNPI_ERROR_SUMMARY_MASK 0x0003ffffffffffff +#define SH_XNPI_ERROR_SUMMARY_INIT 0x0003ffffffffffff + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_SHFT 0 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_SHFT 1 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_MASK 0x0000000000000002 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_SHFT 2 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_SHFT 3 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_MASK 0x0000000000000008 + +/* 
SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_SHFT 4 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_SHFT 5 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_MASK 0x0000000000000020 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_SHFT 6 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_SHFT 7 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_MASK 0x0000000000000080 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_SHFT 8 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_SHFT 9 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_MASK 0x0000000000000200 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_SHFT 10 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_SHFT 11 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_MASK 0x0000000000000800 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT_SHFT 12 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT_SHFT 13 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT_SHFT 14 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT_SHFT 15 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0 */ +/* Description: VC0 Data Buffer overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0_SHFT 16 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2 */ +/* Description: VC2 Data Buffer overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2_SHFT 17 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000 + +/* SH_XNPI_ERROR_SUMMARY_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNPI_ERROR_SUMMARY_LUT_READ_ERROR_SHFT 18 +#define SH_XNPI_ERROR_SUMMARY_LUT_READ_ERROR_MASK 0x0000000000040000 + +/* SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR0 */ +/* Description: Single Bit Error in Bits 63:0 */ +#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR0_SHFT 19 +#define 
SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR0_MASK 0x0000000000080000 + +/* SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR1 */ +/* Description: Single Bit Error in Bits 127:64 */ +#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR1_SHFT 20 +#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR1_MASK 0x0000000000100000 + +/* SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR2 */ +/* Description: Single Bit Error in Bits 191:128 */ +#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR2_SHFT 21 +#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR2_MASK 0x0000000000200000 + +/* SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR3 */ +/* Description: Single Bit Error in Bits 255:192 */ +#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR3_SHFT 22 +#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR3_MASK 0x0000000000400000 + +/* SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR0 */ +/* Description: Uncorrectable Error in Bits 63:0 */ +#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR0_SHFT 23 +#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR0_MASK 0x0000000000800000 + +/* SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR1 */ +/* Description: Uncorrectable Error in Bits 127:64 */ +#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR1_SHFT 24 +#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR1_MASK 0x0000000001000000 + +/* SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR2 */ +/* Description: Uncorrectable Error in Bits 191:128 */ +#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR2_SHFT 25 +#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR2_MASK 0x0000000002000000 + +/* SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR3 */ +/* Description: Uncorrectable Error in Bits 255:192 */ +#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR3_SHFT 26 +#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR3_MASK 0x0000000004000000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0_SHFT 27 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0_SHFT 28 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2_SHFT 29 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2_SHFT 30 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Debit 0 Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_SHFT 31 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Debit 2 Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_SHFT 32 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Debit 0 Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_SHFT 33 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Debit 2 Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_SHFT 34 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Debit 0 Overflow */ +#define 
SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_SHFT 35 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Debit 2 Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_SHFT 36 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_SHFT 38 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_SHFT 40 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_SHFT 42 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_SHFT 44 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_SHFT 46 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Underflow */ +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47 +#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_SHFT 48 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000 + +/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO */ +/* Description: Header Cancel Fifo Overflow */ +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49 +#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000 + +/* 
==================================================================== */ +/* Register "SH_XNPI_ERRORS_ALIAS" */ +/* ==================================================================== */ + +#define SH_XNPI_ERRORS_ALIAS 0x0000000150040308 + +/* ==================================================================== */ +/* Register "SH_XNPI_ERROR_OVERFLOW" */ +/* ==================================================================== */ + +#define SH_XNPI_ERROR_OVERFLOW 0x0000000150040320 +#define SH_XNPI_ERROR_OVERFLOW_MASK 0x0003ffffffffffff +#define SH_XNPI_ERROR_OVERFLOW_INIT 0x0003ffffffffffff + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_SHFT 0 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_SHFT 1 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_MASK 0x0000000000000002 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_SHFT 2 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_SHFT 3 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_MASK 0x0000000000000008 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_SHFT 4 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_SHFT 5 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_MASK 0x0000000000000020 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_SHFT 6 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_SHFT 7 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_MASK 0x0000000000000080 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_SHFT 8 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_SHFT 9 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_MASK 0x0000000000000200 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_SHFT 10 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_SHFT 11 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_MASK 0x0000000000000800 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT_SHFT 12 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000 + +/* 
SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT_SHFT 13 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT_SHFT 14 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT_SHFT 15 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0 */ +/* Description: VC0 Data Buffer overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0_SHFT 16 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2 */ +/* Description: VC2 Data Buffer overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2_SHFT 17 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000 + +/* SH_XNPI_ERROR_OVERFLOW_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNPI_ERROR_OVERFLOW_LUT_READ_ERROR_SHFT 18 +#define SH_XNPI_ERROR_OVERFLOW_LUT_READ_ERROR_MASK 0x0000000000040000 + +/* SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR0 */ +/* Description: Single Bit Error in Bits 63:0 */ +#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR0_SHFT 19 +#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR0_MASK 0x0000000000080000 + +/* SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR1 */ +/* Description: Single Bit Error in Bits 127:64 */ +#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR1_SHFT 20 +#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR1_MASK 0x0000000000100000 + +/* SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR2 */ +/* Description: Single Bit Error in Bits 191:128 */ +#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR2_SHFT 21 +#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR2_MASK 0x0000000000200000 + +/* SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR3 */ +/* Description: Single Bit Error in Bits 255:192 */ +#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR3_SHFT 22 +#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR3_MASK 0x0000000000400000 + +/* SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR0 */ +/* Description: Uncorrectable Error in Bits 63:0 */ +#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR0_SHFT 23 +#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR0_MASK 0x0000000000800000 + +/* SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR1 */ +/* Description: Uncorrectable Error in Bits 127:64 */ +#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR1_SHFT 24 +#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR1_MASK 0x0000000001000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR2 */ +/* Description: Uncorrectable Error in Bits 191:128 */ +#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR2_SHFT 25 +#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR2_MASK 0x0000000002000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR3 */ +/* Description: Uncorrectable Error in Bits 255:192 */ +#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR3_SHFT 26 +#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR3_MASK 0x0000000004000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0_SHFT 27 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Overflow */ +#define 
SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0_SHFT 28 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2_SHFT 29 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2_SHFT 30 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Debit 0 Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_SHFT 31 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Debit 2 Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_SHFT 32 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Debit 0 Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_SHFT 33 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Debit 2 Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_SHFT 34 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Debit 0 Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_SHFT 35 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Debit 2 Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_SHFT 36 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_SHFT 38 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_SHFT 40 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_SHFT 42 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Underflow */ +#define 
SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_SHFT 44 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_SHFT 46 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Underflow */ +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47 +#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_SHFT 48 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000 + +/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO */ +/* Description: Header Cancel Fifo Overflow */ +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49 +#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_ERROR_OVERFLOW_ALIAS" */ +/* ==================================================================== */ + +#define SH_XNPI_ERROR_OVERFLOW_ALIAS 0x0000000150040328 + +/* ==================================================================== */ +/* Register "SH_XNPI_ERROR_MASK" */ +/* ==================================================================== */ + +#define SH_XNPI_ERROR_MASK 0x0000000150040340 +#define SH_XNPI_ERROR_MASK_MASK 0x0003ffffffffffff +#define SH_XNPI_ERROR_MASK_INIT 0x0003ffffffffffff + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_SHFT 0 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_SHFT 1 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_MASK 0x0000000000000002 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_SHFT 2 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_SHFT 3 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_MASK 0x0000000000000008 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_SHFT 4 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_SHFT 5 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_MASK 0x0000000000000020 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2 */ +/* 
Description: NI1 VC2 fifo underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_SHFT 6 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_SHFT 7 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_MASK 0x0000000000000080 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_SHFT 8 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_SHFT 9 +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_MASK 0x0000000000000200 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_SHFT 10 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_SHFT 11 +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_MASK 0x0000000000000800 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_VC0_CREDIT_SHFT 12 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_VC0_CREDIT_SHFT 13 +#define SH_XNPI_ERROR_MASK_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_VC2_CREDIT_SHFT 14 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_VC2_CREDIT_SHFT 15 +#define SH_XNPI_ERROR_MASK_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC0 */ +/* Description: VC0 Data Buffer overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC0_SHFT 16 +#define SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC2 */ +/* Description: VC2 Data Buffer overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC2_SHFT 17 +#define SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000 + +/* SH_XNPI_ERROR_MASK_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNPI_ERROR_MASK_LUT_READ_ERROR_SHFT 18 +#define SH_XNPI_ERROR_MASK_LUT_READ_ERROR_MASK 0x0000000000040000 + +/* SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR0 */ +/* Description: Single Bit Error in Bits 63:0 */ +#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR0_SHFT 19 +#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR0_MASK 0x0000000000080000 + +/* SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR1 */ +/* Description: Single Bit Error in Bits 127:64 */ +#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR1_SHFT 20 +#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR1_MASK 0x0000000000100000 + +/* SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR2 */ +/* Description: Single Bit Error in Bits 191:128 */ +#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR2_SHFT 21 +#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR2_MASK 0x0000000000200000 + +/* SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR3 */ +/* Description: Single Bit Error in Bits 255:192 */ +#define 
SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR3_SHFT 22 +#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR3_MASK 0x0000000000400000 + +/* SH_XNPI_ERROR_MASK_UNCOR_ERROR0 */ +/* Description: Uncorrectable Error in Bits 63:0 */ +#define SH_XNPI_ERROR_MASK_UNCOR_ERROR0_SHFT 23 +#define SH_XNPI_ERROR_MASK_UNCOR_ERROR0_MASK 0x0000000000800000 + +/* SH_XNPI_ERROR_MASK_UNCOR_ERROR1 */ +/* Description: Uncorrectable Error in Bits 127:64 */ +#define SH_XNPI_ERROR_MASK_UNCOR_ERROR1_SHFT 24 +#define SH_XNPI_ERROR_MASK_UNCOR_ERROR1_MASK 0x0000000001000000 + +/* SH_XNPI_ERROR_MASK_UNCOR_ERROR2 */ +/* Description: Uncorrectable Error in Bits 191:128 */ +#define SH_XNPI_ERROR_MASK_UNCOR_ERROR2_SHFT 25 +#define SH_XNPI_ERROR_MASK_UNCOR_ERROR2_MASK 0x0000000002000000 + +/* SH_XNPI_ERROR_MASK_UNCOR_ERROR3 */ +/* Description: Uncorrectable Error in Bits 255:192 */ +#define SH_XNPI_ERROR_MASK_UNCOR_ERROR3_SHFT 26 +#define SH_XNPI_ERROR_MASK_UNCOR_ERROR3_MASK 0x0000000004000000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR0_SHFT 27 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR0_SHFT 28 +#define SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR2_SHFT 29 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR2_SHFT 30 +#define SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Debit 0 Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT0_SHFT 31 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Debit 2 Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT2_SHFT 32 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Debit 0 Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT0_SHFT 33 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Debit 2 Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT2_SHFT 34 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Debit 0 Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT0_SHFT 35 +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Debit 2 Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT2_SHFT 36 +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_SHFT 38 +#define 
SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_SHFT 40 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_SHFT 42 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_SHFT 44 +#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_SHFT 46 +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Underflow */ +#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47 +#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_SHFT 48 +#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000 + +/* SH_XNPI_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO */ +/* Description: Header Cancel Fifo Overflow */ +#define SH_XNPI_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49 +#define SH_XNPI_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000 + +/* ==================================================================== */ +/* Register "SH_XNPI_FIRST_ERROR" */ +/* ==================================================================== */ + +#define SH_XNPI_FIRST_ERROR 0x0000000150040360 +#define SH_XNPI_FIRST_ERROR_MASK 0x0003ffffffffffff +#define SH_XNPI_FIRST_ERROR_INIT 0x0003ffffffffffff + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_SHFT 0 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_SHFT 1 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_MASK 0x0000000000000002 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo underflow */ +#define 
SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_SHFT 2 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_SHFT 3 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_MASK 0x0000000000000008 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_SHFT 4 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_SHFT 5 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_MASK 0x0000000000000020 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_SHFT 6 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_SHFT 7 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_MASK 0x0000000000000080 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_SHFT 8 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_SHFT 9 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_MASK 0x0000000000000200 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_SHFT 10 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_SHFT 11 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_MASK 0x0000000000000800 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_VC0_CREDIT_SHFT 12 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_VC0_CREDIT_SHFT 13 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_VC2_CREDIT_SHFT 14 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_VC2_CREDIT_SHFT 15 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC0 */ +/* Description: VC0 Data Buffer overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC0_SHFT 16 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC2 */ +/* Description: VC2 Data Buffer overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC2_SHFT 17 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000 + +/* SH_XNPI_FIRST_ERROR_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNPI_FIRST_ERROR_LUT_READ_ERROR_SHFT 18 +#define 
SH_XNPI_FIRST_ERROR_LUT_READ_ERROR_MASK 0x0000000000040000 + +/* SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR0 */ +/* Description: Single Bit Error in Bits 63:0 */ +#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR0_SHFT 19 +#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR0_MASK 0x0000000000080000 + +/* SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR1 */ +/* Description: Single Bit Error in Bits 127:64 */ +#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR1_SHFT 20 +#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR1_MASK 0x0000000000100000 + +/* SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR2 */ +/* Description: Single Bit Error in Bits 191:128 */ +#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR2_SHFT 21 +#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR2_MASK 0x0000000000200000 + +/* SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR3 */ +/* Description: Single Bit Error in Bits 255:192 */ +#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR3_SHFT 22 +#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR3_MASK 0x0000000000400000 + +/* SH_XNPI_FIRST_ERROR_UNCOR_ERROR0 */ +/* Description: Uncorrectable Error in Bits 63:0 */ +#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR0_SHFT 23 +#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR0_MASK 0x0000000000800000 + +/* SH_XNPI_FIRST_ERROR_UNCOR_ERROR1 */ +/* Description: Uncorrectable Error in Bits 127:64 */ +#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR1_SHFT 24 +#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR1_MASK 0x0000000001000000 + +/* SH_XNPI_FIRST_ERROR_UNCOR_ERROR2 */ +/* Description: Uncorrectable Error in Bits 191:128 */ +#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR2_SHFT 25 +#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR2_MASK 0x0000000002000000 + +/* SH_XNPI_FIRST_ERROR_UNCOR_ERROR3 */ +/* Description: Uncorrectable Error in Bits 255:192 */ +#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR3_SHFT 26 +#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR3_MASK 0x0000000004000000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR0_SHFT 27 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR0_SHFT 28 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR2_SHFT 29 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR2_SHFT 30 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Debit 0 Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_SHFT 31 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Debit 2 Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_SHFT 32 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Debit 0 Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_SHFT 33 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Debit 2 Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_SHFT 34 +#define 
SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Debit 0 Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_SHFT 35 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Debit 2 Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_SHFT 36 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_SHFT 38 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_SHFT 40 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_SHFT 42 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_SHFT 44 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_SHFT 46 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Underflow */ +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47 +#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_SHFT 48 +#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000 + +/* SH_XNPI_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO */ +/* Description: Header Cancel Fifo Overflow */ +#define SH_XNPI_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49 +#define 
SH_XNPI_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_ERROR_SUMMARY" */ +/* ==================================================================== */ + +#define SH_XNMD_ERROR_SUMMARY 0x0000000150040400 +#define SH_XNMD_ERROR_SUMMARY_MASK 0x0003ffffffffffff +#define SH_XNMD_ERROR_SUMMARY_INIT 0x0003ffffffffffff + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_SHFT 0 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_SHFT 1 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_MASK 0x0000000000000002 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_SHFT 2 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_SHFT 3 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_MASK 0x0000000000000008 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_SHFT 4 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_SHFT 5 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_MASK 0x0000000000000020 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_SHFT 6 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_SHFT 7 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_MASK 0x0000000000000080 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_SHFT 8 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_SHFT 9 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_MASK 0x0000000000000200 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_SHFT 10 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_SHFT 11 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_MASK 0x0000000000000800 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT_SHFT 12 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT_SHFT 13 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000 + +/* 
SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT_SHFT 14 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT_SHFT 15 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0 */ +/* Description: VC0 Data Buffer overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0_SHFT 16 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2 */ +/* Description: VC2 Data Buffer overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2_SHFT 17 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000 + +/* SH_XNMD_ERROR_SUMMARY_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNMD_ERROR_SUMMARY_LUT_READ_ERROR_SHFT 18 +#define SH_XNMD_ERROR_SUMMARY_LUT_READ_ERROR_MASK 0x0000000000040000 + +/* SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR0 */ +/* Description: Single Bit Error in Bits 63:0 */ +#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR0_SHFT 19 +#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR0_MASK 0x0000000000080000 + +/* SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR1 */ +/* Description: Single Bit Error in Bits 127:64 */ +#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR1_SHFT 20 +#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR1_MASK 0x0000000000100000 + +/* SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR2 */ +/* Description: Single Bit Error in Bits 191:128 */ +#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR2_SHFT 21 +#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR2_MASK 0x0000000000200000 + +/* SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR3 */ +/* Description: Single Bit Error in Bits 255:192 */ +#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR3_SHFT 22 +#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR3_MASK 0x0000000000400000 + +/* SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR0 */ +/* Description: Uncorrectable Error in Bits 63:0 */ +#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR0_SHFT 23 +#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR0_MASK 0x0000000000800000 + +/* SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR1 */ +/* Description: Uncorrectable Error in Bits 127:64 */ +#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR1_SHFT 24 +#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR1_MASK 0x0000000001000000 + +/* SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR2 */ +/* Description: Uncorrectable Error in Bits 191:128 */ +#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR2_SHFT 25 +#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR2_MASK 0x0000000002000000 + +/* SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR3 */ +/* Description: Uncorrectable Error in Bits 255:192 */ +#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR3_SHFT 26 +#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR3_MASK 0x0000000004000000 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0_SHFT 27 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0_SHFT 28 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2_SHFT 29 +#define 
SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2_SHFT 30 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Debit 0 Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_SHFT 31 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Debit 2 Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_SHFT 32 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Debit 0 Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_SHFT 33 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Debit 2 Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_SHFT 34 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Debit 0 Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_SHFT 35 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Debit 2 Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_SHFT 36 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_SHFT 38 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_SHFT 40 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_SHFT 42 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_SHFT 44 +#define 
SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_SHFT 46 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Underflow */ +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47 +#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_SHFT 48 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000 + +/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO */ +/* Description: Header Cancel Fifo Overflow */ +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49 +#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_ERRORS_ALIAS" */ +/* ==================================================================== */ + +#define SH_XNMD_ERRORS_ALIAS 0x0000000150040408 + +/* ==================================================================== */ +/* Register "SH_XNMD_ERROR_OVERFLOW" */ +/* ==================================================================== */ + +#define SH_XNMD_ERROR_OVERFLOW 0x0000000150040420 +#define SH_XNMD_ERROR_OVERFLOW_MASK 0x0003ffffffffffff +#define SH_XNMD_ERROR_OVERFLOW_INIT 0x0003ffffffffffff + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_SHFT 0 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_SHFT 1 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_MASK 0x0000000000000002 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_SHFT 2 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_SHFT 3 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_MASK 0x0000000000000008 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_SHFT 4 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_SHFT 5 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_MASK 0x0000000000000020 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_SHFT 6 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo 
overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_SHFT 7 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_MASK 0x0000000000000080 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_SHFT 8 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_SHFT 9 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_MASK 0x0000000000000200 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_SHFT 10 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_SHFT 11 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_MASK 0x0000000000000800 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT_SHFT 12 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT_SHFT 13 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT_SHFT 14 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT_SHFT 15 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0 */ +/* Description: VC0 Data Buffer overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0_SHFT 16 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2 */ +/* Description: VC2 Data Buffer overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2_SHFT 17 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000 + +/* SH_XNMD_ERROR_OVERFLOW_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNMD_ERROR_OVERFLOW_LUT_READ_ERROR_SHFT 18 +#define SH_XNMD_ERROR_OVERFLOW_LUT_READ_ERROR_MASK 0x0000000000040000 + +/* SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR0 */ +/* Description: Single Bit Error in Bits 63:0 */ +#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR0_SHFT 19 +#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR0_MASK 0x0000000000080000 + +/* SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR1 */ +/* Description: Single Bit Error in Bits 127:64 */ +#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR1_SHFT 20 +#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR1_MASK 0x0000000000100000 + +/* SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR2 */ +/* Description: Single Bit Error in Bits 191:128 */ +#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR2_SHFT 21 +#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR2_MASK 0x0000000000200000 + +/* SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR3 */ +/* Description: Single Bit Error in Bits 255:192 */ +#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR3_SHFT 22 +#define 
SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR3_MASK 0x0000000000400000 + +/* SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR0 */ +/* Description: Uncorrectable Error in Bits 63:0 */ +#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR0_SHFT 23 +#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR0_MASK 0x0000000000800000 + +/* SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR1 */ +/* Description: Uncorrectable Error in Bits 127:64 */ +#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR1_SHFT 24 +#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR1_MASK 0x0000000001000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR2 */ +/* Description: Uncorrectable Error in Bits 191:128 */ +#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR2_SHFT 25 +#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR2_MASK 0x0000000002000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR3 */ +/* Description: Uncorrectable Error in Bits 255:192 */ +#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR3_SHFT 26 +#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR3_MASK 0x0000000004000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0_SHFT 27 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0_SHFT 28 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2_SHFT 29 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2_SHFT 30 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Debit 0 Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_SHFT 31 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Debit 2 Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_SHFT 32 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Debit 0 Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_SHFT 33 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Debit 2 Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_SHFT 34 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Debit 0 Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_SHFT 35 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Debit 2 Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_SHFT 36 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000 + +/* 
SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_SHFT 38 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_SHFT 40 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_SHFT 42 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_SHFT 44 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_SHFT 46 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Underflow */ +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47 +#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_SHFT 48 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000 + +/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO */ +/* Description: Header Cancel Fifo Overflow */ +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49 +#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_ERROR_OVERFLOW_ALIAS" */ +/* ==================================================================== */ + +#define SH_XNMD_ERROR_OVERFLOW_ALIAS 0x0000000150040428 + +/* ==================================================================== */ +/* Register "SH_XNMD_ERROR_MASK" */ +/* ==================================================================== */ + +#define SH_XNMD_ERROR_MASK 0x0000000150040440 +#define SH_XNMD_ERROR_MASK_MASK 0x0003ffffffffffff +#define 
SH_XNMD_ERROR_MASK_INIT 0x0003ffffffffffff + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_SHFT 0 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_SHFT 1 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_MASK 0x0000000000000002 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_SHFT 2 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_SHFT 3 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_MASK 0x0000000000000008 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_SHFT 4 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_SHFT 5 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_MASK 0x0000000000000020 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_SHFT 6 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_SHFT 7 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_MASK 0x0000000000000080 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_SHFT 8 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_SHFT 9 +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_MASK 0x0000000000000200 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_SHFT 10 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_SHFT 11 +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_MASK 0x0000000000000800 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_VC0_CREDIT_SHFT 12 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_VC0_CREDIT_SHFT 13 +#define SH_XNMD_ERROR_MASK_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_VC2_CREDIT_SHFT 14 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_VC2_CREDIT_SHFT 15 +#define SH_XNMD_ERROR_MASK_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC0 */ +/* Description: VC0 Data Buffer overflow */ 
+#define SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC0_SHFT 16 +#define SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC2 */ +/* Description: VC2 Data Buffer overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC2_SHFT 17 +#define SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000 + +/* SH_XNMD_ERROR_MASK_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNMD_ERROR_MASK_LUT_READ_ERROR_SHFT 18 +#define SH_XNMD_ERROR_MASK_LUT_READ_ERROR_MASK 0x0000000000040000 + +/* SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR0 */ +/* Description: Single Bit Error in Bits 63:0 */ +#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR0_SHFT 19 +#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR0_MASK 0x0000000000080000 + +/* SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR1 */ +/* Description: Single Bit Error in Bits 127:64 */ +#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR1_SHFT 20 +#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR1_MASK 0x0000000000100000 + +/* SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR2 */ +/* Description: Single Bit Error in Bits 191:128 */ +#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR2_SHFT 21 +#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR2_MASK 0x0000000000200000 + +/* SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR3 */ +/* Description: Single Bit Error in Bits 255:192 */ +#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR3_SHFT 22 +#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR3_MASK 0x0000000000400000 + +/* SH_XNMD_ERROR_MASK_UNCOR_ERROR0 */ +/* Description: Uncorrectable Error in Bits 63:0 */ +#define SH_XNMD_ERROR_MASK_UNCOR_ERROR0_SHFT 23 +#define SH_XNMD_ERROR_MASK_UNCOR_ERROR0_MASK 0x0000000000800000 + +/* SH_XNMD_ERROR_MASK_UNCOR_ERROR1 */ +/* Description: Uncorrectable Error in Bits 127:64 */ +#define SH_XNMD_ERROR_MASK_UNCOR_ERROR1_SHFT 24 +#define SH_XNMD_ERROR_MASK_UNCOR_ERROR1_MASK 0x0000000001000000 + +/* SH_XNMD_ERROR_MASK_UNCOR_ERROR2 */ +/* Description: Uncorrectable Error in Bits 191:128 */ +#define SH_XNMD_ERROR_MASK_UNCOR_ERROR2_SHFT 25 +#define SH_XNMD_ERROR_MASK_UNCOR_ERROR2_MASK 0x0000000002000000 + +/* SH_XNMD_ERROR_MASK_UNCOR_ERROR3 */ +/* Description: Uncorrectable Error in Bits 255:192 */ +#define SH_XNMD_ERROR_MASK_UNCOR_ERROR3_SHFT 26 +#define SH_XNMD_ERROR_MASK_UNCOR_ERROR3_MASK 0x0000000004000000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR0_SHFT 27 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR0_SHFT 28 +#define SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR2_SHFT 29 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR2_SHFT 30 +#define SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Debit 0 Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT0_SHFT 31 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Debit 2 Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT2_SHFT 32 +#define 
SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Debit 0 Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT0_SHFT 33 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Debit 2 Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT2_SHFT 34 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Debit 0 Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT0_SHFT 35 +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Debit 2 Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT2_SHFT 36 +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_SHFT 38 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_SHFT 40 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_SHFT 42 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_SHFT 44 +#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_SHFT 46 +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Underflow */ +#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47 +#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000 + +/* 
SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_SHFT 48 +#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000 + +/* SH_XNMD_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO */ +/* Description: Header Cancel Fifo Overflow */ +#define SH_XNMD_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49 +#define SH_XNMD_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000 + +/* ==================================================================== */ +/* Register "SH_XNMD_FIRST_ERROR" */ +/* ==================================================================== */ + +#define SH_XNMD_FIRST_ERROR 0x0000000150040460 +#define SH_XNMD_FIRST_ERROR_MASK 0x0003ffffffffffff +#define SH_XNMD_FIRST_ERROR_INIT 0x0003ffffffffffff + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_SHFT 0 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0 */ +/* Description: NI0 VC0 fifo overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_SHFT 1 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_MASK 0x0000000000000002 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_SHFT 2 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2 */ +/* Description: NI0 VC2 fifo overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_SHFT 3 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_MASK 0x0000000000000008 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_SHFT 4 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0 */ +/* Description: NI1 VC0 fifo overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_SHFT 5 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_MASK 0x0000000000000020 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_SHFT 6 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2 */ +/* Description: NI1 VC2 fifo overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_SHFT 7 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_MASK 0x0000000000000080 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_SHFT 8 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0 */ +/* Description: IILB VC0 fifo overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_SHFT 9 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_MASK 0x0000000000000200 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_SHFT 10 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2 */ +/* Description: IILB VC2 fifo overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_SHFT 11 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_MASK 0x0000000000000800 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_VC0_CREDIT_SHFT 12 
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_VC0_CREDIT */ +/* Description: VC0 Credit overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_VC0_CREDIT_SHFT 13 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_VC2_CREDIT_SHFT 14 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_VC2_CREDIT */ +/* Description: VC2 Credit overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_VC2_CREDIT_SHFT 15 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC0 */ +/* Description: VC0 Data Buffer overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC0_SHFT 16 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC2 */ +/* Description: VC2 Data Buffer overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC2_SHFT 17 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000 + +/* SH_XNMD_FIRST_ERROR_LUT_READ_ERROR */ +/* Description: LUT Read Error */ +#define SH_XNMD_FIRST_ERROR_LUT_READ_ERROR_SHFT 18 +#define SH_XNMD_FIRST_ERROR_LUT_READ_ERROR_MASK 0x0000000000040000 + +/* SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR0 */ +/* Description: Single Bit Error in Bits 63:0 */ +#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR0_SHFT 19 +#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR0_MASK 0x0000000000080000 + +/* SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR1 */ +/* Description: Single Bit Error in Bits 127:64 */ +#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR1_SHFT 20 +#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR1_MASK 0x0000000000100000 + +/* SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR2 */ +/* Description: Single Bit Error in Bits 191:128 */ +#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR2_SHFT 21 +#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR2_MASK 0x0000000000200000 + +/* SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR3 */ +/* Description: Single Bit Error in Bits 255:192 */ +#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR3_SHFT 22 +#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR3_MASK 0x0000000000400000 + +/* SH_XNMD_FIRST_ERROR_UNCOR_ERROR0 */ +/* Description: Uncorrectable Error in Bits 63:0 */ +#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR0_SHFT 23 +#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR0_MASK 0x0000000000800000 + +/* SH_XNMD_FIRST_ERROR_UNCOR_ERROR1 */ +/* Description: Uncorrectable Error in Bits 127:64 */ +#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR1_SHFT 24 +#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR1_MASK 0x0000000001000000 + +/* SH_XNMD_FIRST_ERROR_UNCOR_ERROR2 */ +/* Description: Uncorrectable Error in Bits 191:128 */ +#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR2_SHFT 25 +#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR2_MASK 0x0000000002000000 + +/* SH_XNMD_FIRST_ERROR_UNCOR_ERROR3 */ +/* Description: Uncorrectable Error in Bits 255:192 */ +#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR3_SHFT 26 +#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR3_MASK 0x0000000004000000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR0_SHFT 27 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR0 */ +/* Description: SIC Counter 0 Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR0_SHFT 28 +#define 
SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR2_SHFT 29 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR2 */ +/* Description: SIC Counter 2 Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR2_SHFT 30 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT0 */ +/* Description: NI0 Debit 0 Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_SHFT 31 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT2 */ +/* Description: NI0 Debit 2 Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_SHFT 32 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT0 */ +/* Description: NI1 Debit 0 Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_SHFT 33 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT2 */ +/* Description: NI1 Debit 2 Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_SHFT 34 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT0 */ +/* Description: IILB Debit 0 Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_SHFT 35 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT2 */ +/* Description: IILB Debit 2 Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_SHFT 36 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT */ +/* Description: NI0 VC0 Credit Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_SHFT 38 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT */ +/* Description: NI0 VC2 Credit Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_SHFT 40 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT */ +/* Description: NI1 VC0 Credit Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_SHFT 42 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT */ +/* Description: NI1 VC2 Credit Underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT */ +/* 
Description: NI1 VC2 Credit Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_SHFT 44 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT */ +/* Description: IILB VC0 Credit Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_SHFT 46 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000 + +/* SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Underflow */ +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47 +#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT */ +/* Description: IILB VC2 Credit Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_SHFT 48 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000 + +/* SH_XNMD_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO */ +/* Description: Header Cancel Fifo Overflow */ +#define SH_XNMD_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49 +#define SH_XNMD_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000 + +/* ==================================================================== */ +/* Register "SH_AUTO_REPLY_ENABLE0" */ +/* Automatic Maintenance Reply Enable 0 */ +/* ==================================================================== */ + +#define SH_AUTO_REPLY_ENABLE0 0x0000000110061000 +#define SH_AUTO_REPLY_ENABLE0_MASK 0xffffffffffffffff +#define SH_AUTO_REPLY_ENABLE0_INIT 0x0000000000000000 + +/* SH_AUTO_REPLY_ENABLE0_ENABLE0 */ +/* Description: Enable 0 */ +#define SH_AUTO_REPLY_ENABLE0_ENABLE0_SHFT 0 +#define SH_AUTO_REPLY_ENABLE0_ENABLE0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_AUTO_REPLY_ENABLE1" */ +/* Automatic Maintenance Reply Enable 1 */ +/* ==================================================================== */ + +#define SH_AUTO_REPLY_ENABLE1 0x0000000110061080 +#define SH_AUTO_REPLY_ENABLE1_MASK 0xffffffffffffffff +#define SH_AUTO_REPLY_ENABLE1_INIT 0x0000000000000000 + +/* SH_AUTO_REPLY_ENABLE1_ENABLE1 */ +/* Description: Enable 1 */ +#define SH_AUTO_REPLY_ENABLE1_ENABLE1_SHFT 0 +#define SH_AUTO_REPLY_ENABLE1_ENABLE1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_AUTO_REPLY_HEADER0" */ +/* Automatic Maintenance Reply Header 0 */ +/* ==================================================================== */ + +#define SH_AUTO_REPLY_HEADER0 0x0000000110061100 +#define SH_AUTO_REPLY_HEADER0_MASK 0xffffffffffffffff +#define SH_AUTO_REPLY_HEADER0_INIT 0x0000000000000000 + +/* SH_AUTO_REPLY_HEADER0_HEADER0 */ +/* Description: Header 0 */ +#define SH_AUTO_REPLY_HEADER0_HEADER0_SHFT 0 +#define SH_AUTO_REPLY_HEADER0_HEADER0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_AUTO_REPLY_HEADER1" */ +/* Automatic Maintenance Reply Header 1 */ +/* ==================================================================== */ + +#define SH_AUTO_REPLY_HEADER1 0x0000000110061180 +#define SH_AUTO_REPLY_HEADER1_MASK 0xffffffffffffffff +#define SH_AUTO_REPLY_HEADER1_INIT 0x0000000000000000 + +/* 
SH_AUTO_REPLY_HEADER1_HEADER1 */ +/* Description: Header 1 */ +#define SH_AUTO_REPLY_HEADER1_HEADER1_SHFT 0 +#define SH_AUTO_REPLY_HEADER1_HEADER1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_ENABLE_RP_AUTO_REPLY" */ +/* Enable Automatic Maintenance Reply From Reply Queue */ +/* ==================================================================== */ + +#define SH_ENABLE_RP_AUTO_REPLY 0x0000000110061200 +#define SH_ENABLE_RP_AUTO_REPLY_MASK 0x0000000000000001 +#define SH_ENABLE_RP_AUTO_REPLY_INIT 0x0000000000000000 + +/* SH_ENABLE_RP_AUTO_REPLY_ENABLE */ +/* Description: Enable Reply Auto Reply */ +#define SH_ENABLE_RP_AUTO_REPLY_ENABLE_SHFT 0 +#define SH_ENABLE_RP_AUTO_REPLY_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_ENABLE_RQ_AUTO_REPLY" */ +/* Enable Automatic Maintenance Reply From Request Queue */ +/* ==================================================================== */ + +#define SH_ENABLE_RQ_AUTO_REPLY 0x0000000110061280 +#define SH_ENABLE_RQ_AUTO_REPLY_MASK 0x0000000000000001 +#define SH_ENABLE_RQ_AUTO_REPLY_INIT 0x0000000000000000 + +/* SH_ENABLE_RQ_AUTO_REPLY_ENABLE */ +/* Description: Enable Request Auto Reply */ +#define SH_ENABLE_RQ_AUTO_REPLY_ENABLE_SHFT 0 +#define SH_ENABLE_RQ_AUTO_REPLY_ENABLE_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_REDIRECT_INVAL" */ +/* Redirect invalidate to LB instead of PI */ +/* ==================================================================== */ + +#define SH_REDIRECT_INVAL 0x0000000110061300 +#define SH_REDIRECT_INVAL_MASK 0x0000000000000001 +#define SH_REDIRECT_INVAL_INIT 0x0000000000000000 + +/* SH_REDIRECT_INVAL_REDIRECT */ +/* Description: Redirect invalidates to LB instead of PI */ +#define SH_REDIRECT_INVAL_REDIRECT_SHFT 0 +#define SH_REDIRECT_INVAL_REDIRECT_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_CNTRL" */ +/* Diagnostic Message Control Register */ +/* ==================================================================== */ + +#define SH_DIAG_MSG_CNTRL 0x0000000110062000 +#define SH_DIAG_MSG_CNTRL_MASK 0xc000000000003fff +#define SH_DIAG_MSG_CNTRL_INIT 0x0000000000000000 + +/* SH_DIAG_MSG_CNTRL_MSG_LENGTH */ +/* Description: Message data payload length, 0 - 63 */ +#define SH_DIAG_MSG_CNTRL_MSG_LENGTH_SHFT 0 +#define SH_DIAG_MSG_CNTRL_MSG_LENGTH_MASK 0x000000000000003f + +/* SH_DIAG_MSG_CNTRL_ERROR_INJECT_POINT */ +/* Description: Point message that the error bit would be activated */ +#define SH_DIAG_MSG_CNTRL_ERROR_INJECT_POINT_SHFT 6 +#define SH_DIAG_MSG_CNTRL_ERROR_INJECT_POINT_MASK 0x0000000000000fc0 + +/* SH_DIAG_MSG_CNTRL_ERROR_INJECT_ENABLE */ +/* Description: Enable ERROR_INJECT_POINT field */ +#define SH_DIAG_MSG_CNTRL_ERROR_INJECT_ENABLE_SHFT 12 +#define SH_DIAG_MSG_CNTRL_ERROR_INJECT_ENABLE_MASK 0x0000000000001000 + +/* SH_DIAG_MSG_CNTRL_PORT */ +/* Description: 0 = request port, 1 = reply port */ +#define SH_DIAG_MSG_CNTRL_PORT_SHFT 13 +#define SH_DIAG_MSG_CNTRL_PORT_MASK 0x0000000000002000 + +/* SH_DIAG_MSG_CNTRL_START */ +/* Description: Start */ +#define SH_DIAG_MSG_CNTRL_START_SHFT 62 +#define SH_DIAG_MSG_CNTRL_START_MASK 0x4000000000000000 + +/* SH_DIAG_MSG_CNTRL_BUSY */ +/* Description: Busy */ +#define SH_DIAG_MSG_CNTRL_BUSY_SHFT 63 +#define SH_DIAG_MSG_CNTRL_BUSY_MASK 0x8000000000000000 + +/* 
==================================================================== */
+/* Register "SH_DIAG_MSG_DATA0L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA0L 0x0000000110062080
+#define SH_DIAG_MSG_DATA0L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA0L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA0L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA0L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA0L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA0U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA0U 0x0000000110062100
+#define SH_DIAG_MSG_DATA0U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA0U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA0U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA0U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA0U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA1L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA1L 0x0000000110062180
+#define SH_DIAG_MSG_DATA1L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA1L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA1L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA1L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA1L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA1U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA1U 0x0000000110062200
+#define SH_DIAG_MSG_DATA1U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA1U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA1U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA1U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA1U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA2L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA2L 0x0000000110062280
+#define SH_DIAG_MSG_DATA2L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA2L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA2L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA2L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA2L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA2U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA2U 0x0000000110062300
+#define SH_DIAG_MSG_DATA2U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA2U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA2U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA2U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA2U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA3L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA3L 0x0000000110062380
+#define SH_DIAG_MSG_DATA3L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA3L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA3L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA3L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA3L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA3U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA3U 0x0000000110062400
+#define SH_DIAG_MSG_DATA3U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA3U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA3U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA3U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA3U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA4L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA4L 0x0000000110062480
+#define SH_DIAG_MSG_DATA4L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA4L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA4L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA4L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA4L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA4U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA4U 0x0000000110062500
+#define SH_DIAG_MSG_DATA4U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA4U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA4U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA4U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA4U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA5L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA5L 0x0000000110062580
+#define SH_DIAG_MSG_DATA5L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA5L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA5L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA5L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA5L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA5U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA5U 0x0000000110062600
+#define SH_DIAG_MSG_DATA5U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA5U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA5U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA5U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA5U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA6L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA6L 0x0000000110062680
+#define SH_DIAG_MSG_DATA6L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA6L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA6L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA6L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA6L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA6U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA6U 0x0000000110062700
+#define SH_DIAG_MSG_DATA6U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA6U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA6U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA6U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA6U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA7L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA7L 0x0000000110062780
+#define SH_DIAG_MSG_DATA7L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA7L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA7L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA7L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA7L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA7U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA7U 0x0000000110062800
+#define SH_DIAG_MSG_DATA7U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA7U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA7U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA7U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA7U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA8L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA8L 0x0000000110062880
+#define SH_DIAG_MSG_DATA8L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA8L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA8L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA8L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA8L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA8U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA8U 0x0000000110062900
+#define SH_DIAG_MSG_DATA8U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA8U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA8U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA8U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA8U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_HDR0" */
+/* Diagnostic Data, lower 64 bits of header */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_HDR0 0x0000000110062980
+#define SH_DIAG_MSG_HDR0_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_HDR0_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_HDR0_HEADER0 */
+/* Description: Lower 64 bits of Diagnostic Message Header */
+#define SH_DIAG_MSG_HDR0_HEADER0_SHFT 0
+#define SH_DIAG_MSG_HDR0_HEADER0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_HDR1" */
+/* Diagnostic Data, upper 64 bits of header */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_HDR1 0x0000000110062a00
+#define SH_DIAG_MSG_HDR1_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_HDR1_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_HDR1_HEADER1 */
+/* Description: Upper 64 bits of Diagnostic Message Header */
+#define SH_DIAG_MSG_HDR1_HEADER1_SHFT 0
+#define SH_DIAG_MSG_HDR1_HEADER1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DEBUG_SELECT" */
+/* SHub Debug Port Select */
+/* ==================================================================== */
+
+#define SH_DEBUG_SELECT 0x0000000110063000
+#define SH_DEBUG_SELECT_MASK 0x8fffffffffffffff
+#define SH_DEBUG_SELECT_INIT 0x0000e38e38e38e38
+
+/* SH_DEBUG_SELECT_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble0_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 0
+#define SH_DEBUG_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000007
+
+/* SH_DEBUG_SELECT_NIBBLE0_CHIPLET_SEL */
+/* Description: Nibble0_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 3
+#define SH_DEBUG_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000038
+
+/* SH_DEBUG_SELECT_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble1_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 6
+#define SH_DEBUG_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x00000000000001c0
+
+/* SH_DEBUG_SELECT_NIBBLE1_CHIPLET_SEL */
+/* Description: Nibble1_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 9
+#define SH_DEBUG_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000e00
+
+/* SH_DEBUG_SELECT_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble2_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 12
+#define SH_DEBUG_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_DEBUG_SELECT_NIBBLE2_CHIPLET_SEL */
+/* Description: Nibble2_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 15
+#define SH_DEBUG_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000038000
+
+/* SH_DEBUG_SELECT_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble3_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 18
+#define SH_DEBUG_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x00000000001c0000
+
+/* SH_DEBUG_SELECT_NIBBLE3_CHIPLET_SEL */
+/* Description: Nibble3_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 21
+#define SH_DEBUG_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x0000000000e00000
+
+/* SH_DEBUG_SELECT_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble4_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 24
+#define SH_DEBUG_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000000007000000
+
+/* SH_DEBUG_SELECT_NIBBLE4_CHIPLET_SEL */
+/* Description: Nibble4_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 27
+#define 
SH_DEBUG_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000038000000 + +/* SH_DEBUG_SELECT_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble5_nibble_select */ +#define SH_DEBUG_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 30 +#define SH_DEBUG_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x00000001c0000000 + +/* SH_DEBUG_SELECT_NIBBLE5_CHIPLET_SEL */ +/* Description: Nibble5_chiplet_select */ +#define SH_DEBUG_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 33 +#define SH_DEBUG_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x0000000e00000000 + +/* SH_DEBUG_SELECT_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble6_nibble_select */ +#define SH_DEBUG_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 36 +#define SH_DEBUG_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_DEBUG_SELECT_NIBBLE6_CHIPLET_SEL */ +/* Description: Nibble6_chiplet_select */ +#define SH_DEBUG_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 39 +#define SH_DEBUG_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x0000038000000000 + +/* SH_DEBUG_SELECT_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble7_nibble_select */ +#define SH_DEBUG_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 42 +#define SH_DEBUG_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x00001c0000000000 + +/* SH_DEBUG_SELECT_NIBBLE7_CHIPLET_SEL */ +/* Description: Nibble7_chiplet_select */ +#define SH_DEBUG_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 45 +#define SH_DEBUG_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0000e00000000000 + +/* SH_DEBUG_SELECT_DEBUG_II_SEL */ +/* Description: Select bits to II port */ +#define SH_DEBUG_SELECT_DEBUG_II_SEL_SHFT 48 +#define SH_DEBUG_SELECT_DEBUG_II_SEL_MASK 0x0007000000000000 + +/* SH_DEBUG_SELECT_SEL_II */ +/* Description: Select II to debug port */ +#define SH_DEBUG_SELECT_SEL_II_SHFT 51 +#define SH_DEBUG_SELECT_SEL_II_MASK 0x0ff8000000000000 + +/* SH_DEBUG_SELECT_TRIGGER_ENABLE */ +/* Description: Enable trigger on bit 32 of Analyzer data */ +#define SH_DEBUG_SELECT_TRIGGER_ENABLE_SHFT 63 +#define SH_DEBUG_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_TRIGGER_COMPARE_MASK" */ +/* SHub Trigger Compare Mask */ +/* ==================================================================== */ + +#define SH_TRIGGER_COMPARE_MASK 0x0000000110063080 +#define SH_TRIGGER_COMPARE_MASK_MASK 0x00000000ffffffff +#define SH_TRIGGER_COMPARE_MASK_INIT 0x0000000000000000 + +/* SH_TRIGGER_COMPARE_MASK_MASK */ +/* Description: SHub Trigger Compare Mask */ +#define SH_TRIGGER_COMPARE_MASK_MASK_SHFT 0 +#define SH_TRIGGER_COMPARE_MASK_MASK_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_TRIGGER_COMPARE_PATTERN" */ +/* SHub Trigger Compare Pattern */ +/* ==================================================================== */ + +#define SH_TRIGGER_COMPARE_PATTERN 0x0000000110063100 +#define SH_TRIGGER_COMPARE_PATTERN_MASK 0x00000000ffffffff +#define SH_TRIGGER_COMPARE_PATTERN_INIT 0x0000000000000000 + +/* SH_TRIGGER_COMPARE_PATTERN_DATA */ +/* Description: SHub Trigger Compare Pattern */ +#define SH_TRIGGER_COMPARE_PATTERN_DATA_SHFT 0 +#define SH_TRIGGER_COMPARE_PATTERN_DATA_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_TRIGGER_SEL" */ +/* Trigger select for SHUB debug port */ +/* ==================================================================== */ + +#define SH_TRIGGER_SEL 0x0000000110063180 +#define SH_TRIGGER_SEL_MASK 0x7777777777777777 +#define SH_TRIGGER_SEL_INIT 0x0000000000000000 + +/* SH_TRIGGER_SEL_NIBBLE0_INPUT_SEL */ +/* Description: Nibble 0 input select */ +#define 
SH_TRIGGER_SEL_NIBBLE0_INPUT_SEL_SHFT 0 +#define SH_TRIGGER_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007 + +/* SH_TRIGGER_SEL_NIBBLE0_NIBBLE_SEL */ +/* Description: Nibble 0 Nibble select */ +#define SH_TRIGGER_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4 +#define SH_TRIGGER_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070 + +/* SH_TRIGGER_SEL_NIBBLE1_INPUT_SEL */ +/* Description: Nibble 1 input select */ +#define SH_TRIGGER_SEL_NIBBLE1_INPUT_SEL_SHFT 8 +#define SH_TRIGGER_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700 + +/* SH_TRIGGER_SEL_NIBBLE1_NIBBLE_SEL */ +/* Description: Nibble 1 Nibble select */ +#define SH_TRIGGER_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12 +#define SH_TRIGGER_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000 + +/* SH_TRIGGER_SEL_NIBBLE2_INPUT_SEL */ +/* Description: Nibble 2 input select */ +#define SH_TRIGGER_SEL_NIBBLE2_INPUT_SEL_SHFT 16 +#define SH_TRIGGER_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000 + +/* SH_TRIGGER_SEL_NIBBLE2_NIBBLE_SEL */ +/* Description: Nibble 2 Nibble select */ +#define SH_TRIGGER_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20 +#define SH_TRIGGER_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000 + +/* SH_TRIGGER_SEL_NIBBLE3_INPUT_SEL */ +/* Description: Nibble 3 input select */ +#define SH_TRIGGER_SEL_NIBBLE3_INPUT_SEL_SHFT 24 +#define SH_TRIGGER_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000 + +/* SH_TRIGGER_SEL_NIBBLE3_NIBBLE_SEL */ +/* Description: Nibble 3 Nibble select */ +#define SH_TRIGGER_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28 +#define SH_TRIGGER_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000 + +/* SH_TRIGGER_SEL_NIBBLE4_INPUT_SEL */ +/* Description: Nibble 4 input select */ +#define SH_TRIGGER_SEL_NIBBLE4_INPUT_SEL_SHFT 32 +#define SH_TRIGGER_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000 + +/* SH_TRIGGER_SEL_NIBBLE4_NIBBLE_SEL */ +/* Description: Nibble 4 Nibble select */ +#define SH_TRIGGER_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36 +#define SH_TRIGGER_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000 + +/* SH_TRIGGER_SEL_NIBBLE5_INPUT_SEL */ +/* Description: Nibble 5 input select */ +#define SH_TRIGGER_SEL_NIBBLE5_INPUT_SEL_SHFT 40 +#define SH_TRIGGER_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000 + +/* SH_TRIGGER_SEL_NIBBLE5_NIBBLE_SEL */ +/* Description: Nibble 5 Nibble select */ +#define SH_TRIGGER_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44 +#define SH_TRIGGER_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000 + +/* SH_TRIGGER_SEL_NIBBLE6_INPUT_SEL */ +/* Description: Nibble 6 input select */ +#define SH_TRIGGER_SEL_NIBBLE6_INPUT_SEL_SHFT 48 +#define SH_TRIGGER_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000 + +/* SH_TRIGGER_SEL_NIBBLE6_NIBBLE_SEL */ +/* Description: Nibble 6 Nibble select */ +#define SH_TRIGGER_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52 +#define SH_TRIGGER_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000 + +/* SH_TRIGGER_SEL_NIBBLE7_INPUT_SEL */ +/* Description: Nibble 7 input select */ +#define SH_TRIGGER_SEL_NIBBLE7_INPUT_SEL_SHFT 56 +#define SH_TRIGGER_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000 + +/* SH_TRIGGER_SEL_NIBBLE7_NIBBLE_SEL */ +/* Description: Nibble 7 Nibble select */ +#define SH_TRIGGER_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60 +#define SH_TRIGGER_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_STOP_CLK_CONTROL" */ +/* Stop Clock Control */ +/* ==================================================================== */ + +#define SH_STOP_CLK_CONTROL 0x0000000110064000 +#define SH_STOP_CLK_CONTROL_MASK 0x00000000000000ff +#define SH_STOP_CLK_CONTROL_INIT 0x00000000000000e0 + +/* SH_STOP_CLK_CONTROL_STIMULUS */ +/* 
Description: Counter stimulus */ +#define SH_STOP_CLK_CONTROL_STIMULUS_SHFT 0 +#define SH_STOP_CLK_CONTROL_STIMULUS_MASK 0x000000000000001f + +/* SH_STOP_CLK_CONTROL_EVENT */ +/* Description: Counter event select (0-greater than, 1-equal) */ +#define SH_STOP_CLK_CONTROL_EVENT_SHFT 5 +#define SH_STOP_CLK_CONTROL_EVENT_MASK 0x0000000000000020 + +/* SH_STOP_CLK_CONTROL_POLARITY */ +/* Description: Counter polarity select (0-negative edge, 1-positiv */ +/* e edge) */ +#define SH_STOP_CLK_CONTROL_POLARITY_SHFT 6 +#define SH_STOP_CLK_CONTROL_POLARITY_MASK 0x0000000000000040 + +/* SH_STOP_CLK_CONTROL_MODE */ +/* Description: Counter mode select (0-internal, 1-external) */ +#define SH_STOP_CLK_CONTROL_MODE_SHFT 7 +#define SH_STOP_CLK_CONTROL_MODE_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_STOP_CLK_DELAY_PHASE" */ +/* Stop Clock Delay Phase */ +/* ==================================================================== */ + +#define SH_STOP_CLK_DELAY_PHASE 0x0000000110064080 +#define SH_STOP_CLK_DELAY_PHASE_MASK 0x00000000000000ff +#define SH_STOP_CLK_DELAY_PHASE_INIT 0x0000000000000000 + +/* SH_STOP_CLK_DELAY_PHASE_DELAY */ +/* Description: Delay phase */ +#define SH_STOP_CLK_DELAY_PHASE_DELAY_SHFT 0 +#define SH_STOP_CLK_DELAY_PHASE_DELAY_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_TSF_ARM_MASK" */ +/* Trigger sequencing facility arm mask */ +/* ==================================================================== */ + +#define SH_TSF_ARM_MASK 0x0000000110065000 +#define SH_TSF_ARM_MASK_MASK 0xffffffffffffffff +#define SH_TSF_ARM_MASK_INIT 0x0000000000000000 + +/* SH_TSF_ARM_MASK_MASK */ +/* Description: Trigger sequencing facility arm mask */ +#define SH_TSF_ARM_MASK_MASK_SHFT 0 +#define SH_TSF_ARM_MASK_MASK_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_TSF_COUNTER_PRESETS" */ +/* Trigger sequencing facility counter presets */ +/* ==================================================================== */ + +#define SH_TSF_COUNTER_PRESETS 0x0000000110065080 +#define SH_TSF_COUNTER_PRESETS_MASK 0xffffffffffffffff +#define SH_TSF_COUNTER_PRESETS_INIT 0x0000000000000000 + +/* SH_TSF_COUNTER_PRESETS_COUNT_32 */ +/* Description: Trigger sequencing facility counter 32 */ +#define SH_TSF_COUNTER_PRESETS_COUNT_32_SHFT 0 +#define SH_TSF_COUNTER_PRESETS_COUNT_32_MASK 0x00000000ffffffff + +/* SH_TSF_COUNTER_PRESETS_COUNT_16 */ +/* Description: Trigger sequencing facility counter 16 */ +#define SH_TSF_COUNTER_PRESETS_COUNT_16_SHFT 32 +#define SH_TSF_COUNTER_PRESETS_COUNT_16_MASK 0x0000ffff00000000 + +/* SH_TSF_COUNTER_PRESETS_COUNT_8B */ +/* Description: Trigger sequencing facility counter 8b */ +#define SH_TSF_COUNTER_PRESETS_COUNT_8B_SHFT 48 +#define SH_TSF_COUNTER_PRESETS_COUNT_8B_MASK 0x00ff000000000000 + +/* SH_TSF_COUNTER_PRESETS_COUNT_8A */ +/* Description: Trigger sequencing facility counter 8a */ +#define SH_TSF_COUNTER_PRESETS_COUNT_8A_SHFT 56 +#define SH_TSF_COUNTER_PRESETS_COUNT_8A_MASK 0xff00000000000000 + +/* ==================================================================== */ +/* Register "SH_TSF_DECREMENT_CTL" */ +/* Trigger sequencing facility counter decrement control */ +/* ==================================================================== */ + +#define SH_TSF_DECREMENT_CTL 0x0000000110065100 +#define SH_TSF_DECREMENT_CTL_MASK 0x000000000000ffff +#define 
SH_TSF_DECREMENT_CTL_INIT 0x0000000000000000 + +/* SH_TSF_DECREMENT_CTL_CTL */ +/* Description: Trigger sequencing facility counter decrement contr */ +#define SH_TSF_DECREMENT_CTL_CTL_SHFT 0 +#define SH_TSF_DECREMENT_CTL_CTL_MASK 0x000000000000ffff + +/* ==================================================================== */ +/* Register "SH_TSF_DIAG_MSG_CTL" */ +/* Trigger sequencing facility diagnostic message control */ +/* ==================================================================== */ + +#define SH_TSF_DIAG_MSG_CTL 0x0000000110065180 +#define SH_TSF_DIAG_MSG_CTL_MASK 0x00000000000000ff +#define SH_TSF_DIAG_MSG_CTL_INIT 0x0000000000000000 + +/* SH_TSF_DIAG_MSG_CTL_ENABLE */ +/* Description: Trigger sequencing facility diagnostic message cont */ +#define SH_TSF_DIAG_MSG_CTL_ENABLE_SHFT 0 +#define SH_TSF_DIAG_MSG_CTL_ENABLE_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_TSF_DISARM_MASK" */ +/* Trigger sequencing facility disarm mask */ +/* ==================================================================== */ + +#define SH_TSF_DISARM_MASK 0x0000000110065200 +#define SH_TSF_DISARM_MASK_MASK 0xffffffffffffffff +#define SH_TSF_DISARM_MASK_INIT 0x0000000000000000 + +/* SH_TSF_DISARM_MASK_MASK */ +/* Description: Trigger sequencing facility disarm mask */ +#define SH_TSF_DISARM_MASK_MASK_SHFT 0 +#define SH_TSF_DISARM_MASK_MASK_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_TSF_ENABLE_CTL" */ +/* Trigger sequencing facility counter enable control */ +/* ==================================================================== */ + +#define SH_TSF_ENABLE_CTL 0x0000000110065280 +#define SH_TSF_ENABLE_CTL_MASK 0x000000000000ffff +#define SH_TSF_ENABLE_CTL_INIT 0x0000000000000000 + +/* SH_TSF_ENABLE_CTL_CTL */ +/* Description: Trigger sequencing facility counter enable control */ +#define SH_TSF_ENABLE_CTL_CTL_SHFT 0 +#define SH_TSF_ENABLE_CTL_CTL_MASK 0x000000000000ffff + +/* ==================================================================== */ +/* Register "SH_TSF_SOFTWARE_ARM" */ +/* Trigger sequencing facility software arm */ +/* ==================================================================== */ + +#define SH_TSF_SOFTWARE_ARM 0x0000000110065300 +#define SH_TSF_SOFTWARE_ARM_MASK 0x00000000000000ff +#define SH_TSF_SOFTWARE_ARM_INIT 0x0000000000000000 + +/* SH_TSF_SOFTWARE_ARM_BIT0 */ +/* Description: Trigger sequencing facility software arm bit 0 */ +#define SH_TSF_SOFTWARE_ARM_BIT0_SHFT 0 +#define SH_TSF_SOFTWARE_ARM_BIT0_MASK 0x0000000000000001 + +/* SH_TSF_SOFTWARE_ARM_BIT1 */ +/* Description: Trigger sequencing facility software arm bit 1 */ +#define SH_TSF_SOFTWARE_ARM_BIT1_SHFT 1 +#define SH_TSF_SOFTWARE_ARM_BIT1_MASK 0x0000000000000002 + +/* SH_TSF_SOFTWARE_ARM_BIT2 */ +/* Description: Trigger sequencing facility software arm bit 2 */ +#define SH_TSF_SOFTWARE_ARM_BIT2_SHFT 2 +#define SH_TSF_SOFTWARE_ARM_BIT2_MASK 0x0000000000000004 + +/* SH_TSF_SOFTWARE_ARM_BIT3 */ +/* Description: Trigger sequencing facility software arm bit 3 */ +#define SH_TSF_SOFTWARE_ARM_BIT3_SHFT 3 +#define SH_TSF_SOFTWARE_ARM_BIT3_MASK 0x0000000000000008 + +/* SH_TSF_SOFTWARE_ARM_BIT4 */ +/* Description: Trigger sequencing facility software arm bit 4 */ +#define SH_TSF_SOFTWARE_ARM_BIT4_SHFT 4 +#define SH_TSF_SOFTWARE_ARM_BIT4_MASK 0x0000000000000010 + +/* SH_TSF_SOFTWARE_ARM_BIT5 */ +/* Description: Trigger sequencing facility software arm bit 5 */ +#define 
SH_TSF_SOFTWARE_ARM_BIT5_SHFT 5 +#define SH_TSF_SOFTWARE_ARM_BIT5_MASK 0x0000000000000020 + +/* SH_TSF_SOFTWARE_ARM_BIT6 */ +/* Description: Trigger sequencing facility software arm bit 6 */ +#define SH_TSF_SOFTWARE_ARM_BIT6_SHFT 6 +#define SH_TSF_SOFTWARE_ARM_BIT6_MASK 0x0000000000000040 + +/* SH_TSF_SOFTWARE_ARM_BIT7 */ +/* Description: Trigger sequencing facility software arm bit 7 */ +#define SH_TSF_SOFTWARE_ARM_BIT7_SHFT 7 +#define SH_TSF_SOFTWARE_ARM_BIT7_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_TSF_SOFTWARE_DISARM" */ +/* Trigger sequencing facility software disarm */ +/* ==================================================================== */ + +#define SH_TSF_SOFTWARE_DISARM 0x0000000110065380 +#define SH_TSF_SOFTWARE_DISARM_MASK 0x00000000000000ff +#define SH_TSF_SOFTWARE_DISARM_INIT 0x0000000000000000 + +/* SH_TSF_SOFTWARE_DISARM_BIT0 */ +/* Description: Trigger sequencing facility software disarm bit 0 */ +#define SH_TSF_SOFTWARE_DISARM_BIT0_SHFT 0 +#define SH_TSF_SOFTWARE_DISARM_BIT0_MASK 0x0000000000000001 + +/* SH_TSF_SOFTWARE_DISARM_BIT1 */ +/* Description: Trigger sequencing facility software disarm bit 1 */ +#define SH_TSF_SOFTWARE_DISARM_BIT1_SHFT 1 +#define SH_TSF_SOFTWARE_DISARM_BIT1_MASK 0x0000000000000002 + +/* SH_TSF_SOFTWARE_DISARM_BIT2 */ +/* Description: Trigger sequencing facility software disarm bit 2 */ +#define SH_TSF_SOFTWARE_DISARM_BIT2_SHFT 2 +#define SH_TSF_SOFTWARE_DISARM_BIT2_MASK 0x0000000000000004 + +/* SH_TSF_SOFTWARE_DISARM_BIT3 */ +/* Description: Trigger sequencing facility software disarm bit 3 */ +#define SH_TSF_SOFTWARE_DISARM_BIT3_SHFT 3 +#define SH_TSF_SOFTWARE_DISARM_BIT3_MASK 0x0000000000000008 + +/* SH_TSF_SOFTWARE_DISARM_BIT4 */ +/* Description: Trigger sequencing facility software disarm bit 4 */ +#define SH_TSF_SOFTWARE_DISARM_BIT4_SHFT 4 +#define SH_TSF_SOFTWARE_DISARM_BIT4_MASK 0x0000000000000010 + +/* SH_TSF_SOFTWARE_DISARM_BIT5 */ +/* Description: Trigger sequencing facility software disarm bit 5 */ +#define SH_TSF_SOFTWARE_DISARM_BIT5_SHFT 5 +#define SH_TSF_SOFTWARE_DISARM_BIT5_MASK 0x0000000000000020 + +/* SH_TSF_SOFTWARE_DISARM_BIT6 */ +/* Description: Trigger sequencing facility software disarm bit 6 */ +#define SH_TSF_SOFTWARE_DISARM_BIT6_SHFT 6 +#define SH_TSF_SOFTWARE_DISARM_BIT6_MASK 0x0000000000000040 + +/* SH_TSF_SOFTWARE_DISARM_BIT7 */ +/* Description: Trigger sequencing facility software disarm bit 7 */ +#define SH_TSF_SOFTWARE_DISARM_BIT7_SHFT 7 +#define SH_TSF_SOFTWARE_DISARM_BIT7_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_TSF_SOFTWARE_TRIGGERED" */ +/* Trigger sequencing facility software triggered */ +/* ==================================================================== */ + +#define SH_TSF_SOFTWARE_TRIGGERED 0x0000000110065400 +#define SH_TSF_SOFTWARE_TRIGGERED_MASK 0x00000000000000ff +#define SH_TSF_SOFTWARE_TRIGGERED_INIT 0x0000000000000000 + +/* SH_TSF_SOFTWARE_TRIGGERED_BIT0 */ +/* Description: Trigger sequencing facility software triggered bit */ +#define SH_TSF_SOFTWARE_TRIGGERED_BIT0_SHFT 0 +#define SH_TSF_SOFTWARE_TRIGGERED_BIT0_MASK 0x0000000000000001 + +/* SH_TSF_SOFTWARE_TRIGGERED_BIT1 */ +/* Description: Trigger sequencing facility software triggered bit */ +#define SH_TSF_SOFTWARE_TRIGGERED_BIT1_SHFT 1 +#define SH_TSF_SOFTWARE_TRIGGERED_BIT1_MASK 0x0000000000000002 + +/* SH_TSF_SOFTWARE_TRIGGERED_BIT2 */ +/* Description: Trigger sequencing facility 
software triggered bit */ +#define SH_TSF_SOFTWARE_TRIGGERED_BIT2_SHFT 2 +#define SH_TSF_SOFTWARE_TRIGGERED_BIT2_MASK 0x0000000000000004 + +/* SH_TSF_SOFTWARE_TRIGGERED_BIT3 */ +/* Description: Trigger sequencing facility software triggered bit */ +#define SH_TSF_SOFTWARE_TRIGGERED_BIT3_SHFT 3 +#define SH_TSF_SOFTWARE_TRIGGERED_BIT3_MASK 0x0000000000000008 + +/* SH_TSF_SOFTWARE_TRIGGERED_BIT4 */ +/* Description: Trigger sequencing facility software triggered bit */ +#define SH_TSF_SOFTWARE_TRIGGERED_BIT4_SHFT 4 +#define SH_TSF_SOFTWARE_TRIGGERED_BIT4_MASK 0x0000000000000010 + +/* SH_TSF_SOFTWARE_TRIGGERED_BIT5 */ +/* Description: Trigger sequencing facility software triggered bit */ +#define SH_TSF_SOFTWARE_TRIGGERED_BIT5_SHFT 5 +#define SH_TSF_SOFTWARE_TRIGGERED_BIT5_MASK 0x0000000000000020 + +/* SH_TSF_SOFTWARE_TRIGGERED_BIT6 */ +/* Description: Trigger sequencing facility software triggered bit */ +#define SH_TSF_SOFTWARE_TRIGGERED_BIT6_SHFT 6 +#define SH_TSF_SOFTWARE_TRIGGERED_BIT6_MASK 0x0000000000000040 + +/* SH_TSF_SOFTWARE_TRIGGERED_BIT7 */ +/* Description: Trigger sequencing facility software triggered bit */ +#define SH_TSF_SOFTWARE_TRIGGERED_BIT7_SHFT 7 +#define SH_TSF_SOFTWARE_TRIGGERED_BIT7_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_TSF_TRIGGER_MASK" */ +/* Trigger sequencing facility trigger mask */ +/* ==================================================================== */ + +#define SH_TSF_TRIGGER_MASK 0x0000000110065480 +#define SH_TSF_TRIGGER_MASK_MASK 0xffffffffffffffff +#define SH_TSF_TRIGGER_MASK_INIT 0x0000000000000000 + +/* SH_TSF_TRIGGER_MASK_MASK */ +/* Description: Trigger sequencing facility trigger mask */ +#define SH_TSF_TRIGGER_MASK_MASK_SHFT 0 +#define SH_TSF_TRIGGER_MASK_MASK_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_VEC_DATA" */ +/* Vector Write Request Message Data */ +/* ==================================================================== */ + +#define SH_VEC_DATA 0x0000000110066000 +#define SH_VEC_DATA_MASK 0xffffffffffffffff +#define SH_VEC_DATA_INIT 0x0000000000000000 + +/* SH_VEC_DATA_DATA */ +/* Description: Data */ +#define SH_VEC_DATA_DATA_SHFT 0 +#define SH_VEC_DATA_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_VEC_PARMS" */ +/* Vector Message Parameters Register */ +/* ==================================================================== */ + +#define SH_VEC_PARMS 0x0000000110066080 +#define SH_VEC_PARMS_MASK 0xc0003ffffffffffb +#define SH_VEC_PARMS_INIT 0x0000000000000000 + +/* SH_VEC_PARMS_TYPE */ +/* Description: Vector Request Message Type */ +#define SH_VEC_PARMS_TYPE_SHFT 0 +#define SH_VEC_PARMS_TYPE_MASK 0x0000000000000001 + +/* SH_VEC_PARMS_NI_PORT */ +/* Description: Network Interface Port Select */ +#define SH_VEC_PARMS_NI_PORT_SHFT 1 +#define SH_VEC_PARMS_NI_PORT_MASK 0x0000000000000002 + +/* SH_VEC_PARMS_ADDRESS */ +/* Description: Address[37:6] */ +#define SH_VEC_PARMS_ADDRESS_SHFT 3 +#define SH_VEC_PARMS_ADDRESS_MASK 0x00000007fffffff8 + +/* SH_VEC_PARMS_PIO_ID */ +/* Description: PIO ID */ +#define SH_VEC_PARMS_PIO_ID_SHFT 35 +#define SH_VEC_PARMS_PIO_ID_MASK 0x00003ff800000000 + +/* SH_VEC_PARMS_START */ +/* Description: Start */ +#define SH_VEC_PARMS_START_SHFT 62 +#define SH_VEC_PARMS_START_MASK 0x4000000000000000 + +/* SH_VEC_PARMS_BUSY */ +/* Description: Busy */ +#define SH_VEC_PARMS_BUSY_SHFT 63 
+#define SH_VEC_PARMS_BUSY_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_VEC_ROUTE" */ +/* Vector Request Message Route */ +/* ==================================================================== */ + +#define SH_VEC_ROUTE 0x0000000110066100 +#define SH_VEC_ROUTE_MASK 0xffffffffffffffff +#define SH_VEC_ROUTE_INIT 0x0000000000000000 + +/* SH_VEC_ROUTE_ROUTE */ +/* Description: Route */ +#define SH_VEC_ROUTE_ROUTE_SHFT 0 +#define SH_VEC_ROUTE_ROUTE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_CPU_PERM" */ +/* CPU MMR Access Permission Bits */ +/* ==================================================================== */ + +#define SH_CPU_PERM 0x0000000110060000 +#define SH_CPU_PERM_MASK 0xffffffffffffffff +#define SH_CPU_PERM_INIT 0xffffffffffffffff + +/* SH_CPU_PERM_ACCESS_BITS */ +/* Description: Access Bits */ +#define SH_CPU_PERM_ACCESS_BITS_SHFT 0 +#define SH_CPU_PERM_ACCESS_BITS_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_CPU_PERM_OVR" */ +/* CPU MMR Access Permission Override */ +/* ==================================================================== */ + +#define SH_CPU_PERM_OVR 0x0000000110060080 +#define SH_CPU_PERM_OVR_MASK 0xffffffffffffffff +#define SH_CPU_PERM_OVR_INIT 0x0000000000000000 + +/* SH_CPU_PERM_OVR_OVERRIDE */ +/* Description: Override */ +#define SH_CPU_PERM_OVR_OVERRIDE_SHFT 0 +#define SH_CPU_PERM_OVR_OVERRIDE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_EXT_IO_PERM" */ +/* External IO MMR Access Permission Bits */ +/* ==================================================================== */ + +#define SH_EXT_IO_PERM 0x0000000110060100 +#define SH_EXT_IO_PERM_MASK 0xffffffffffffffff +#define SH_EXT_IO_PERM_INIT 0x0000000000000000 + +/* SH_EXT_IO_PERM_ACCESS_BITS */ +/* Description: Access Bits */ +#define SH_EXT_IO_PERM_ACCESS_BITS_SHFT 0 +#define SH_EXT_IO_PERM_ACCESS_BITS_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_EXT_IOI_ACCESS" */ +/* External IO Interrupt Access Permission Bits */ +/* ==================================================================== */ + +#define SH_EXT_IOI_ACCESS 0x0000000110060180 +#define SH_EXT_IOI_ACCESS_MASK 0xffffffffffffffff +#define SH_EXT_IOI_ACCESS_INIT 0xffffffffffffffff + +/* SH_EXT_IOI_ACCESS_ACCESS_BITS */ +/* Description: Access Bits */ +#define SH_EXT_IOI_ACCESS_ACCESS_BITS_SHFT 0 +#define SH_EXT_IOI_ACCESS_ACCESS_BITS_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_GC_FIL_CTRL" */ +/* SHub Global Clock Filter Control */ +/* ==================================================================== */ + +#define SH_GC_FIL_CTRL 0x0000000110060200 +#define SH_GC_FIL_CTRL_MASK 0x03ff3ff3ff1fff1f +#define SH_GC_FIL_CTRL_INIT 0x0000000000000000 + +/* SH_GC_FIL_CTRL_OFFSET */ +/* Description: Offset */ +#define SH_GC_FIL_CTRL_OFFSET_SHFT 0 +#define SH_GC_FIL_CTRL_OFFSET_MASK 0x000000000000001f + +/* SH_GC_FIL_CTRL_MASK_COUNTER */ +/* Description: Mask Counter */ +#define SH_GC_FIL_CTRL_MASK_COUNTER_SHFT 8 +#define SH_GC_FIL_CTRL_MASK_COUNTER_MASK 0x00000000000fff00 + +/* SH_GC_FIL_CTRL_MASK_ENABLE */ +/* Description: Mask Enable */ +#define SH_GC_FIL_CTRL_MASK_ENABLE_SHFT 20 +#define 
SH_GC_FIL_CTRL_MASK_ENABLE_MASK 0x0000000000100000 + +/* SH_GC_FIL_CTRL_DROPOUT_COUNTER */ +/* Description: Dropout Counter */ +#define SH_GC_FIL_CTRL_DROPOUT_COUNTER_SHFT 24 +#define SH_GC_FIL_CTRL_DROPOUT_COUNTER_MASK 0x00000003ff000000 + +/* SH_GC_FIL_CTRL_DROPOUT_THRESH */ +/* Description: Dropout threshold */ +#define SH_GC_FIL_CTRL_DROPOUT_THRESH_SHFT 36 +#define SH_GC_FIL_CTRL_DROPOUT_THRESH_MASK 0x00003ff000000000 + +/* SH_GC_FIL_CTRL_ERROR_COUNTER */ +/* Description: Error counter */ +#define SH_GC_FIL_CTRL_ERROR_COUNTER_SHFT 48 +#define SH_GC_FIL_CTRL_ERROR_COUNTER_MASK 0x03ff000000000000 + +/* ==================================================================== */ +/* Register "SH_GC_SRC_CTRL" */ +/* SHub Global Clock Control */ +/* ==================================================================== */ + +#define SH_GC_SRC_CTRL 0x0000000110060280 +#define SH_GC_SRC_CTRL_MASK 0x0000000313ff3ff1 +#define SH_GC_SRC_CTRL_INIT 0x0000000100000000 + +/* SH_GC_SRC_CTRL_ENABLE_COUNTER */ +/* Description: Enable Counter */ +#define SH_GC_SRC_CTRL_ENABLE_COUNTER_SHFT 0 +#define SH_GC_SRC_CTRL_ENABLE_COUNTER_MASK 0x0000000000000001 + +/* SH_GC_SRC_CTRL_MAX_COUNT */ +/* Description: Max Count */ +#define SH_GC_SRC_CTRL_MAX_COUNT_SHFT 4 +#define SH_GC_SRC_CTRL_MAX_COUNT_MASK 0x0000000000003ff0 + +/* SH_GC_SRC_CTRL_COUNTER */ +/* Description: Counter */ +#define SH_GC_SRC_CTRL_COUNTER_SHFT 16 +#define SH_GC_SRC_CTRL_COUNTER_MASK 0x0000000003ff0000 + +/* SH_GC_SRC_CTRL_TOGGLE_BIT */ +/* Description: Toggle bit */ +#define SH_GC_SRC_CTRL_TOGGLE_BIT_SHFT 28 +#define SH_GC_SRC_CTRL_TOGGLE_BIT_MASK 0x0000000010000000 + +/* SH_GC_SRC_CTRL_SOURCE_SEL */ +/* Description: Source select (0=ext., 1=Int., 2=SHUB) */ +#define SH_GC_SRC_CTRL_SOURCE_SEL_SHFT 32 +#define SH_GC_SRC_CTRL_SOURCE_SEL_MASK 0x0000000300000000 + +/* ==================================================================== */ +/* Register "SH_HARD_RESET" */ +/* SHub Hard Reset */ +/* ==================================================================== */ + +#define SH_HARD_RESET 0x0000000110060300 +#define SH_HARD_RESET_MASK 0x0000000000000001 +#define SH_HARD_RESET_INIT 0x0000000000000000 + +/* SH_HARD_RESET_HARD_RESET */ +/* Description: Hard Reset */ +#define SH_HARD_RESET_HARD_RESET_SHFT 0 +#define SH_HARD_RESET_HARD_RESET_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_IO_PERM" */ +/* II MMR Access Permission Bits */ +/* ==================================================================== */ + +#define SH_IO_PERM 0x0000000110060380 +#define SH_IO_PERM_MASK 0xffffffffffffffff +#define SH_IO_PERM_INIT 0x0000000000000000 + +/* SH_IO_PERM_ACCESS_BITS */ +/* Description: Access Bits */ +#define SH_IO_PERM_ACCESS_BITS_SHFT 0 +#define SH_IO_PERM_ACCESS_BITS_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_IOI_ACCESS" */ +/* II Interrupt Access Permission Bits */ +/* ==================================================================== */ + +#define SH_IOI_ACCESS 0x0000000110060400 +#define SH_IOI_ACCESS_MASK 0xffffffffffffffff +#define SH_IOI_ACCESS_INIT 0xffffffffffffffff + +/* SH_IOI_ACCESS_ACCESS_BITS */ +/* Description: Access Bits */ +#define SH_IOI_ACCESS_ACCESS_BITS_SHFT 0 +#define SH_IOI_ACCESS_ACCESS_BITS_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_IPI_ACCESS" */ +/* CPU interrupt Access Permission Bits */ +/* 
==================================================================== */ + +#define SH_IPI_ACCESS 0x0000000110060480 +#define SH_IPI_ACCESS_MASK 0xffffffffffffffff +#define SH_IPI_ACCESS_INIT 0xffffffffffffffff + +/* SH_IPI_ACCESS_ACCESS_BITS */ +/* Description: Access Bits */ +#define SH_IPI_ACCESS_ACCESS_BITS_SHFT 0 +#define SH_IPI_ACCESS_ACCESS_BITS_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_JTAG_CONFIG" */ +/* SHub JTAG configuration */ +/* ==================================================================== */ + +#define SH_JTAG_CONFIG 0x0000000110060500 +#define SH_JTAG_CONFIG_MASK 0x00ffffffffffffff +#define SH_JTAG_CONFIG_INIT 0x0000000000000000 + +/* SH_JTAG_CONFIG_MD_CLK_SEL */ +/* Description: Select divide freq of DRAMCLK */ +#define SH_JTAG_CONFIG_MD_CLK_SEL_SHFT 0 +#define SH_JTAG_CONFIG_MD_CLK_SEL_MASK 0x0000000000000003 + +/* SH_JTAG_CONFIG_NI_CLK_SEL */ +/* Description: Selects clock source for NICLK domain */ +#define SH_JTAG_CONFIG_NI_CLK_SEL_SHFT 2 +#define SH_JTAG_CONFIG_NI_CLK_SEL_MASK 0x0000000000000004 + +/* SH_JTAG_CONFIG_II_CLK_SEL */ +/* Description: Selects clock source for IOCLK domain */ +#define SH_JTAG_CONFIG_II_CLK_SEL_SHFT 3 +#define SH_JTAG_CONFIG_II_CLK_SEL_MASK 0x0000000000000018 + +/* SH_JTAG_CONFIG_WRT90_TARGET */ +/* Description: wrt90_target */ +#define SH_JTAG_CONFIG_WRT90_TARGET_SHFT 5 +#define SH_JTAG_CONFIG_WRT90_TARGET_MASK 0x000000000007ffe0 + +/* SH_JTAG_CONFIG_WRT90_OVERRIDER */ +/* Description: wrt90_overrideR */ +#define SH_JTAG_CONFIG_WRT90_OVERRIDER_SHFT 19 +#define SH_JTAG_CONFIG_WRT90_OVERRIDER_MASK 0x0000000000080000 + +/* SH_JTAG_CONFIG_WRT90_OVERRIDE */ +/* Description: wrt90_override */ +#define SH_JTAG_CONFIG_WRT90_OVERRIDE_SHFT 20 +#define SH_JTAG_CONFIG_WRT90_OVERRIDE_MASK 0x0000000000100000 + +/* SH_JTAG_CONFIG_JTAG_MCI_RESET_DELAY */ +/* Description: jtag_mci_reset_delay */ +#define SH_JTAG_CONFIG_JTAG_MCI_RESET_DELAY_SHFT 21 +#define SH_JTAG_CONFIG_JTAG_MCI_RESET_DELAY_MASK 0x0000000001e00000 + +/* SH_JTAG_CONFIG_JTAG_MCI_TARGET */ +/* Description: jtag_mci_target */ +#define SH_JTAG_CONFIG_JTAG_MCI_TARGET_SHFT 25 +#define SH_JTAG_CONFIG_JTAG_MCI_TARGET_MASK 0x0000007ffe000000 + +/* SH_JTAG_CONFIG_JTAG_MCI_OVERRIDE */ +/* Description: jtag_mci_override */ +#define SH_JTAG_CONFIG_JTAG_MCI_OVERRIDE_SHFT 39 +#define SH_JTAG_CONFIG_JTAG_MCI_OVERRIDE_MASK 0x0000008000000000 + +/* SH_JTAG_CONFIG_FSB_CONFIG_IOQ_DEPTH */ +/* Description: 0=depth 8, 1=depth1 */ +#define SH_JTAG_CONFIG_FSB_CONFIG_IOQ_DEPTH_SHFT 40 +#define SH_JTAG_CONFIG_FSB_CONFIG_IOQ_DEPTH_MASK 0x0000010000000000 + +/* SH_JTAG_CONFIG_FSB_CONFIG_SAMPLE_BINIT */ +/* Description: Enable sampling of BINIT */ +#define SH_JTAG_CONFIG_FSB_CONFIG_SAMPLE_BINIT_SHFT 41 +#define SH_JTAG_CONFIG_FSB_CONFIG_SAMPLE_BINIT_MASK 0x0000020000000000 + +/* SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BUS_PARKING */ +#define SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BUS_PARKING_SHFT 42 +#define SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BUS_PARKING_MASK 0x0000040000000000 + +/* SH_JTAG_CONFIG_FSB_CONFIG_CLOCK_RATIO */ +#define SH_JTAG_CONFIG_FSB_CONFIG_CLOCK_RATIO_SHFT 43 +#define SH_JTAG_CONFIG_FSB_CONFIG_CLOCK_RATIO_MASK 0x0000f80000000000 + +/* SH_JTAG_CONFIG_FSB_CONFIG_OUTPUT_TRISTATE */ +/* Description: Output tristate control */ +#define SH_JTAG_CONFIG_FSB_CONFIG_OUTPUT_TRISTATE_SHFT 48 +#define SH_JTAG_CONFIG_FSB_CONFIG_OUTPUT_TRISTATE_MASK 0x000f000000000000 + +/* SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BIST */ +/* Description: Enables BIST */ 
+#define SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BIST_SHFT 52 +#define SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BIST_MASK 0x0010000000000000 + +/* SH_JTAG_CONFIG_FSB_CONFIG_AUX */ +/* Description: Enables BIST */ +#define SH_JTAG_CONFIG_FSB_CONFIG_AUX_SHFT 53 +#define SH_JTAG_CONFIG_FSB_CONFIG_AUX_MASK 0x0060000000000000 + +/* SH_JTAG_CONFIG_GTL_CONFIG_RE */ +/* Description: Reference Enable selection for GTL io */ +#define SH_JTAG_CONFIG_GTL_CONFIG_RE_SHFT 55 +#define SH_JTAG_CONFIG_GTL_CONFIG_RE_MASK 0x0080000000000000 + +/* ==================================================================== */ +/* Register "SH_SHUB_ID" */ +/* SHub ID Number */ +/* ==================================================================== */ + +#define SH_SHUB_ID 0x0000000110060580 +#define SH_SHUB_ID_MASK 0x011f37ffffffffff +#define SH_SHUB_ID_INIT 0x0010300000000000 + +/* SH_SHUB_ID_FORCE1 */ +/* Description: Must be 1 */ +#define SH_SHUB_ID_FORCE1_SHFT 0 +#define SH_SHUB_ID_FORCE1_MASK 0x0000000000000001 + +/* SH_SHUB_ID_MANUFACTURER */ +/* Description: Manufacturer */ +#define SH_SHUB_ID_MANUFACTURER_SHFT 1 +#define SH_SHUB_ID_MANUFACTURER_MASK 0x0000000000000ffe + +/* SH_SHUB_ID_PART_NUMBER */ +/* Description: Part Number */ +#define SH_SHUB_ID_PART_NUMBER_SHFT 12 +#define SH_SHUB_ID_PART_NUMBER_MASK 0x000000000ffff000 + +/* SH_SHUB_ID_REVISION */ +/* Description: Revision */ +#define SH_SHUB_ID_REVISION_SHFT 28 +#define SH_SHUB_ID_REVISION_MASK 0x00000000f0000000 + +/* SH_SHUB_ID_NODE_ID */ +/* Description: Node Identification */ +#define SH_SHUB_ID_NODE_ID_SHFT 32 +#define SH_SHUB_ID_NODE_ID_MASK 0x000007ff00000000 + +/* SH_SHUB_ID_SHARING_MODE */ +/* Description: Sharing mode (Coherency Domain Size) */ +#define SH_SHUB_ID_SHARING_MODE_SHFT 44 +#define SH_SHUB_ID_SHARING_MODE_MASK 0x0000300000000000 + +/* SH_SHUB_ID_NODES_PER_BIT */ +/* Description: Nodes per bit definition for MMR access */ +#define SH_SHUB_ID_NODES_PER_BIT_SHFT 48 +#define SH_SHUB_ID_NODES_PER_BIT_MASK 0x001f000000000000 + +/* SH_SHUB_ID_NI_PORT */ +/* Description: NI port of vector reference, 0 = NI0, 1 = NI1 */ +#define SH_SHUB_ID_NI_PORT_SHFT 56 +#define SH_SHUB_ID_NI_PORT_MASK 0x0100000000000000 + +/* ==================================================================== */ +/* Register "SH_SHUBS_PRESENT0" */ +/* Shubs 0 - 63 Present. Used for invalidate generation */ +/* ==================================================================== */ + +#define SH_SHUBS_PRESENT0 0x0000000110060600 +#define SH_SHUBS_PRESENT0_MASK 0xffffffffffffffff +#define SH_SHUBS_PRESENT0_INIT 0xffffffffffffffff + +/* SH_SHUBS_PRESENT0_SHUBS_PRESENT0 */ +/* Description: Shubs 0 - 63 Present configuration */ +#define SH_SHUBS_PRESENT0_SHUBS_PRESENT0_SHFT 0 +#define SH_SHUBS_PRESENT0_SHUBS_PRESENT0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_SHUBS_PRESENT1" */ +/* Shubs 64 - 127 Present. Used for invalidate generation */ +/* ==================================================================== */ + +#define SH_SHUBS_PRESENT1 0x0000000110060680 +#define SH_SHUBS_PRESENT1_MASK 0xffffffffffffffff +#define SH_SHUBS_PRESENT1_INIT 0xffffffffffffffff + +/* SH_SHUBS_PRESENT1_SHUBS_PRESENT1 */ +/* Description: Shubs 64 - 127 Present configuration */ +#define SH_SHUBS_PRESENT1_SHUBS_PRESENT1_SHFT 0 +#define SH_SHUBS_PRESENT1_SHUBS_PRESENT1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_SHUBS_PRESENT2" */ +/* Shubs 128 - 191 Present. 
Used for invalidate generation */ +/* ==================================================================== */ + +#define SH_SHUBS_PRESENT2 0x0000000110060700 +#define SH_SHUBS_PRESENT2_MASK 0xffffffffffffffff +#define SH_SHUBS_PRESENT2_INIT 0xffffffffffffffff + +/* SH_SHUBS_PRESENT2_SHUBS_PRESENT2 */ +/* Description: Shubs 128 - 191 Present configuration */ +#define SH_SHUBS_PRESENT2_SHUBS_PRESENT2_SHFT 0 +#define SH_SHUBS_PRESENT2_SHUBS_PRESENT2_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_SHUBS_PRESENT3" */ +/* Shubs 192 - 255 Present. Used for invalidate generation */ +/* ==================================================================== */ + +#define SH_SHUBS_PRESENT3 0x0000000110060780 +#define SH_SHUBS_PRESENT3_MASK 0xffffffffffffffff +#define SH_SHUBS_PRESENT3_INIT 0xffffffffffffffff + +/* SH_SHUBS_PRESENT3_SHUBS_PRESENT3 */ +/* Description: Shubs 192 - 255 Present configuration */ +#define SH_SHUBS_PRESENT3_SHUBS_PRESENT3_SHFT 0 +#define SH_SHUBS_PRESENT3_SHUBS_PRESENT3_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_SOFT_RESET" */ +/* SHub Soft Reset */ +/* ==================================================================== */ + +#define SH_SOFT_RESET 0x0000000110060800 +#define SH_SOFT_RESET_MASK 0x0000000000000001 +#define SH_SOFT_RESET_INIT 0x0000000000000000 + +/* SH_SOFT_RESET_SOFT_RESET */ +/* Description: Soft Reset */ +#define SH_SOFT_RESET_SOFT_RESET_SHFT 0 +#define SH_SOFT_RESET_SOFT_RESET_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_FIRST_ERROR" */ +/* Shub Global First Error Flags */ +/* ==================================================================== */ + +#define SH_FIRST_ERROR 0x0000000110071000 +#define SH_FIRST_ERROR_MASK 0x000000000007ffff +#define SH_FIRST_ERROR_INIT 0x0000000000000000 + +/* SH_FIRST_ERROR_FIRST_ERROR */ +/* Description: Chiplet with first error */ +#define SH_FIRST_ERROR_FIRST_ERROR_SHFT 0 +#define SH_FIRST_ERROR_FIRST_ERROR_MASK 0x000000000007ffff + +/* ==================================================================== */ +/* Register "SH_II_HW_TIME_STAMP" */ +/* II hardware error time stamp */ +/* ==================================================================== */ + +#define SH_II_HW_TIME_STAMP 0x0000000110071080 +#define SH_II_HW_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_II_HW_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_II_HW_TIME_STAMP_TIME */ +/* Description: II hardware error time stamp */ +#define SH_II_HW_TIME_STAMP_TIME_SHFT 0 +#define SH_II_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_II_HW_TIME_STAMP_VALID */ +/* Description: II hardware error time stamp valid */ +#define SH_II_HW_TIME_STAMP_VALID_SHFT 63 +#define SH_II_HW_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_LB_HW_TIME_STAMP" */ +/* LB hardware error time stamp */ +/* ==================================================================== */ + +#define SH_LB_HW_TIME_STAMP 0x0000000110071100 +#define SH_LB_HW_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_LB_HW_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_LB_HW_TIME_STAMP_TIME */ +/* Description: LB hardware error time stamp */ +#define SH_LB_HW_TIME_STAMP_TIME_SHFT 0 +#define SH_LB_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_LB_HW_TIME_STAMP_VALID */ +/* Description: LB hardware error time 
stamp valid */ +#define SH_LB_HW_TIME_STAMP_VALID_SHFT 63 +#define SH_LB_HW_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_COR_TIME_STAMP" */ +/* MD correctable error time stamp */ +/* ==================================================================== */ + +#define SH_MD_COR_TIME_STAMP 0x0000000110071180 +#define SH_MD_COR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_MD_COR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_MD_COR_TIME_STAMP_TIME */ +/* Description: MD correctable error time stamp */ +#define SH_MD_COR_TIME_STAMP_TIME_SHFT 0 +#define SH_MD_COR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_MD_COR_TIME_STAMP_VALID */ +/* Description: MD correctable error time stamp valid */ +#define SH_MD_COR_TIME_STAMP_VALID_SHFT 63 +#define SH_MD_COR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_HW_TIME_STAMP" */ +/* MD hardware error time stamp */ +/* ==================================================================== */ + +#define SH_MD_HW_TIME_STAMP 0x0000000110071200 +#define SH_MD_HW_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_MD_HW_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_MD_HW_TIME_STAMP_TIME */ +/* Description: MD hardware error time stamp */ +#define SH_MD_HW_TIME_STAMP_TIME_SHFT 0 +#define SH_MD_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_MD_HW_TIME_STAMP_VALID */ +/* Description: MD hardware error time stamp valid */ +#define SH_MD_HW_TIME_STAMP_VALID_SHFT 63 +#define SH_MD_HW_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_UNCOR_TIME_STAMP" */ +/* MD uncorrectable error time stamp */ +/* ==================================================================== */ + +#define SH_MD_UNCOR_TIME_STAMP 0x0000000110071280 +#define SH_MD_UNCOR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_MD_UNCOR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_MD_UNCOR_TIME_STAMP_TIME */ +/* Description: MD uncorrectable error time stamp */ +#define SH_MD_UNCOR_TIME_STAMP_TIME_SHFT 0 +#define SH_MD_UNCOR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_MD_UNCOR_TIME_STAMP_VALID */ +/* Description: MD uncorrectable error time stamp valid */ +#define SH_MD_UNCOR_TIME_STAMP_VALID_SHFT 63 +#define SH_MD_UNCOR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_COR_TIME_STAMP" */ +/* PI correctable error time stamp */ +/* ==================================================================== */ + +#define SH_PI_COR_TIME_STAMP 0x0000000110071300 +#define SH_PI_COR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PI_COR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PI_COR_TIME_STAMP_TIME */ +/* Description: PI correctable error time stamp */ +#define SH_PI_COR_TIME_STAMP_TIME_SHFT 0 +#define SH_PI_COR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PI_COR_TIME_STAMP_VALID */ +/* Description: PI correctable error time stamp valid */ +#define SH_PI_COR_TIME_STAMP_VALID_SHFT 63 +#define SH_PI_COR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_HW_TIME_STAMP" */ +/* PI hardware error time stamp */ +/* ==================================================================== */ + +#define SH_PI_HW_TIME_STAMP 0x0000000110071380 +#define SH_PI_HW_TIME_STAMP_MASK 
0xffffffffffffffff +#define SH_PI_HW_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PI_HW_TIME_STAMP_TIME */ +/* Description: PI hardware error time stamp */ +#define SH_PI_HW_TIME_STAMP_TIME_SHFT 0 +#define SH_PI_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PI_HW_TIME_STAMP_VALID */ +/* Description: PI hardware error time stamp valid */ +#define SH_PI_HW_TIME_STAMP_VALID_SHFT 63 +#define SH_PI_HW_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PI_UNCOR_TIME_STAMP" */ +/* PI uncorrectable error time stamp */ +/* ==================================================================== */ + +#define SH_PI_UNCOR_TIME_STAMP 0x0000000110071400 +#define SH_PI_UNCOR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PI_UNCOR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PI_UNCOR_TIME_STAMP_TIME */ +/* Description: PI uncorrectable error time stamp */ +#define SH_PI_UNCOR_TIME_STAMP_TIME_SHFT 0 +#define SH_PI_UNCOR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PI_UNCOR_TIME_STAMP_VALID */ +/* Description: PI uncorrectable error time stamp valid */ +#define SH_PI_UNCOR_TIME_STAMP_VALID_SHFT 63 +#define SH_PI_UNCOR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC0_ADV_TIME_STAMP" */ +/* Proc 0 advisory time stamp */ +/* ==================================================================== */ + +#define SH_PROC0_ADV_TIME_STAMP 0x0000000110071480 +#define SH_PROC0_ADV_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PROC0_ADV_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PROC0_ADV_TIME_STAMP_TIME */ +/* Description: Processor 0 advisory time stamp */ +#define SH_PROC0_ADV_TIME_STAMP_TIME_SHFT 0 +#define SH_PROC0_ADV_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PROC0_ADV_TIME_STAMP_VALID */ +/* Description: Processor 0 advisory time stamp valid */ +#define SH_PROC0_ADV_TIME_STAMP_VALID_SHFT 63 +#define SH_PROC0_ADV_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC0_ERR_TIME_STAMP" */ +/* Proc 0 error time stamp */ +/* ==================================================================== */ + +#define SH_PROC0_ERR_TIME_STAMP 0x0000000110071500 +#define SH_PROC0_ERR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PROC0_ERR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PROC0_ERR_TIME_STAMP_TIME */ +/* Description: Processor 0 error time stamp */ +#define SH_PROC0_ERR_TIME_STAMP_TIME_SHFT 0 +#define SH_PROC0_ERR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PROC0_ERR_TIME_STAMP_VALID */ +/* Description: Processor 0 error time stamp valid */ +#define SH_PROC0_ERR_TIME_STAMP_VALID_SHFT 63 +#define SH_PROC0_ERR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC1_ADV_TIME_STAMP" */ +/* Proc 1 advisory time stamp */ +/* ==================================================================== */ + +#define SH_PROC1_ADV_TIME_STAMP 0x0000000110071580 +#define SH_PROC1_ADV_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PROC1_ADV_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PROC1_ADV_TIME_STAMP_TIME */ +/* Description: Processor 1 advisory time stamp */ +#define SH_PROC1_ADV_TIME_STAMP_TIME_SHFT 0 +#define SH_PROC1_ADV_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PROC1_ADV_TIME_STAMP_VALID */ +/* Description: Processor 1 advisory time stamp valid */ +#define 
SH_PROC1_ADV_TIME_STAMP_VALID_SHFT 63 +#define SH_PROC1_ADV_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC1_ERR_TIME_STAMP" */ +/* Proc 1 error time stamp */ +/* ==================================================================== */ + +#define SH_PROC1_ERR_TIME_STAMP 0x0000000110071600 +#define SH_PROC1_ERR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PROC1_ERR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PROC1_ERR_TIME_STAMP_TIME */ +/* Description: Processor 1 error time stamp */ +#define SH_PROC1_ERR_TIME_STAMP_TIME_SHFT 0 +#define SH_PROC1_ERR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PROC1_ERR_TIME_STAMP_VALID */ +/* Description: Processor 1 error time stamp valid */ +#define SH_PROC1_ERR_TIME_STAMP_VALID_SHFT 63 +#define SH_PROC1_ERR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC2_ADV_TIME_STAMP" */ +/* Proc 2 advisory time stamp */ +/* ==================================================================== */ + +#define SH_PROC2_ADV_TIME_STAMP 0x0000000110071680 +#define SH_PROC2_ADV_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PROC2_ADV_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PROC2_ADV_TIME_STAMP_TIME */ +/* Description: Processor 2 advisory time stamp */ +#define SH_PROC2_ADV_TIME_STAMP_TIME_SHFT 0 +#define SH_PROC2_ADV_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PROC2_ADV_TIME_STAMP_VALID */ +/* Description: Processor 2 advisory time stamp valid */ +#define SH_PROC2_ADV_TIME_STAMP_VALID_SHFT 63 +#define SH_PROC2_ADV_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC2_ERR_TIME_STAMP" */ +/* Proc 2 error time stamp */ +/* ==================================================================== */ + +#define SH_PROC2_ERR_TIME_STAMP 0x0000000110071700 +#define SH_PROC2_ERR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PROC2_ERR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PROC2_ERR_TIME_STAMP_TIME */ +/* Description: Processor 2 error time stamp */ +#define SH_PROC2_ERR_TIME_STAMP_TIME_SHFT 0 +#define SH_PROC2_ERR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PROC2_ERR_TIME_STAMP_VALID */ +/* Description: Processor 2 error time stamp valid */ +#define SH_PROC2_ERR_TIME_STAMP_VALID_SHFT 63 +#define SH_PROC2_ERR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC3_ADV_TIME_STAMP" */ +/* Proc 3 advisory time stamp */ +/* ==================================================================== */ + +#define SH_PROC3_ADV_TIME_STAMP 0x0000000110071780 +#define SH_PROC3_ADV_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PROC3_ADV_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PROC3_ADV_TIME_STAMP_TIME */ +/* Description: Processor 3 advisory time stamp */ +#define SH_PROC3_ADV_TIME_STAMP_TIME_SHFT 0 +#define SH_PROC3_ADV_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PROC3_ADV_TIME_STAMP_VALID */ +/* Description: Processor 3 advisory time stamp valid */ +#define SH_PROC3_ADV_TIME_STAMP_VALID_SHFT 63 +#define SH_PROC3_ADV_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_PROC3_ERR_TIME_STAMP" */ +/* Proc 3 error time stamp */ +/* ==================================================================== */ + +#define SH_PROC3_ERR_TIME_STAMP 
0x0000000110071800 +#define SH_PROC3_ERR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_PROC3_ERR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_PROC3_ERR_TIME_STAMP_TIME */ +/* Description: Processor 3 error time stamp */ +#define SH_PROC3_ERR_TIME_STAMP_TIME_SHFT 0 +#define SH_PROC3_ERR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_PROC3_ERR_TIME_STAMP_VALID */ +/* Description: Processor 3 error time stamp valid */ +#define SH_PROC3_ERR_TIME_STAMP_VALID_SHFT 63 +#define SH_PROC3_ERR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_COR_TIME_STAMP" */ +/* XN correctable error time stamp */ +/* ==================================================================== */ + +#define SH_XN_COR_TIME_STAMP 0x0000000110071880 +#define SH_XN_COR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_XN_COR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_XN_COR_TIME_STAMP_TIME */ +/* Description: XN correctable error time stamp */ +#define SH_XN_COR_TIME_STAMP_TIME_SHFT 0 +#define SH_XN_COR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_XN_COR_TIME_STAMP_VALID */ +/* Description: XN correctable error time stamp valid */ +#define SH_XN_COR_TIME_STAMP_VALID_SHFT 63 +#define SH_XN_COR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_HW_TIME_STAMP" */ +/* XN hardware error time stamp */ +/* ==================================================================== */ + +#define SH_XN_HW_TIME_STAMP 0x0000000110071900 +#define SH_XN_HW_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_XN_HW_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_XN_HW_TIME_STAMP_TIME */ +/* Description: XN hardware error time stamp */ +#define SH_XN_HW_TIME_STAMP_TIME_SHFT 0 +#define SH_XN_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_XN_HW_TIME_STAMP_VALID */ +/* Description: XN hardware error time stamp valid */ +#define SH_XN_HW_TIME_STAMP_VALID_SHFT 63 +#define SH_XN_HW_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_XN_UNCOR_TIME_STAMP" */ +/* XN uncorrectable error time stamp */ +/* ==================================================================== */ + +#define SH_XN_UNCOR_TIME_STAMP 0x0000000110071980 +#define SH_XN_UNCOR_TIME_STAMP_MASK 0xffffffffffffffff +#define SH_XN_UNCOR_TIME_STAMP_INIT 0x0000000000000000 + +/* SH_XN_UNCOR_TIME_STAMP_TIME */ +/* Description: XN uncorrectable error time stamp */ +#define SH_XN_UNCOR_TIME_STAMP_TIME_SHFT 0 +#define SH_XN_UNCOR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff + +/* SH_XN_UNCOR_TIME_STAMP_VALID */ +/* Description: XN uncorrectable error time stamp valid */ +#define SH_XN_UNCOR_TIME_STAMP_VALID_SHFT 63 +#define SH_XN_UNCOR_TIME_STAMP_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_DEBUG_PORT" */ +/* SHub Debug Port */ +/* ==================================================================== */ + +#define SH_DEBUG_PORT 0x0000000110072000 +#define SH_DEBUG_PORT_MASK 0x00000000ffffffff +#define SH_DEBUG_PORT_INIT 0x0000000000000000 + +/* SH_DEBUG_PORT_DEBUG_NIBBLE0 */ +/* Description: Debug port nibble 0 */ +#define SH_DEBUG_PORT_DEBUG_NIBBLE0_SHFT 0 +#define SH_DEBUG_PORT_DEBUG_NIBBLE0_MASK 0x000000000000000f + +/* SH_DEBUG_PORT_DEBUG_NIBBLE1 */ +/* Description: Debug port nibble 1 */ +#define SH_DEBUG_PORT_DEBUG_NIBBLE1_SHFT 4 +#define 
SH_DEBUG_PORT_DEBUG_NIBBLE1_MASK 0x00000000000000f0 + +/* SH_DEBUG_PORT_DEBUG_NIBBLE2 */ +/* Description: Debug port nibble 2 */ +#define SH_DEBUG_PORT_DEBUG_NIBBLE2_SHFT 8 +#define SH_DEBUG_PORT_DEBUG_NIBBLE2_MASK 0x0000000000000f00 + +/* SH_DEBUG_PORT_DEBUG_NIBBLE3 */ +/* Description: Debug port nibble 3 */ +#define SH_DEBUG_PORT_DEBUG_NIBBLE3_SHFT 12 +#define SH_DEBUG_PORT_DEBUG_NIBBLE3_MASK 0x000000000000f000 + +/* SH_DEBUG_PORT_DEBUG_NIBBLE4 */ +/* Description: Debug port nibble 4 */ +#define SH_DEBUG_PORT_DEBUG_NIBBLE4_SHFT 16 +#define SH_DEBUG_PORT_DEBUG_NIBBLE4_MASK 0x00000000000f0000 + +/* SH_DEBUG_PORT_DEBUG_NIBBLE5 */ +/* Description: Debug port nibble 5 */ +#define SH_DEBUG_PORT_DEBUG_NIBBLE5_SHFT 20 +#define SH_DEBUG_PORT_DEBUG_NIBBLE5_MASK 0x0000000000f00000 + +/* SH_DEBUG_PORT_DEBUG_NIBBLE6 */ +/* Description: Debug port nibble 6 */ +#define SH_DEBUG_PORT_DEBUG_NIBBLE6_SHFT 24 +#define SH_DEBUG_PORT_DEBUG_NIBBLE6_MASK 0x000000000f000000 + +/* SH_DEBUG_PORT_DEBUG_NIBBLE7 */ +/* Description: Debug port nibble 7 */ +#define SH_DEBUG_PORT_DEBUG_NIBBLE7_SHFT 28 +#define SH_DEBUG_PORT_DEBUG_NIBBLE7_MASK 0x00000000f0000000 + +/* ==================================================================== */ +/* Register "SH_II_DEBUG_DATA" */ +/* II Debug Data */ +/* ==================================================================== */ + +#define SH_II_DEBUG_DATA 0x0000000110072080 +#define SH_II_DEBUG_DATA_MASK 0x00000000ffffffff +#define SH_II_DEBUG_DATA_INIT 0x0000000000000000 + +/* SH_II_DEBUG_DATA_II_DATA */ +/* Description: II debug data */ +#define SH_II_DEBUG_DATA_II_DATA_SHFT 0 +#define SH_II_DEBUG_DATA_II_DATA_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_II_WRAP_DEBUG_DATA" */ +/* SHub II Wrapper Debug Data */ +/* ==================================================================== */ + +#define SH_II_WRAP_DEBUG_DATA 0x0000000110072100 +#define SH_II_WRAP_DEBUG_DATA_MASK 0x00000000ffffffff +#define SH_II_WRAP_DEBUG_DATA_INIT 0x0000000000000000 + +/* SH_II_WRAP_DEBUG_DATA_II_WRAP_DATA */ +/* Description: II wrapper debug data */ +#define SH_II_WRAP_DEBUG_DATA_II_WRAP_DATA_SHFT 0 +#define SH_II_WRAP_DEBUG_DATA_II_WRAP_DATA_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_LB_DEBUG_DATA" */ +/* SHub LB Debug Data */ +/* ==================================================================== */ + +#define SH_LB_DEBUG_DATA 0x0000000110072180 +#define SH_LB_DEBUG_DATA_MASK 0x00000000ffffffff +#define SH_LB_DEBUG_DATA_INIT 0x0000000000000000 + +/* SH_LB_DEBUG_DATA_LB_DATA */ +/* Description: LB debug data */ +#define SH_LB_DEBUG_DATA_LB_DATA_SHFT 0 +#define SH_LB_DEBUG_DATA_LB_DATA_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DEBUG_DATA" */ +/* SHub MD Debug Data */ +/* ==================================================================== */ + +#define SH_MD_DEBUG_DATA 0x0000000110072200 +#define SH_MD_DEBUG_DATA_MASK 0x00000000ffffffff +#define SH_MD_DEBUG_DATA_INIT 0x0000000000000000 + +/* SH_MD_DEBUG_DATA_MD_DATA */ +/* Description: MD debug data */ +#define SH_MD_DEBUG_DATA_MD_DATA_SHFT 0 +#define SH_MD_DEBUG_DATA_MD_DATA_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PI_DEBUG_DATA" */ +/* SHub PI Debug Data */ +/* ==================================================================== */ + +#define 
SH_PI_DEBUG_DATA 0x0000000110072280 +#define SH_PI_DEBUG_DATA_MASK 0x00000000ffffffff +#define SH_PI_DEBUG_DATA_INIT 0x0000000000000000 + +/* SH_PI_DEBUG_DATA_PI_DATA */ +/* Description: PI Debug Data */ +#define SH_PI_DEBUG_DATA_PI_DATA_SHFT 0 +#define SH_PI_DEBUG_DATA_PI_DATA_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_XN_DEBUG_DATA" */ +/* SHub XN Debug Data */ +/* ==================================================================== */ + +#define SH_XN_DEBUG_DATA 0x0000000110072300 +#define SH_XN_DEBUG_DATA_MASK 0x00000000ffffffff +#define SH_XN_DEBUG_DATA_INIT 0x0000000000000000 + +/* SH_XN_DEBUG_DATA_XN_DATA */ +/* Description: XN debug data */ +#define SH_XN_DEBUG_DATA_XN_DATA_SHFT 0 +#define SH_XN_DEBUG_DATA_XN_DATA_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_TSF_ARMED_STATE" */ +/* Trigger sequencing facility arm state */ +/* ==================================================================== */ + +#define SH_TSF_ARMED_STATE 0x0000000110073000 +#define SH_TSF_ARMED_STATE_MASK 0x00000000000000ff +#define SH_TSF_ARMED_STATE_INIT 0x0000000000000000 + +/* SH_TSF_ARMED_STATE_STATE */ +/* Description: Trigger sequencing facility armed state */ +#define SH_TSF_ARMED_STATE_STATE_SHFT 0 +#define SH_TSF_ARMED_STATE_STATE_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_TSF_COUNTER_VALUE" */ +/* Trigger sequencing facility counter value */ +/* ==================================================================== */ + +#define SH_TSF_COUNTER_VALUE 0x0000000110073080 +#define SH_TSF_COUNTER_VALUE_MASK 0xffffffffffffffff +#define SH_TSF_COUNTER_VALUE_INIT 0x0000000000000000 + +/* SH_TSF_COUNTER_VALUE_COUNT_32 */ +/* Description: Trigger sequencing facility counter 32 */ +#define SH_TSF_COUNTER_VALUE_COUNT_32_SHFT 0 +#define SH_TSF_COUNTER_VALUE_COUNT_32_MASK 0x00000000ffffffff + +/* SH_TSF_COUNTER_VALUE_COUNT_16 */ +/* Description: Trigger sequencing facility counter 16 */ +#define SH_TSF_COUNTER_VALUE_COUNT_16_SHFT 32 +#define SH_TSF_COUNTER_VALUE_COUNT_16_MASK 0x0000ffff00000000 + +/* SH_TSF_COUNTER_VALUE_COUNT_8B */ +/* Description: Trigger sequencing facility counter 8b */ +#define SH_TSF_COUNTER_VALUE_COUNT_8B_SHFT 48 +#define SH_TSF_COUNTER_VALUE_COUNT_8B_MASK 0x00ff000000000000 + +/* SH_TSF_COUNTER_VALUE_COUNT_8A */ +/* Description: Trigger sequencing facility counter 8a */ +#define SH_TSF_COUNTER_VALUE_COUNT_8A_SHFT 56 +#define SH_TSF_COUNTER_VALUE_COUNT_8A_MASK 0xff00000000000000 + +/* ==================================================================== */ +/* Register "SH_TSF_TRIGGERED_STATE" */ +/* Trigger sequencing facility triggered state */ +/* ==================================================================== */ + +#define SH_TSF_TRIGGERED_STATE 0x0000000110073100 +#define SH_TSF_TRIGGERED_STATE_MASK 0x00000000000000ff +#define SH_TSF_TRIGGERED_STATE_INIT 0x0000000000000000 + +/* SH_TSF_TRIGGERED_STATE_STATE */ +/* Description: Trigger sequencing facility triggered state */ +#define SH_TSF_TRIGGERED_STATE_STATE_SHFT 0 +#define SH_TSF_TRIGGERED_STATE_STATE_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_VEC_RDDATA" */ +/* Vector Reply Message Data */ +/* ==================================================================== */ + +#define SH_VEC_RDDATA 0x0000000110074000 +#define 
SH_VEC_RDDATA_MASK 0xffffffffffffffff +#define SH_VEC_RDDATA_INIT 0x0000000000000000 + +/* SH_VEC_RDDATA_DATA */ +/* Description: Data */ +#define SH_VEC_RDDATA_DATA_SHFT 0 +#define SH_VEC_RDDATA_DATA_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_VEC_RETURN" */ +/* Vector Reply Message Return Route */ +/* ==================================================================== */ + +#define SH_VEC_RETURN 0x0000000110074080 +#define SH_VEC_RETURN_MASK 0xffffffffffffffff +#define SH_VEC_RETURN_INIT 0x0000000000000000 + +/* SH_VEC_RETURN_ROUTE */ +/* Description: Route */ +#define SH_VEC_RETURN_ROUTE_SHFT 0 +#define SH_VEC_RETURN_ROUTE_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_VEC_STATUS" */ +/* Vector Reply Message Status */ +/* ==================================================================== */ + +#define SH_VEC_STATUS 0x0000000110074100 +#define SH_VEC_STATUS_MASK 0xcfffffffffffffff +#define SH_VEC_STATUS_INIT 0x0000000000000000 + +/* SH_VEC_STATUS_TYPE */ +/* Description: Type */ +#define SH_VEC_STATUS_TYPE_SHFT 0 +#define SH_VEC_STATUS_TYPE_MASK 0x0000000000000007 + +/* SH_VEC_STATUS_ADDRESS */ +/* Description: Address */ +#define SH_VEC_STATUS_ADDRESS_SHFT 3 +#define SH_VEC_STATUS_ADDRESS_MASK 0x00000007fffffff8 + +/* SH_VEC_STATUS_PIO_ID */ +/* Description: PIO ID */ +#define SH_VEC_STATUS_PIO_ID_SHFT 35 +#define SH_VEC_STATUS_PIO_ID_MASK 0x00003ff800000000 + +/* SH_VEC_STATUS_SOURCE */ +/* Description: Source */ +#define SH_VEC_STATUS_SOURCE_SHFT 46 +#define SH_VEC_STATUS_SOURCE_MASK 0x0fffc00000000000 + +/* SH_VEC_STATUS_OVERRUN */ +/* Description: Overrun */ +#define SH_VEC_STATUS_OVERRUN_SHFT 62 +#define SH_VEC_STATUS_OVERRUN_MASK 0x4000000000000000 + +/* SH_VEC_STATUS_STATUS_VALID */ +/* Description: Status_Valid */ +#define SH_VEC_STATUS_STATUS_VALID_SHFT 63 +#define SH_VEC_STATUS_STATUS_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_VEC_STATUS_ALIAS" */ +/* Vector Reply Message Status Alias */ +/* ==================================================================== */ + +#define SH_VEC_STATUS_ALIAS 0x0000000110074108 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT0_CONTROL" */ +/* Performance Counter 0 Control */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNT0_CONTROL 0x0000000110080000 +#define SH_PERFORMANCE_COUNT0_CONTROL_MASK 0x000000000007ffff +#define SH_PERFORMANCE_COUNT0_CONTROL_INIT 0x000000000000b8b8 + +/* SH_PERFORMANCE_COUNT0_CONTROL_UP_STIMULUS */ +/* Description: Counter 0 up stimulus */ +#define SH_PERFORMANCE_COUNT0_CONTROL_UP_STIMULUS_SHFT 0 +#define SH_PERFORMANCE_COUNT0_CONTROL_UP_STIMULUS_MASK 0x000000000000001f + +/* SH_PERFORMANCE_COUNT0_CONTROL_UP_EVENT */ +/* Description: Counter 0 up event select (1-greater than, 0-equal) */ +#define SH_PERFORMANCE_COUNT0_CONTROL_UP_EVENT_SHFT 5 +#define SH_PERFORMANCE_COUNT0_CONTROL_UP_EVENT_MASK 0x0000000000000020 + +/* SH_PERFORMANCE_COUNT0_CONTROL_UP_POLARITY */ +/* Description: Counter 0 up polarity select (1-negative edge, 0-po */ +/* sitive edge) */ +#define SH_PERFORMANCE_COUNT0_CONTROL_UP_POLARITY_SHFT 6 +#define SH_PERFORMANCE_COUNT0_CONTROL_UP_POLARITY_MASK 0x0000000000000040 + +/* SH_PERFORMANCE_COUNT0_CONTROL_UP_MODE */ +/* Description: Counter 0 up mode select 
(1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT0_CONTROL_UP_MODE_SHFT 7 +#define SH_PERFORMANCE_COUNT0_CONTROL_UP_MODE_MASK 0x0000000000000080 + +/* SH_PERFORMANCE_COUNT0_CONTROL_DN_STIMULUS */ +/* Description: Counter 0 down stimulus */ +#define SH_PERFORMANCE_COUNT0_CONTROL_DN_STIMULUS_SHFT 8 +#define SH_PERFORMANCE_COUNT0_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00 + +/* SH_PERFORMANCE_COUNT0_CONTROL_DN_EVENT */ +/* Description: Counter 0 down event select (1-greater than, 0-equa */ +#define SH_PERFORMANCE_COUNT0_CONTROL_DN_EVENT_SHFT 13 +#define SH_PERFORMANCE_COUNT0_CONTROL_DN_EVENT_MASK 0x0000000000002000 + +/* SH_PERFORMANCE_COUNT0_CONTROL_DN_POLARITY */ +/* Description: Counter 0 down polarity select (1-negative edge, 0- */ +/* positive edge) */ +#define SH_PERFORMANCE_COUNT0_CONTROL_DN_POLARITY_SHFT 14 +#define SH_PERFORMANCE_COUNT0_CONTROL_DN_POLARITY_MASK 0x0000000000004000 + +/* SH_PERFORMANCE_COUNT0_CONTROL_DN_MODE */ +/* Description: Counter 0 down mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT0_CONTROL_DN_MODE_SHFT 15 +#define SH_PERFORMANCE_COUNT0_CONTROL_DN_MODE_MASK 0x0000000000008000 + +/* SH_PERFORMANCE_COUNT0_CONTROL_INC_ENABLE */ +/* Description: Counter 0 enable increment */ +#define SH_PERFORMANCE_COUNT0_CONTROL_INC_ENABLE_SHFT 16 +#define SH_PERFORMANCE_COUNT0_CONTROL_INC_ENABLE_MASK 0x0000000000010000 + +/* SH_PERFORMANCE_COUNT0_CONTROL_DEC_ENABLE */ +/* Description: Counter 0 enable decrement */ +#define SH_PERFORMANCE_COUNT0_CONTROL_DEC_ENABLE_SHFT 17 +#define SH_PERFORMANCE_COUNT0_CONTROL_DEC_ENABLE_MASK 0x0000000000020000 + +/* SH_PERFORMANCE_COUNT0_CONTROL_PEAK_DET_ENABLE */ +/* Description: Counter 0 enable peak detection */ +#define SH_PERFORMANCE_COUNT0_CONTROL_PEAK_DET_ENABLE_SHFT 18 +#define SH_PERFORMANCE_COUNT0_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT1_CONTROL" */ +/* Performance Counter 1 Control */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNT1_CONTROL 0x0000000110090000 +#define SH_PERFORMANCE_COUNT1_CONTROL_MASK 0x000000000007ffff +#define SH_PERFORMANCE_COUNT1_CONTROL_INIT 0x000000000000b8b8 + +/* SH_PERFORMANCE_COUNT1_CONTROL_UP_STIMULUS */ +/* Description: Counter 1 up stimulus */ +#define SH_PERFORMANCE_COUNT1_CONTROL_UP_STIMULUS_SHFT 0 +#define SH_PERFORMANCE_COUNT1_CONTROL_UP_STIMULUS_MASK 0x000000000000001f + +/* SH_PERFORMANCE_COUNT1_CONTROL_UP_EVENT */ +/* Description: Counter 1 up event select (1-greater than, 0-equal) */ +#define SH_PERFORMANCE_COUNT1_CONTROL_UP_EVENT_SHFT 5 +#define SH_PERFORMANCE_COUNT1_CONTROL_UP_EVENT_MASK 0x0000000000000020 + +/* SH_PERFORMANCE_COUNT1_CONTROL_UP_POLARITY */ +/* Description: Counter 1 up polarity select (1-negative edge, 0-po */ +/* sitive edge) */ +#define SH_PERFORMANCE_COUNT1_CONTROL_UP_POLARITY_SHFT 6 +#define SH_PERFORMANCE_COUNT1_CONTROL_UP_POLARITY_MASK 0x0000000000000040 + +/* SH_PERFORMANCE_COUNT1_CONTROL_UP_MODE */ +/* Description: Counter 1 up mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT1_CONTROL_UP_MODE_SHFT 7 +#define SH_PERFORMANCE_COUNT1_CONTROL_UP_MODE_MASK 0x0000000000000080 + +/* SH_PERFORMANCE_COUNT1_CONTROL_DN_STIMULUS */ +/* Description: Counter 1 down stimulus */ +#define SH_PERFORMANCE_COUNT1_CONTROL_DN_STIMULUS_SHFT 8 +#define SH_PERFORMANCE_COUNT1_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00 + +/* SH_PERFORMANCE_COUNT1_CONTROL_DN_EVENT */ 
+/* Description: Counter 1 down event select (1-greater than, 0-equa */ +#define SH_PERFORMANCE_COUNT1_CONTROL_DN_EVENT_SHFT 13 +#define SH_PERFORMANCE_COUNT1_CONTROL_DN_EVENT_MASK 0x0000000000002000 + +/* SH_PERFORMANCE_COUNT1_CONTROL_DN_POLARITY */ +/* Description: Counter 1 down polarity select (1-negative edge, 0- */ +/* positive edge) */ +#define SH_PERFORMANCE_COUNT1_CONTROL_DN_POLARITY_SHFT 14 +#define SH_PERFORMANCE_COUNT1_CONTROL_DN_POLARITY_MASK 0x0000000000004000 + +/* SH_PERFORMANCE_COUNT1_CONTROL_DN_MODE */ +/* Description: Counter 1 down mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT1_CONTROL_DN_MODE_SHFT 15 +#define SH_PERFORMANCE_COUNT1_CONTROL_DN_MODE_MASK 0x0000000000008000 + +/* SH_PERFORMANCE_COUNT1_CONTROL_INC_ENABLE */ +/* Description: Counter 1 enable increment */ +#define SH_PERFORMANCE_COUNT1_CONTROL_INC_ENABLE_SHFT 16 +#define SH_PERFORMANCE_COUNT1_CONTROL_INC_ENABLE_MASK 0x0000000000010000 + +/* SH_PERFORMANCE_COUNT1_CONTROL_DEC_ENABLE */ +/* Description: Counter 1 enable decrement */ +#define SH_PERFORMANCE_COUNT1_CONTROL_DEC_ENABLE_SHFT 17 +#define SH_PERFORMANCE_COUNT1_CONTROL_DEC_ENABLE_MASK 0x0000000000020000 + +/* SH_PERFORMANCE_COUNT1_CONTROL_PEAK_DET_ENABLE */ +/* Description: Counter 1 enable peak detection */ +#define SH_PERFORMANCE_COUNT1_CONTROL_PEAK_DET_ENABLE_SHFT 18 +#define SH_PERFORMANCE_COUNT1_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT2_CONTROL" */ +/* Performance Counter 2 Control */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNT2_CONTROL 0x00000001100a0000 +#define SH_PERFORMANCE_COUNT2_CONTROL_MASK 0x000000000007ffff +#define SH_PERFORMANCE_COUNT2_CONTROL_INIT 0x000000000000b8b8 + +/* SH_PERFORMANCE_COUNT2_CONTROL_UP_STIMULUS */ +/* Description: Counter 2 up stimulus */ +#define SH_PERFORMANCE_COUNT2_CONTROL_UP_STIMULUS_SHFT 0 +#define SH_PERFORMANCE_COUNT2_CONTROL_UP_STIMULUS_MASK 0x000000000000001f + +/* SH_PERFORMANCE_COUNT2_CONTROL_UP_EVENT */ +/* Description: Counter 2 up event select (1-greater than, 0-equal) */ +#define SH_PERFORMANCE_COUNT2_CONTROL_UP_EVENT_SHFT 5 +#define SH_PERFORMANCE_COUNT2_CONTROL_UP_EVENT_MASK 0x0000000000000020 + +/* SH_PERFORMANCE_COUNT2_CONTROL_UP_POLARITY */ +/* Description: Counter 2 up polarity select (1-negative edge, 0-po */ +/* sitive edge) */ +#define SH_PERFORMANCE_COUNT2_CONTROL_UP_POLARITY_SHFT 6 +#define SH_PERFORMANCE_COUNT2_CONTROL_UP_POLARITY_MASK 0x0000000000000040 + +/* SH_PERFORMANCE_COUNT2_CONTROL_UP_MODE */ +/* Description: Counter 2 up mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT2_CONTROL_UP_MODE_SHFT 7 +#define SH_PERFORMANCE_COUNT2_CONTROL_UP_MODE_MASK 0x0000000000000080 + +/* SH_PERFORMANCE_COUNT2_CONTROL_DN_STIMULUS */ +/* Description: Counter 2 down stimulus */ +#define SH_PERFORMANCE_COUNT2_CONTROL_DN_STIMULUS_SHFT 8 +#define SH_PERFORMANCE_COUNT2_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00 + +/* SH_PERFORMANCE_COUNT2_CONTROL_DN_EVENT */ +/* Description: Counter 2 down event select (1-greater than, 0-equa */ +#define SH_PERFORMANCE_COUNT2_CONTROL_DN_EVENT_SHFT 13 +#define SH_PERFORMANCE_COUNT2_CONTROL_DN_EVENT_MASK 0x0000000000002000 + +/* SH_PERFORMANCE_COUNT2_CONTROL_DN_POLARITY */ +/* Description: Counter 2 down polarity select (1-negative edge, 0- */ +/* positive edge) */ +#define SH_PERFORMANCE_COUNT2_CONTROL_DN_POLARITY_SHFT 14 +#define 
SH_PERFORMANCE_COUNT2_CONTROL_DN_POLARITY_MASK 0x0000000000004000 + +/* SH_PERFORMANCE_COUNT2_CONTROL_DN_MODE */ +/* Description: Counter 2 down mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT2_CONTROL_DN_MODE_SHFT 15 +#define SH_PERFORMANCE_COUNT2_CONTROL_DN_MODE_MASK 0x0000000000008000 + +/* SH_PERFORMANCE_COUNT2_CONTROL_INC_ENABLE */ +/* Description: Counter 2 enable increment */ +#define SH_PERFORMANCE_COUNT2_CONTROL_INC_ENABLE_SHFT 16 +#define SH_PERFORMANCE_COUNT2_CONTROL_INC_ENABLE_MASK 0x0000000000010000 + +/* SH_PERFORMANCE_COUNT2_CONTROL_DEC_ENABLE */ +/* Description: Counter 2 enable decrement */ +#define SH_PERFORMANCE_COUNT2_CONTROL_DEC_ENABLE_SHFT 17 +#define SH_PERFORMANCE_COUNT2_CONTROL_DEC_ENABLE_MASK 0x0000000000020000 + +/* SH_PERFORMANCE_COUNT2_CONTROL_PEAK_DET_ENABLE */ +/* Description: Counter 2 enable peak detection */ +#define SH_PERFORMANCE_COUNT2_CONTROL_PEAK_DET_ENABLE_SHFT 18 +#define SH_PERFORMANCE_COUNT2_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT3_CONTROL" */ +/* Performance Counter 3 Control */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNT3_CONTROL 0x00000001100b0000 +#define SH_PERFORMANCE_COUNT3_CONTROL_MASK 0x000000000007ffff +#define SH_PERFORMANCE_COUNT3_CONTROL_INIT 0x000000000000b8b8 + +/* SH_PERFORMANCE_COUNT3_CONTROL_UP_STIMULUS */ +/* Description: Counter 3 up stimulus */ +#define SH_PERFORMANCE_COUNT3_CONTROL_UP_STIMULUS_SHFT 0 +#define SH_PERFORMANCE_COUNT3_CONTROL_UP_STIMULUS_MASK 0x000000000000001f + +/* SH_PERFORMANCE_COUNT3_CONTROL_UP_EVENT */ +/* Description: Counter 3 up event select (1-greater than, 0-equal) */ +#define SH_PERFORMANCE_COUNT3_CONTROL_UP_EVENT_SHFT 5 +#define SH_PERFORMANCE_COUNT3_CONTROL_UP_EVENT_MASK 0x0000000000000020 + +/* SH_PERFORMANCE_COUNT3_CONTROL_UP_POLARITY */ +/* Description: Counter 3 up polarity select (1-negative edge, 0-po */ +/* sitive edge) */ +#define SH_PERFORMANCE_COUNT3_CONTROL_UP_POLARITY_SHFT 6 +#define SH_PERFORMANCE_COUNT3_CONTROL_UP_POLARITY_MASK 0x0000000000000040 + +/* SH_PERFORMANCE_COUNT3_CONTROL_UP_MODE */ +/* Description: Counter 3 up mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT3_CONTROL_UP_MODE_SHFT 7 +#define SH_PERFORMANCE_COUNT3_CONTROL_UP_MODE_MASK 0x0000000000000080 + +/* SH_PERFORMANCE_COUNT3_CONTROL_DN_STIMULUS */ +/* Description: Counter 3 down stimulus */ +#define SH_PERFORMANCE_COUNT3_CONTROL_DN_STIMULUS_SHFT 8 +#define SH_PERFORMANCE_COUNT3_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00 + +/* SH_PERFORMANCE_COUNT3_CONTROL_DN_EVENT */ +/* Description: Counter 3 down event select (1-greater than, 0-equa */ +#define SH_PERFORMANCE_COUNT3_CONTROL_DN_EVENT_SHFT 13 +#define SH_PERFORMANCE_COUNT3_CONTROL_DN_EVENT_MASK 0x0000000000002000 + +/* SH_PERFORMANCE_COUNT3_CONTROL_DN_POLARITY */ +/* Description: Counter 3 down polarity select (1-negative edge, 0- */ +/* positive edge) */ +#define SH_PERFORMANCE_COUNT3_CONTROL_DN_POLARITY_SHFT 14 +#define SH_PERFORMANCE_COUNT3_CONTROL_DN_POLARITY_MASK 0x0000000000004000 + +/* SH_PERFORMANCE_COUNT3_CONTROL_DN_MODE */ +/* Description: Counter 3 down mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT3_CONTROL_DN_MODE_SHFT 15 +#define SH_PERFORMANCE_COUNT3_CONTROL_DN_MODE_MASK 0x0000000000008000 + +/* SH_PERFORMANCE_COUNT3_CONTROL_INC_ENABLE */ +/* Description: Counter 3 enable increment */ +#define 
SH_PERFORMANCE_COUNT3_CONTROL_INC_ENABLE_SHFT 16 +#define SH_PERFORMANCE_COUNT3_CONTROL_INC_ENABLE_MASK 0x0000000000010000 + +/* SH_PERFORMANCE_COUNT3_CONTROL_DEC_ENABLE */ +/* Description: Counter 3 enable decrement */ +#define SH_PERFORMANCE_COUNT3_CONTROL_DEC_ENABLE_SHFT 17 +#define SH_PERFORMANCE_COUNT3_CONTROL_DEC_ENABLE_MASK 0x0000000000020000 + +/* SH_PERFORMANCE_COUNT3_CONTROL_PEAK_DET_ENABLE */ +/* Description: Counter 3 enable peak detection */ +#define SH_PERFORMANCE_COUNT3_CONTROL_PEAK_DET_ENABLE_SHFT 18 +#define SH_PERFORMANCE_COUNT3_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT4_CONTROL" */ +/* Performance Counter 4 Control */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNT4_CONTROL 0x00000001100c0000 +#define SH_PERFORMANCE_COUNT4_CONTROL_MASK 0x000000000007ffff +#define SH_PERFORMANCE_COUNT4_CONTROL_INIT 0x000000000000b8b8 + +/* SH_PERFORMANCE_COUNT4_CONTROL_UP_STIMULUS */ +/* Description: Counter 4 up stimulus */ +#define SH_PERFORMANCE_COUNT4_CONTROL_UP_STIMULUS_SHFT 0 +#define SH_PERFORMANCE_COUNT4_CONTROL_UP_STIMULUS_MASK 0x000000000000001f + +/* SH_PERFORMANCE_COUNT4_CONTROL_UP_EVENT */ +/* Description: Counter 4 up event select (1-greater than, 0-equal) */ +#define SH_PERFORMANCE_COUNT4_CONTROL_UP_EVENT_SHFT 5 +#define SH_PERFORMANCE_COUNT4_CONTROL_UP_EVENT_MASK 0x0000000000000020 + +/* SH_PERFORMANCE_COUNT4_CONTROL_UP_POLARITY */ +/* Description: Counter 4 up polarity select (1-negative edge, 0-po */ +/* sitive edge) */ +#define SH_PERFORMANCE_COUNT4_CONTROL_UP_POLARITY_SHFT 6 +#define SH_PERFORMANCE_COUNT4_CONTROL_UP_POLARITY_MASK 0x0000000000000040 + +/* SH_PERFORMANCE_COUNT4_CONTROL_UP_MODE */ +/* Description: Counter 4 up mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT4_CONTROL_UP_MODE_SHFT 7 +#define SH_PERFORMANCE_COUNT4_CONTROL_UP_MODE_MASK 0x0000000000000080 + +/* SH_PERFORMANCE_COUNT4_CONTROL_DN_STIMULUS */ +/* Description: Counter 4 down stimulus */ +#define SH_PERFORMANCE_COUNT4_CONTROL_DN_STIMULUS_SHFT 8 +#define SH_PERFORMANCE_COUNT4_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00 + +/* SH_PERFORMANCE_COUNT4_CONTROL_DN_EVENT */ +/* Description: Counter 4 down event select (1-greater than, 0-equa */ +#define SH_PERFORMANCE_COUNT4_CONTROL_DN_EVENT_SHFT 13 +#define SH_PERFORMANCE_COUNT4_CONTROL_DN_EVENT_MASK 0x0000000000002000 + +/* SH_PERFORMANCE_COUNT4_CONTROL_DN_POLARITY */ +/* Description: Counter 4 down polarity select (1-negative edge, 0- */ +/* positive edge) */ +#define SH_PERFORMANCE_COUNT4_CONTROL_DN_POLARITY_SHFT 14 +#define SH_PERFORMANCE_COUNT4_CONTROL_DN_POLARITY_MASK 0x0000000000004000 + +/* SH_PERFORMANCE_COUNT4_CONTROL_DN_MODE */ +/* Description: Counter 4 down mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT4_CONTROL_DN_MODE_SHFT 15 +#define SH_PERFORMANCE_COUNT4_CONTROL_DN_MODE_MASK 0x0000000000008000 + +/* SH_PERFORMANCE_COUNT4_CONTROL_INC_ENABLE */ +/* Description: Counter 4 enable increment */ +#define SH_PERFORMANCE_COUNT4_CONTROL_INC_ENABLE_SHFT 16 +#define SH_PERFORMANCE_COUNT4_CONTROL_INC_ENABLE_MASK 0x0000000000010000 + +/* SH_PERFORMANCE_COUNT4_CONTROL_DEC_ENABLE */ +/* Description: Counter 4 enable decrement */ +#define SH_PERFORMANCE_COUNT4_CONTROL_DEC_ENABLE_SHFT 17 +#define SH_PERFORMANCE_COUNT4_CONTROL_DEC_ENABLE_MASK 0x0000000000020000 + +/* SH_PERFORMANCE_COUNT4_CONTROL_PEAK_DET_ENABLE */ +/* Description: Counter 
4 enable peak detection */ +#define SH_PERFORMANCE_COUNT4_CONTROL_PEAK_DET_ENABLE_SHFT 18 +#define SH_PERFORMANCE_COUNT4_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT5_CONTROL" */ +/* Performance Counter 5 Control */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNT5_CONTROL 0x00000001100d0000 +#define SH_PERFORMANCE_COUNT5_CONTROL_MASK 0x000000000007ffff +#define SH_PERFORMANCE_COUNT5_CONTROL_INIT 0x000000000000b8b8 + +/* SH_PERFORMANCE_COUNT5_CONTROL_UP_STIMULUS */ +/* Description: Counter 5 up stimulus */ +#define SH_PERFORMANCE_COUNT5_CONTROL_UP_STIMULUS_SHFT 0 +#define SH_PERFORMANCE_COUNT5_CONTROL_UP_STIMULUS_MASK 0x000000000000001f + +/* SH_PERFORMANCE_COUNT5_CONTROL_UP_EVENT */ +/* Description: Counter 5 up event select (1-greater than, 0-equal) */ +#define SH_PERFORMANCE_COUNT5_CONTROL_UP_EVENT_SHFT 5 +#define SH_PERFORMANCE_COUNT5_CONTROL_UP_EVENT_MASK 0x0000000000000020 + +/* SH_PERFORMANCE_COUNT5_CONTROL_UP_POLARITY */ +/* Description: Counter 5 up polarity select (1-negative edge, 0-po */ +/* sitive edge) */ +#define SH_PERFORMANCE_COUNT5_CONTROL_UP_POLARITY_SHFT 6 +#define SH_PERFORMANCE_COUNT5_CONTROL_UP_POLARITY_MASK 0x0000000000000040 + +/* SH_PERFORMANCE_COUNT5_CONTROL_UP_MODE */ +/* Description: Counter 5 up mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT5_CONTROL_UP_MODE_SHFT 7 +#define SH_PERFORMANCE_COUNT5_CONTROL_UP_MODE_MASK 0x0000000000000080 + +/* SH_PERFORMANCE_COUNT5_CONTROL_DN_STIMULUS */ +/* Description: Counter 5 down stimulus */ +#define SH_PERFORMANCE_COUNT5_CONTROL_DN_STIMULUS_SHFT 8 +#define SH_PERFORMANCE_COUNT5_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00 + +/* SH_PERFORMANCE_COUNT5_CONTROL_DN_EVENT */ +/* Description: Counter 5 down event select (1-greater than, 0-equa */ +#define SH_PERFORMANCE_COUNT5_CONTROL_DN_EVENT_SHFT 13 +#define SH_PERFORMANCE_COUNT5_CONTROL_DN_EVENT_MASK 0x0000000000002000 + +/* SH_PERFORMANCE_COUNT5_CONTROL_DN_POLARITY */ +/* Description: Counter 5 down polarity select (1-negative edge, 0- */ +/* positive edge) */ +#define SH_PERFORMANCE_COUNT5_CONTROL_DN_POLARITY_SHFT 14 +#define SH_PERFORMANCE_COUNT5_CONTROL_DN_POLARITY_MASK 0x0000000000004000 + +/* SH_PERFORMANCE_COUNT5_CONTROL_DN_MODE */ +/* Description: Counter 5 down mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT5_CONTROL_DN_MODE_SHFT 15 +#define SH_PERFORMANCE_COUNT5_CONTROL_DN_MODE_MASK 0x0000000000008000 + +/* SH_PERFORMANCE_COUNT5_CONTROL_INC_ENABLE */ +/* Description: Counter 5 enable increment */ +#define SH_PERFORMANCE_COUNT5_CONTROL_INC_ENABLE_SHFT 16 +#define SH_PERFORMANCE_COUNT5_CONTROL_INC_ENABLE_MASK 0x0000000000010000 + +/* SH_PERFORMANCE_COUNT5_CONTROL_DEC_ENABLE */ +/* Description: Counter 5 enable decrement */ +#define SH_PERFORMANCE_COUNT5_CONTROL_DEC_ENABLE_SHFT 17 +#define SH_PERFORMANCE_COUNT5_CONTROL_DEC_ENABLE_MASK 0x0000000000020000 + +/* SH_PERFORMANCE_COUNT5_CONTROL_PEAK_DET_ENABLE */ +/* Description: Counter 5 enable peak detection */ +#define SH_PERFORMANCE_COUNT5_CONTROL_PEAK_DET_ENABLE_SHFT 18 +#define SH_PERFORMANCE_COUNT5_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT6_CONTROL" */ +/* Performance Counter 6 Control */ +/* ==================================================================== */ + +#define 
SH_PERFORMANCE_COUNT6_CONTROL 0x00000001100e0000 +#define SH_PERFORMANCE_COUNT6_CONTROL_MASK 0x000000000007ffff +#define SH_PERFORMANCE_COUNT6_CONTROL_INIT 0x000000000000b8b8 + +/* SH_PERFORMANCE_COUNT6_CONTROL_UP_STIMULUS */ +/* Description: Counter 6 up stimulus */ +#define SH_PERFORMANCE_COUNT6_CONTROL_UP_STIMULUS_SHFT 0 +#define SH_PERFORMANCE_COUNT6_CONTROL_UP_STIMULUS_MASK 0x000000000000001f + +/* SH_PERFORMANCE_COUNT6_CONTROL_UP_EVENT */ +/* Description: Counter 6 up event select (1-greater than, 0-equal) */ +#define SH_PERFORMANCE_COUNT6_CONTROL_UP_EVENT_SHFT 5 +#define SH_PERFORMANCE_COUNT6_CONTROL_UP_EVENT_MASK 0x0000000000000020 + +/* SH_PERFORMANCE_COUNT6_CONTROL_UP_POLARITY */ +/* Description: Counter 6 up polarity select (1-negative edge, 0-po */ +/* sitive edge) */ +#define SH_PERFORMANCE_COUNT6_CONTROL_UP_POLARITY_SHFT 6 +#define SH_PERFORMANCE_COUNT6_CONTROL_UP_POLARITY_MASK 0x0000000000000040 + +/* SH_PERFORMANCE_COUNT6_CONTROL_UP_MODE */ +/* Description: Counter 6 up mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT6_CONTROL_UP_MODE_SHFT 7 +#define SH_PERFORMANCE_COUNT6_CONTROL_UP_MODE_MASK 0x0000000000000080 + +/* SH_PERFORMANCE_COUNT6_CONTROL_DN_STIMULUS */ +/* Description: Counter 6 down stimulus */ +#define SH_PERFORMANCE_COUNT6_CONTROL_DN_STIMULUS_SHFT 8 +#define SH_PERFORMANCE_COUNT6_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00 + +/* SH_PERFORMANCE_COUNT6_CONTROL_DN_EVENT */ +/* Description: Counter 6 down event select (1-greater than, 0-equa */ +#define SH_PERFORMANCE_COUNT6_CONTROL_DN_EVENT_SHFT 13 +#define SH_PERFORMANCE_COUNT6_CONTROL_DN_EVENT_MASK 0x0000000000002000 + +/* SH_PERFORMANCE_COUNT6_CONTROL_DN_POLARITY */ +/* Description: Counter 6 down polarity select (1-negative edge, 0- */ +/* positive edge) */ +#define SH_PERFORMANCE_COUNT6_CONTROL_DN_POLARITY_SHFT 14 +#define SH_PERFORMANCE_COUNT6_CONTROL_DN_POLARITY_MASK 0x0000000000004000 + +/* SH_PERFORMANCE_COUNT6_CONTROL_DN_MODE */ +/* Description: Counter 6 down mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT6_CONTROL_DN_MODE_SHFT 15 +#define SH_PERFORMANCE_COUNT6_CONTROL_DN_MODE_MASK 0x0000000000008000 + +/* SH_PERFORMANCE_COUNT6_CONTROL_INC_ENABLE */ +/* Description: Counter 6 enable increment */ +#define SH_PERFORMANCE_COUNT6_CONTROL_INC_ENABLE_SHFT 16 +#define SH_PERFORMANCE_COUNT6_CONTROL_INC_ENABLE_MASK 0x0000000000010000 + +/* SH_PERFORMANCE_COUNT6_CONTROL_DEC_ENABLE */ +/* Description: Counter 6 enable decrement */ +#define SH_PERFORMANCE_COUNT6_CONTROL_DEC_ENABLE_SHFT 17 +#define SH_PERFORMANCE_COUNT6_CONTROL_DEC_ENABLE_MASK 0x0000000000020000 + +/* SH_PERFORMANCE_COUNT6_CONTROL_PEAK_DET_ENABLE */ +/* Description: Counter 6 enable peak detection */ +#define SH_PERFORMANCE_COUNT6_CONTROL_PEAK_DET_ENABLE_SHFT 18 +#define SH_PERFORMANCE_COUNT6_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT7_CONTROL" */ +/* Performance Counter 7 Control */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNT7_CONTROL 0x00000001100f0000 +#define SH_PERFORMANCE_COUNT7_CONTROL_MASK 0x000000000007ffff +#define SH_PERFORMANCE_COUNT7_CONTROL_INIT 0x000000000000b8b8 + +/* SH_PERFORMANCE_COUNT7_CONTROL_UP_STIMULUS */ +/* Description: Counter 7 up stimulus */ +#define SH_PERFORMANCE_COUNT7_CONTROL_UP_STIMULUS_SHFT 0 +#define SH_PERFORMANCE_COUNT7_CONTROL_UP_STIMULUS_MASK 0x000000000000001f + +/* 
SH_PERFORMANCE_COUNT7_CONTROL_UP_EVENT */ +/* Description: Counter 7 up event select (1-greater than, 0-equal) */ +#define SH_PERFORMANCE_COUNT7_CONTROL_UP_EVENT_SHFT 5 +#define SH_PERFORMANCE_COUNT7_CONTROL_UP_EVENT_MASK 0x0000000000000020 + +/* SH_PERFORMANCE_COUNT7_CONTROL_UP_POLARITY */ +/* Description: Counter 7 up polarity select (1-negative edge, 0-po */ +/* sitive edge) */ +#define SH_PERFORMANCE_COUNT7_CONTROL_UP_POLARITY_SHFT 6 +#define SH_PERFORMANCE_COUNT7_CONTROL_UP_POLARITY_MASK 0x0000000000000040 + +/* SH_PERFORMANCE_COUNT7_CONTROL_UP_MODE */ +/* Description: Counter 7 up mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT7_CONTROL_UP_MODE_SHFT 7 +#define SH_PERFORMANCE_COUNT7_CONTROL_UP_MODE_MASK 0x0000000000000080 + +/* SH_PERFORMANCE_COUNT7_CONTROL_DN_STIMULUS */ +/* Description: Counter 7 down stimulus */ +#define SH_PERFORMANCE_COUNT7_CONTROL_DN_STIMULUS_SHFT 8 +#define SH_PERFORMANCE_COUNT7_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00 + +/* SH_PERFORMANCE_COUNT7_CONTROL_DN_EVENT */ +/* Description: Counter 7 down event select (1-greater than, 0-equa */ +#define SH_PERFORMANCE_COUNT7_CONTROL_DN_EVENT_SHFT 13 +#define SH_PERFORMANCE_COUNT7_CONTROL_DN_EVENT_MASK 0x0000000000002000 + +/* SH_PERFORMANCE_COUNT7_CONTROL_DN_POLARITY */ +/* Description: Counter 7 down polarity select (1-negative edge, 0- */ +/* positive edge) */ +#define SH_PERFORMANCE_COUNT7_CONTROL_DN_POLARITY_SHFT 14 +#define SH_PERFORMANCE_COUNT7_CONTROL_DN_POLARITY_MASK 0x0000000000004000 + +/* SH_PERFORMANCE_COUNT7_CONTROL_DN_MODE */ +/* Description: Counter 7 down mode select (1-internal, 0-external) */ +#define SH_PERFORMANCE_COUNT7_CONTROL_DN_MODE_SHFT 15 +#define SH_PERFORMANCE_COUNT7_CONTROL_DN_MODE_MASK 0x0000000000008000 + +/* SH_PERFORMANCE_COUNT7_CONTROL_INC_ENABLE */ +/* Description: Counter 7 enable increment */ +#define SH_PERFORMANCE_COUNT7_CONTROL_INC_ENABLE_SHFT 16 +#define SH_PERFORMANCE_COUNT7_CONTROL_INC_ENABLE_MASK 0x0000000000010000 + +/* SH_PERFORMANCE_COUNT7_CONTROL_DEC_ENABLE */ +/* Description: Counter 7 enable decrement */ +#define SH_PERFORMANCE_COUNT7_CONTROL_DEC_ENABLE_SHFT 17 +#define SH_PERFORMANCE_COUNT7_CONTROL_DEC_ENABLE_MASK 0x0000000000020000 + +/* SH_PERFORMANCE_COUNT7_CONTROL_PEAK_DET_ENABLE */ +/* Description: Counter 7 enable peak detection */ +#define SH_PERFORMANCE_COUNT7_CONTROL_PEAK_DET_ENABLE_SHFT 18 +#define SH_PERFORMANCE_COUNT7_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000 + +/* ==================================================================== */ +/* Register "SH_PROFILE_DN_CONTROL" */ +/* Profile Counter Down Control */ +/* ==================================================================== */ + +#define SH_PROFILE_DN_CONTROL 0x0000000110100000 +#define SH_PROFILE_DN_CONTROL_MASK 0x00000000000000ff +#define SH_PROFILE_DN_CONTROL_INIT 0x00000000000000b8 + +/* SH_PROFILE_DN_CONTROL_STIMULUS */ +/* Description: Counter stimulus */ +#define SH_PROFILE_DN_CONTROL_STIMULUS_SHFT 0 +#define SH_PROFILE_DN_CONTROL_STIMULUS_MASK 0x000000000000001f + +/* SH_PROFILE_DN_CONTROL_EVENT */ +/* Description: Counter event select (1-greater than, 0-equal) */ +#define SH_PROFILE_DN_CONTROL_EVENT_SHFT 5 +#define SH_PROFILE_DN_CONTROL_EVENT_MASK 0x0000000000000020 + +/* SH_PROFILE_DN_CONTROL_POLARITY */ +/* Description: Counter polarity select (1-negative edge, 0-positiv */ +/* e edge) */ +#define SH_PROFILE_DN_CONTROL_POLARITY_SHFT 6 +#define SH_PROFILE_DN_CONTROL_POLARITY_MASK 0x0000000000000040 + +/* SH_PROFILE_DN_CONTROL_MODE */ +/* Description: Counter 
mode select (1-internal, 0-external) */ +#define SH_PROFILE_DN_CONTROL_MODE_SHFT 7 +#define SH_PROFILE_DN_CONTROL_MODE_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_PROFILE_PEAK_CONTROL" */ +/* Profile Counter Peak Control */ +/* ==================================================================== */ + +#define SH_PROFILE_PEAK_CONTROL 0x0000000110100080 +#define SH_PROFILE_PEAK_CONTROL_MASK 0x0000000000000068 +#define SH_PROFILE_PEAK_CONTROL_INIT 0x0000000000000060 + +/* SH_PROFILE_PEAK_CONTROL_STIMULUS */ +/* Description: Counter stimulus */ +#define SH_PROFILE_PEAK_CONTROL_STIMULUS_SHFT 3 +#define SH_PROFILE_PEAK_CONTROL_STIMULUS_MASK 0x0000000000000008 + +/* SH_PROFILE_PEAK_CONTROL_EVENT */ +/* Description: Counter event select (0-greater than, 1-equal) */ +#define SH_PROFILE_PEAK_CONTROL_EVENT_SHFT 5 +#define SH_PROFILE_PEAK_CONTROL_EVENT_MASK 0x0000000000000020 + +/* SH_PROFILE_PEAK_CONTROL_POLARITY */ +/* Description: Counter polarity select (0-negative edge, 1-positiv */ +/* e edge) */ +#define SH_PROFILE_PEAK_CONTROL_POLARITY_SHFT 6 +#define SH_PROFILE_PEAK_CONTROL_POLARITY_MASK 0x0000000000000040 + +/* ==================================================================== */ +/* Register "SH_PROFILE_RANGE" */ +/* Profile Counter Range */ +/* ==================================================================== */ + +#define SH_PROFILE_RANGE 0x0000000110100100 +#define SH_PROFILE_RANGE_MASK 0xffffffffffffffff +#define SH_PROFILE_RANGE_INIT 0x0000000000000000 + +/* SH_PROFILE_RANGE_RANGE0 */ +/* Description: Profiling range 0 */ +#define SH_PROFILE_RANGE_RANGE0_SHFT 0 +#define SH_PROFILE_RANGE_RANGE0_MASK 0x00000000000000ff + +/* SH_PROFILE_RANGE_RANGE1 */ +/* Description: Profiling range 1 */ +#define SH_PROFILE_RANGE_RANGE1_SHFT 8 +#define SH_PROFILE_RANGE_RANGE1_MASK 0x000000000000ff00 + +/* SH_PROFILE_RANGE_RANGE2 */ +/* Description: Profiling range 2 */ +#define SH_PROFILE_RANGE_RANGE2_SHFT 16 +#define SH_PROFILE_RANGE_RANGE2_MASK 0x0000000000ff0000 + +/* SH_PROFILE_RANGE_RANGE3 */ +/* Description: Profiling range 3 */ +#define SH_PROFILE_RANGE_RANGE3_SHFT 24 +#define SH_PROFILE_RANGE_RANGE3_MASK 0x00000000ff000000 + +/* SH_PROFILE_RANGE_RANGE4 */ +/* Description: Profiling range 4 */ +#define SH_PROFILE_RANGE_RANGE4_SHFT 32 +#define SH_PROFILE_RANGE_RANGE4_MASK 0x000000ff00000000 + +/* SH_PROFILE_RANGE_RANGE5 */ +/* Description: Profiling range 5 */ +#define SH_PROFILE_RANGE_RANGE5_SHFT 40 +#define SH_PROFILE_RANGE_RANGE5_MASK 0x0000ff0000000000 + +/* SH_PROFILE_RANGE_RANGE6 */ +/* Description: Profiling range 6 */ +#define SH_PROFILE_RANGE_RANGE6_SHFT 48 +#define SH_PROFILE_RANGE_RANGE6_MASK 0x00ff000000000000 + +/* SH_PROFILE_RANGE_RANGE7 */ +/* Description: Profiling range 7 */ +#define SH_PROFILE_RANGE_RANGE7_SHFT 56 +#define SH_PROFILE_RANGE_RANGE7_MASK 0xff00000000000000 + +/* ==================================================================== */ +/* Register "SH_PROFILE_UP_CONTROL" */ +/* Profile Counter Up Control */ +/* ==================================================================== */ + +#define SH_PROFILE_UP_CONTROL 0x0000000110100180 +#define SH_PROFILE_UP_CONTROL_MASK 0x00000000000000ff +#define SH_PROFILE_UP_CONTROL_INIT 0x00000000000000b8 + +/* SH_PROFILE_UP_CONTROL_STIMULUS */ +/* Description: Counter stimulus */ +#define SH_PROFILE_UP_CONTROL_STIMULUS_SHFT 0 +#define SH_PROFILE_UP_CONTROL_STIMULUS_MASK 0x000000000000001f + +/* SH_PROFILE_UP_CONTROL_EVENT */ +/* Description: Counter 
event select (1-greater than, 0-equal) */ +#define SH_PROFILE_UP_CONTROL_EVENT_SHFT 5 +#define SH_PROFILE_UP_CONTROL_EVENT_MASK 0x0000000000000020 + +/* SH_PROFILE_UP_CONTROL_POLARITY */ +/* Description: Counter polarity select (1-negative edge, 0-positiv */ +/* e edge) */ +#define SH_PROFILE_UP_CONTROL_POLARITY_SHFT 6 +#define SH_PROFILE_UP_CONTROL_POLARITY_MASK 0x0000000000000040 + +/* SH_PROFILE_UP_CONTROL_MODE */ +/* Description: Counter mode select (1-internal, 0-external) */ +#define SH_PROFILE_UP_CONTROL_MODE_SHFT 7 +#define SH_PROFILE_UP_CONTROL_MODE_MASK 0x0000000000000080 + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER0" */ +/* Performance Counter 0 */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNTER0 0x0000000110110000 +#define SH_PERFORMANCE_COUNTER0_MASK 0x00000000ffffffff +#define SH_PERFORMANCE_COUNTER0_INIT 0x0000000000000000 + +/* SH_PERFORMANCE_COUNTER0_COUNT */ +/* Description: Counter 0 */ +#define SH_PERFORMANCE_COUNTER0_COUNT_SHFT 0 +#define SH_PERFORMANCE_COUNTER0_COUNT_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER1" */ +/* Performance Counter 1 */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNTER1 0x0000000110120000 +#define SH_PERFORMANCE_COUNTER1_MASK 0x00000000ffffffff +#define SH_PERFORMANCE_COUNTER1_INIT 0x0000000000000000 + +/* SH_PERFORMANCE_COUNTER1_COUNT */ +/* Description: Counter 1 */ +#define SH_PERFORMANCE_COUNTER1_COUNT_SHFT 0 +#define SH_PERFORMANCE_COUNTER1_COUNT_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER2" */ +/* Performance Counter 2 */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNTER2 0x0000000110130000 +#define SH_PERFORMANCE_COUNTER2_MASK 0x00000000ffffffff +#define SH_PERFORMANCE_COUNTER2_INIT 0x0000000000000000 + +/* SH_PERFORMANCE_COUNTER2_COUNT */ +/* Description: Counter 2 */ +#define SH_PERFORMANCE_COUNTER2_COUNT_SHFT 0 +#define SH_PERFORMANCE_COUNTER2_COUNT_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER3" */ +/* Performance Counter 3 */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNTER3 0x0000000110140000 +#define SH_PERFORMANCE_COUNTER3_MASK 0x00000000ffffffff +#define SH_PERFORMANCE_COUNTER3_INIT 0x0000000000000000 + +/* SH_PERFORMANCE_COUNTER3_COUNT */ +/* Description: Counter 3 */ +#define SH_PERFORMANCE_COUNTER3_COUNT_SHFT 0 +#define SH_PERFORMANCE_COUNTER3_COUNT_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER4" */ +/* Performance Counter 4 */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNTER4 0x0000000110150000 +#define SH_PERFORMANCE_COUNTER4_MASK 0x00000000ffffffff +#define SH_PERFORMANCE_COUNTER4_INIT 0x0000000000000000 + +/* SH_PERFORMANCE_COUNTER4_COUNT */ +/* Description: Counter 4 */ +#define SH_PERFORMANCE_COUNTER4_COUNT_SHFT 0 +#define SH_PERFORMANCE_COUNTER4_COUNT_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register 
"SH_PERFORMANCE_COUNTER5" */ +/* Performance Counter 5 */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNTER5 0x0000000110160000 +#define SH_PERFORMANCE_COUNTER5_MASK 0x00000000ffffffff +#define SH_PERFORMANCE_COUNTER5_INIT 0x0000000000000000 + +/* SH_PERFORMANCE_COUNTER5_COUNT */ +/* Description: Counter 5 */ +#define SH_PERFORMANCE_COUNTER5_COUNT_SHFT 0 +#define SH_PERFORMANCE_COUNTER5_COUNT_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER6" */ +/* Performance Counter 6 */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNTER6 0x0000000110170000 +#define SH_PERFORMANCE_COUNTER6_MASK 0x00000000ffffffff +#define SH_PERFORMANCE_COUNTER6_INIT 0x0000000000000000 + +/* SH_PERFORMANCE_COUNTER6_COUNT */ +/* Description: Counter 6 */ +#define SH_PERFORMANCE_COUNTER6_COUNT_SHFT 0 +#define SH_PERFORMANCE_COUNTER6_COUNT_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER7" */ +/* Performance Counter 7 */ +/* ==================================================================== */ + +#define SH_PERFORMANCE_COUNTER7 0x0000000110180000 +#define SH_PERFORMANCE_COUNTER7_MASK 0x00000000ffffffff +#define SH_PERFORMANCE_COUNTER7_INIT 0x0000000000000000 + +/* SH_PERFORMANCE_COUNTER7_COUNT */ +/* Description: Counter 7 */ +#define SH_PERFORMANCE_COUNTER7_COUNT_SHFT 0 +#define SH_PERFORMANCE_COUNTER7_COUNT_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_PROFILE_COUNTER" */ +/* Profile Counter */ +/* ==================================================================== */ + +#define SH_PROFILE_COUNTER 0x0000000110190000 +#define SH_PROFILE_COUNTER_MASK 0x00000000000000ff +#define SH_PROFILE_COUNTER_INIT 0x0000000000000000 + +/* SH_PROFILE_COUNTER_COUNTER */ +/* Description: Counter Value */ +#define SH_PROFILE_COUNTER_COUNTER_SHFT 0 +#define SH_PROFILE_COUNTER_COUNTER_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_PROFILE_PEAK" */ +/* Profile Peak Counter */ +/* ==================================================================== */ + +#define SH_PROFILE_PEAK 0x0000000110190080 +#define SH_PROFILE_PEAK_MASK 0x00000000000000ff +#define SH_PROFILE_PEAK_INIT 0x0000000000000000 + +/* SH_PROFILE_PEAK_COUNTER */ +/* Description: Counter Value */ +#define SH_PROFILE_PEAK_COUNTER_SHFT 0 +#define SH_PROFILE_PEAK_COUNTER_MASK 0x00000000000000ff + +/* ==================================================================== */ +/* Register "SH_PTC_0" */ +/* Puge Translation Cache Message Configuration Information */ +/* ==================================================================== */ + +#define SH_PTC_0 0x00000001101a0000 +#define SH_PTC_0_MASK 0x80000000fffffffd +#define SH_PTC_0_INIT 0x0000000000000000 + +/* SH_PTC_0_A */ +/* Description: Type */ +#define SH_PTC_0_A_SHFT 0 +#define SH_PTC_0_A_MASK 0x0000000000000001 + +/* SH_PTC_0_PS */ +/* Description: Page Size */ +#define SH_PTC_0_PS_SHFT 2 +#define SH_PTC_0_PS_MASK 0x00000000000000fc + +/* SH_PTC_0_RID */ +/* Description: Region ID */ +#define SH_PTC_0_RID_SHFT 8 +#define SH_PTC_0_RID_MASK 0x00000000ffffff00 + +/* SH_PTC_0_START */ +/* Description: Start */ +#define SH_PTC_0_START_SHFT 63 +#define SH_PTC_0_START_MASK 0x8000000000000000 + 
+/* ==================================================================== */
+/* Register "SH_PTC_1" */
+/* Purge Translation Cache Message Configuration Information */
+/* ==================================================================== */
+
+#define SH_PTC_1 0x00000001101a0080
+#define SH_PTC_1_MASK 0x9ffffffffffff000
+#define SH_PTC_1_INIT 0x0000000000000000
+
+/* SH_PTC_1_VPN */
+/* Description: Virtual page number */
+#define SH_PTC_1_VPN_SHFT 12
+#define SH_PTC_1_VPN_MASK 0x1ffffffffffff000
+
+/* SH_PTC_1_START */
+/* Description: PTC_1 Start */
+#define SH_PTC_1_START_SHFT 63
+#define SH_PTC_1_START_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PTC_PARMS" */
+/* PTC Time-out parameters */
+/* ==================================================================== */
+
+#define SH_PTC_PARMS 0x00000001101a0100
+#define SH_PTC_PARMS_MASK 0x0000000fffffffff
+#define SH_PTC_PARMS_INIT 0x00000007ffffffff
+
+/* SH_PTC_PARMS_PTC_TO_WRAP */
+/* Description: PTC time-out period */
+#define SH_PTC_PARMS_PTC_TO_WRAP_SHFT 0
+#define SH_PTC_PARMS_PTC_TO_WRAP_MASK 0x0000000000ffffff
+
+/* SH_PTC_PARMS_PTC_TO_VAL */
+/* Description: PTC time-out valid */
+#define SH_PTC_PARMS_PTC_TO_VAL_SHFT 24
+#define SH_PTC_PARMS_PTC_TO_VAL_MASK 0x0000000fff000000
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPA" */
+/* RTC Compare Value for Processor A */
+/* ==================================================================== */
+
+#define SH_INT_CMPA 0x00000001101b0000
+#define SH_INT_CMPA_MASK 0x007fffffffffffff
+#define SH_INT_CMPA_INIT 0x0000000000000000
+
+/* SH_INT_CMPA_REAL_TIME_CMPA */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPA_REAL_TIME_CMPA_SHFT 0
+#define SH_INT_CMPA_REAL_TIME_CMPA_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPB" */
+/* RTC Compare Value for Processor B */
+/* ==================================================================== */
+
+#define SH_INT_CMPB 0x00000001101b0080
+#define SH_INT_CMPB_MASK 0x007fffffffffffff
+#define SH_INT_CMPB_INIT 0x0000000000000000
+
+/* SH_INT_CMPB_REAL_TIME_CMPB */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPB_REAL_TIME_CMPB_SHFT 0
+#define SH_INT_CMPB_REAL_TIME_CMPB_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPC" */
+/* RTC Compare Value for Processor C */
+/* ==================================================================== */
+
+#define SH_INT_CMPC 0x00000001101b0100
+#define SH_INT_CMPC_MASK 0x007fffffffffffff
+#define SH_INT_CMPC_INIT 0x0000000000000000
+
+/* SH_INT_CMPC_REAL_TIME_CMPC */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPC_REAL_TIME_CMPC_SHFT 0
+#define SH_INT_CMPC_REAL_TIME_CMPC_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPD" */
+/* RTC Compare Value for Processor D */
+/* ==================================================================== */
+
+#define SH_INT_CMPD 0x00000001101b0180
+#define SH_INT_CMPD_MASK 0x007fffffffffffff
+#define SH_INT_CMPD_INIT 0x0000000000000000
+
+/* SH_INT_CMPD_REAL_TIME_CMPD */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPD_REAL_TIME_CMPD_SHFT 0
+#define SH_INT_CMPD_REAL_TIME_CMPD_MASK 0x007fffffffffffff
+
+/*
==================================================================== */ +/* Register "SH_INT_PROF" */ +/* Profile Compare Registers */ +/* ==================================================================== */ + +#define SH_INT_PROF 0x00000001101b0200 +#define SH_INT_PROF_MASK 0x00000000ffffffff +#define SH_INT_PROF_INIT 0x0000000000000000 + +/* SH_INT_PROF_PROFILE_COMPARE */ +/* Description: Profile Compare */ +#define SH_INT_PROF_PROFILE_COMPARE_SHFT 0 +#define SH_INT_PROF_PROFILE_COMPARE_MASK 0x00000000ffffffff + +/* ==================================================================== */ +/* Register "SH_RTC" */ +/* Real-time Clock */ +/* ==================================================================== */ + +#define SH_RTC 0x00000001101c0000 +#define SH_RTC_MASK 0x007fffffffffffff +#define SH_RTC_INIT 0x0000000000000000 + +/* SH_RTC_REAL_TIME_CLOCK */ +/* Description: Real-time Clock */ +#define SH_RTC_REAL_TIME_CLOCK_SHFT 0 +#define SH_RTC_REAL_TIME_CLOCK_MASK 0x007fffffffffffff + +/* ==================================================================== */ +/* Register "SH_SCRATCH0" */ +/* Scratch Register 0 */ +/* ==================================================================== */ + +#define SH_SCRATCH0 0x00000001101d0000 +#define SH_SCRATCH0_MASK 0xffffffffffffffff +#define SH_SCRATCH0_INIT 0x0000000000000000 + +/* SH_SCRATCH0_SCRATCH0 */ +/* Description: Scratch register 0 */ +#define SH_SCRATCH0_SCRATCH0_SHFT 0 +#define SH_SCRATCH0_SCRATCH0_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_SCRATCH0_ALIAS" */ +/* Scratch Register 0 Alias Address */ +/* ==================================================================== */ + +#define SH_SCRATCH0_ALIAS 0x00000001101d0008 + +/* ==================================================================== */ +/* Register "SH_SCRATCH1" */ +/* Scratch Register 1 */ +/* ==================================================================== */ + +#define SH_SCRATCH1 0x00000001101d0080 +#define SH_SCRATCH1_MASK 0xffffffffffffffff +#define SH_SCRATCH1_INIT 0x0000000000000000 + +/* SH_SCRATCH1_SCRATCH1 */ +/* Description: Scratch register 1 */ +#define SH_SCRATCH1_SCRATCH1_SHFT 0 +#define SH_SCRATCH1_SCRATCH1_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_SCRATCH1_ALIAS" */ +/* Scratch Register 1 Alias Address */ +/* ==================================================================== */ + +#define SH_SCRATCH1_ALIAS 0x00000001101d0088 + +/* ==================================================================== */ +/* Register "SH_SCRATCH2" */ +/* Scratch Register 2 */ +/* ==================================================================== */ + +#define SH_SCRATCH2 0x00000001101d0100 +#define SH_SCRATCH2_MASK 0xffffffffffffffff +#define SH_SCRATCH2_INIT 0x0000000000000000 + +/* SH_SCRATCH2_SCRATCH2 */ +/* Description: Scratch register 2 */ +#define SH_SCRATCH2_SCRATCH2_SHFT 0 +#define SH_SCRATCH2_SCRATCH2_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_SCRATCH2_ALIAS" */ +/* Scratch Register 2 Alias Address */ +/* ==================================================================== */ + +#define SH_SCRATCH2_ALIAS 0x00000001101d0108 + +/* ==================================================================== */ +/* Register "SH_SCRATCH3" */ +/* Scratch Register 3 */ +/* ==================================================================== */ + 
+#define SH_SCRATCH3 0x00000001101d0180 +#define SH_SCRATCH3_MASK 0x0000000000000001 +#define SH_SCRATCH3_INIT 0x0000000000000000 + +/* SH_SCRATCH3_SCRATCH3 */ +/* Description: Scratch register 3 */ +#define SH_SCRATCH3_SCRATCH3_SHFT 0 +#define SH_SCRATCH3_SCRATCH3_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_SCRATCH3_ALIAS" */ +/* Scratch Register 3 Alias Address */ +/* ==================================================================== */ + +#define SH_SCRATCH3_ALIAS 0x00000001101d0188 + +/* ==================================================================== */ +/* Register "SH_SCRATCH4" */ +/* Scratch Register 4 */ +/* ==================================================================== */ + +#define SH_SCRATCH4 0x00000001101d0200 +#define SH_SCRATCH4_MASK 0x0000000000000001 +#define SH_SCRATCH4_INIT 0x0000000000000000 + +/* SH_SCRATCH4_SCRATCH4 */ +/* Description: Scratch register 4 */ +#define SH_SCRATCH4_SCRATCH4_SHFT 0 +#define SH_SCRATCH4_SCRATCH4_MASK 0x0000000000000001 + +/* ==================================================================== */ +/* Register "SH_SCRATCH4_ALIAS" */ +/* Scratch Register 4 Alias Address */ +/* ==================================================================== */ + +#define SH_SCRATCH4_ALIAS 0x00000001101d0208 + +/* ==================================================================== */ +/* Register "SH_CRB_MESSAGE_CONTROL" */ +/* Coherent Request Buffer Message Control */ +/* ==================================================================== */ + +#define SH_CRB_MESSAGE_CONTROL 0x0000000120000000 +#define SH_CRB_MESSAGE_CONTROL_MASK 0xffffffff00000fff +#define SH_CRB_MESSAGE_CONTROL_INIT 0x0000000000000006 + +/* SH_CRB_MESSAGE_CONTROL_SYSTEM_COHERENCE_ENABLE */ +/* Description: System Coherence Enabled */ +#define SH_CRB_MESSAGE_CONTROL_SYSTEM_COHERENCE_ENABLE_SHFT 0 +#define SH_CRB_MESSAGE_CONTROL_SYSTEM_COHERENCE_ENABLE_MASK 0x0000000000000001 + +/* SH_CRB_MESSAGE_CONTROL_LOCAL_SPECULATIVE_MESSAGE_ENABLE */ +/* Description: Speculative Read Requests to Local Memory Enabled */ +#define SH_CRB_MESSAGE_CONTROL_LOCAL_SPECULATIVE_MESSAGE_ENABLE_SHFT 1 +#define SH_CRB_MESSAGE_CONTROL_LOCAL_SPECULATIVE_MESSAGE_ENABLE_MASK 0x0000000000000002 + +/* SH_CRB_MESSAGE_CONTROL_REMOTE_SPECULATIVE_MESSAGE_ENABLE */ +/* Description: Speculative Read Requests to Remote Memory Enabled */ +#define SH_CRB_MESSAGE_CONTROL_REMOTE_SPECULATIVE_MESSAGE_ENABLE_SHFT 2 +#define SH_CRB_MESSAGE_CONTROL_REMOTE_SPECULATIVE_MESSAGE_ENABLE_MASK 0x0000000000000004 + +/* SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR */ +/* Description: Define color of message */ +#define SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_SHFT 3 +#define SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_MASK 0x0000000000000008 + +/* SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_ENABLE */ +/* Description: Enable color message processing */ +#define SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_ENABLE_SHFT 4 +#define SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_ENABLE_MASK 0x0000000000000010 + +/* SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_FSB_ENABLE */ +/* Description: Enable FSB RRB Mismatch check */ +#define SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_SHFT 5 +#define SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_MASK 0x0000000000000020 + +/* SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_FSB_ENABLE */ +/* Description: Enable FSB WRB Mismatch check */ +#define SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_SHFT 6 +#define 
SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_MASK 0x0000000000000040
+
+/* SH_CRB_MESSAGE_CONTROL_IRB_ATTRIBUTE_MISMATCH_FSB_ENABLE */
+/* Description: Enable FSB IRB Mismatch check */
+#define SH_CRB_MESSAGE_CONTROL_IRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_SHFT 7
+#define SH_CRB_MESSAGE_CONTROL_IRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_MASK 0x0000000000000080
+
+/* SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_XB_ENABLE */
+/* Description: Enable XB RRB Mismatch check */
+#define SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_XB_ENABLE_SHFT 8
+#define SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_XB_ENABLE_MASK 0x0000000000000100
+
+/* SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_XB_ENABLE */
+/* Description: Enable XB WRB Mismatch check */
+#define SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_XB_ENABLE_SHFT 9
+#define SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_XB_ENABLE_MASK 0x0000000000000200
+
+/* SH_CRB_MESSAGE_CONTROL_SUPPRESS_BOGUS_WRITES */
+/* Description: ignore residual write data */
+#define SH_CRB_MESSAGE_CONTROL_SUPPRESS_BOGUS_WRITES_SHFT 10
+#define SH_CRB_MESSAGE_CONTROL_SUPPRESS_BOGUS_WRITES_MASK 0x0000000000000400
+
+/* SH_CRB_MESSAGE_CONTROL_ENABLE_IVACK_CONSOLIDATION */
+/* Description: enable IVACK reply consolidation */
+#define SH_CRB_MESSAGE_CONTROL_ENABLE_IVACK_CONSOLIDATION_SHFT 11
+#define SH_CRB_MESSAGE_CONTROL_ENABLE_IVACK_CONSOLIDATION_MASK 0x0000000000000800
+
+/* SH_CRB_MESSAGE_CONTROL_IVACK_STALL_COUNT */
+/* Description: IVACK stall counter */
+#define SH_CRB_MESSAGE_CONTROL_IVACK_STALL_COUNT_SHFT 32
+#define SH_CRB_MESSAGE_CONTROL_IVACK_STALL_COUNT_MASK 0x0000ffff00000000
+
+/* SH_CRB_MESSAGE_CONTROL_IVACK_THROTTLE_CONTROL */
+/* Description: IVACK throttling limit/timer control */
+#define SH_CRB_MESSAGE_CONTROL_IVACK_THROTTLE_CONTROL_SHFT 48
+#define SH_CRB_MESSAGE_CONTROL_IVACK_THROTTLE_CONTROL_MASK 0xffff000000000000
+
+/* ==================================================================== */
+/* Register "SH_CRB_NACK_LIMIT" */
+/* CRB Nack Limit */
+/* ==================================================================== */
+
+#define SH_CRB_NACK_LIMIT 0x0000000120000080
+#define SH_CRB_NACK_LIMIT_MASK 0x800000000000ffff
+#define SH_CRB_NACK_LIMIT_INIT 0x0000000000000000
+
+/* SH_CRB_NACK_LIMIT_LIMIT */
+/* Description: Nack Count Limit */
+#define SH_CRB_NACK_LIMIT_LIMIT_SHFT 0
+#define SH_CRB_NACK_LIMIT_LIMIT_MASK 0x0000000000000fff
+
+/* SH_CRB_NACK_LIMIT_PRI_FREQ */
+/* Description: Frequency at which priority count is incremented */
+#define SH_CRB_NACK_LIMIT_PRI_FREQ_SHFT 12
+#define SH_CRB_NACK_LIMIT_PRI_FREQ_MASK 0x000000000000f000
+
+/* SH_CRB_NACK_LIMIT_ENABLE */
+/* Description: Enable NACK limit detection */
+#define SH_CRB_NACK_LIMIT_ENABLE_SHFT 63
+#define SH_CRB_NACK_LIMIT_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_CRB_TIMEOUT_PRESCALE" */
+/* Coherent Request Buffer Timeout Prescale */
+/* ==================================================================== */
+
+#define SH_CRB_TIMEOUT_PRESCALE 0x0000000120000100
+#define SH_CRB_TIMEOUT_PRESCALE_MASK 0x00000000ffffffff
+#define SH_CRB_TIMEOUT_PRESCALE_INIT 0x0000000000000000
+
+/* SH_CRB_TIMEOUT_PRESCALE_SCALING_FACTOR */
+/* Description: CRB Time-out Prescale Factor */
+#define SH_CRB_TIMEOUT_PRESCALE_SCALING_FACTOR_SHFT 0
+#define SH_CRB_TIMEOUT_PRESCALE_SCALING_FACTOR_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_CRB_TIMEOUT_SKID" */
+/* Coherent Request Buffer Timeout Skid Limit */
+/* ==================================================================== */
+
+#define SH_CRB_TIMEOUT_SKID 0x0000000120000180
+#define SH_CRB_TIMEOUT_SKID_MASK 0x800000000000003f
+#define SH_CRB_TIMEOUT_SKID_INIT 0x0000000000000007
+
+/* SH_CRB_TIMEOUT_SKID_SKID */
+/* Description: CRB Time-out Skid */
+#define SH_CRB_TIMEOUT_SKID_SKID_SHFT 0
+#define SH_CRB_TIMEOUT_SKID_SKID_MASK 0x000000000000003f
+
+/* SH_CRB_TIMEOUT_SKID_RESET_SKID_COUNT */
+/* Description: Reset Skid counter */
+#define SH_CRB_TIMEOUT_SKID_RESET_SKID_COUNT_SHFT 63
+#define SH_CRB_TIMEOUT_SKID_RESET_SKID_COUNT_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_0" */
+/* Memory Write Status for CPU 0 */
+/* ==================================================================== */
+
+#define SH_MEMORY_WRITE_STATUS_0 0x0000000120070000
+#define SH_MEMORY_WRITE_STATUS_0_MASK 0x000000000000003f
+#define SH_MEMORY_WRITE_STATUS_0_INIT 0x0000000000000000
+
+/* SH_MEMORY_WRITE_STATUS_0_PENDING_WRITE_COUNT */
+/* Description: Pending Write Count */
+#define SH_MEMORY_WRITE_STATUS_0_PENDING_WRITE_COUNT_SHFT 0
+#define SH_MEMORY_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_1" */
+/* Memory Write Status for CPU 1 */
+/* ==================================================================== */
+
+#define SH_MEMORY_WRITE_STATUS_1 0x0000000120070080
+#define SH_MEMORY_WRITE_STATUS_1_MASK 0x000000000000003f
+#define SH_MEMORY_WRITE_STATUS_1_INIT 0x0000000000000000
+
+/* SH_MEMORY_WRITE_STATUS_1_PENDING_WRITE_COUNT */
+/* Description: Pending Write Count */
+#define SH_MEMORY_WRITE_STATUS_1_PENDING_WRITE_COUNT_SHFT 0
+#define SH_MEMORY_WRITE_STATUS_1_PENDING_WRITE_COUNT_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_0" */
+/* PIO Write Status for CPU 0 */
+/* ==================================================================== */
+
+#define SH_PIO_WRITE_STATUS_0 0x0000000120070200
+#define SH_PIO_WRITE_STATUS_0_MASK 0xbf03ffffffffffff
+#define SH_PIO_WRITE_STATUS_0_INIT 0x8000000000000000
+
+/* SH_PIO_WRITE_STATUS_0_MULTI_WRITE_ERROR */
+/* Description: More than one PIO write error occurred */
+#define SH_PIO_WRITE_STATUS_0_MULTI_WRITE_ERROR_SHFT 0
+#define SH_PIO_WRITE_STATUS_0_MULTI_WRITE_ERROR_MASK 0x0000000000000001
+
+/* SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK */
+/* Description: Deadlock response detected */
+#define SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK_SHFT 1
+#define SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK_MASK 0x0000000000000002
+
+/* SH_PIO_WRITE_STATUS_0_WRITE_ERROR */
+/* Description: Error response detected */
+#define SH_PIO_WRITE_STATUS_0_WRITE_ERROR_SHFT 2
+#define SH_PIO_WRITE_STATUS_0_WRITE_ERROR_MASK 0x0000000000000004
+
+/* SH_PIO_WRITE_STATUS_0_WRITE_ERROR_ADDRESS */
+/* Description: Address associated with error response */
+#define SH_PIO_WRITE_STATUS_0_WRITE_ERROR_ADDRESS_SHFT 3
+#define SH_PIO_WRITE_STATUS_0_WRITE_ERROR_ADDRESS_MASK 0x0003fffffffffff8
+
+/* SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT */
+/* Description: Count of currently pending PIO writes */
+#define SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_SHFT 56
+#define SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK 0x3f00000000000000
+
+/* SH_PIO_WRITE_STATUS_0_WRITES_OK */
+/* Description: No pending writes or errors */
+#define SH_PIO_WRITE_STATUS_0_WRITES_OK_SHFT 63
+#define SH_PIO_WRITE_STATUS_0_WRITES_OK_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_1" */
+/* PIO Write Status for CPU 1 */
+/* ==================================================================== */
+
+#define SH_PIO_WRITE_STATUS_1 0x0000000120070280
+#define SH_PIO_WRITE_STATUS_1_MASK 0xbf03ffffffffffff
+#define SH_PIO_WRITE_STATUS_1_INIT 0x8000000000000000
+
+/* SH_PIO_WRITE_STATUS_1_MULTI_WRITE_ERROR */
+/* Description: More than one PIO write error occurred */
+#define SH_PIO_WRITE_STATUS_1_MULTI_WRITE_ERROR_SHFT 0
+#define SH_PIO_WRITE_STATUS_1_MULTI_WRITE_ERROR_MASK 0x0000000000000001
+
+/* SH_PIO_WRITE_STATUS_1_WRITE_DEADLOCK */
+/* Description: Deadlock response detected */
+#define SH_PIO_WRITE_STATUS_1_WRITE_DEADLOCK_SHFT 1
+#define SH_PIO_WRITE_STATUS_1_WRITE_DEADLOCK_MASK 0x0000000000000002
+
+/* SH_PIO_WRITE_STATUS_1_WRITE_ERROR */
+/* Description: Error response detected */
+#define SH_PIO_WRITE_STATUS_1_WRITE_ERROR_SHFT 2
+#define SH_PIO_WRITE_STATUS_1_WRITE_ERROR_MASK 0x0000000000000004
+
+/* SH_PIO_WRITE_STATUS_1_WRITE_ERROR_ADDRESS */
+/* Description: Address associated with error response */
+#define SH_PIO_WRITE_STATUS_1_WRITE_ERROR_ADDRESS_SHFT 3
+#define SH_PIO_WRITE_STATUS_1_WRITE_ERROR_ADDRESS_MASK 0x0003fffffffffff8
+
+/* SH_PIO_WRITE_STATUS_1_PENDING_WRITE_COUNT */
+/* Description: Count of currently pending PIO writes */
+#define SH_PIO_WRITE_STATUS_1_PENDING_WRITE_COUNT_SHFT 56
+#define SH_PIO_WRITE_STATUS_1_PENDING_WRITE_COUNT_MASK 0x3f00000000000000
+
+/* SH_PIO_WRITE_STATUS_1_WRITES_OK */
+/* Description: No pending writes or errors */
+#define SH_PIO_WRITE_STATUS_1_WRITES_OK_SHFT 63
+#define SH_PIO_WRITE_STATUS_1_WRITES_OK_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_0_ALIAS" */
+/* ==================================================================== */
+
+#define SH_PIO_WRITE_STATUS_0_ALIAS 0x0000000120070208
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_1_ALIAS" */
+/* ==================================================================== */
+
+#define SH_PIO_WRITE_STATUS_1_ALIAS 0x0000000120070288
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_NON_USER_0" */
+/* Memory Write Status for CPU 0. OS access only */
+/* ==================================================================== */
+
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0 0x0000000120070400
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_MASK 0x800000000000003f
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_INIT 0x0000000000000000
+
+/* SH_MEMORY_WRITE_STATUS_NON_USER_0_PENDING_WRITE_COUNT */
+/* Description: Pending Write Count */
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_PENDING_WRITE_COUNT_SHFT 0
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_PENDING_WRITE_COUNT_MASK 0x000000000000003f
+
+/* SH_MEMORY_WRITE_STATUS_NON_USER_0_CLEAR */
+/* Description: Clear pending write count */
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_CLEAR_SHFT 63
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_CLEAR_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_NON_USER_1" */
+/* Memory Write Status for CPU 1.
OS access only */ +/* ==================================================================== */ + +#define SH_MEMORY_WRITE_STATUS_NON_USER_1 0x0000000120070480 +#define SH_MEMORY_WRITE_STATUS_NON_USER_1_MASK 0x800000000000003f +#define SH_MEMORY_WRITE_STATUS_NON_USER_1_INIT 0x0000000000000000 + +/* SH_MEMORY_WRITE_STATUS_NON_USER_1_PENDING_WRITE_COUNT */ +/* Description: Pending Write Count */ +#define SH_MEMORY_WRITE_STATUS_NON_USER_1_PENDING_WRITE_COUNT_SHFT 0 +#define SH_MEMORY_WRITE_STATUS_NON_USER_1_PENDING_WRITE_COUNT_MASK 0x000000000000003f + +/* SH_MEMORY_WRITE_STATUS_NON_USER_1_CLEAR */ +/* Description: Clear pending write count */ +#define SH_MEMORY_WRITE_STATUS_NON_USER_1_CLEAR_SHFT 63 +#define SH_MEMORY_WRITE_STATUS_NON_USER_1_CLEAR_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MMRBIST_ERR" */ +/* Error capture for bist read errors */ +/* ==================================================================== */ + +#define SH_MMRBIST_ERR 0x0000000100000080 +#define SH_MMRBIST_ERR_MASK 0x00000071ffffffff +#define SH_MMRBIST_ERR_INIT 0x0000000000000000 + +/* SH_MMRBIST_ERR_ADDR */ +/* Description: dword address of bist error */ +#define SH_MMRBIST_ERR_ADDR_SHFT 0 +#define SH_MMRBIST_ERR_ADDR_MASK 0x00000001ffffffff + +/* SH_MMRBIST_ERR_DETECTED */ +/* Description: error detected flag */ +#define SH_MMRBIST_ERR_DETECTED_SHFT 36 +#define SH_MMRBIST_ERR_DETECTED_MASK 0x0000001000000000 + +/* SH_MMRBIST_ERR_MULTIPLE_DETECTED */ +/* Description: multiple errors detected flag */ +#define SH_MMRBIST_ERR_MULTIPLE_DETECTED_SHFT 37 +#define SH_MMRBIST_ERR_MULTIPLE_DETECTED_MASK 0x0000002000000000 + +/* SH_MMRBIST_ERR_CANCELLED */ +/* Description: mmr/bist was cancelled */ +#define SH_MMRBIST_ERR_CANCELLED_SHFT 38 +#define SH_MMRBIST_ERR_CANCELLED_MASK 0x0000004000000000 + +/* ==================================================================== */ +/* Register "SH_MISC_ERR_HDR_LOWER" */ +/* Header capture register */ +/* ==================================================================== */ + +#define SH_MISC_ERR_HDR_LOWER 0x0000000100000088 +#define SH_MISC_ERR_HDR_LOWER_MASK 0x93fffffffffffff8 +#define SH_MISC_ERR_HDR_LOWER_INIT 0x0000000000000000 + +/* SH_MISC_ERR_HDR_LOWER_ADDR */ +/* Description: upper bits of reference address */ +#define SH_MISC_ERR_HDR_LOWER_ADDR_SHFT 3 +#define SH_MISC_ERR_HDR_LOWER_ADDR_MASK 0x0000000ffffffff8 + +/* SH_MISC_ERR_HDR_LOWER_CMD */ +/* Description: command of reference */ +#define SH_MISC_ERR_HDR_LOWER_CMD_SHFT 36 +#define SH_MISC_ERR_HDR_LOWER_CMD_MASK 0x00000ff000000000 + +/* SH_MISC_ERR_HDR_LOWER_SRC */ +/* Description: source node of reference */ +#define SH_MISC_ERR_HDR_LOWER_SRC_SHFT 44 +#define SH_MISC_ERR_HDR_LOWER_SRC_MASK 0x03fff00000000000 + +/* SH_MISC_ERR_HDR_LOWER_WRITE */ +/* Description: reference is a write */ +#define SH_MISC_ERR_HDR_LOWER_WRITE_SHFT 60 +#define SH_MISC_ERR_HDR_LOWER_WRITE_MASK 0x1000000000000000 + +/* SH_MISC_ERR_HDR_LOWER_VALID */ +/* Description: set when capture occurs */ +#define SH_MISC_ERR_HDR_LOWER_VALID_SHFT 63 +#define SH_MISC_ERR_HDR_LOWER_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MISC_ERR_HDR_UPPER" */ +/* Error header capture packet and protocol errors */ +/* ==================================================================== */ + +#define SH_MISC_ERR_HDR_UPPER 0x0000000100000090 +#define SH_MISC_ERR_HDR_UPPER_MASK 0x000000001ff000ff +#define 
SH_MISC_ERR_HDR_UPPER_INIT 0x0000000000000000 + +/* SH_MISC_ERR_HDR_UPPER_DIR_PROTOCOL */ +/* Description: indicates a directory protocol error captured */ +#define SH_MISC_ERR_HDR_UPPER_DIR_PROTOCOL_SHFT 0 +#define SH_MISC_ERR_HDR_UPPER_DIR_PROTOCOL_MASK 0x0000000000000001 + +/* SH_MISC_ERR_HDR_UPPER_ILLEGAL_CMD */ +/* Description: indicates an illegal command error captured */ +#define SH_MISC_ERR_HDR_UPPER_ILLEGAL_CMD_SHFT 1 +#define SH_MISC_ERR_HDR_UPPER_ILLEGAL_CMD_MASK 0x0000000000000002 + +/* SH_MISC_ERR_HDR_UPPER_NONEXIST_ADDR */ +/* Description: indicates a non-existent memory error captured */ +#define SH_MISC_ERR_HDR_UPPER_NONEXIST_ADDR_SHFT 2 +#define SH_MISC_ERR_HDR_UPPER_NONEXIST_ADDR_MASK 0x0000000000000004 + +/* SH_MISC_ERR_HDR_UPPER_RMW_UC */ +/* Description: indicates an uncorrectable store rmw */ +#define SH_MISC_ERR_HDR_UPPER_RMW_UC_SHFT 3 +#define SH_MISC_ERR_HDR_UPPER_RMW_UC_MASK 0x0000000000000008 + +/* SH_MISC_ERR_HDR_UPPER_RMW_COR */ +/* Description: indicates a correctable store rmw */ +#define SH_MISC_ERR_HDR_UPPER_RMW_COR_SHFT 4 +#define SH_MISC_ERR_HDR_UPPER_RMW_COR_MASK 0x0000000000000010 + +/* SH_MISC_ERR_HDR_UPPER_DIR_ACC */ +/* Description: indicates a data request to directory memory error */ +/* captured */ +#define SH_MISC_ERR_HDR_UPPER_DIR_ACC_SHFT 5 +#define SH_MISC_ERR_HDR_UPPER_DIR_ACC_MASK 0x0000000000000020 + +/* SH_MISC_ERR_HDR_UPPER_PI_PKT_SIZE */ +/* Description: indicates a pkt size error from pi */ +#define SH_MISC_ERR_HDR_UPPER_PI_PKT_SIZE_SHFT 6 +#define SH_MISC_ERR_HDR_UPPER_PI_PKT_SIZE_MASK 0x0000000000000040 + +/* SH_MISC_ERR_HDR_UPPER_XN_PKT_SIZE */ +/* Description: indicates a pkt size error from xn */ +#define SH_MISC_ERR_HDR_UPPER_XN_PKT_SIZE_SHFT 7 +#define SH_MISC_ERR_HDR_UPPER_XN_PKT_SIZE_MASK 0x0000000000000080 + +/* SH_MISC_ERR_HDR_UPPER_ECHO */ +#define SH_MISC_ERR_HDR_UPPER_ECHO_SHFT 20 +#define SH_MISC_ERR_HDR_UPPER_ECHO_MASK 0x000000001ff00000 + +/* ==================================================================== */ +/* Register "SH_DIR_UC_ERR_HDR_LOWER" */ +/* Header capture register */ +/* ==================================================================== */ + +#define SH_DIR_UC_ERR_HDR_LOWER 0x0000000100000098 +#define SH_DIR_UC_ERR_HDR_LOWER_MASK 0x93fffffffffffff8 +#define SH_DIR_UC_ERR_HDR_LOWER_INIT 0x0000000000000000 + +/* SH_DIR_UC_ERR_HDR_LOWER_ADDR */ +/* Description: upper bits of reference address */ +#define SH_DIR_UC_ERR_HDR_LOWER_ADDR_SHFT 3 +#define SH_DIR_UC_ERR_HDR_LOWER_ADDR_MASK 0x0000000ffffffff8 + +/* SH_DIR_UC_ERR_HDR_LOWER_CMD */ +/* Description: command of reference */ +#define SH_DIR_UC_ERR_HDR_LOWER_CMD_SHFT 36 +#define SH_DIR_UC_ERR_HDR_LOWER_CMD_MASK 0x00000ff000000000 + +/* SH_DIR_UC_ERR_HDR_LOWER_SRC */ +/* Description: source node of reference */ +#define SH_DIR_UC_ERR_HDR_LOWER_SRC_SHFT 44 +#define SH_DIR_UC_ERR_HDR_LOWER_SRC_MASK 0x03fff00000000000 + +/* SH_DIR_UC_ERR_HDR_LOWER_WRITE */ +/* Description: reference is a write */ +#define SH_DIR_UC_ERR_HDR_LOWER_WRITE_SHFT 60 +#define SH_DIR_UC_ERR_HDR_LOWER_WRITE_MASK 0x1000000000000000 + +/* SH_DIR_UC_ERR_HDR_LOWER_VALID */ +/* Description: set when capture occurs */ +#define SH_DIR_UC_ERR_HDR_LOWER_VALID_SHFT 63 +#define SH_DIR_UC_ERR_HDR_LOWER_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_DIR_UC_ERR_HDR_UPPER" */ +/* Error header capture packet and protocol errors */ +/* ==================================================================== */ + +#define 
SH_DIR_UC_ERR_HDR_UPPER 0x00000001000000a0 +#define SH_DIR_UC_ERR_HDR_UPPER_MASK 0x000000001ff00008 +#define SH_DIR_UC_ERR_HDR_UPPER_INIT 0x0000000000000000 + +/* SH_DIR_UC_ERR_HDR_UPPER_DIR_UC */ +/* Description: indicates uncorrectable directory error captured */ +#define SH_DIR_UC_ERR_HDR_UPPER_DIR_UC_SHFT 3 +#define SH_DIR_UC_ERR_HDR_UPPER_DIR_UC_MASK 0x0000000000000008 + +/* SH_DIR_UC_ERR_HDR_UPPER_ECHO */ +#define SH_DIR_UC_ERR_HDR_UPPER_ECHO_SHFT 20 +#define SH_DIR_UC_ERR_HDR_UPPER_ECHO_MASK 0x000000001ff00000 + +/* ==================================================================== */ +/* Register "SH_DIR_COR_ERR_HDR_LOWER" */ +/* Header capture register */ +/* ==================================================================== */ + +#define SH_DIR_COR_ERR_HDR_LOWER 0x00000001000000a8 +#define SH_DIR_COR_ERR_HDR_LOWER_MASK 0x93fffffffffffff8 +#define SH_DIR_COR_ERR_HDR_LOWER_INIT 0x0000000000000000 + +/* SH_DIR_COR_ERR_HDR_LOWER_ADDR */ +/* Description: upper bits of reference address */ +#define SH_DIR_COR_ERR_HDR_LOWER_ADDR_SHFT 3 +#define SH_DIR_COR_ERR_HDR_LOWER_ADDR_MASK 0x0000000ffffffff8 + +/* SH_DIR_COR_ERR_HDR_LOWER_CMD */ +/* Description: command of reference */ +#define SH_DIR_COR_ERR_HDR_LOWER_CMD_SHFT 36 +#define SH_DIR_COR_ERR_HDR_LOWER_CMD_MASK 0x00000ff000000000 + +/* SH_DIR_COR_ERR_HDR_LOWER_SRC */ +/* Description: source node of reference */ +#define SH_DIR_COR_ERR_HDR_LOWER_SRC_SHFT 44 +#define SH_DIR_COR_ERR_HDR_LOWER_SRC_MASK 0x03fff00000000000 + +/* SH_DIR_COR_ERR_HDR_LOWER_WRITE */ +/* Description: reference is a write */ +#define SH_DIR_COR_ERR_HDR_LOWER_WRITE_SHFT 60 +#define SH_DIR_COR_ERR_HDR_LOWER_WRITE_MASK 0x1000000000000000 + +/* SH_DIR_COR_ERR_HDR_LOWER_VALID */ +/* Description: set when capture occurs */ +#define SH_DIR_COR_ERR_HDR_LOWER_VALID_SHFT 63 +#define SH_DIR_COR_ERR_HDR_LOWER_VALID_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_DIR_COR_ERR_HDR_UPPER" */ +/* Error header capture packet and protocol errors */ +/* ==================================================================== */ + +#define SH_DIR_COR_ERR_HDR_UPPER 0x00000001000000b0 +#define SH_DIR_COR_ERR_HDR_UPPER_MASK 0x000000001ff00100 +#define SH_DIR_COR_ERR_HDR_UPPER_INIT 0x0000000000000000 + +/* SH_DIR_COR_ERR_HDR_UPPER_DIR_COR */ +/* Description: indicates correctable directory error captured */ +#define SH_DIR_COR_ERR_HDR_UPPER_DIR_COR_SHFT 8 +#define SH_DIR_COR_ERR_HDR_UPPER_DIR_COR_MASK 0x0000000000000100 + +/* SH_DIR_COR_ERR_HDR_UPPER_ECHO */ +#define SH_DIR_COR_ERR_HDR_UPPER_ECHO_SHFT 20 +#define SH_DIR_COR_ERR_HDR_UPPER_ECHO_MASK 0x000000001ff00000 + +/* ==================================================================== */ +/* Register "SH_MEM_ERROR_SUMMARY" */ +/* Memory error flags */ +/* ==================================================================== */ + +#define SH_MEM_ERROR_SUMMARY 0x00000001000000b8 +#define SH_MEM_ERROR_SUMMARY_MASK 0x00000007f77777ff +#define SH_MEM_ERROR_SUMMARY_INIT 0x0000000000000000 + +/* SH_MEM_ERROR_SUMMARY_ILLEGAL_CMD */ +/* Description: illegal command error */ +#define SH_MEM_ERROR_SUMMARY_ILLEGAL_CMD_SHFT 0 +#define SH_MEM_ERROR_SUMMARY_ILLEGAL_CMD_MASK 0x0000000000000001 + +/* SH_MEM_ERROR_SUMMARY_NONEXIST_ADDR */ +/* Description: non-existent memory error */ +#define SH_MEM_ERROR_SUMMARY_NONEXIST_ADDR_SHFT 1 +#define SH_MEM_ERROR_SUMMARY_NONEXIST_ADDR_MASK 0x0000000000000002 + +/* SH_MEM_ERROR_SUMMARY_DQLP_DIR_PERR */ +/* Description: directory protocol 
error in dqlp */ +#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_PERR_SHFT 2 +#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_PERR_MASK 0x0000000000000004 + +/* SH_MEM_ERROR_SUMMARY_DQRP_DIR_PERR */ +/* Description: directory protocol error in dqrp */ +#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_PERR_SHFT 3 +#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_PERR_MASK 0x0000000000000008 + +/* SH_MEM_ERROR_SUMMARY_DQLP_DIR_UC */ +/* Description: uncorrectable directory error in dqlp */ +#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_UC_SHFT 4 +#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_UC_MASK 0x0000000000000010 + +/* SH_MEM_ERROR_SUMMARY_DQLP_DIR_COR */ +/* Description: correctable directory error in dqlp */ +#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_COR_SHFT 5 +#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_COR_MASK 0x0000000000000020 + +/* SH_MEM_ERROR_SUMMARY_DQRP_DIR_UC */ +/* Description: uncorrectable directory error in dqrp */ +#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_UC_SHFT 6 +#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_UC_MASK 0x0000000000000040 + +/* SH_MEM_ERROR_SUMMARY_DQRP_DIR_COR */ +/* Description: correctable directory error in dqrp */ +#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_COR_SHFT 7 +#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_COR_MASK 0x0000000000000080 + +/* SH_MEM_ERROR_SUMMARY_ACX_INT_HW */ +/* Description: hardware interrupt from acx */ +#define SH_MEM_ERROR_SUMMARY_ACX_INT_HW_SHFT 8 +#define SH_MEM_ERROR_SUMMARY_ACX_INT_HW_MASK 0x0000000000000100 + +/* SH_MEM_ERROR_SUMMARY_ACY_INT_HW */ +/* Description: hardware interrupt from acy */ +#define SH_MEM_ERROR_SUMMARY_ACY_INT_HW_SHFT 9 +#define SH_MEM_ERROR_SUMMARY_ACY_INT_HW_MASK 0x0000000000000200 + +/* SH_MEM_ERROR_SUMMARY_DIR_ACC */ +/* Description: directory memory access error */ +#define SH_MEM_ERROR_SUMMARY_DIR_ACC_SHFT 10 +#define SH_MEM_ERROR_SUMMARY_DIR_ACC_MASK 0x0000000000000400 + +/* SH_MEM_ERROR_SUMMARY_DQLP_INT_UC */ +/* Description: uncorrectable interrupt from dqlp */ +#define SH_MEM_ERROR_SUMMARY_DQLP_INT_UC_SHFT 12 +#define SH_MEM_ERROR_SUMMARY_DQLP_INT_UC_MASK 0x0000000000001000 + +/* SH_MEM_ERROR_SUMMARY_DQLP_INT_COR */ +/* Description: correctable interrupt from dqlp */ +#define SH_MEM_ERROR_SUMMARY_DQLP_INT_COR_SHFT 13 +#define SH_MEM_ERROR_SUMMARY_DQLP_INT_COR_MASK 0x0000000000002000 + +/* SH_MEM_ERROR_SUMMARY_DQLP_INT_HW */ +/* Description: hardware interrupt from dqlp */ +#define SH_MEM_ERROR_SUMMARY_DQLP_INT_HW_SHFT 14 +#define SH_MEM_ERROR_SUMMARY_DQLP_INT_HW_MASK 0x0000000000004000 + +/* SH_MEM_ERROR_SUMMARY_DQLS_INT_UC */ +/* Description: uncorrectable interrupt from dqls */ +#define SH_MEM_ERROR_SUMMARY_DQLS_INT_UC_SHFT 16 +#define SH_MEM_ERROR_SUMMARY_DQLS_INT_UC_MASK 0x0000000000010000 + +/* SH_MEM_ERROR_SUMMARY_DQLS_INT_COR */ +/* Description: correctable interrupt from dqls */ +#define SH_MEM_ERROR_SUMMARY_DQLS_INT_COR_SHFT 17 +#define SH_MEM_ERROR_SUMMARY_DQLS_INT_COR_MASK 0x0000000000020000 + +/* SH_MEM_ERROR_SUMMARY_DQLS_INT_HW */ +/* Description: hardware interrupt from dqls */ +#define SH_MEM_ERROR_SUMMARY_DQLS_INT_HW_SHFT 18 +#define SH_MEM_ERROR_SUMMARY_DQLS_INT_HW_MASK 0x0000000000040000 + +/* SH_MEM_ERROR_SUMMARY_DQRP_INT_UC */ +/* Description: uncorrectable interrupt from dqrp */ +#define SH_MEM_ERROR_SUMMARY_DQRP_INT_UC_SHFT 20 +#define SH_MEM_ERROR_SUMMARY_DQRP_INT_UC_MASK 0x0000000000100000 + +/* SH_MEM_ERROR_SUMMARY_DQRP_INT_COR */ +/* Description: correctable interrupt from dqrp */ +#define SH_MEM_ERROR_SUMMARY_DQRP_INT_COR_SHFT 21 +#define SH_MEM_ERROR_SUMMARY_DQRP_INT_COR_MASK 0x0000000000200000 + +/* SH_MEM_ERROR_SUMMARY_DQRP_INT_HW */ +/* 
Description: hardware interrupt from dqrp */ +#define SH_MEM_ERROR_SUMMARY_DQRP_INT_HW_SHFT 22 +#define SH_MEM_ERROR_SUMMARY_DQRP_INT_HW_MASK 0x0000000000400000 + +/* SH_MEM_ERROR_SUMMARY_DQRS_INT_UC */ +/* Description: uncorrectable interrupt from dqrs */ +#define SH_MEM_ERROR_SUMMARY_DQRS_INT_UC_SHFT 24 +#define SH_MEM_ERROR_SUMMARY_DQRS_INT_UC_MASK 0x0000000001000000 + +/* SH_MEM_ERROR_SUMMARY_DQRS_INT_COR */ +/* Description: correctable interrupt from dqrs */ +#define SH_MEM_ERROR_SUMMARY_DQRS_INT_COR_SHFT 25 +#define SH_MEM_ERROR_SUMMARY_DQRS_INT_COR_MASK 0x0000000002000000 + +/* SH_MEM_ERROR_SUMMARY_DQRS_INT_HW */ +/* Description: hardware interrupt from dqrs */ +#define SH_MEM_ERROR_SUMMARY_DQRS_INT_HW_SHFT 26 +#define SH_MEM_ERROR_SUMMARY_DQRS_INT_HW_MASK 0x0000000004000000 + +/* SH_MEM_ERROR_SUMMARY_PI_REPLY_OVERFLOW */ +/* Description: too many reply packets came from pi */ +#define SH_MEM_ERROR_SUMMARY_PI_REPLY_OVERFLOW_SHFT 28 +#define SH_MEM_ERROR_SUMMARY_PI_REPLY_OVERFLOW_MASK 0x0000000010000000 + +/* SH_MEM_ERROR_SUMMARY_XN_REPLY_OVERFLOW */ +/* Description: too many reply packets came from xn */ +#define SH_MEM_ERROR_SUMMARY_XN_REPLY_OVERFLOW_SHFT 29 +#define SH_MEM_ERROR_SUMMARY_XN_REPLY_OVERFLOW_MASK 0x0000000020000000 + +/* SH_MEM_ERROR_SUMMARY_PI_REQUEST_OVERFLOW */ +/* Description: too many request packets came from pi */ +#define SH_MEM_ERROR_SUMMARY_PI_REQUEST_OVERFLOW_SHFT 30 +#define SH_MEM_ERROR_SUMMARY_PI_REQUEST_OVERFLOW_MASK 0x0000000040000000 + +/* SH_MEM_ERROR_SUMMARY_XN_REQUEST_OVERFLOW */ +/* Description: too many request packets came from xn */ +#define SH_MEM_ERROR_SUMMARY_XN_REQUEST_OVERFLOW_SHFT 31 +#define SH_MEM_ERROR_SUMMARY_XN_REQUEST_OVERFLOW_MASK 0x0000000080000000 + +/* SH_MEM_ERROR_SUMMARY_RED_BLACK_ERR_TIMEOUT */ +/* Description: red black scheme did not clean up soon enough */ +#define SH_MEM_ERROR_SUMMARY_RED_BLACK_ERR_TIMEOUT_SHFT 32 +#define SH_MEM_ERROR_SUMMARY_RED_BLACK_ERR_TIMEOUT_MASK 0x0000000100000000 + +/* SH_MEM_ERROR_SUMMARY_PI_PKT_SIZE */ +/* Description: received data bearing packet from pi with wrong siz */ +#define SH_MEM_ERROR_SUMMARY_PI_PKT_SIZE_SHFT 33 +#define SH_MEM_ERROR_SUMMARY_PI_PKT_SIZE_MASK 0x0000000200000000 + +/* SH_MEM_ERROR_SUMMARY_XN_PKT_SIZE */ +/* Description: received data bearing packet from xn with wrong siz */ +#define SH_MEM_ERROR_SUMMARY_XN_PKT_SIZE_SHFT 34 +#define SH_MEM_ERROR_SUMMARY_XN_PKT_SIZE_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_MEM_ERROR_SUMMARY_ALIAS" */ +/* Memory error flags clear alias */ +/* ==================================================================== */ + +#define SH_MEM_ERROR_SUMMARY_ALIAS 0x00000001000000c0 + +/* ==================================================================== */ +/* Register "SH_MEM_ERROR_OVERFLOW" */ +/* Memory error flags */ +/* ==================================================================== */ + +#define SH_MEM_ERROR_OVERFLOW 0x00000001000000c8 +#define SH_MEM_ERROR_OVERFLOW_MASK 0x00000007f77777ff +#define SH_MEM_ERROR_OVERFLOW_INIT 0x0000000000000000 + +/* SH_MEM_ERROR_OVERFLOW_ILLEGAL_CMD */ +/* Description: illegal command error */ +#define SH_MEM_ERROR_OVERFLOW_ILLEGAL_CMD_SHFT 0 +#define SH_MEM_ERROR_OVERFLOW_ILLEGAL_CMD_MASK 0x0000000000000001 + +/* SH_MEM_ERROR_OVERFLOW_NONEXIST_ADDR */ +/* Description: non-existent memory error */ +#define SH_MEM_ERROR_OVERFLOW_NONEXIST_ADDR_SHFT 1 +#define SH_MEM_ERROR_OVERFLOW_NONEXIST_ADDR_MASK 0x0000000000000002 + +/* 
SH_MEM_ERROR_OVERFLOW_DQLP_DIR_PERR */ +/* Description: directory protocol error in dqlp */ +#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_PERR_SHFT 2 +#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_PERR_MASK 0x0000000000000004 + +/* SH_MEM_ERROR_OVERFLOW_DQRP_DIR_PERR */ +/* Description: directory protocol error in dqrp */ +#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_PERR_SHFT 3 +#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_PERR_MASK 0x0000000000000008 + +/* SH_MEM_ERROR_OVERFLOW_DQLP_DIR_UC */ +/* Description: uncorrectable directory error in dqlp */ +#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_UC_SHFT 4 +#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_UC_MASK 0x0000000000000010 + +/* SH_MEM_ERROR_OVERFLOW_DQLP_DIR_COR */ +/* Description: correctable directory error in dqlp */ +#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_COR_SHFT 5 +#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_COR_MASK 0x0000000000000020 + +/* SH_MEM_ERROR_OVERFLOW_DQRP_DIR_UC */ +/* Description: uncorrectable directory error in dqrp */ +#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_UC_SHFT 6 +#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_UC_MASK 0x0000000000000040 + +/* SH_MEM_ERROR_OVERFLOW_DQRP_DIR_COR */ +/* Description: correctable directory error in dqrp */ +#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_COR_SHFT 7 +#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_COR_MASK 0x0000000000000080 + +/* SH_MEM_ERROR_OVERFLOW_ACX_INT_HW */ +/* Description: hardware interrupt from acx */ +#define SH_MEM_ERROR_OVERFLOW_ACX_INT_HW_SHFT 8 +#define SH_MEM_ERROR_OVERFLOW_ACX_INT_HW_MASK 0x0000000000000100 + +/* SH_MEM_ERROR_OVERFLOW_ACY_INT_HW */ +/* Description: hardware interrupt from acy */ +#define SH_MEM_ERROR_OVERFLOW_ACY_INT_HW_SHFT 9 +#define SH_MEM_ERROR_OVERFLOW_ACY_INT_HW_MASK 0x0000000000000200 + +/* SH_MEM_ERROR_OVERFLOW_DIR_ACC */ +/* Description: directory memory access error */ +#define SH_MEM_ERROR_OVERFLOW_DIR_ACC_SHFT 10 +#define SH_MEM_ERROR_OVERFLOW_DIR_ACC_MASK 0x0000000000000400 + +/* SH_MEM_ERROR_OVERFLOW_DQLP_INT_UC */ +/* Description: uncorrectable interrupt from dqlp */ +#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_UC_SHFT 12 +#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_UC_MASK 0x0000000000001000 + +/* SH_MEM_ERROR_OVERFLOW_DQLP_INT_COR */ +/* Description: correctable interrupt from dqlp */ +#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_COR_SHFT 13 +#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_COR_MASK 0x0000000000002000 + +/* SH_MEM_ERROR_OVERFLOW_DQLP_INT_HW */ +/* Description: hardware interrupt from dqlp */ +#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_HW_SHFT 14 +#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_HW_MASK 0x0000000000004000 + +/* SH_MEM_ERROR_OVERFLOW_DQLS_INT_UC */ +/* Description: uncorrectable interrupt from dqls */ +#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_UC_SHFT 16 +#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_UC_MASK 0x0000000000010000 + +/* SH_MEM_ERROR_OVERFLOW_DQLS_INT_COR */ +/* Description: correctable interrupt from dqls */ +#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_COR_SHFT 17 +#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_COR_MASK 0x0000000000020000 + +/* SH_MEM_ERROR_OVERFLOW_DQLS_INT_HW */ +/* Description: hardware interrupt from dqls */ +#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_HW_SHFT 18 +#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_HW_MASK 0x0000000000040000 + +/* SH_MEM_ERROR_OVERFLOW_DQRP_INT_UC */ +/* Description: uncorrectable interrupt from dqrp */ +#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_UC_SHFT 20 +#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_UC_MASK 0x0000000000100000 + +/* SH_MEM_ERROR_OVERFLOW_DQRP_INT_COR */ +/* Description: correctable interrupt from dqrp */ +#define 
SH_MEM_ERROR_OVERFLOW_DQRP_INT_COR_SHFT 21 +#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_COR_MASK 0x0000000000200000 + +/* SH_MEM_ERROR_OVERFLOW_DQRP_INT_HW */ +/* Description: hardware interrupt from dqrp */ +#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_HW_SHFT 22 +#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_HW_MASK 0x0000000000400000 + +/* SH_MEM_ERROR_OVERFLOW_DQRS_INT_UC */ +/* Description: uncorrectable interrupt from dqrs */ +#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_UC_SHFT 24 +#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_UC_MASK 0x0000000001000000 + +/* SH_MEM_ERROR_OVERFLOW_DQRS_INT_COR */ +/* Description: correctable interrupt from dqrs */ +#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_COR_SHFT 25 +#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_COR_MASK 0x0000000002000000 + +/* SH_MEM_ERROR_OVERFLOW_DQRS_INT_HW */ +/* Description: hardware interrupt from dqrs */ +#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_HW_SHFT 26 +#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_HW_MASK 0x0000000004000000 + +/* SH_MEM_ERROR_OVERFLOW_PI_REPLY_OVERFLOW */ +/* Description: too many reply packets came from pi */ +#define SH_MEM_ERROR_OVERFLOW_PI_REPLY_OVERFLOW_SHFT 28 +#define SH_MEM_ERROR_OVERFLOW_PI_REPLY_OVERFLOW_MASK 0x0000000010000000 + +/* SH_MEM_ERROR_OVERFLOW_XN_REPLY_OVERFLOW */ +/* Description: too many reply packets came from xn */ +#define SH_MEM_ERROR_OVERFLOW_XN_REPLY_OVERFLOW_SHFT 29 +#define SH_MEM_ERROR_OVERFLOW_XN_REPLY_OVERFLOW_MASK 0x0000000020000000 + +/* SH_MEM_ERROR_OVERFLOW_PI_REQUEST_OVERFLOW */ +/* Description: too many request packets came from pi */ +#define SH_MEM_ERROR_OVERFLOW_PI_REQUEST_OVERFLOW_SHFT 30 +#define SH_MEM_ERROR_OVERFLOW_PI_REQUEST_OVERFLOW_MASK 0x0000000040000000 + +/* SH_MEM_ERROR_OVERFLOW_XN_REQUEST_OVERFLOW */ +/* Description: too many request packets came from xn */ +#define SH_MEM_ERROR_OVERFLOW_XN_REQUEST_OVERFLOW_SHFT 31 +#define SH_MEM_ERROR_OVERFLOW_XN_REQUEST_OVERFLOW_MASK 0x0000000080000000 + +/* SH_MEM_ERROR_OVERFLOW_RED_BLACK_ERR_TIMEOUT */ +/* Description: red black scheme did not clean up soon enough */ +#define SH_MEM_ERROR_OVERFLOW_RED_BLACK_ERR_TIMEOUT_SHFT 32 +#define SH_MEM_ERROR_OVERFLOW_RED_BLACK_ERR_TIMEOUT_MASK 0x0000000100000000 + +/* SH_MEM_ERROR_OVERFLOW_PI_PKT_SIZE */ +/* Description: received data bearing packet from pi with wrong siz */ +#define SH_MEM_ERROR_OVERFLOW_PI_PKT_SIZE_SHFT 33 +#define SH_MEM_ERROR_OVERFLOW_PI_PKT_SIZE_MASK 0x0000000200000000 + +/* SH_MEM_ERROR_OVERFLOW_XN_PKT_SIZE */ +/* Description: received data bearing packet from xn with wrong siz */ +#define SH_MEM_ERROR_OVERFLOW_XN_PKT_SIZE_SHFT 34 +#define SH_MEM_ERROR_OVERFLOW_XN_PKT_SIZE_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_MEM_ERROR_OVERFLOW_ALIAS" */ +/* Memory error flags clear alias */ +/* ==================================================================== */ + +#define SH_MEM_ERROR_OVERFLOW_ALIAS 0x00000001000000d0 + +/* ==================================================================== */ +/* Register "SH_MEM_ERROR_MASK" */ +/* Memory error flags */ +/* ==================================================================== */ + +#define SH_MEM_ERROR_MASK 0x00000001000000d8 +#define SH_MEM_ERROR_MASK_MASK 0x00000007f77777ff +#define SH_MEM_ERROR_MASK_INIT 0x00000007f77773ff + +/* SH_MEM_ERROR_MASK_ILLEGAL_CMD */ +/* Description: illegal command error */ +#define SH_MEM_ERROR_MASK_ILLEGAL_CMD_SHFT 0 +#define SH_MEM_ERROR_MASK_ILLEGAL_CMD_MASK 0x0000000000000001 + +/* SH_MEM_ERROR_MASK_NONEXIST_ADDR */ +/* Description: 
non-existent memory error */ +#define SH_MEM_ERROR_MASK_NONEXIST_ADDR_SHFT 1 +#define SH_MEM_ERROR_MASK_NONEXIST_ADDR_MASK 0x0000000000000002 + +/* SH_MEM_ERROR_MASK_DQLP_DIR_PERR */ +/* Description: directory protocol error in dqlp */ +#define SH_MEM_ERROR_MASK_DQLP_DIR_PERR_SHFT 2 +#define SH_MEM_ERROR_MASK_DQLP_DIR_PERR_MASK 0x0000000000000004 + +/* SH_MEM_ERROR_MASK_DQRP_DIR_PERR */ +/* Description: directory protocol error in dqrp */ +#define SH_MEM_ERROR_MASK_DQRP_DIR_PERR_SHFT 3 +#define SH_MEM_ERROR_MASK_DQRP_DIR_PERR_MASK 0x0000000000000008 + +/* SH_MEM_ERROR_MASK_DQLP_DIR_UC */ +/* Description: uncorrectable directory error in dqlp */ +#define SH_MEM_ERROR_MASK_DQLP_DIR_UC_SHFT 4 +#define SH_MEM_ERROR_MASK_DQLP_DIR_UC_MASK 0x0000000000000010 + +/* SH_MEM_ERROR_MASK_DQLP_DIR_COR */ +/* Description: correctable directory error in dqlp */ +#define SH_MEM_ERROR_MASK_DQLP_DIR_COR_SHFT 5 +#define SH_MEM_ERROR_MASK_DQLP_DIR_COR_MASK 0x0000000000000020 + +/* SH_MEM_ERROR_MASK_DQRP_DIR_UC */ +/* Description: uncorrectable directory error in dqrp */ +#define SH_MEM_ERROR_MASK_DQRP_DIR_UC_SHFT 6 +#define SH_MEM_ERROR_MASK_DQRP_DIR_UC_MASK 0x0000000000000040 + +/* SH_MEM_ERROR_MASK_DQRP_DIR_COR */ +/* Description: correctable directory error in dqrp */ +#define SH_MEM_ERROR_MASK_DQRP_DIR_COR_SHFT 7 +#define SH_MEM_ERROR_MASK_DQRP_DIR_COR_MASK 0x0000000000000080 + +/* SH_MEM_ERROR_MASK_ACX_INT_HW */ +/* Description: hardware interrupt from acx */ +#define SH_MEM_ERROR_MASK_ACX_INT_HW_SHFT 8 +#define SH_MEM_ERROR_MASK_ACX_INT_HW_MASK 0x0000000000000100 + +/* SH_MEM_ERROR_MASK_ACY_INT_HW */ +/* Description: hardware interrupt from acy */ +#define SH_MEM_ERROR_MASK_ACY_INT_HW_SHFT 9 +#define SH_MEM_ERROR_MASK_ACY_INT_HW_MASK 0x0000000000000200 + +/* SH_MEM_ERROR_MASK_DIR_ACC */ +/* Description: directory memory access error */ +#define SH_MEM_ERROR_MASK_DIR_ACC_SHFT 10 +#define SH_MEM_ERROR_MASK_DIR_ACC_MASK 0x0000000000000400 + +/* SH_MEM_ERROR_MASK_DQLP_INT_UC */ +/* Description: uncorrectable interrupt from dqlp */ +#define SH_MEM_ERROR_MASK_DQLP_INT_UC_SHFT 12 +#define SH_MEM_ERROR_MASK_DQLP_INT_UC_MASK 0x0000000000001000 + +/* SH_MEM_ERROR_MASK_DQLP_INT_COR */ +/* Description: correctable interrupt from dqlp */ +#define SH_MEM_ERROR_MASK_DQLP_INT_COR_SHFT 13 +#define SH_MEM_ERROR_MASK_DQLP_INT_COR_MASK 0x0000000000002000 + +/* SH_MEM_ERROR_MASK_DQLP_INT_HW */ +/* Description: hardware interrupt from dqlp */ +#define SH_MEM_ERROR_MASK_DQLP_INT_HW_SHFT 14 +#define SH_MEM_ERROR_MASK_DQLP_INT_HW_MASK 0x0000000000004000 + +/* SH_MEM_ERROR_MASK_DQLS_INT_UC */ +/* Description: uncorrectable interrupt from dqls */ +#define SH_MEM_ERROR_MASK_DQLS_INT_UC_SHFT 16 +#define SH_MEM_ERROR_MASK_DQLS_INT_UC_MASK 0x0000000000010000 + +/* SH_MEM_ERROR_MASK_DQLS_INT_COR */ +/* Description: correctable interrupt from dqls */ +#define SH_MEM_ERROR_MASK_DQLS_INT_COR_SHFT 17 +#define SH_MEM_ERROR_MASK_DQLS_INT_COR_MASK 0x0000000000020000 + +/* SH_MEM_ERROR_MASK_DQLS_INT_HW */ +/* Description: hardware interrupt from dqls */ +#define SH_MEM_ERROR_MASK_DQLS_INT_HW_SHFT 18 +#define SH_MEM_ERROR_MASK_DQLS_INT_HW_MASK 0x0000000000040000 + +/* SH_MEM_ERROR_MASK_DQRP_INT_UC */ +/* Description: uncorrectable interrupt from dqrp */ +#define SH_MEM_ERROR_MASK_DQRP_INT_UC_SHFT 20 +#define SH_MEM_ERROR_MASK_DQRP_INT_UC_MASK 0x0000000000100000 + +/* SH_MEM_ERROR_MASK_DQRP_INT_COR */ +/* Description: correctable interrupt from dqrp */ +#define SH_MEM_ERROR_MASK_DQRP_INT_COR_SHFT 21 +#define SH_MEM_ERROR_MASK_DQRP_INT_COR_MASK 
0x0000000000200000 + +/* SH_MEM_ERROR_MASK_DQRP_INT_HW */ +/* Description: hardware interrupt from dqrp */ +#define SH_MEM_ERROR_MASK_DQRP_INT_HW_SHFT 22 +#define SH_MEM_ERROR_MASK_DQRP_INT_HW_MASK 0x0000000000400000 + +/* SH_MEM_ERROR_MASK_DQRS_INT_UC */ +/* Description: uncorrectable interrupt from dqrs */ +#define SH_MEM_ERROR_MASK_DQRS_INT_UC_SHFT 24 +#define SH_MEM_ERROR_MASK_DQRS_INT_UC_MASK 0x0000000001000000 + +/* SH_MEM_ERROR_MASK_DQRS_INT_COR */ +/* Description: correctable interrupt from dqrs */ +#define SH_MEM_ERROR_MASK_DQRS_INT_COR_SHFT 25 +#define SH_MEM_ERROR_MASK_DQRS_INT_COR_MASK 0x0000000002000000 + +/* SH_MEM_ERROR_MASK_DQRS_INT_HW */ +/* Description: hardware interrupt from dqrs */ +#define SH_MEM_ERROR_MASK_DQRS_INT_HW_SHFT 26 +#define SH_MEM_ERROR_MASK_DQRS_INT_HW_MASK 0x0000000004000000 + +/* SH_MEM_ERROR_MASK_PI_REPLY_OVERFLOW */ +/* Description: too many reply packets came from pi */ +#define SH_MEM_ERROR_MASK_PI_REPLY_OVERFLOW_SHFT 28 +#define SH_MEM_ERROR_MASK_PI_REPLY_OVERFLOW_MASK 0x0000000010000000 + +/* SH_MEM_ERROR_MASK_XN_REPLY_OVERFLOW */ +/* Description: too many reply packets came from xn */ +#define SH_MEM_ERROR_MASK_XN_REPLY_OVERFLOW_SHFT 29 +#define SH_MEM_ERROR_MASK_XN_REPLY_OVERFLOW_MASK 0x0000000020000000 + +/* SH_MEM_ERROR_MASK_PI_REQUEST_OVERFLOW */ +/* Description: too many request packets came from pi */ +#define SH_MEM_ERROR_MASK_PI_REQUEST_OVERFLOW_SHFT 30 +#define SH_MEM_ERROR_MASK_PI_REQUEST_OVERFLOW_MASK 0x0000000040000000 + +/* SH_MEM_ERROR_MASK_XN_REQUEST_OVERFLOW */ +/* Description: too many request packets came from xn */ +#define SH_MEM_ERROR_MASK_XN_REQUEST_OVERFLOW_SHFT 31 +#define SH_MEM_ERROR_MASK_XN_REQUEST_OVERFLOW_MASK 0x0000000080000000 + +/* SH_MEM_ERROR_MASK_RED_BLACK_ERR_TIMEOUT */ +/* Description: red black scheme did not clean up soon enough */ +#define SH_MEM_ERROR_MASK_RED_BLACK_ERR_TIMEOUT_SHFT 32 +#define SH_MEM_ERROR_MASK_RED_BLACK_ERR_TIMEOUT_MASK 0x0000000100000000 + +/* SH_MEM_ERROR_MASK_PI_PKT_SIZE */ +/* Description: received data bearing packet from pi with wrong siz */ +#define SH_MEM_ERROR_MASK_PI_PKT_SIZE_SHFT 33 +#define SH_MEM_ERROR_MASK_PI_PKT_SIZE_MASK 0x0000000200000000 + +/* SH_MEM_ERROR_MASK_XN_PKT_SIZE */ +/* Description: received data bearing packet from xn with wrong siz */ +#define SH_MEM_ERROR_MASK_XN_PKT_SIZE_SHFT 34 +#define SH_MEM_ERROR_MASK_XN_PKT_SIZE_MASK 0x0000000400000000 + +/* ==================================================================== */ +/* Register "SH_X_DIMM_CFG" */ +/* AC Mem Config Registers */ +/* ==================================================================== */ + +#define SH_X_DIMM_CFG 0x0000000100010000 +#define SH_X_DIMM_CFG_MASK 0x0000000f7f7f7f7f +#define SH_X_DIMM_CFG_INIT 0x000000026f4f2f0f + +/* SH_X_DIMM_CFG_DIMM0_SIZE */ +/* Description: DIMM 0 DRAM size */ +#define SH_X_DIMM_CFG_DIMM0_SIZE_SHFT 0 +#define SH_X_DIMM_CFG_DIMM0_SIZE_MASK 0x0000000000000007 + +/* SH_X_DIMM_CFG_DIMM0_2BK */ +/* Description: DIMM 0 has two physical banks */ +#define SH_X_DIMM_CFG_DIMM0_2BK_SHFT 3 +#define SH_X_DIMM_CFG_DIMM0_2BK_MASK 0x0000000000000008 + +/* SH_X_DIMM_CFG_DIMM0_REV */ +/* Description: DIMM 0 physical banks reversed */ +#define SH_X_DIMM_CFG_DIMM0_REV_SHFT 4 +#define SH_X_DIMM_CFG_DIMM0_REV_MASK 0x0000000000000010 + +/* SH_X_DIMM_CFG_DIMM0_CS */ +/* Description: DIMM 0 chip select, addr[35:34] match */ +#define SH_X_DIMM_CFG_DIMM0_CS_SHFT 5 +#define SH_X_DIMM_CFG_DIMM0_CS_MASK 0x0000000000000060 + +/* SH_X_DIMM_CFG_DIMM1_SIZE */ +/* Description: DIMM 1 DRAM 
size */ +#define SH_X_DIMM_CFG_DIMM1_SIZE_SHFT 8 +#define SH_X_DIMM_CFG_DIMM1_SIZE_MASK 0x0000000000000700 + +/* SH_X_DIMM_CFG_DIMM1_2BK */ +/* Description: DIMM 1 has two physical banks */ +#define SH_X_DIMM_CFG_DIMM1_2BK_SHFT 11 +#define SH_X_DIMM_CFG_DIMM1_2BK_MASK 0x0000000000000800 + +/* SH_X_DIMM_CFG_DIMM1_REV */ +/* Description: DIMM 1 physical banks reversed */ +#define SH_X_DIMM_CFG_DIMM1_REV_SHFT 12 +#define SH_X_DIMM_CFG_DIMM1_REV_MASK 0x0000000000001000 + +/* SH_X_DIMM_CFG_DIMM1_CS */ +/* Description: DIMM 1 chip select, addr[35:34] match */ +#define SH_X_DIMM_CFG_DIMM1_CS_SHFT 13 +#define SH_X_DIMM_CFG_DIMM1_CS_MASK 0x0000000000006000 + +/* SH_X_DIMM_CFG_DIMM2_SIZE */ +/* Description: DIMM 2 DRAM size */ +#define SH_X_DIMM_CFG_DIMM2_SIZE_SHFT 16 +#define SH_X_DIMM_CFG_DIMM2_SIZE_MASK 0x0000000000070000 + +/* SH_X_DIMM_CFG_DIMM2_2BK */ +/* Description: DIMM 2 has two physical banks */ +#define SH_X_DIMM_CFG_DIMM2_2BK_SHFT 19 +#define SH_X_DIMM_CFG_DIMM2_2BK_MASK 0x0000000000080000 + +/* SH_X_DIMM_CFG_DIMM2_REV */ +/* Description: DIMM 2 physical banks reversed */ +#define SH_X_DIMM_CFG_DIMM2_REV_SHFT 20 +#define SH_X_DIMM_CFG_DIMM2_REV_MASK 0x0000000000100000 + +/* SH_X_DIMM_CFG_DIMM2_CS */ +/* Description: DIMM 2 chip select, addr[35:34] match */ +#define SH_X_DIMM_CFG_DIMM2_CS_SHFT 21 +#define SH_X_DIMM_CFG_DIMM2_CS_MASK 0x0000000000600000 + +/* SH_X_DIMM_CFG_DIMM3_SIZE */ +/* Description: DIMM 3 DRAM size */ +#define SH_X_DIMM_CFG_DIMM3_SIZE_SHFT 24 +#define SH_X_DIMM_CFG_DIMM3_SIZE_MASK 0x0000000007000000 + +/* SH_X_DIMM_CFG_DIMM3_2BK */ +/* Description: DIMM 3 has two physical banks */ +#define SH_X_DIMM_CFG_DIMM3_2BK_SHFT 27 +#define SH_X_DIMM_CFG_DIMM3_2BK_MASK 0x0000000008000000 + +/* SH_X_DIMM_CFG_DIMM3_REV */ +/* Description: DIMM 3 physical banks reversed */ +#define SH_X_DIMM_CFG_DIMM3_REV_SHFT 28 +#define SH_X_DIMM_CFG_DIMM3_REV_MASK 0x0000000010000000 + +/* SH_X_DIMM_CFG_DIMM3_CS */ +/* Description: DIMM 3 chip select, addr[35:34] match */ +#define SH_X_DIMM_CFG_DIMM3_CS_SHFT 29 +#define SH_X_DIMM_CFG_DIMM3_CS_MASK 0x0000000060000000 + +/* SH_X_DIMM_CFG_FREQ */ +/* Description: DIMM frequency select */ +#define SH_X_DIMM_CFG_FREQ_SHFT 32 +#define SH_X_DIMM_CFG_FREQ_MASK 0x0000000f00000000 + +/* ==================================================================== */ +/* Register "SH_Y_DIMM_CFG" */ +/* AC Mem Config Registers */ +/* ==================================================================== */ + +#define SH_Y_DIMM_CFG 0x0000000100010008 +#define SH_Y_DIMM_CFG_MASK 0x0000000f7f7f7f7f +#define SH_Y_DIMM_CFG_INIT 0x000000026f4f2f0f + +/* SH_Y_DIMM_CFG_DIMM0_SIZE */ +/* Description: DIMM 0 DRAM size */ +#define SH_Y_DIMM_CFG_DIMM0_SIZE_SHFT 0 +#define SH_Y_DIMM_CFG_DIMM0_SIZE_MASK 0x0000000000000007 + +/* SH_Y_DIMM_CFG_DIMM0_2BK */ +/* Description: DIMM 0 has two physical banks */ +#define SH_Y_DIMM_CFG_DIMM0_2BK_SHFT 3 +#define SH_Y_DIMM_CFG_DIMM0_2BK_MASK 0x0000000000000008 + +/* SH_Y_DIMM_CFG_DIMM0_REV */ +/* Description: DIMM 0 physical banks reversed */ +#define SH_Y_DIMM_CFG_DIMM0_REV_SHFT 4 +#define SH_Y_DIMM_CFG_DIMM0_REV_MASK 0x0000000000000010 + +/* SH_Y_DIMM_CFG_DIMM0_CS */ +/* Description: DIMM 0 chip select, addr[35:34] match */ +#define SH_Y_DIMM_CFG_DIMM0_CS_SHFT 5 +#define SH_Y_DIMM_CFG_DIMM0_CS_MASK 0x0000000000000060 + +/* SH_Y_DIMM_CFG_DIMM1_SIZE */ +/* Description: DIMM 1 DRAM size */ +#define SH_Y_DIMM_CFG_DIMM1_SIZE_SHFT 8 +#define SH_Y_DIMM_CFG_DIMM1_SIZE_MASK 0x0000000000000700 + +/* SH_Y_DIMM_CFG_DIMM1_2BK */ +/* Description: DIMM 1 
has two physical banks */ +#define SH_Y_DIMM_CFG_DIMM1_2BK_SHFT 11 +#define SH_Y_DIMM_CFG_DIMM1_2BK_MASK 0x0000000000000800 + +/* SH_Y_DIMM_CFG_DIMM1_REV */ +/* Description: DIMM 1 physical banks reversed */ +#define SH_Y_DIMM_CFG_DIMM1_REV_SHFT 12 +#define SH_Y_DIMM_CFG_DIMM1_REV_MASK 0x0000000000001000 + +/* SH_Y_DIMM_CFG_DIMM1_CS */ +/* Description: DIMM 1 chip select, addr[35:34] match */ +#define SH_Y_DIMM_CFG_DIMM1_CS_SHFT 13 +#define SH_Y_DIMM_CFG_DIMM1_CS_MASK 0x0000000000006000 + +/* SH_Y_DIMM_CFG_DIMM2_SIZE */ +/* Description: DIMM 2 DRAM size */ +#define SH_Y_DIMM_CFG_DIMM2_SIZE_SHFT 16 +#define SH_Y_DIMM_CFG_DIMM2_SIZE_MASK 0x0000000000070000 + +/* SH_Y_DIMM_CFG_DIMM2_2BK */ +/* Description: DIMM 2 has two physical banks */ +#define SH_Y_DIMM_CFG_DIMM2_2BK_SHFT 19 +#define SH_Y_DIMM_CFG_DIMM2_2BK_MASK 0x0000000000080000 + +/* SH_Y_DIMM_CFG_DIMM2_REV */ +/* Description: DIMM 2 physical banks reversed */ +#define SH_Y_DIMM_CFG_DIMM2_REV_SHFT 20 +#define SH_Y_DIMM_CFG_DIMM2_REV_MASK 0x0000000000100000 + +/* SH_Y_DIMM_CFG_DIMM2_CS */ +/* Description: DIMM 2 chip select, addr[35:34] match */ +#define SH_Y_DIMM_CFG_DIMM2_CS_SHFT 21 +#define SH_Y_DIMM_CFG_DIMM2_CS_MASK 0x0000000000600000 + +/* SH_Y_DIMM_CFG_DIMM3_SIZE */ +/* Description: DIMM 3 DRAM size */ +#define SH_Y_DIMM_CFG_DIMM3_SIZE_SHFT 24 +#define SH_Y_DIMM_CFG_DIMM3_SIZE_MASK 0x0000000007000000 + +/* SH_Y_DIMM_CFG_DIMM3_2BK */ +/* Description: DIMM 3 has two physical banks */ +#define SH_Y_DIMM_CFG_DIMM3_2BK_SHFT 27 +#define SH_Y_DIMM_CFG_DIMM3_2BK_MASK 0x0000000008000000 + +/* SH_Y_DIMM_CFG_DIMM3_REV */ +/* Description: DIMM 3 physical banks reversed */ +#define SH_Y_DIMM_CFG_DIMM3_REV_SHFT 28 +#define SH_Y_DIMM_CFG_DIMM3_REV_MASK 0x0000000010000000 + +/* SH_Y_DIMM_CFG_DIMM3_CS */ +/* Description: DIMM 3 chip select, addr[35:34] match */ +#define SH_Y_DIMM_CFG_DIMM3_CS_SHFT 29 +#define SH_Y_DIMM_CFG_DIMM3_CS_MASK 0x0000000060000000 + +/* SH_Y_DIMM_CFG_FREQ */ +/* Description: DIMM frequency select */ +#define SH_Y_DIMM_CFG_FREQ_SHFT 32 +#define SH_Y_DIMM_CFG_FREQ_MASK 0x0000000f00000000 + +/* ==================================================================== */ +/* Register "SH_JNR_DIMM_CFG" */ +/* AC Mem Config Registers */ +/* ==================================================================== */ + +#define SH_JNR_DIMM_CFG 0x0000000100010010 +#define SH_JNR_DIMM_CFG_MASK 0x0000000f7f7f7f7f +#define SH_JNR_DIMM_CFG_INIT 0x000000026f4f2f0f + +/* SH_JNR_DIMM_CFG_DIMM0_SIZE */ +/* Description: DIMM 0 DRAM size */ +#define SH_JNR_DIMM_CFG_DIMM0_SIZE_SHFT 0 +#define SH_JNR_DIMM_CFG_DIMM0_SIZE_MASK 0x0000000000000007 + +/* SH_JNR_DIMM_CFG_DIMM0_2BK */ +/* Description: DIMM 0 has two physical banks */ +#define SH_JNR_DIMM_CFG_DIMM0_2BK_SHFT 3 +#define SH_JNR_DIMM_CFG_DIMM0_2BK_MASK 0x0000000000000008 + +/* SH_JNR_DIMM_CFG_DIMM0_REV */ +/* Description: DIMM 0 physical banks reversed */ +#define SH_JNR_DIMM_CFG_DIMM0_REV_SHFT 4 +#define SH_JNR_DIMM_CFG_DIMM0_REV_MASK 0x0000000000000010 + +/* SH_JNR_DIMM_CFG_DIMM0_CS */ +/* Description: DIMM 0 chip select, addr[35:34] match */ +#define SH_JNR_DIMM_CFG_DIMM0_CS_SHFT 5 +#define SH_JNR_DIMM_CFG_DIMM0_CS_MASK 0x0000000000000060 + +/* SH_JNR_DIMM_CFG_DIMM1_SIZE */ +/* Description: DIMM 1 DRAM size */ +#define SH_JNR_DIMM_CFG_DIMM1_SIZE_SHFT 8 +#define SH_JNR_DIMM_CFG_DIMM1_SIZE_MASK 0x0000000000000700 + +/* SH_JNR_DIMM_CFG_DIMM1_2BK */ +/* Description: DIMM 1 has two physical banks */ +#define SH_JNR_DIMM_CFG_DIMM1_2BK_SHFT 11 +#define SH_JNR_DIMM_CFG_DIMM1_2BK_MASK 
0x0000000000000800 + +/* SH_JNR_DIMM_CFG_DIMM1_REV */ +/* Description: DIMM 1 physical banks reversed */ +#define SH_JNR_DIMM_CFG_DIMM1_REV_SHFT 12 +#define SH_JNR_DIMM_CFG_DIMM1_REV_MASK 0x0000000000001000 + +/* SH_JNR_DIMM_CFG_DIMM1_CS */ +/* Description: DIMM 1 chip select, addr[35:34] match */ +#define SH_JNR_DIMM_CFG_DIMM1_CS_SHFT 13 +#define SH_JNR_DIMM_CFG_DIMM1_CS_MASK 0x0000000000006000 + +/* SH_JNR_DIMM_CFG_DIMM2_SIZE */ +/* Description: DIMM 2 DRAM size */ +#define SH_JNR_DIMM_CFG_DIMM2_SIZE_SHFT 16 +#define SH_JNR_DIMM_CFG_DIMM2_SIZE_MASK 0x0000000000070000 + +/* SH_JNR_DIMM_CFG_DIMM2_2BK */ +/* Description: DIMM 2 has two physical banks */ +#define SH_JNR_DIMM_CFG_DIMM2_2BK_SHFT 19 +#define SH_JNR_DIMM_CFG_DIMM2_2BK_MASK 0x0000000000080000 + +/* SH_JNR_DIMM_CFG_DIMM2_REV */ +/* Description: DIMM 2 physical banks reversed */ +#define SH_JNR_DIMM_CFG_DIMM2_REV_SHFT 20 +#define SH_JNR_DIMM_CFG_DIMM2_REV_MASK 0x0000000000100000 + +/* SH_JNR_DIMM_CFG_DIMM2_CS */ +/* Description: DIMM 2 chip select, addr[35:34] match */ +#define SH_JNR_DIMM_CFG_DIMM2_CS_SHFT 21 +#define SH_JNR_DIMM_CFG_DIMM2_CS_MASK 0x0000000000600000 + +/* SH_JNR_DIMM_CFG_DIMM3_SIZE */ +/* Description: DIMM 3 DRAM size */ +#define SH_JNR_DIMM_CFG_DIMM3_SIZE_SHFT 24 +#define SH_JNR_DIMM_CFG_DIMM3_SIZE_MASK 0x0000000007000000 + +/* SH_JNR_DIMM_CFG_DIMM3_2BK */ +/* Description: DIMM 3 has two physical banks */ +#define SH_JNR_DIMM_CFG_DIMM3_2BK_SHFT 27 +#define SH_JNR_DIMM_CFG_DIMM3_2BK_MASK 0x0000000008000000 + +/* SH_JNR_DIMM_CFG_DIMM3_REV */ +/* Description: DIMM 3 physical banks reversed */ +#define SH_JNR_DIMM_CFG_DIMM3_REV_SHFT 28 +#define SH_JNR_DIMM_CFG_DIMM3_REV_MASK 0x0000000010000000 + +/* SH_JNR_DIMM_CFG_DIMM3_CS */ +/* Description: DIMM 3 chip select, addr[35:34] match */ +#define SH_JNR_DIMM_CFG_DIMM3_CS_SHFT 29 +#define SH_JNR_DIMM_CFG_DIMM3_CS_MASK 0x0000000060000000 + +/* SH_JNR_DIMM_CFG_FREQ */ +/* Description: DIMM frequency select */ +#define SH_JNR_DIMM_CFG_FREQ_SHFT 32 +#define SH_JNR_DIMM_CFG_FREQ_MASK 0x0000000f00000000 + +/* ==================================================================== */ +/* Register "SH_X_PHASE_CFG" */ +/* AC Phase Config Registers */ +/* ==================================================================== */ + +#define SH_X_PHASE_CFG 0x0000000100010018 +#define SH_X_PHASE_CFG_MASK 0x7fffffffffffffff +#define SH_X_PHASE_CFG_INIT 0x0000000000000000 + +/* SH_X_PHASE_CFG_LD_A */ +/* Description: Address, control load core clock A latch */ +#define SH_X_PHASE_CFG_LD_A_SHFT 0 +#define SH_X_PHASE_CFG_LD_A_MASK 0x000000000000001f + +/* SH_X_PHASE_CFG_LD_B */ +/* Description: Address, control load core clock B latch */ +#define SH_X_PHASE_CFG_LD_B_SHFT 5 +#define SH_X_PHASE_CFG_LD_B_MASK 0x00000000000003e0 + +/* SH_X_PHASE_CFG_DQ_LD_A */ +/* Description: DATA MCI load core clock A latch */ +#define SH_X_PHASE_CFG_DQ_LD_A_SHFT 10 +#define SH_X_PHASE_CFG_DQ_LD_A_MASK 0x0000000000007c00 + +/* SH_X_PHASE_CFG_DQ_LD_B */ +/* Description: DATA MCI load core clock B latch */ +#define SH_X_PHASE_CFG_DQ_LD_B_SHFT 15 +#define SH_X_PHASE_CFG_DQ_LD_B_MASK 0x00000000000f8000 + +/* SH_X_PHASE_CFG_HOLD */ +/* Description: Hold request on core clock phase */ +#define SH_X_PHASE_CFG_HOLD_SHFT 20 +#define SH_X_PHASE_CFG_HOLD_MASK 0x0000000001f00000 + +/* SH_X_PHASE_CFG_HOLD_REQ */ +/* Description: Hold next request on core clock phase */ +#define SH_X_PHASE_CFG_HOLD_REQ_SHFT 25 +#define SH_X_PHASE_CFG_HOLD_REQ_MASK 0x000000003e000000 + +/* SH_X_PHASE_CFG_ADD_CP */ +/* Description: add delay 
clock period to dqct delay chain on phase */ +#define SH_X_PHASE_CFG_ADD_CP_SHFT 30 +#define SH_X_PHASE_CFG_ADD_CP_MASK 0x00000007c0000000 + +/* SH_X_PHASE_CFG_BUBBLE_EN */ +/* Description: bubble, idle core clock to wait for memory clock */ +#define SH_X_PHASE_CFG_BUBBLE_EN_SHFT 35 +#define SH_X_PHASE_CFG_BUBBLE_EN_MASK 0x000000f800000000 + +/* SH_X_PHASE_CFG_PHA_BUBBLE */ +/* Description: MMR phaseA bubble value */ +#define SH_X_PHASE_CFG_PHA_BUBBLE_SHFT 40 +#define SH_X_PHASE_CFG_PHA_BUBBLE_MASK 0x0000070000000000 + +/* SH_X_PHASE_CFG_PHB_BUBBLE */ +/* Description: MMR phaseB bubble value */ +#define SH_X_PHASE_CFG_PHB_BUBBLE_SHFT 43 +#define SH_X_PHASE_CFG_PHB_BUBBLE_MASK 0x0000380000000000 + +/* SH_X_PHASE_CFG_PHC_BUBBLE */ +/* Description: MMR phaseC bubble value */ +#define SH_X_PHASE_CFG_PHC_BUBBLE_SHFT 46 +#define SH_X_PHASE_CFG_PHC_BUBBLE_MASK 0x0001c00000000000 + +/* SH_X_PHASE_CFG_PHD_BUBBLE */ +/* Description: MMR phaseD bubble value */ +#define SH_X_PHASE_CFG_PHD_BUBBLE_SHFT 49 +#define SH_X_PHASE_CFG_PHD_BUBBLE_MASK 0x000e000000000000 + +/* SH_X_PHASE_CFG_PHE_BUBBLE */ +/* Description: MMR phaseE bubble value */ +#define SH_X_PHASE_CFG_PHE_BUBBLE_SHFT 52 +#define SH_X_PHASE_CFG_PHE_BUBBLE_MASK 0x0070000000000000 + +/* SH_X_PHASE_CFG_SEL_A */ +/* Description: address,control select A memory clock latch */ +#define SH_X_PHASE_CFG_SEL_A_SHFT 55 +#define SH_X_PHASE_CFG_SEL_A_MASK 0x0780000000000000 + +/* SH_X_PHASE_CFG_DQ_SEL_A */ +/* Description: DATA MCI select A memory clock latch */ +#define SH_X_PHASE_CFG_DQ_SEL_A_SHFT 59 +#define SH_X_PHASE_CFG_DQ_SEL_A_MASK 0x7800000000000000 + +/* ==================================================================== */ +/* Register "SH_X_CFG" */ +/* AC Config Registers */ +/* ==================================================================== */ + +#define SH_X_CFG 0x0000000100010020 +#define SH_X_CFG_MASK 0xffffffffffffffff +#define SH_X_CFG_INIT 0x108443103322100c + +/* SH_X_CFG_MODE_SERIAL */ +/* Description: Arbque arbitration in serial mode */ +#define SH_X_CFG_MODE_SERIAL_SHFT 0 +#define SH_X_CFG_MODE_SERIAL_MASK 0x0000000000000001 + +/* SH_X_CFG_DIRC_RANDOM_REPLACEMENT */ +/* Description: Directory cache random replacement */ +#define SH_X_CFG_DIRC_RANDOM_REPLACEMENT_SHFT 1 +#define SH_X_CFG_DIRC_RANDOM_REPLACEMENT_MASK 0x0000000000000002 + +/* SH_X_CFG_DIR_COUNTER_INIT */ +/* Description: Dir counter initial value */ +#define SH_X_CFG_DIR_COUNTER_INIT_SHFT 2 +#define SH_X_CFG_DIR_COUNTER_INIT_MASK 0x00000000000000fc + +/* SH_X_CFG_TA_DLYS */ +/* Description: Turn around delays */ +#define SH_X_CFG_TA_DLYS_SHFT 8 +#define SH_X_CFG_TA_DLYS_MASK 0x000000ffffffff00 + +/* SH_X_CFG_DA_BB_CLR */ +/* Description: Bank busy CPs for a data read request */ +#define SH_X_CFG_DA_BB_CLR_SHFT 40 +#define SH_X_CFG_DA_BB_CLR_MASK 0x00000f0000000000 + +/* SH_X_CFG_DC_BB_CLR */ +/* Description: Bank busy CPs for a directory cache read request */ +#define SH_X_CFG_DC_BB_CLR_SHFT 44 +#define SH_X_CFG_DC_BB_CLR_MASK 0x0000f00000000000 + +/* SH_X_CFG_WT_BB_CLR */ +/* Description: Bank busy CPs for all write request */ +#define SH_X_CFG_WT_BB_CLR_SHFT 48 +#define SH_X_CFG_WT_BB_CLR_MASK 0x000f000000000000 + +/* SH_X_CFG_SSO_WT_EN */ +/* Description: Simultaneous switching enabled on output data pins */ +#define SH_X_CFG_SSO_WT_EN_SHFT 52 +#define SH_X_CFG_SSO_WT_EN_MASK 0x0010000000000000 + +/* SH_X_CFG_TRCD2_EN */ +/* Description: Trcd, ras to cas delay of 2 CPs enabled */ +#define SH_X_CFG_TRCD2_EN_SHFT 53 +#define SH_X_CFG_TRCD2_EN_MASK 
0x0020000000000000 + +/* SH_X_CFG_TRCD4_EN */ +/* Description: Trcd, ras to case delay of 4 CPs enabled */ +#define SH_X_CFG_TRCD4_EN_SHFT 54 +#define SH_X_CFG_TRCD4_EN_MASK 0x0040000000000000 + +/* SH_X_CFG_REQ_CNTR_DIS */ +/* Description: Request delay counter disabled */ +#define SH_X_CFG_REQ_CNTR_DIS_SHFT 55 +#define SH_X_CFG_REQ_CNTR_DIS_MASK 0x0080000000000000 + +/* SH_X_CFG_REQ_CNTR_VAL */ +/* Description: Request counter delay value in CPs */ +#define SH_X_CFG_REQ_CNTR_VAL_SHFT 56 +#define SH_X_CFG_REQ_CNTR_VAL_MASK 0x3f00000000000000 + +/* SH_X_CFG_INV_CAS_ADDR */ +/* Description: Invert cas address bits 3 to 7 */ +#define SH_X_CFG_INV_CAS_ADDR_SHFT 62 +#define SH_X_CFG_INV_CAS_ADDR_MASK 0x4000000000000000 + +/* SH_X_CFG_CLR_DIR_CACHE */ +/* Description: Clear directory cache tags */ +#define SH_X_CFG_CLR_DIR_CACHE_SHFT 63 +#define SH_X_CFG_CLR_DIR_CACHE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_X_DQCT_CFG" */ +/* AC Config Registers */ +/* ==================================================================== */ + +#define SH_X_DQCT_CFG 0x0000000100010028 +#define SH_X_DQCT_CFG_MASK 0x0000000000ffffff +#define SH_X_DQCT_CFG_INIT 0x0000000000585418 + +/* SH_X_DQCT_CFG_RD_SEL */ +/* Description: Read data select */ +#define SH_X_DQCT_CFG_RD_SEL_SHFT 0 +#define SH_X_DQCT_CFG_RD_SEL_MASK 0x000000000000000f + +/* SH_X_DQCT_CFG_WT_SEL */ +/* Description: Write data select */ +#define SH_X_DQCT_CFG_WT_SEL_SHFT 4 +#define SH_X_DQCT_CFG_WT_SEL_MASK 0x00000000000000f0 + +/* SH_X_DQCT_CFG_DTA_RD_SEL */ +/* Description: Data ready read select */ +#define SH_X_DQCT_CFG_DTA_RD_SEL_SHFT 8 +#define SH_X_DQCT_CFG_DTA_RD_SEL_MASK 0x0000000000000f00 + +/* SH_X_DQCT_CFG_DTA_WT_SEL */ +/* Description: Data ready write select */ +#define SH_X_DQCT_CFG_DTA_WT_SEL_SHFT 12 +#define SH_X_DQCT_CFG_DTA_WT_SEL_MASK 0x000000000000f000 + +/* SH_X_DQCT_CFG_DIR_RD_SEL */ +/* Description: Dir ready read select */ +#define SH_X_DQCT_CFG_DIR_RD_SEL_SHFT 16 +#define SH_X_DQCT_CFG_DIR_RD_SEL_MASK 0x00000000000f0000 + +/* SH_X_DQCT_CFG_MDIR_RD_SEL */ +/* Description: Dir ready read select */ +#define SH_X_DQCT_CFG_MDIR_RD_SEL_SHFT 20 +#define SH_X_DQCT_CFG_MDIR_RD_SEL_MASK 0x0000000000f00000 + +/* ==================================================================== */ +/* Register "SH_X_REFRESH_CONTROL" */ +/* Refresh Control Register */ +/* ==================================================================== */ + +#define SH_X_REFRESH_CONTROL 0x0000000100010030 +#define SH_X_REFRESH_CONTROL_MASK 0x000000000fffffff +#define SH_X_REFRESH_CONTROL_INIT 0x00000000009cc300 + +/* SH_X_REFRESH_CONTROL_ENABLE */ +/* Description: Refresh enable */ +#define SH_X_REFRESH_CONTROL_ENABLE_SHFT 0 +#define SH_X_REFRESH_CONTROL_ENABLE_MASK 0x00000000000000ff + +/* SH_X_REFRESH_CONTROL_INTERVAL */ +/* Description: Refresh interval in core CPs */ +#define SH_X_REFRESH_CONTROL_INTERVAL_SHFT 8 +#define SH_X_REFRESH_CONTROL_INTERVAL_MASK 0x000000000001ff00 + +/* SH_X_REFRESH_CONTROL_HOLD */ +/* Description: Refresh hold */ +#define SH_X_REFRESH_CONTROL_HOLD_SHFT 17 +#define SH_X_REFRESH_CONTROL_HOLD_MASK 0x00000000007e0000 + +/* SH_X_REFRESH_CONTROL_INTERLEAVE */ +/* Description: Refresh interleave */ +#define SH_X_REFRESH_CONTROL_INTERLEAVE_SHFT 23 +#define SH_X_REFRESH_CONTROL_INTERLEAVE_MASK 0x0000000000800000 + +/* SH_X_REFRESH_CONTROL_HALF_RATE */ +/* Description: Refresh half rate */ +#define SH_X_REFRESH_CONTROL_HALF_RATE_SHFT 24 +#define 
SH_X_REFRESH_CONTROL_HALF_RATE_MASK 0x000000000f000000 + +/* ==================================================================== */ +/* Register "SH_Y_PHASE_CFG" */ +/* AC Phase Config Registers */ +/* ==================================================================== */ + +#define SH_Y_PHASE_CFG 0x0000000100010038 +#define SH_Y_PHASE_CFG_MASK 0x7fffffffffffffff +#define SH_Y_PHASE_CFG_INIT 0x0000000000000000 + +/* SH_Y_PHASE_CFG_LD_A */ +/* Description: Address, control load core clock A latch */ +#define SH_Y_PHASE_CFG_LD_A_SHFT 0 +#define SH_Y_PHASE_CFG_LD_A_MASK 0x000000000000001f + +/* SH_Y_PHASE_CFG_LD_B */ +/* Description: Address, control load core clock B latch */ +#define SH_Y_PHASE_CFG_LD_B_SHFT 5 +#define SH_Y_PHASE_CFG_LD_B_MASK 0x00000000000003e0 + +/* SH_Y_PHASE_CFG_DQ_LD_A */ +/* Description: DATA MCI load core clock A latch */ +#define SH_Y_PHASE_CFG_DQ_LD_A_SHFT 10 +#define SH_Y_PHASE_CFG_DQ_LD_A_MASK 0x0000000000007c00 + +/* SH_Y_PHASE_CFG_DQ_LD_B */ +/* Description: DATA MCI load core clock B latch */ +#define SH_Y_PHASE_CFG_DQ_LD_B_SHFT 15 +#define SH_Y_PHASE_CFG_DQ_LD_B_MASK 0x00000000000f8000 + +/* SH_Y_PHASE_CFG_HOLD */ +/* Description: Hold request on core clock phase */ +#define SH_Y_PHASE_CFG_HOLD_SHFT 20 +#define SH_Y_PHASE_CFG_HOLD_MASK 0x0000000001f00000 + +/* SH_Y_PHASE_CFG_HOLD_REQ */ +/* Description: Hold next request on core clock phase */ +#define SH_Y_PHASE_CFG_HOLD_REQ_SHFT 25 +#define SH_Y_PHASE_CFG_HOLD_REQ_MASK 0x000000003e000000 + +/* SH_Y_PHASE_CFG_ADD_CP */ +/* Description: add delay clock period to dqct delay chain on phase */ +#define SH_Y_PHASE_CFG_ADD_CP_SHFT 30 +#define SH_Y_PHASE_CFG_ADD_CP_MASK 0x00000007c0000000 + +/* SH_Y_PHASE_CFG_BUBBLE_EN */ +/* Description: bubble, idle core clock to wait for memory clock */ +#define SH_Y_PHASE_CFG_BUBBLE_EN_SHFT 35 +#define SH_Y_PHASE_CFG_BUBBLE_EN_MASK 0x000000f800000000 + +/* SH_Y_PHASE_CFG_PHA_BUBBLE */ +/* Description: MMR phaseA bubble value */ +#define SH_Y_PHASE_CFG_PHA_BUBBLE_SHFT 40 +#define SH_Y_PHASE_CFG_PHA_BUBBLE_MASK 0x0000070000000000 + +/* SH_Y_PHASE_CFG_PHB_BUBBLE */ +/* Description: MMR phaseB bubble value */ +#define SH_Y_PHASE_CFG_PHB_BUBBLE_SHFT 43 +#define SH_Y_PHASE_CFG_PHB_BUBBLE_MASK 0x0000380000000000 + +/* SH_Y_PHASE_CFG_PHC_BUBBLE */ +/* Description: MMR phaseC bubble value */ +#define SH_Y_PHASE_CFG_PHC_BUBBLE_SHFT 46 +#define SH_Y_PHASE_CFG_PHC_BUBBLE_MASK 0x0001c00000000000 + +/* SH_Y_PHASE_CFG_PHD_BUBBLE */ +/* Description: MMR phaseD bubble value */ +#define SH_Y_PHASE_CFG_PHD_BUBBLE_SHFT 49 +#define SH_Y_PHASE_CFG_PHD_BUBBLE_MASK 0x000e000000000000 + +/* SH_Y_PHASE_CFG_PHE_BUBBLE */ +/* Description: MMR phaseE bubble value */ +#define SH_Y_PHASE_CFG_PHE_BUBBLE_SHFT 52 +#define SH_Y_PHASE_CFG_PHE_BUBBLE_MASK 0x0070000000000000 + +/* SH_Y_PHASE_CFG_SEL_A */ +/* Description: address,control select A memory clock latch */ +#define SH_Y_PHASE_CFG_SEL_A_SHFT 55 +#define SH_Y_PHASE_CFG_SEL_A_MASK 0x0780000000000000 + +/* SH_Y_PHASE_CFG_DQ_SEL_A */ +/* Description: DATA MCI select A memory clock latch */ +#define SH_Y_PHASE_CFG_DQ_SEL_A_SHFT 59 +#define SH_Y_PHASE_CFG_DQ_SEL_A_MASK 0x7800000000000000 + +/* ==================================================================== */ +/* Register "SH_Y_CFG" */ +/* AC Config Registers */ +/* ==================================================================== */ + +#define SH_Y_CFG 0x0000000100010040 +#define SH_Y_CFG_MASK 0xffffffffffffffff +#define SH_Y_CFG_INIT 0x108443103322100c + +/* SH_Y_CFG_MODE_SERIAL */ +/* Description: 
Arbque arbitration in serial mode */ +#define SH_Y_CFG_MODE_SERIAL_SHFT 0 +#define SH_Y_CFG_MODE_SERIAL_MASK 0x0000000000000001 + +/* SH_Y_CFG_DIRC_RANDOM_REPLACEMENT */ +/* Description: Directory cache random replacement */ +#define SH_Y_CFG_DIRC_RANDOM_REPLACEMENT_SHFT 1 +#define SH_Y_CFG_DIRC_RANDOM_REPLACEMENT_MASK 0x0000000000000002 + +/* SH_Y_CFG_DIR_COUNTER_INIT */ +/* Description: Dir counter initial value */ +#define SH_Y_CFG_DIR_COUNTER_INIT_SHFT 2 +#define SH_Y_CFG_DIR_COUNTER_INIT_MASK 0x00000000000000fc + +/* SH_Y_CFG_TA_DLYS */ +/* Description: Turn around delays */ +#define SH_Y_CFG_TA_DLYS_SHFT 8 +#define SH_Y_CFG_TA_DLYS_MASK 0x000000ffffffff00 + +/* SH_Y_CFG_DA_BB_CLR */ +/* Description: Bank busy CPs for a data read request */ +#define SH_Y_CFG_DA_BB_CLR_SHFT 40 +#define SH_Y_CFG_DA_BB_CLR_MASK 0x00000f0000000000 + +/* SH_Y_CFG_DC_BB_CLR */ +/* Description: Bank busy CPs for a directory cache read request */ +#define SH_Y_CFG_DC_BB_CLR_SHFT 44 +#define SH_Y_CFG_DC_BB_CLR_MASK 0x0000f00000000000 + +/* SH_Y_CFG_WT_BB_CLR */ +/* Description: Bank busy CPs for all write request */ +#define SH_Y_CFG_WT_BB_CLR_SHFT 48 +#define SH_Y_CFG_WT_BB_CLR_MASK 0x000f000000000000 + +/* SH_Y_CFG_SSO_WT_EN */ +/* Description: Simultaneous switching enabled on output data pins */ +#define SH_Y_CFG_SSO_WT_EN_SHFT 52 +#define SH_Y_CFG_SSO_WT_EN_MASK 0x0010000000000000 + +/* SH_Y_CFG_TRCD2_EN */ +/* Description: Trcd, ras to cas delay of 2 CPs enabled */ +#define SH_Y_CFG_TRCD2_EN_SHFT 53 +#define SH_Y_CFG_TRCD2_EN_MASK 0x0020000000000000 + +/* SH_Y_CFG_TRCD4_EN */ +/* Description: Trcd, ras to case delay of 4 CPs enabled */ +#define SH_Y_CFG_TRCD4_EN_SHFT 54 +#define SH_Y_CFG_TRCD4_EN_MASK 0x0040000000000000 + +/* SH_Y_CFG_REQ_CNTR_DIS */ +/* Description: Request delay counter disabled */ +#define SH_Y_CFG_REQ_CNTR_DIS_SHFT 55 +#define SH_Y_CFG_REQ_CNTR_DIS_MASK 0x0080000000000000 + +/* SH_Y_CFG_REQ_CNTR_VAL */ +/* Description: Request counter delay value in CPs */ +#define SH_Y_CFG_REQ_CNTR_VAL_SHFT 56 +#define SH_Y_CFG_REQ_CNTR_VAL_MASK 0x3f00000000000000 + +/* SH_Y_CFG_INV_CAS_ADDR */ +/* Description: Invert cas address bits 3 to 7 */ +#define SH_Y_CFG_INV_CAS_ADDR_SHFT 62 +#define SH_Y_CFG_INV_CAS_ADDR_MASK 0x4000000000000000 + +/* SH_Y_CFG_CLR_DIR_CACHE */ +/* Description: Clear directory cache tags */ +#define SH_Y_CFG_CLR_DIR_CACHE_SHFT 63 +#define SH_Y_CFG_CLR_DIR_CACHE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_Y_DQCT_CFG" */ +/* AC Config Registers */ +/* ==================================================================== */ + +#define SH_Y_DQCT_CFG 0x0000000100010048 +#define SH_Y_DQCT_CFG_MASK 0x0000000000ffffff +#define SH_Y_DQCT_CFG_INIT 0x0000000000585418 + +/* SH_Y_DQCT_CFG_RD_SEL */ +/* Description: Read data select */ +#define SH_Y_DQCT_CFG_RD_SEL_SHFT 0 +#define SH_Y_DQCT_CFG_RD_SEL_MASK 0x000000000000000f + +/* SH_Y_DQCT_CFG_WT_SEL */ +/* Description: Write data select */ +#define SH_Y_DQCT_CFG_WT_SEL_SHFT 4 +#define SH_Y_DQCT_CFG_WT_SEL_MASK 0x00000000000000f0 + +/* SH_Y_DQCT_CFG_DTA_RD_SEL */ +/* Description: Data ready read select */ +#define SH_Y_DQCT_CFG_DTA_RD_SEL_SHFT 8 +#define SH_Y_DQCT_CFG_DTA_RD_SEL_MASK 0x0000000000000f00 + +/* SH_Y_DQCT_CFG_DTA_WT_SEL */ +/* Description: Data ready write select */ +#define SH_Y_DQCT_CFG_DTA_WT_SEL_SHFT 12 +#define SH_Y_DQCT_CFG_DTA_WT_SEL_MASK 0x000000000000f000 + +/* SH_Y_DQCT_CFG_DIR_RD_SEL */ +/* Description: Dir ready read select */ +#define 
SH_Y_DQCT_CFG_DIR_RD_SEL_SHFT 16 +#define SH_Y_DQCT_CFG_DIR_RD_SEL_MASK 0x00000000000f0000 + +/* SH_Y_DQCT_CFG_MDIR_RD_SEL */ +/* Description: Dir ready read select */ +#define SH_Y_DQCT_CFG_MDIR_RD_SEL_SHFT 20 +#define SH_Y_DQCT_CFG_MDIR_RD_SEL_MASK 0x0000000000f00000 + +/* ==================================================================== */ +/* Register "SH_Y_REFRESH_CONTROL" */ +/* Refresh Control Register */ +/* ==================================================================== */ + +#define SH_Y_REFRESH_CONTROL 0x0000000100010050 +#define SH_Y_REFRESH_CONTROL_MASK 0x000000000fffffff +#define SH_Y_REFRESH_CONTROL_INIT 0x00000000009cc300 + +/* SH_Y_REFRESH_CONTROL_ENABLE */ +/* Description: Refresh enable */ +#define SH_Y_REFRESH_CONTROL_ENABLE_SHFT 0 +#define SH_Y_REFRESH_CONTROL_ENABLE_MASK 0x00000000000000ff + +/* SH_Y_REFRESH_CONTROL_INTERVAL */ +/* Description: Refresh interval in core CPs */ +#define SH_Y_REFRESH_CONTROL_INTERVAL_SHFT 8 +#define SH_Y_REFRESH_CONTROL_INTERVAL_MASK 0x000000000001ff00 + +/* SH_Y_REFRESH_CONTROL_HOLD */ +/* Description: Refresh hold */ +#define SH_Y_REFRESH_CONTROL_HOLD_SHFT 17 +#define SH_Y_REFRESH_CONTROL_HOLD_MASK 0x00000000007e0000 + +/* SH_Y_REFRESH_CONTROL_INTERLEAVE */ +/* Description: Refresh interleave */ +#define SH_Y_REFRESH_CONTROL_INTERLEAVE_SHFT 23 +#define SH_Y_REFRESH_CONTROL_INTERLEAVE_MASK 0x0000000000800000 + +/* SH_Y_REFRESH_CONTROL_HALF_RATE */ +/* Description: Refresh half rate */ +#define SH_Y_REFRESH_CONTROL_HALF_RATE_SHFT 24 +#define SH_Y_REFRESH_CONTROL_HALF_RATE_MASK 0x000000000f000000 + +/* ==================================================================== */ +/* Register "SH_MEM_RED_BLACK" */ +/* MD fairness watchdog timers */ +/* ==================================================================== */ + +#define SH_MEM_RED_BLACK 0x0000000100010058 +#define SH_MEM_RED_BLACK_MASK 0x000fffffffffffff +#define SH_MEM_RED_BLACK_INIT 0x0000000040000400 + +/* SH_MEM_RED_BLACK_TIME */ +/* Description: Clocks to tag references with a given color */ +#define SH_MEM_RED_BLACK_TIME_SHFT 0 +#define SH_MEM_RED_BLACK_TIME_MASK 0x000000000000ffff + +/* SH_MEM_RED_BLACK_ERR_TIME */ +/* Description: Max clocks to wait after red/black change for old c */ +/* olor to clear. 
*/ +#define SH_MEM_RED_BLACK_ERR_TIME_SHFT 16 +#define SH_MEM_RED_BLACK_ERR_TIME_MASK 0x000fffffffff0000 + +/* ==================================================================== */ +/* Register "SH_MISC_MEM_CFG" */ +/* ==================================================================== */ + +#define SH_MISC_MEM_CFG 0x0000000100010060 +#define SH_MISC_MEM_CFG_MASK 0x0013f1f1fff3f3ff +#define SH_MISC_MEM_CFG_INIT 0x0000000000010107 + +/* SH_MISC_MEM_CFG_EXPRESS_HEADER_ENABLE */ +/* Description: enables the use of express headers from md to pi */ +#define SH_MISC_MEM_CFG_EXPRESS_HEADER_ENABLE_SHFT 0 +#define SH_MISC_MEM_CFG_EXPRESS_HEADER_ENABLE_MASK 0x0000000000000001 + +/* SH_MISC_MEM_CFG_SPEC_HEADER_ENABLE */ +/* Description: enables the use of speculative headers from md to p */ +#define SH_MISC_MEM_CFG_SPEC_HEADER_ENABLE_SHFT 1 +#define SH_MISC_MEM_CFG_SPEC_HEADER_ENABLE_MASK 0x0000000000000002 + +/* SH_MISC_MEM_CFG_JNR_BYPASS_ENABLE */ +/* Description: enables bypass path for requests going through ac */ +#define SH_MISC_MEM_CFG_JNR_BYPASS_ENABLE_SHFT 2 +#define SH_MISC_MEM_CFG_JNR_BYPASS_ENABLE_MASK 0x0000000000000004 + +/* SH_MISC_MEM_CFG_XN_RD_SAME_AS_PI */ +/* Description: disables a one clock delay of XN read data */ +#define SH_MISC_MEM_CFG_XN_RD_SAME_AS_PI_SHFT 3 +#define SH_MISC_MEM_CFG_XN_RD_SAME_AS_PI_MASK 0x0000000000000008 + +/* SH_MISC_MEM_CFG_LOW_WRITE_BUFFER_THRESHOLD */ +/* Description: point at which data writes get higher priority */ +#define SH_MISC_MEM_CFG_LOW_WRITE_BUFFER_THRESHOLD_SHFT 4 +#define SH_MISC_MEM_CFG_LOW_WRITE_BUFFER_THRESHOLD_MASK 0x00000000000003f0 + +/* SH_MISC_MEM_CFG_LOW_VICTIM_BUFFER_THRESHOLD */ +/* Description: point at which dir cache writes get higher priority */ +#define SH_MISC_MEM_CFG_LOW_VICTIM_BUFFER_THRESHOLD_SHFT 12 +#define SH_MISC_MEM_CFG_LOW_VICTIM_BUFFER_THRESHOLD_MASK 0x000000000003f000 + +/* SH_MISC_MEM_CFG_THROTTLE_CNT */ +/* Description: number of clocks between accepting references */ +#define SH_MISC_MEM_CFG_THROTTLE_CNT_SHFT 20 +#define SH_MISC_MEM_CFG_THROTTLE_CNT_MASK 0x000000000ff00000 + +/* SH_MISC_MEM_CFG_DISABLED_READ_TNUMS */ +/* Description: number of read tnums to take out of circulation */ +#define SH_MISC_MEM_CFG_DISABLED_READ_TNUMS_SHFT 28 +#define SH_MISC_MEM_CFG_DISABLED_READ_TNUMS_MASK 0x00000001f0000000 + +/* SH_MISC_MEM_CFG_DISABLED_WRITE_TNUMS */ +/* Description: number of write tnums to take out of circulation */ +#define SH_MISC_MEM_CFG_DISABLED_WRITE_TNUMS_SHFT 36 +#define SH_MISC_MEM_CFG_DISABLED_WRITE_TNUMS_MASK 0x000001f000000000 + +/* SH_MISC_MEM_CFG_DISABLED_VICTIMS */ +/* Description: number of dir cache victim buffers to take out of c */ +/* irculation in each quadrant of the MD */ +#define SH_MISC_MEM_CFG_DISABLED_VICTIMS_SHFT 44 +#define SH_MISC_MEM_CFG_DISABLED_VICTIMS_MASK 0x0003f00000000000 + +/* SH_MISC_MEM_CFG_ALTERNATE_XN_RP_PLANE */ +/* Description: enables plane alternating for replies to XN */ +#define SH_MISC_MEM_CFG_ALTERNATE_XN_RP_PLANE_SHFT 52 +#define SH_MISC_MEM_CFG_ALTERNATE_XN_RP_PLANE_MASK 0x0010000000000000 + +/* ==================================================================== */ +/* Register "SH_PIO_RQ_CRD_CTL" */ +/* pio_rq Credit Circulation Control */ +/* ==================================================================== */ + +#define SH_PIO_RQ_CRD_CTL 0x0000000100010068 +#define SH_PIO_RQ_CRD_CTL_MASK 0x000000000000003f +#define SH_PIO_RQ_CRD_CTL_INIT 0x0000000000000002 + +/* SH_PIO_RQ_CRD_CTL_DEPTH */ +/* Description: Total depth of buffering (in sic packets) */ 
+#define SH_PIO_RQ_CRD_CTL_DEPTH_SHFT 0 +#define SH_PIO_RQ_CRD_CTL_DEPTH_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_PI_MD_RQ_CRD_CTL" */ +/* pi_md_rq Credit Circulation Control */ +/* ==================================================================== */ + +#define SH_PI_MD_RQ_CRD_CTL 0x0000000100010070 +#define SH_PI_MD_RQ_CRD_CTL_MASK 0x000000000000003f +#define SH_PI_MD_RQ_CRD_CTL_INIT 0x0000000000000008 + +/* SH_PI_MD_RQ_CRD_CTL_DEPTH */ +/* Description: Total depth of buffering (in sic packets) */ +#define SH_PI_MD_RQ_CRD_CTL_DEPTH_SHFT 0 +#define SH_PI_MD_RQ_CRD_CTL_DEPTH_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_PI_MD_RP_CRD_CTL" */ +/* pi_md_rp Credit Circulation Control */ +/* ==================================================================== */ + +#define SH_PI_MD_RP_CRD_CTL 0x0000000100010078 +#define SH_PI_MD_RP_CRD_CTL_MASK 0x000000000000003f +#define SH_PI_MD_RP_CRD_CTL_INIT 0x0000000000000004 + +/* SH_PI_MD_RP_CRD_CTL_DEPTH */ +/* Description: Total depth of buffering (in sic packets) */ +#define SH_PI_MD_RP_CRD_CTL_DEPTH_SHFT 0 +#define SH_PI_MD_RP_CRD_CTL_DEPTH_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_XN_MD_RQ_CRD_CTL" */ +/* xn_md_rq Credit Circulation Control */ +/* ==================================================================== */ + +#define SH_XN_MD_RQ_CRD_CTL 0x0000000100010080 +#define SH_XN_MD_RQ_CRD_CTL_MASK 0x000000000000003f +#define SH_XN_MD_RQ_CRD_CTL_INIT 0x0000000000000008 + +/* SH_XN_MD_RQ_CRD_CTL_DEPTH */ +/* Description: Total depth of buffering (in sic packets) */ +#define SH_XN_MD_RQ_CRD_CTL_DEPTH_SHFT 0 +#define SH_XN_MD_RQ_CRD_CTL_DEPTH_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_XN_MD_RP_CRD_CTL" */ +/* xn_md_rp Credit Circulation Control */ +/* ==================================================================== */ + +#define SH_XN_MD_RP_CRD_CTL 0x0000000100010088 +#define SH_XN_MD_RP_CRD_CTL_MASK 0x000000000000003f +#define SH_XN_MD_RP_CRD_CTL_INIT 0x0000000000000004 + +/* SH_XN_MD_RP_CRD_CTL_DEPTH */ +/* Description: Total depth of buffering (in sic packets) */ +#define SH_XN_MD_RP_CRD_CTL_DEPTH_SHFT 0 +#define SH_XN_MD_RP_CRD_CTL_DEPTH_MASK 0x000000000000003f + +/* ==================================================================== */ +/* Register "SH_X_TAG0" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_X_TAG0 0x0000000100020000 +#define SH_X_TAG0_MASK 0x00000000000fffff +#define SH_X_TAG0_INIT 0x0000000000000000 + +/* SH_X_TAG0_TAG */ +/* Description: Valid + Tag Address */ +#define SH_X_TAG0_TAG_SHFT 0 +#define SH_X_TAG0_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_X_TAG1" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_X_TAG1 0x0000000100020008 +#define SH_X_TAG1_MASK 0x00000000000fffff +#define SH_X_TAG1_INIT 0x0000000000000000 + +/* SH_X_TAG1_TAG */ +/* Description: Valid + Tag Address */ +#define SH_X_TAG1_TAG_SHFT 0 +#define SH_X_TAG1_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_X_TAG2" */ +/* AC tag Registers */ +/* 
==================================================================== */ + +#define SH_X_TAG2 0x0000000100020010 +#define SH_X_TAG2_MASK 0x00000000000fffff +#define SH_X_TAG2_INIT 0x0000000000000000 + +/* SH_X_TAG2_TAG */ +/* Description: Valid + Tag Address */ +#define SH_X_TAG2_TAG_SHFT 0 +#define SH_X_TAG2_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_X_TAG3" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_X_TAG3 0x0000000100020018 +#define SH_X_TAG3_MASK 0x00000000000fffff +#define SH_X_TAG3_INIT 0x0000000000000000 + +/* SH_X_TAG3_TAG */ +/* Description: Valid + Tag Address */ +#define SH_X_TAG3_TAG_SHFT 0 +#define SH_X_TAG3_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_X_TAG4" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_X_TAG4 0x0000000100020020 +#define SH_X_TAG4_MASK 0x00000000000fffff +#define SH_X_TAG4_INIT 0x0000000000000000 + +/* SH_X_TAG4_TAG */ +/* Description: Valid + Tag Address */ +#define SH_X_TAG4_TAG_SHFT 0 +#define SH_X_TAG4_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_X_TAG5" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_X_TAG5 0x0000000100020028 +#define SH_X_TAG5_MASK 0x00000000000fffff +#define SH_X_TAG5_INIT 0x0000000000000000 + +/* SH_X_TAG5_TAG */ +/* Description: Valid + Tag Address */ +#define SH_X_TAG5_TAG_SHFT 0 +#define SH_X_TAG5_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_X_TAG6" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_X_TAG6 0x0000000100020030 +#define SH_X_TAG6_MASK 0x00000000000fffff +#define SH_X_TAG6_INIT 0x0000000000000000 + +/* SH_X_TAG6_TAG */ +/* Description: Valid + Tag Address */ +#define SH_X_TAG6_TAG_SHFT 0 +#define SH_X_TAG6_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_X_TAG7" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_X_TAG7 0x0000000100020038 +#define SH_X_TAG7_MASK 0x00000000000fffff +#define SH_X_TAG7_INIT 0x0000000000000000 + +/* SH_X_TAG7_TAG */ +/* Description: Valid + Tag Address */ +#define SH_X_TAG7_TAG_SHFT 0 +#define SH_X_TAG7_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_Y_TAG0" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_Y_TAG0 0x0000000100020040 +#define SH_Y_TAG0_MASK 0x00000000000fffff +#define SH_Y_TAG0_INIT 0x0000000000000000 + +/* SH_Y_TAG0_TAG */ +/* Description: Valid + Tag Address */ +#define SH_Y_TAG0_TAG_SHFT 0 +#define SH_Y_TAG0_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_Y_TAG1" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_Y_TAG1 0x0000000100020048 +#define SH_Y_TAG1_MASK 0x00000000000fffff +#define SH_Y_TAG1_INIT 0x0000000000000000 + +/* SH_Y_TAG1_TAG */ +/* Description: Valid + Tag 
Address */ +#define SH_Y_TAG1_TAG_SHFT 0 +#define SH_Y_TAG1_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_Y_TAG2" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_Y_TAG2 0x0000000100020050 +#define SH_Y_TAG2_MASK 0x00000000000fffff +#define SH_Y_TAG2_INIT 0x0000000000000000 + +/* SH_Y_TAG2_TAG */ +/* Description: Valid + Tag Address */ +#define SH_Y_TAG2_TAG_SHFT 0 +#define SH_Y_TAG2_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_Y_TAG3" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_Y_TAG3 0x0000000100020058 +#define SH_Y_TAG3_MASK 0x00000000000fffff +#define SH_Y_TAG3_INIT 0x0000000000000000 + +/* SH_Y_TAG3_TAG */ +/* Description: Valid + Tag Address */ +#define SH_Y_TAG3_TAG_SHFT 0 +#define SH_Y_TAG3_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_Y_TAG4" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_Y_TAG4 0x0000000100020060 +#define SH_Y_TAG4_MASK 0x00000000000fffff +#define SH_Y_TAG4_INIT 0x0000000000000000 + +/* SH_Y_TAG4_TAG */ +/* Description: Valid + Tag Address */ +#define SH_Y_TAG4_TAG_SHFT 0 +#define SH_Y_TAG4_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_Y_TAG5" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_Y_TAG5 0x0000000100020068 +#define SH_Y_TAG5_MASK 0x00000000000fffff +#define SH_Y_TAG5_INIT 0x0000000000000000 + +/* SH_Y_TAG5_TAG */ +/* Description: Valid + Tag Address */ +#define SH_Y_TAG5_TAG_SHFT 0 +#define SH_Y_TAG5_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_Y_TAG6" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_Y_TAG6 0x0000000100020070 +#define SH_Y_TAG6_MASK 0x00000000000fffff +#define SH_Y_TAG6_INIT 0x0000000000000000 + +/* SH_Y_TAG6_TAG */ +/* Description: Valid + Tag Address */ +#define SH_Y_TAG6_TAG_SHFT 0 +#define SH_Y_TAG6_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_Y_TAG7" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#define SH_Y_TAG7 0x0000000100020078 +#define SH_Y_TAG7_MASK 0x00000000000fffff +#define SH_Y_TAG7_INIT 0x0000000000000000 + +/* SH_Y_TAG7_TAG */ +/* Description: Valid + Tag Address */ +#define SH_Y_TAG7_TAG_SHFT 0 +#define SH_Y_TAG7_TAG_MASK 0x00000000000fffff + +/* ==================================================================== */ +/* Register "SH_MMRBIST_BASE" */ +/* mmr/bist base address */ +/* ==================================================================== */ + +#define SH_MMRBIST_BASE 0x0000000100020080 +#define SH_MMRBIST_BASE_MASK 0x0003fffffffffff8 +#define SH_MMRBIST_BASE_INIT 0x0000000000000000 + +/* SH_MMRBIST_BASE_DWORD_ADDR */ +/* Description: bits 49:3 of the memory address */ +#define SH_MMRBIST_BASE_DWORD_ADDR_SHFT 3 +#define SH_MMRBIST_BASE_DWORD_ADDR_MASK 0x0003fffffffffff8 + +/* ==================================================================== */ +/* 
Register "SH_MMRBIST_CTL" */ +/* Bist base address */ +/* ==================================================================== */ + +#define SH_MMRBIST_CTL 0x0000000100020088 +#define SH_MMRBIST_CTL_MASK 0x0000177f7fffffff +#define SH_MMRBIST_CTL_INIT 0x0000000000000000 + +/* SH_MMRBIST_CTL_BLOCK_LENGTH */ +/* Description: number of dwords in operation */ +#define SH_MMRBIST_CTL_BLOCK_LENGTH_SHFT 0 +#define SH_MMRBIST_CTL_BLOCK_LENGTH_MASK 0x000000007fffffff + +/* SH_MMRBIST_CTL_CMD */ +/* Description: mmr/bist function */ +#define SH_MMRBIST_CTL_CMD_SHFT 32 +#define SH_MMRBIST_CTL_CMD_MASK 0x0000007f00000000 + +/* SH_MMRBIST_CTL_IN_PROGRESS */ +/* Description: writing a 1 starts operation, hardware clears on co */ +/* mpletion */ +#define SH_MMRBIST_CTL_IN_PROGRESS_SHFT 40 +#define SH_MMRBIST_CTL_IN_PROGRESS_MASK 0x0000010000000000 + +/* SH_MMRBIST_CTL_FAIL */ +/* Description: mmr/bist had a data or address error */ +#define SH_MMRBIST_CTL_FAIL_SHFT 41 +#define SH_MMRBIST_CTL_FAIL_MASK 0x0000020000000000 + +/* SH_MMRBIST_CTL_MEM_IDLE */ +/* Description: all memory activity is complete */ +#define SH_MMRBIST_CTL_MEM_IDLE_SHFT 42 +#define SH_MMRBIST_CTL_MEM_IDLE_MASK 0x0000040000000000 + +/* SH_MMRBIST_CTL_RESET_STATE */ +/* Description: writing a 1 resets mmrbist hardware, hardware clear */ +/* s on completion */ +#define SH_MMRBIST_CTL_RESET_STATE_SHFT 44 +#define SH_MMRBIST_CTL_RESET_STATE_MASK 0x0000100000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DBUG_DATA_CFG" */ +/* configuration for md debug data muxes */ +/* ==================================================================== */ + +#define SH_MD_DBUG_DATA_CFG 0x0000000100020100 +#define SH_MD_DBUG_DATA_CFG_MASK 0x7777777777777777 +#define SH_MD_DBUG_DATA_CFG_INIT 0x0000000000000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE0_CHIPLET */ +/* Description: selects which md chiplet drives nibble0 */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE0_CHIPLET_SHFT 0 +#define SH_MD_DBUG_DATA_CFG_NIBBLE0_CHIPLET_MASK 0x0000000000000007 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE0_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE0_NIBBLE_SHFT 4 +#define SH_MD_DBUG_DATA_CFG_NIBBLE0_NIBBLE_MASK 0x0000000000000070 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE1_CHIPLET */ +/* Description: selects which md chiplet drives nibble1 */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE1_CHIPLET_SHFT 8 +#define SH_MD_DBUG_DATA_CFG_NIBBLE1_CHIPLET_MASK 0x0000000000000700 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE1_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE1_NIBBLE_SHFT 12 +#define SH_MD_DBUG_DATA_CFG_NIBBLE1_NIBBLE_MASK 0x0000000000007000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE2_CHIPLET */ +/* Description: selects which md chiplet drives nibble2 */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE2_CHIPLET_SHFT 16 +#define SH_MD_DBUG_DATA_CFG_NIBBLE2_CHIPLET_MASK 0x0000000000070000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE2_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE2_NIBBLE_SHFT 20 +#define SH_MD_DBUG_DATA_CFG_NIBBLE2_NIBBLE_MASK 0x0000000000700000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE3_CHIPLET */ +/* Description: selects which md chiplet drives nibble3 */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE3_CHIPLET_SHFT 24 +#define SH_MD_DBUG_DATA_CFG_NIBBLE3_CHIPLET_MASK 0x0000000007000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE3_NIBBLE */ +/* Description: selects which nibble from selected chiplet 
drives n */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE3_NIBBLE_SHFT 28 +#define SH_MD_DBUG_DATA_CFG_NIBBLE3_NIBBLE_MASK 0x0000000070000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE4_CHIPLET */ +/* Description: selects which md chiplet drives nibble4 */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE4_CHIPLET_SHFT 32 +#define SH_MD_DBUG_DATA_CFG_NIBBLE4_CHIPLET_MASK 0x0000000700000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE4_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE4_NIBBLE_SHFT 36 +#define SH_MD_DBUG_DATA_CFG_NIBBLE4_NIBBLE_MASK 0x0000007000000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE5_CHIPLET */ +/* Description: selects which md chiplet drives nibble5 */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE5_CHIPLET_SHFT 40 +#define SH_MD_DBUG_DATA_CFG_NIBBLE5_CHIPLET_MASK 0x0000070000000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE5_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE5_NIBBLE_SHFT 44 +#define SH_MD_DBUG_DATA_CFG_NIBBLE5_NIBBLE_MASK 0x0000700000000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE6_CHIPLET */ +/* Description: selects which md chiplet drives nibble6 */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE6_CHIPLET_SHFT 48 +#define SH_MD_DBUG_DATA_CFG_NIBBLE6_CHIPLET_MASK 0x0007000000000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE6_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE6_NIBBLE_SHFT 52 +#define SH_MD_DBUG_DATA_CFG_NIBBLE6_NIBBLE_MASK 0x0070000000000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE7_CHIPLET */ +/* Description: selects which md chiplet drives nibble7 */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE7_CHIPLET_SHFT 56 +#define SH_MD_DBUG_DATA_CFG_NIBBLE7_CHIPLET_MASK 0x0700000000000000 + +/* SH_MD_DBUG_DATA_CFG_NIBBLE7_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_DATA_CFG_NIBBLE7_NIBBLE_SHFT 60 +#define SH_MD_DBUG_DATA_CFG_NIBBLE7_NIBBLE_MASK 0x7000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DBUG_TRIGGER_CFG" */ +/* configuration for md debug triggers */ +/* ==================================================================== */ + +#define SH_MD_DBUG_TRIGGER_CFG 0x0000000100020108 +#define SH_MD_DBUG_TRIGGER_CFG_MASK 0xf777777777777777 +#define SH_MD_DBUG_TRIGGER_CFG_INIT 0x0000000000000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_CHIPLET */ +/* Description: selects which md chiplet drives nibble0 */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_CHIPLET_SHFT 0 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_CHIPLET_MASK 0x0000000000000007 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_NIBBLE_SHFT 4 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_NIBBLE_MASK 0x0000000000000070 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_CHIPLET */ +/* Description: selects which md chiplet drives nibble1 */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_CHIPLET_SHFT 8 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_CHIPLET_MASK 0x0000000000000700 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_NIBBLE_SHFT 12 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_NIBBLE_MASK 0x0000000000007000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_CHIPLET */ +/* Description: selects which md chiplet drives nibble2 */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_CHIPLET_SHFT 16 +#define 
SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_CHIPLET_MASK 0x0000000000070000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_NIBBLE_SHFT 20 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_NIBBLE_MASK 0x0000000000700000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_CHIPLET */ +/* Description: selects which md chiplet drives nibble3 */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_CHIPLET_SHFT 24 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_CHIPLET_MASK 0x0000000007000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_NIBBLE_SHFT 28 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_NIBBLE_MASK 0x0000000070000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_CHIPLET */ +/* Description: selects which md chiplet drives nibble4 */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_CHIPLET_SHFT 32 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_CHIPLET_MASK 0x0000000700000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_NIBBLE_SHFT 36 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_NIBBLE_MASK 0x0000007000000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_CHIPLET */ +/* Description: selects which md chiplet drives nibble5 */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_CHIPLET_SHFT 40 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_CHIPLET_MASK 0x0000070000000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_NIBBLE_SHFT 44 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_NIBBLE_MASK 0x0000700000000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_CHIPLET */ +/* Description: selects which md chiplet drives nibble6 */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_CHIPLET_SHFT 48 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_CHIPLET_MASK 0x0007000000000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_NIBBLE_SHFT 52 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_NIBBLE_MASK 0x0070000000000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_CHIPLET */ +/* Description: selects which md chiplet drives nibble7 */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_CHIPLET_SHFT 56 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_CHIPLET_MASK 0x0700000000000000 + +/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_NIBBLE */ +/* Description: selects which nibble from selected chiplet drives n */ +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_NIBBLE_SHFT 60 +#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_NIBBLE_MASK 0x7000000000000000 + +/* SH_MD_DBUG_TRIGGER_CFG_ENABLE */ +/* Description: enables triggering on pattern match */ +#define SH_MD_DBUG_TRIGGER_CFG_ENABLE_SHFT 63 +#define SH_MD_DBUG_TRIGGER_CFG_ENABLE_MASK 0x8000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DBUG_COMPARE" */ +/* md debug compare pattern and mask */ +/* ==================================================================== */ + +#define SH_MD_DBUG_COMPARE 0x0000000100020110 +#define SH_MD_DBUG_COMPARE_MASK 0xffffffffffffffff +#define SH_MD_DBUG_COMPARE_INIT 0x0000000000000000 + +/* SH_MD_DBUG_COMPARE_PATTERN */ +/* Description: pattern against which to compare dbug data for trig */ +#define SH_MD_DBUG_COMPARE_PATTERN_SHFT 0 +#define SH_MD_DBUG_COMPARE_PATTERN_MASK 0x00000000ffffffff + +/* SH_MD_DBUG_COMPARE_MASK */ +/* 
Description: bits to include in compare of dbug data for trigger */ +#define SH_MD_DBUG_COMPARE_MASK_SHFT 32 +#define SH_MD_DBUG_COMPARE_MASK_MASK 0xffffffff00000000 + +/* ==================================================================== */ +/* Register "SH_X_MOD_DBUG_SEL" */ +/* MD acx debug select */ +/* ==================================================================== */ + +#define SH_X_MOD_DBUG_SEL 0x0000000100020118 +#define SH_X_MOD_DBUG_SEL_MASK 0x03ffffffffffffff +#define SH_X_MOD_DBUG_SEL_INIT 0x0000000000000000 + +/* SH_X_MOD_DBUG_SEL_TAG_SEL */ +/* Description: tagmgr select */ +#define SH_X_MOD_DBUG_SEL_TAG_SEL_SHFT 0 +#define SH_X_MOD_DBUG_SEL_TAG_SEL_MASK 0x00000000000000ff + +/* SH_X_MOD_DBUG_SEL_WBQ_SEL */ +/* Description: wbqtg select */ +#define SH_X_MOD_DBUG_SEL_WBQ_SEL_SHFT 8 +#define SH_X_MOD_DBUG_SEL_WBQ_SEL_MASK 0x000000000000ff00 + +/* SH_X_MOD_DBUG_SEL_ARB_SEL */ +/* Description: arbque select */ +#define SH_X_MOD_DBUG_SEL_ARB_SEL_SHFT 16 +#define SH_X_MOD_DBUG_SEL_ARB_SEL_MASK 0x0000000000ff0000 + +/* SH_X_MOD_DBUG_SEL_ATL_SEL */ +/* Description: aintl select */ +#define SH_X_MOD_DBUG_SEL_ATL_SEL_SHFT 24 +#define SH_X_MOD_DBUG_SEL_ATL_SEL_MASK 0x00000007ff000000 + +/* SH_X_MOD_DBUG_SEL_ATR_SEL */ +/* Description: aintr select */ +#define SH_X_MOD_DBUG_SEL_ATR_SEL_SHFT 35 +#define SH_X_MOD_DBUG_SEL_ATR_SEL_MASK 0x00003ff800000000 + +/* SH_X_MOD_DBUG_SEL_DQL_SEL */ +/* Description: dqctr select */ +#define SH_X_MOD_DBUG_SEL_DQL_SEL_SHFT 46 +#define SH_X_MOD_DBUG_SEL_DQL_SEL_MASK 0x000fc00000000000 + +/* SH_X_MOD_DBUG_SEL_DQR_SEL */ +/* Description: dqctl select */ +#define SH_X_MOD_DBUG_SEL_DQR_SEL_SHFT 52 +#define SH_X_MOD_DBUG_SEL_DQR_SEL_MASK 0x03f0000000000000 + +/* ==================================================================== */ +/* Register "SH_X_DBUG_SEL" */ +/* MD acx debug select */ +/* ==================================================================== */ + +#define SH_X_DBUG_SEL 0x0000000100020120 +#define SH_X_DBUG_SEL_MASK 0x0000000000ffffff +#define SH_X_DBUG_SEL_INIT 0x0000000000000000 + +/* SH_X_DBUG_SEL_DBG_SEL */ +/* Description: debug select */ +#define SH_X_DBUG_SEL_DBG_SEL_SHFT 0 +#define SH_X_DBUG_SEL_DBG_SEL_MASK 0x0000000000ffffff + +/* ==================================================================== */ +/* Register "SH_X_LADDR_CMP" */ +/* MD acx address compare */ +/* ==================================================================== */ + +#define SH_X_LADDR_CMP 0x0000000100020128 +#define SH_X_LADDR_CMP_MASK 0x0fffffff0fffffff +#define SH_X_LADDR_CMP_INIT 0x0000000000000000 + +/* SH_X_LADDR_CMP_CMP_VAL */ +/* Description: Compare value */ +#define SH_X_LADDR_CMP_CMP_VAL_SHFT 0 +#define SH_X_LADDR_CMP_CMP_VAL_MASK 0x000000000fffffff + +/* SH_X_LADDR_CMP_MASK_VAL */ +/* Description: Mask value */ +#define SH_X_LADDR_CMP_MASK_VAL_SHFT 32 +#define SH_X_LADDR_CMP_MASK_VAL_MASK 0x0fffffff00000000 + +/* ==================================================================== */ +/* Register "SH_X_RADDR_CMP" */ +/* MD acx address compare */ +/* ==================================================================== */ + +#define SH_X_RADDR_CMP 0x0000000100020130 +#define SH_X_RADDR_CMP_MASK 0x0fffffff0fffffff +#define SH_X_RADDR_CMP_INIT 0x0000000000000000 + +/* SH_X_RADDR_CMP_CMP_VAL */ +/* Description: Compare value */ +#define SH_X_RADDR_CMP_CMP_VAL_SHFT 0 +#define SH_X_RADDR_CMP_CMP_VAL_MASK 0x000000000fffffff + +/* SH_X_RADDR_CMP_MASK_VAL */ +/* Description: Mask value */ +#define SH_X_RADDR_CMP_MASK_VAL_SHFT 32 +#define 
SH_X_RADDR_CMP_MASK_VAL_MASK 0x0fffffff00000000 + +/* ==================================================================== */ +/* Register "SH_X_TAG_CMP" */ +/* MD acx tagmgr compare */ +/* ==================================================================== */ + +#define SH_X_TAG_CMP 0x0000000100020138 +#define SH_X_TAG_CMP_MASK 0x007fffffffffffff +#define SH_X_TAG_CMP_INIT 0x0000000000000000 + +/* SH_X_TAG_CMP_CMD */ +/* Description: Command compare value */ +#define SH_X_TAG_CMP_CMD_SHFT 0 +#define SH_X_TAG_CMP_CMD_MASK 0x00000000000000ff + +/* SH_X_TAG_CMP_ADDR */ +/* Description: Address compare value */ +#define SH_X_TAG_CMP_ADDR_SHFT 8 +#define SH_X_TAG_CMP_ADDR_MASK 0x000001ffffffff00 + +/* SH_X_TAG_CMP_SRC */ +/* Description: Source compare value */ +#define SH_X_TAG_CMP_SRC_SHFT 41 +#define SH_X_TAG_CMP_SRC_MASK 0x007ffe0000000000 + +/* ==================================================================== */ +/* Register "SH_X_TAG_MASK" */ +/* MD acx tagmgr mask */ +/* ==================================================================== */ + +#define SH_X_TAG_MASK 0x0000000100020140 +#define SH_X_TAG_MASK_MASK 0x007fffffffffffff +#define SH_X_TAG_MASK_INIT 0x0000000000000000 + +/* SH_X_TAG_MASK_CMD */ +/* Description: Command compare value */ +#define SH_X_TAG_MASK_CMD_SHFT 0 +#define SH_X_TAG_MASK_CMD_MASK 0x00000000000000ff + +/* SH_X_TAG_MASK_ADDR */ +/* Description: Address compare value */ +#define SH_X_TAG_MASK_ADDR_SHFT 8 +#define SH_X_TAG_MASK_ADDR_MASK 0x000001ffffffff00 + +/* SH_X_TAG_MASK_SRC */ +/* Description: Source compare value */ +#define SH_X_TAG_MASK_SRC_SHFT 41 +#define SH_X_TAG_MASK_SRC_MASK 0x007ffe0000000000 + +/* ==================================================================== */ +/* Register "SH_Y_MOD_DBUG_SEL" */ +/* MD acy debug select */ +/* ==================================================================== */ + +#define SH_Y_MOD_DBUG_SEL 0x0000000100020148 +#define SH_Y_MOD_DBUG_SEL_MASK 0x03ffffffffffffff +#define SH_Y_MOD_DBUG_SEL_INIT 0x0000000000000000 + +/* SH_Y_MOD_DBUG_SEL_TAG_SEL */ +/* Description: tagmgr select */ +#define SH_Y_MOD_DBUG_SEL_TAG_SEL_SHFT 0 +#define SH_Y_MOD_DBUG_SEL_TAG_SEL_MASK 0x00000000000000ff + +/* SH_Y_MOD_DBUG_SEL_WBQ_SEL */ +/* Description: wbqtg select */ +#define SH_Y_MOD_DBUG_SEL_WBQ_SEL_SHFT 8 +#define SH_Y_MOD_DBUG_SEL_WBQ_SEL_MASK 0x000000000000ff00 + +/* SH_Y_MOD_DBUG_SEL_ARB_SEL */ +/* Description: arbque select */ +#define SH_Y_MOD_DBUG_SEL_ARB_SEL_SHFT 16 +#define SH_Y_MOD_DBUG_SEL_ARB_SEL_MASK 0x0000000000ff0000 + +/* SH_Y_MOD_DBUG_SEL_ATL_SEL */ +/* Description: aintl select */ +#define SH_Y_MOD_DBUG_SEL_ATL_SEL_SHFT 24 +#define SH_Y_MOD_DBUG_SEL_ATL_SEL_MASK 0x00000007ff000000 + +/* SH_Y_MOD_DBUG_SEL_ATR_SEL */ +/* Description: aintr select */ +#define SH_Y_MOD_DBUG_SEL_ATR_SEL_SHFT 35 +#define SH_Y_MOD_DBUG_SEL_ATR_SEL_MASK 0x00003ff800000000 + +/* SH_Y_MOD_DBUG_SEL_DQL_SEL */ +/* Description: dqctr select */ +#define SH_Y_MOD_DBUG_SEL_DQL_SEL_SHFT 46 +#define SH_Y_MOD_DBUG_SEL_DQL_SEL_MASK 0x000fc00000000000 + +/* SH_Y_MOD_DBUG_SEL_DQR_SEL */ +/* Description: dqctl select */ +#define SH_Y_MOD_DBUG_SEL_DQR_SEL_SHFT 52 +#define SH_Y_MOD_DBUG_SEL_DQR_SEL_MASK 0x03f0000000000000 + +/* ==================================================================== */ +/* Register "SH_Y_DBUG_SEL" */ +/* MD acy debug select */ +/* ==================================================================== */ + +#define SH_Y_DBUG_SEL 0x0000000100020150 +#define SH_Y_DBUG_SEL_MASK 0x0000000000ffffff +#define 
SH_Y_DBUG_SEL_INIT 0x0000000000000000 + +/* SH_Y_DBUG_SEL_DBG_SEL */ +/* Description: debug select */ +#define SH_Y_DBUG_SEL_DBG_SEL_SHFT 0 +#define SH_Y_DBUG_SEL_DBG_SEL_MASK 0x0000000000ffffff + +/* ==================================================================== */ +/* Register "SH_Y_LADDR_CMP" */ +/* MD acy address compare */ +/* ==================================================================== */ + +#define SH_Y_LADDR_CMP 0x0000000100020158 +#define SH_Y_LADDR_CMP_MASK 0x0fffffff0fffffff +#define SH_Y_LADDR_CMP_INIT 0x0000000000000000 + +/* SH_Y_LADDR_CMP_CMP_VAL */ +/* Description: Compare value */ +#define SH_Y_LADDR_CMP_CMP_VAL_SHFT 0 +#define SH_Y_LADDR_CMP_CMP_VAL_MASK 0x000000000fffffff + +/* SH_Y_LADDR_CMP_MASK_VAL */ +/* Description: Mask value */ +#define SH_Y_LADDR_CMP_MASK_VAL_SHFT 32 +#define SH_Y_LADDR_CMP_MASK_VAL_MASK 0x0fffffff00000000 + +/* ==================================================================== */ +/* Register "SH_Y_RADDR_CMP" */ +/* MD acy address compare */ +/* ==================================================================== */ + +#define SH_Y_RADDR_CMP 0x0000000100020160 +#define SH_Y_RADDR_CMP_MASK 0x0fffffff0fffffff +#define SH_Y_RADDR_CMP_INIT 0x0000000000000000 + +/* SH_Y_RADDR_CMP_CMP_VAL */ +/* Description: Compare value */ +#define SH_Y_RADDR_CMP_CMP_VAL_SHFT 0 +#define SH_Y_RADDR_CMP_CMP_VAL_MASK 0x000000000fffffff + +/* SH_Y_RADDR_CMP_MASK_VAL */ +/* Description: Mask value */ +#define SH_Y_RADDR_CMP_MASK_VAL_SHFT 32 +#define SH_Y_RADDR_CMP_MASK_VAL_MASK 0x0fffffff00000000 + +/* ==================================================================== */ +/* Register "SH_Y_TAG_CMP" */ +/* MD acy tagmgr compare */ +/* ==================================================================== */ + +#define SH_Y_TAG_CMP 0x0000000100020168 +#define SH_Y_TAG_CMP_MASK 0x007fffffffffffff +#define SH_Y_TAG_CMP_INIT 0x0000000000000000 + +/* SH_Y_TAG_CMP_CMD */ +/* Description: Command compare value */ +#define SH_Y_TAG_CMP_CMD_SHFT 0 +#define SH_Y_TAG_CMP_CMD_MASK 0x00000000000000ff + +/* SH_Y_TAG_CMP_ADDR */ +/* Description: Address compare value */ +#define SH_Y_TAG_CMP_ADDR_SHFT 8 +#define SH_Y_TAG_CMP_ADDR_MASK 0x000001ffffffff00 + +/* SH_Y_TAG_CMP_SRC */ +/* Description: Source compare value */ +#define SH_Y_TAG_CMP_SRC_SHFT 41 +#define SH_Y_TAG_CMP_SRC_MASK 0x007ffe0000000000 + +/* ==================================================================== */ +/* Register "SH_Y_TAG_MASK" */ +/* MD acy tagmgr mask */ +/* ==================================================================== */ + +#define SH_Y_TAG_MASK 0x0000000100020170 +#define SH_Y_TAG_MASK_MASK 0x007fffffffffffff +#define SH_Y_TAG_MASK_INIT 0x0000000000000000 + +/* SH_Y_TAG_MASK_CMD */ +/* Description: Command compare value */ +#define SH_Y_TAG_MASK_CMD_SHFT 0 +#define SH_Y_TAG_MASK_CMD_MASK 0x00000000000000ff + +/* SH_Y_TAG_MASK_ADDR */ +/* Description: Address compare value */ +#define SH_Y_TAG_MASK_ADDR_SHFT 8 +#define SH_Y_TAG_MASK_ADDR_MASK 0x000001ffffffff00 + +/* SH_Y_TAG_MASK_SRC */ +/* Description: Source compare value */ +#define SH_Y_TAG_MASK_SRC_SHFT 41 +#define SH_Y_TAG_MASK_SRC_MASK 0x007ffe0000000000 + +/* ==================================================================== */ +/* Register "SH_MD_JNR_DBUG_DATA_CFG" */ +/* configuration for md jnr debug data muxes */ +/* ==================================================================== */ + +#define SH_MD_JNR_DBUG_DATA_CFG 0x0000000100020178 +#define SH_MD_JNR_DBUG_DATA_CFG_MASK 0x0000000077777777 +#define 
SH_MD_JNR_DBUG_DATA_CFG_INIT 0x0000000000000000 + +/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE0_SEL */ +/* Description: selects which nibble drives nibble0 */ +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE0_SEL_SHFT 0 +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE0_SEL_MASK 0x0000000000000007 + +/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE1_SEL */ +/* Description: selects which nibble drives nibble1 */ +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE1_SEL_SHFT 4 +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE1_SEL_MASK 0x0000000000000070 + +/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE2_SEL */ +/* Description: selects which nibble drives nibble2 */ +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE2_SEL_SHFT 8 +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE2_SEL_MASK 0x0000000000000700 + +/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE3_SEL */ +/* Description: selects which nibble drives nibble3 */ +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE3_SEL_SHFT 12 +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE3_SEL_MASK 0x0000000000007000 + +/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE4_SEL */ +/* Description: selects which nibble drives nibble4 */ +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE4_SEL_SHFT 16 +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE4_SEL_MASK 0x0000000000070000 + +/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE5_SEL */ +/* Description: selects which nibble drives nibble5 */ +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE5_SEL_SHFT 20 +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE5_SEL_MASK 0x0000000000700000 + +/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE6_SEL */ +/* Description: selects which nibble drives nibble6 */ +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE6_SEL_SHFT 24 +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE6_SEL_MASK 0x0000000007000000 + +/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE7_SEL */ +/* Description: selects which nibble drives nibble7 */ +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE7_SEL_SHFT 28 +#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE7_SEL_MASK 0x0000000070000000 + +/* ==================================================================== */ +/* Register "SH_MD_LAST_CREDIT" */ +/* captures last credit values on reset */ +/* ==================================================================== */ + +#define SH_MD_LAST_CREDIT 0x0000000100020180 +#define SH_MD_LAST_CREDIT_MASK 0x0000003f3f3f3f3f +#define SH_MD_LAST_CREDIT_INIT 0x0000000000000000 + +/* SH_MD_LAST_CREDIT_RQ_TO_PI */ +/* Description: capture of request credits to pi */ +#define SH_MD_LAST_CREDIT_RQ_TO_PI_SHFT 0 +#define SH_MD_LAST_CREDIT_RQ_TO_PI_MASK 0x000000000000003f + +/* SH_MD_LAST_CREDIT_RP_TO_PI */ +/* Description: capture of reply credits to pi */ +#define SH_MD_LAST_CREDIT_RP_TO_PI_SHFT 8 +#define SH_MD_LAST_CREDIT_RP_TO_PI_MASK 0x0000000000003f00 + +/* SH_MD_LAST_CREDIT_RQ_TO_XN */ +/* Description: capture of request credits to xn */ +#define SH_MD_LAST_CREDIT_RQ_TO_XN_SHFT 16 +#define SH_MD_LAST_CREDIT_RQ_TO_XN_MASK 0x00000000003f0000 + +/* SH_MD_LAST_CREDIT_RP_TO_XN */ +/* Description: capture of reply credits to xn */ +#define SH_MD_LAST_CREDIT_RP_TO_XN_SHFT 24 +#define SH_MD_LAST_CREDIT_RP_TO_XN_MASK 0x000000003f000000 + +/* SH_MD_LAST_CREDIT_TO_LB */ +/* Description: capture of credits to pi */ +#define SH_MD_LAST_CREDIT_TO_LB_SHFT 32 +#define SH_MD_LAST_CREDIT_TO_LB_MASK 0x0000003f00000000 + +/* ==================================================================== */ +/* Register "SH_MEM_CAPTURE_ADDR" */ +/* Address capture address register */ +/* ==================================================================== */ + +#define SH_MEM_CAPTURE_ADDR 0x0000000100020300 +#define SH_MEM_CAPTURE_ADDR_MASK 0x00000ffffffffff8 +#define SH_MEM_CAPTURE_ADDR_INIT 0x0000000000000000 + +/* 
SH_MEM_CAPTURE_ADDR_ADDR */ +/* Description: upper bits of address */ +#define SH_MEM_CAPTURE_ADDR_ADDR_SHFT 3 +#define SH_MEM_CAPTURE_ADDR_ADDR_MASK 0x0000000ffffffff8 + +/* SH_MEM_CAPTURE_ADDR_CMD */ +/* Description: command of reference */ +#define SH_MEM_CAPTURE_ADDR_CMD_SHFT 36 +#define SH_MEM_CAPTURE_ADDR_CMD_MASK 0x00000ff000000000 + +/* ==================================================================== */ +/* Register "SH_MEM_CAPTURE_MASK" */ +/* Address capture mask register */ +/* ==================================================================== */ + +#define SH_MEM_CAPTURE_MASK 0x0000000100020308 +#define SH_MEM_CAPTURE_MASK_MASK 0x00003ffffffffff8 +#define SH_MEM_CAPTURE_MASK_INIT 0x0000000000000000 + +/* SH_MEM_CAPTURE_MASK_ADDR */ +/* Description: upper bits of address */ +#define SH_MEM_CAPTURE_MASK_ADDR_SHFT 3 +#define SH_MEM_CAPTURE_MASK_ADDR_MASK 0x0000000ffffffff8 + +/* SH_MEM_CAPTURE_MASK_CMD */ +/* Description: command of reference */ +#define SH_MEM_CAPTURE_MASK_CMD_SHFT 36 +#define SH_MEM_CAPTURE_MASK_CMD_MASK 0x00000ff000000000 + +/* SH_MEM_CAPTURE_MASK_ENABLE_LOCAL */ +/* Description: capture references originating locally */ +#define SH_MEM_CAPTURE_MASK_ENABLE_LOCAL_SHFT 44 +#define SH_MEM_CAPTURE_MASK_ENABLE_LOCAL_MASK 0x0000100000000000 + +/* SH_MEM_CAPTURE_MASK_ENABLE_REMOTE */ +/* Description: capture references originating remotely */ +#define SH_MEM_CAPTURE_MASK_ENABLE_REMOTE_SHFT 45 +#define SH_MEM_CAPTURE_MASK_ENABLE_REMOTE_MASK 0x0000200000000000 + +/* ==================================================================== */ +/* Register "SH_MEM_CAPTURE_HDR" */ +/* Address capture header register */ +/* ==================================================================== */ + +#define SH_MEM_CAPTURE_HDR 0x0000000100020310 +#define SH_MEM_CAPTURE_HDR_MASK 0xfffffffffffffff8 +#define SH_MEM_CAPTURE_HDR_INIT 0x0000000000000000 + +/* SH_MEM_CAPTURE_HDR_ADDR */ +/* Description: upper bits of reference address */ +#define SH_MEM_CAPTURE_HDR_ADDR_SHFT 3 +#define SH_MEM_CAPTURE_HDR_ADDR_MASK 0x0000000ffffffff8 + +/* SH_MEM_CAPTURE_HDR_CMD */ +/* Description: command of reference */ +#define SH_MEM_CAPTURE_HDR_CMD_SHFT 36 +#define SH_MEM_CAPTURE_HDR_CMD_MASK 0x00000ff000000000 + +/* SH_MEM_CAPTURE_HDR_SRC */ +/* Description: source node of reference */ +#define SH_MEM_CAPTURE_HDR_SRC_SHFT 44 +#define SH_MEM_CAPTURE_HDR_SRC_MASK 0x03fff00000000000 + +/* SH_MEM_CAPTURE_HDR_CNTR */ +/* Description: increments on every capture */ +#define SH_MEM_CAPTURE_HDR_CNTR_SHFT 58 +#define SH_MEM_CAPTURE_HDR_CNTR_MASK 0xfc00000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_CONFIG" */ +/* DQ directory config register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_CONFIG 0x0000000100030000 +#define SH_MD_DQLP_MMR_DIR_CONFIG_MASK 0x000000000000001f +#define SH_MD_DQLP_MMR_DIR_CONFIG_INIT 0x0000000000000010 + +/* SH_MD_DQLP_MMR_DIR_CONFIG_SYS_SIZE */ +/* Description: system size code */ +#define SH_MD_DQLP_MMR_DIR_CONFIG_SYS_SIZE_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_CONFIG_SYS_SIZE_MASK 0x0000000000000007 + +/* SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRECC */ +/* Description: enable directory ecc correction */ +#define SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRECC_SHFT 3 +#define SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRECC_MASK 0x0000000000000008 + +/* SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRPOIS */ +/* Description: enable local poisoning for dir table fall-through */ +#define 
SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRPOIS_SHFT 4 +#define SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRPOIS_MASK 0x0000000000000010 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC0" */ +/* node [63:0] presence bits */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRESVEC0 0x0000000100030100 +#define SH_MD_DQLP_MMR_DIR_PRESVEC0_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_PRESVEC0_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRESVEC0_VEC */ +/* Description: node presence bits, 1=present */ +#define SH_MD_DQLP_MMR_DIR_PRESVEC0_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRESVEC0_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC1" */ +/* node [127:64] presence bits */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRESVEC1 0x0000000100030110 +#define SH_MD_DQLP_MMR_DIR_PRESVEC1_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_PRESVEC1_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRESVEC1_VEC */ +/* Description: node presence bits, 1=present */ +#define SH_MD_DQLP_MMR_DIR_PRESVEC1_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRESVEC1_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC2" */ +/* node [191:128] presence bits */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRESVEC2 0x0000000100030120 +#define SH_MD_DQLP_MMR_DIR_PRESVEC2_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_PRESVEC2_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRESVEC2_VEC */ +/* Description: node presence bits, 1=present */ +#define SH_MD_DQLP_MMR_DIR_PRESVEC2_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRESVEC2_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC3" */ +/* node [255:192] presence bits */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRESVEC3 0x0000000100030130 +#define SH_MD_DQLP_MMR_DIR_PRESVEC3_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_PRESVEC3_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRESVEC3_VEC */ +/* Description: node presence bits, 1=present */ +#define SH_MD_DQLP_MMR_DIR_PRESVEC3_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRESVEC3_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC0" */ +/* local vector for acc=0 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_LOCVEC0 0x0000000100030200 +#define SH_MD_DQLP_MMR_DIR_LOCVEC0_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_LOCVEC0_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_LOCVEC0_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQLP_MMR_DIR_LOCVEC0_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_LOCVEC0_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC1" */ +/* local vector for acc=1 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_LOCVEC1 0x0000000100030210 +#define SH_MD_DQLP_MMR_DIR_LOCVEC1_MASK 0xffffffffffffffff +#define 
SH_MD_DQLP_MMR_DIR_LOCVEC1_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_LOCVEC1_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQLP_MMR_DIR_LOCVEC1_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_LOCVEC1_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC2" */ +/* local vector for acc=2 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_LOCVEC2 0x0000000100030220 +#define SH_MD_DQLP_MMR_DIR_LOCVEC2_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_LOCVEC2_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_LOCVEC2_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQLP_MMR_DIR_LOCVEC2_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_LOCVEC2_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC3" */ +/* local vector for acc=3 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_LOCVEC3 0x0000000100030230 +#define SH_MD_DQLP_MMR_DIR_LOCVEC3_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_LOCVEC3_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_LOCVEC3_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQLP_MMR_DIR_LOCVEC3_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_LOCVEC3_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC4" */ +/* local vector for acc=4 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_LOCVEC4 0x0000000100030240 +#define SH_MD_DQLP_MMR_DIR_LOCVEC4_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_LOCVEC4_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_LOCVEC4_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQLP_MMR_DIR_LOCVEC4_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_LOCVEC4_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC5" */ +/* local vector for acc=5 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_LOCVEC5 0x0000000100030250 +#define SH_MD_DQLP_MMR_DIR_LOCVEC5_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_LOCVEC5_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_LOCVEC5_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQLP_MMR_DIR_LOCVEC5_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_LOCVEC5_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC6" */ +/* local vector for acc=6 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_LOCVEC6 0x0000000100030260 +#define SH_MD_DQLP_MMR_DIR_LOCVEC6_MASK 0xffffffffffffffff +#define SH_MD_DQLP_MMR_DIR_LOCVEC6_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_LOCVEC6_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQLP_MMR_DIR_LOCVEC6_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_LOCVEC6_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC7" */ +/* local vector for acc=7 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_LOCVEC7 0x0000000100030270 +#define SH_MD_DQLP_MMR_DIR_LOCVEC7_MASK 0xffffffffffffffff 
+#define SH_MD_DQLP_MMR_DIR_LOCVEC7_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_LOCVEC7_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQLP_MMR_DIR_LOCVEC7_VEC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_LOCVEC7_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC0" */ +/* privilege vector for acc=0 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRIVEC0 0x0000000100030300 +#define SH_MD_DQLP_MMR_DIR_PRIVEC0_MASK 0x000000000fffffff +#define SH_MD_DQLP_MMR_DIR_PRIVEC0_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRIVEC0_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC0_IN_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRIVEC0_IN_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_PRIVEC0_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC0_OUT_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_PRIVEC0_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC1" */ +/* privilege vector for acc=1 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRIVEC1 0x0000000100030310 +#define SH_MD_DQLP_MMR_DIR_PRIVEC1_MASK 0x000000000fffffff +#define SH_MD_DQLP_MMR_DIR_PRIVEC1_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRIVEC1_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC1_IN_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRIVEC1_IN_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_PRIVEC1_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC1_OUT_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_PRIVEC1_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC2" */ +/* privilege vector for acc=2 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRIVEC2 0x0000000100030320 +#define SH_MD_DQLP_MMR_DIR_PRIVEC2_MASK 0x000000000fffffff +#define SH_MD_DQLP_MMR_DIR_PRIVEC2_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRIVEC2_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC2_IN_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRIVEC2_IN_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_PRIVEC2_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC2_OUT_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_PRIVEC2_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC3" */ +/* privilege vector for acc=3 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRIVEC3 0x0000000100030330 +#define SH_MD_DQLP_MMR_DIR_PRIVEC3_MASK 0x000000000fffffff +#define SH_MD_DQLP_MMR_DIR_PRIVEC3_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRIVEC3_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC3_IN_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRIVEC3_IN_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_PRIVEC3_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC3_OUT_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_PRIVEC3_OUT_MASK 
0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC4" */ +/* privilege vector for acc=4 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRIVEC4 0x0000000100030340 +#define SH_MD_DQLP_MMR_DIR_PRIVEC4_MASK 0x000000000fffffff +#define SH_MD_DQLP_MMR_DIR_PRIVEC4_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRIVEC4_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC4_IN_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRIVEC4_IN_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_PRIVEC4_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC4_OUT_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_PRIVEC4_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC5" */ +/* privilege vector for acc=5 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRIVEC5 0x0000000100030350 +#define SH_MD_DQLP_MMR_DIR_PRIVEC5_MASK 0x000000000fffffff +#define SH_MD_DQLP_MMR_DIR_PRIVEC5_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRIVEC5_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC5_IN_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRIVEC5_IN_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_PRIVEC5_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC5_OUT_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_PRIVEC5_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC6" */ +/* privilege vector for acc=6 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRIVEC6 0x0000000100030360 +#define SH_MD_DQLP_MMR_DIR_PRIVEC6_MASK 0x000000000fffffff +#define SH_MD_DQLP_MMR_DIR_PRIVEC6_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRIVEC6_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC6_IN_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRIVEC6_IN_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_PRIVEC6_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC6_OUT_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_PRIVEC6_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC7" */ +/* privilege vector for acc=7 */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_PRIVEC7 0x0000000100030370 +#define SH_MD_DQLP_MMR_DIR_PRIVEC7_MASK 0x000000000fffffff +#define SH_MD_DQLP_MMR_DIR_PRIVEC7_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_PRIVEC7_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC7_IN_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_PRIVEC7_IN_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_PRIVEC7_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQLP_MMR_DIR_PRIVEC7_OUT_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_PRIVEC7_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_TIMER" */ +/* MD SXRO timer */ +/* ==================================================================== 
*/ + +#define SH_MD_DQLP_MMR_DIR_TIMER 0x0000000100030400 +#define SH_MD_DQLP_MMR_DIR_TIMER_MASK 0x00000000003fffff +#define SH_MD_DQLP_MMR_DIR_TIMER_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_TIMER_TIMER_DIV */ +/* Description: timer divide register */ +#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_DIV_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_DIV_MASK 0x0000000000000fff + +/* SH_MD_DQLP_MMR_DIR_TIMER_TIMER_EN */ +/* Description: timer enable */ +#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_EN_SHFT 12 +#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_EN_MASK 0x0000000000001000 + +/* SH_MD_DQLP_MMR_DIR_TIMER_TIMER_CUR */ +/* Description: value of current timer */ +#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_CUR_SHFT 13 +#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_CUR_MASK 0x00000000003fe000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY" */ +/* directory pio write data */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY 0x0000000100031000 +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_MASK 0x03ffffffffffffff +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRA */ +/* Description: directory entry A */ +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRA_SHFT 0 +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRA_MASK 0x0000000003ffffff + +/* SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRB */ +/* Description: directory entry B */ +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRB_SHFT 26 +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRB_MASK 0x000ffffffc000000 + +/* SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_PRI */ +/* Description: directory priority */ +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_PRI_SHFT 52 +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_PRI_MASK 0x0070000000000000 + +/* SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_ACC */ +/* Description: directory access bits */ +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_ACC_SHFT 55 +#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_ACC_MASK 0x0380000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_PIOWD_DIR_ECC" */ +/* directory ecc register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC 0x0000000100031010 +#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_MASK 0x0000000000003fff +#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCA */ +/* Description: XOR bits for directory ECC group 1 */ +#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCA_SHFT 0 +#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCA_MASK 0x000000000000007f + +/* SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCB */ +/* Description: XOR bits for directory ECC group 2 */ +#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCB_SHFT 7 +#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCB_MASK 0x0000000000003f80 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY" */ +/* x directory pio read data */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY 0x0000000100032000 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_MASK 0x0fffffffffffffff +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRA */ +/* Description: directory entry A */ +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRA_SHFT 0 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRA_MASK 0x0000000003ffffff + +/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRB 
*/ +/* Description: directory entry B */ +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRB_SHFT 26 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRB_MASK 0x000ffffffc000000 + +/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_PRI */ +/* Description: directory priority */ +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_PRI_SHFT 52 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_PRI_MASK 0x0070000000000000 + +/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_ACC */ +/* Description: directory access bits */ +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_ACC_SHFT 55 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_ACC_MASK 0x0380000000000000 + +/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_COR */ +/* Description: correctable ecc error */ +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_COR_SHFT 58 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_COR_MASK 0x0400000000000000 + +/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_UNC */ +/* Description: uncorrectable ecc error */ +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_UNC_SHFT 59 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_UNC_MASK 0x0800000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XPIORD_XDIR_ECC" */ +/* x directory ecc */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC 0x0000000100032010 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_MASK 0x0000000000003fff +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCA */ +/* Description: group 1 ecc */ +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCA_SHFT 0 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCA_MASK 0x000000000000007f + +/* SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCB */ +/* Description: group 2 ecc */ +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCB_SHFT 7 +#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCB_MASK 0x0000000000003f80 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY" */ +/* y directory pio read data */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY 0x0000000100032800 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_MASK 0x0fffffffffffffff +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRA */ +/* Description: directory entry A */ +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRA_SHFT 0 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRA_MASK 0x0000000003ffffff + +/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRB */ +/* Description: directory entry B */ +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRB_SHFT 26 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRB_MASK 0x000ffffffc000000 + +/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_PRI */ +/* Description: directory priority */ +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_PRI_SHFT 52 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_PRI_MASK 0x0070000000000000 + +/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_ACC */ +/* Description: directory access bits */ +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_ACC_SHFT 55 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_ACC_MASK 0x0380000000000000 + +/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_COR */ +/* Description: correctable ecc error */ +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_COR_SHFT 58 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_COR_MASK 0x0400000000000000 + +/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_UNC */ +/* Description: uncorrectable ecc error */ +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_UNC_SHFT 59 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_UNC_MASK 0x0800000000000000 + +/* 
==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YPIORD_YDIR_ECC" */ +/* y directory ecc */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC 0x0000000100032810 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_MASK 0x0000000000003fff +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCA */ +/* Description: group 1 ecc */ +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCA_SHFT 0 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCA_MASK 0x000000000000007f + +/* SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCB */ +/* Description: group 2 ecc */ +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCB_SHFT 7 +#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCB_MASK 0x0000000000003f80 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XCERR1" */ +/* correctable dir ecc group 1 error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XCERR1 0x0000000100033000 +#define SH_MD_DQLP_MMR_XCERR1_MASK 0x0000007fffffffff +#define SH_MD_DQLP_MMR_XCERR1_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XCERR1_GRP1 */ +/* Description: ecc group 1 bits */ +#define SH_MD_DQLP_MMR_XCERR1_GRP1_SHFT 0 +#define SH_MD_DQLP_MMR_XCERR1_GRP1_MASK 0x0000000fffffffff + +/* SH_MD_DQLP_MMR_XCERR1_VAL */ +/* Description: correctable ecc error in group 1 bits */ +#define SH_MD_DQLP_MMR_XCERR1_VAL_SHFT 36 +#define SH_MD_DQLP_MMR_XCERR1_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQLP_MMR_XCERR1_MORE */ +/* Description: more than one correctable ecc error in group 1 */ +#define SH_MD_DQLP_MMR_XCERR1_MORE_SHFT 37 +#define SH_MD_DQLP_MMR_XCERR1_MORE_MASK 0x0000002000000000 + +/* SH_MD_DQLP_MMR_XCERR1_ARM */ +/* Description: writing 1 arms uncorrectable ecc error capture */ +#define SH_MD_DQLP_MMR_XCERR1_ARM_SHFT 38 +#define SH_MD_DQLP_MMR_XCERR1_ARM_MASK 0x0000004000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XCERR2" */ +/* correctable dir ecc group 2 error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XCERR2 0x0000000100033010 +#define SH_MD_DQLP_MMR_XCERR2_MASK 0x0000003fffffffff +#define SH_MD_DQLP_MMR_XCERR2_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XCERR2_GRP2 */ +/* Description: ecc group 2 bits */ +#define SH_MD_DQLP_MMR_XCERR2_GRP2_SHFT 0 +#define SH_MD_DQLP_MMR_XCERR2_GRP2_MASK 0x0000000fffffffff + +/* SH_MD_DQLP_MMR_XCERR2_VAL */ +/* Description: correctable ecc error in group 2 bits */ +#define SH_MD_DQLP_MMR_XCERR2_VAL_SHFT 36 +#define SH_MD_DQLP_MMR_XCERR2_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQLP_MMR_XCERR2_MORE */ +/* Description: more than one correctable ecc error in group 2 */ +#define SH_MD_DQLP_MMR_XCERR2_MORE_SHFT 37 +#define SH_MD_DQLP_MMR_XCERR2_MORE_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XUERR1" */ +/* uncorrectable dir ecc group 1 error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XUERR1 0x0000000100033020 +#define SH_MD_DQLP_MMR_XUERR1_MASK 0x0000007fffffffff +#define SH_MD_DQLP_MMR_XUERR1_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XUERR1_GRP1 */ +/* Description: ecc group 1 bits */ +#define SH_MD_DQLP_MMR_XUERR1_GRP1_SHFT 0 +#define SH_MD_DQLP_MMR_XUERR1_GRP1_MASK 
0x0000000fffffffff + +/* SH_MD_DQLP_MMR_XUERR1_VAL */ +/* Description: uncorrectable ecc error in group 1 bits */ +#define SH_MD_DQLP_MMR_XUERR1_VAL_SHFT 36 +#define SH_MD_DQLP_MMR_XUERR1_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQLP_MMR_XUERR1_MORE */ +/* Description: more than one uncorrectable ecc error in group 1 */ +#define SH_MD_DQLP_MMR_XUERR1_MORE_SHFT 37 +#define SH_MD_DQLP_MMR_XUERR1_MORE_MASK 0x0000002000000000 + +/* SH_MD_DQLP_MMR_XUERR1_ARM */ +/* Description: writing 1 arms uncorrectable ecc error capture */ +#define SH_MD_DQLP_MMR_XUERR1_ARM_SHFT 38 +#define SH_MD_DQLP_MMR_XUERR1_ARM_MASK 0x0000004000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XUERR2" */ +/* uncorrectable dir ecc group 2 error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XUERR2 0x0000000100033030 +#define SH_MD_DQLP_MMR_XUERR2_MASK 0x0000003fffffffff +#define SH_MD_DQLP_MMR_XUERR2_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XUERR2_GRP2 */ +/* Description: ecc group 2 bits */ +#define SH_MD_DQLP_MMR_XUERR2_GRP2_SHFT 0 +#define SH_MD_DQLP_MMR_XUERR2_GRP2_MASK 0x0000000fffffffff + +/* SH_MD_DQLP_MMR_XUERR2_VAL */ +/* Description: uncorrectable ecc error in group 2 bits */ +#define SH_MD_DQLP_MMR_XUERR2_VAL_SHFT 36 +#define SH_MD_DQLP_MMR_XUERR2_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQLP_MMR_XUERR2_MORE */ +/* Description: more than one uncorrectable ecc error in group 2 */ +#define SH_MD_DQLP_MMR_XUERR2_MORE_SHFT 37 +#define SH_MD_DQLP_MMR_XUERR2_MORE_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XPERR" */ +/* protocol error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XPERR 0x0000000100033040 +#define SH_MD_DQLP_MMR_XPERR_MASK 0x7fffffffffffffff +#define SH_MD_DQLP_MMR_XPERR_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XPERR_DIR */ +/* Description: directory entry */ +#define SH_MD_DQLP_MMR_XPERR_DIR_SHFT 0 +#define SH_MD_DQLP_MMR_XPERR_DIR_MASK 0x0000000003ffffff + +/* SH_MD_DQLP_MMR_XPERR_CMD */ +/* Description: incoming command */ +#define SH_MD_DQLP_MMR_XPERR_CMD_SHFT 26 +#define SH_MD_DQLP_MMR_XPERR_CMD_MASK 0x00000003fc000000 + +/* SH_MD_DQLP_MMR_XPERR_SRC */ +/* Description: source node of dir operation */ +#define SH_MD_DQLP_MMR_XPERR_SRC_SHFT 34 +#define SH_MD_DQLP_MMR_XPERR_SRC_MASK 0x0000fffc00000000 + +/* SH_MD_DQLP_MMR_XPERR_PRIGE */ +/* Description: priority was greater-equal */ +#define SH_MD_DQLP_MMR_XPERR_PRIGE_SHFT 48 +#define SH_MD_DQLP_MMR_XPERR_PRIGE_MASK 0x0001000000000000 + +/* SH_MD_DQLP_MMR_XPERR_PRIV */ +/* Description: access privilege bit */ +#define SH_MD_DQLP_MMR_XPERR_PRIV_SHFT 49 +#define SH_MD_DQLP_MMR_XPERR_PRIV_MASK 0x0002000000000000 + +/* SH_MD_DQLP_MMR_XPERR_COR */ +/* Description: correctable ecc error */ +#define SH_MD_DQLP_MMR_XPERR_COR_SHFT 50 +#define SH_MD_DQLP_MMR_XPERR_COR_MASK 0x0004000000000000 + +/* SH_MD_DQLP_MMR_XPERR_UNC */ +/* Description: uncorrectable ecc error */ +#define SH_MD_DQLP_MMR_XPERR_UNC_SHFT 51 +#define SH_MD_DQLP_MMR_XPERR_UNC_MASK 0x0008000000000000 + +/* SH_MD_DQLP_MMR_XPERR_MYBIT */ +/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */ +#define SH_MD_DQLP_MMR_XPERR_MYBIT_SHFT 52 +#define SH_MD_DQLP_MMR_XPERR_MYBIT_MASK 0x0ff0000000000000 + +/* SH_MD_DQLP_MMR_XPERR_VAL */ +/* Description: protocol error info valid */ +#define 
SH_MD_DQLP_MMR_XPERR_VAL_SHFT 60 +#define SH_MD_DQLP_MMR_XPERR_VAL_MASK 0x1000000000000000 + +/* SH_MD_DQLP_MMR_XPERR_MORE */ +/* Description: more than one protocol error */ +#define SH_MD_DQLP_MMR_XPERR_MORE_SHFT 61 +#define SH_MD_DQLP_MMR_XPERR_MORE_MASK 0x2000000000000000 + +/* SH_MD_DQLP_MMR_XPERR_ARM */ +/* Description: writing 1 arms error capture */ +#define SH_MD_DQLP_MMR_XPERR_ARM_SHFT 62 +#define SH_MD_DQLP_MMR_XPERR_ARM_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YCERR1" */ +/* correctable dir ecc group 1 error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YCERR1 0x0000000100033800 +#define SH_MD_DQLP_MMR_YCERR1_MASK 0x0000007fffffffff +#define SH_MD_DQLP_MMR_YCERR1_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YCERR1_GRP1 */ +/* Description: ecc group 1 bits */ +#define SH_MD_DQLP_MMR_YCERR1_GRP1_SHFT 0 +#define SH_MD_DQLP_MMR_YCERR1_GRP1_MASK 0x0000000fffffffff + +/* SH_MD_DQLP_MMR_YCERR1_VAL */ +/* Description: correctable ecc error in group 1 bits */ +#define SH_MD_DQLP_MMR_YCERR1_VAL_SHFT 36 +#define SH_MD_DQLP_MMR_YCERR1_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQLP_MMR_YCERR1_MORE */ +/* Description: more than one correctable ecc error in group 1 */ +#define SH_MD_DQLP_MMR_YCERR1_MORE_SHFT 37 +#define SH_MD_DQLP_MMR_YCERR1_MORE_MASK 0x0000002000000000 + +/* SH_MD_DQLP_MMR_YCERR1_ARM */ +/* Description: writing 1 arms uncorrectable ecc error capture */ +#define SH_MD_DQLP_MMR_YCERR1_ARM_SHFT 38 +#define SH_MD_DQLP_MMR_YCERR1_ARM_MASK 0x0000004000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YCERR2" */ +/* correctable dir ecc group 2 error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YCERR2 0x0000000100033810 +#define SH_MD_DQLP_MMR_YCERR2_MASK 0x0000003fffffffff +#define SH_MD_DQLP_MMR_YCERR2_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YCERR2_GRP2 */ +/* Description: ecc group 2 bits */ +#define SH_MD_DQLP_MMR_YCERR2_GRP2_SHFT 0 +#define SH_MD_DQLP_MMR_YCERR2_GRP2_MASK 0x0000000fffffffff + +/* SH_MD_DQLP_MMR_YCERR2_VAL */ +/* Description: correctable ecc error in group 2 bits */ +#define SH_MD_DQLP_MMR_YCERR2_VAL_SHFT 36 +#define SH_MD_DQLP_MMR_YCERR2_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQLP_MMR_YCERR2_MORE */ +/* Description: more than one correctable ecc error in group 2 */ +#define SH_MD_DQLP_MMR_YCERR2_MORE_SHFT 37 +#define SH_MD_DQLP_MMR_YCERR2_MORE_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YUERR1" */ +/* uncorrectable dir ecc group 1 error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YUERR1 0x0000000100033820 +#define SH_MD_DQLP_MMR_YUERR1_MASK 0x0000007fffffffff +#define SH_MD_DQLP_MMR_YUERR1_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YUERR1_GRP1 */ +/* Description: ecc group 1 bits */ +#define SH_MD_DQLP_MMR_YUERR1_GRP1_SHFT 0 +#define SH_MD_DQLP_MMR_YUERR1_GRP1_MASK 0x0000000fffffffff + +/* SH_MD_DQLP_MMR_YUERR1_VAL */ +/* Description: uncorrectable ecc error in group 1 bits */ +#define SH_MD_DQLP_MMR_YUERR1_VAL_SHFT 36 +#define SH_MD_DQLP_MMR_YUERR1_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQLP_MMR_YUERR1_MORE */ +/* Description: more than one uncorrectable ecc error in group 1 */ +#define 
SH_MD_DQLP_MMR_YUERR1_MORE_SHFT 37 +#define SH_MD_DQLP_MMR_YUERR1_MORE_MASK 0x0000002000000000 + +/* SH_MD_DQLP_MMR_YUERR1_ARM */ +/* Description: writing 1 arms uncorrectable ecc error capture */ +#define SH_MD_DQLP_MMR_YUERR1_ARM_SHFT 38 +#define SH_MD_DQLP_MMR_YUERR1_ARM_MASK 0x0000004000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YUERR2" */ +/* uncorrectable dir ecc group 2 error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YUERR2 0x0000000100033830 +#define SH_MD_DQLP_MMR_YUERR2_MASK 0x0000003fffffffff +#define SH_MD_DQLP_MMR_YUERR2_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YUERR2_GRP2 */ +/* Description: ecc group 2 bits */ +#define SH_MD_DQLP_MMR_YUERR2_GRP2_SHFT 0 +#define SH_MD_DQLP_MMR_YUERR2_GRP2_MASK 0x0000000fffffffff + +/* SH_MD_DQLP_MMR_YUERR2_VAL */ +/* Description: uncorrectable ecc error in group 2 bits */ +#define SH_MD_DQLP_MMR_YUERR2_VAL_SHFT 36 +#define SH_MD_DQLP_MMR_YUERR2_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQLP_MMR_YUERR2_MORE */ +/* Description: more than one uncorrectable ecc error in group 2 */ +#define SH_MD_DQLP_MMR_YUERR2_MORE_SHFT 37 +#define SH_MD_DQLP_MMR_YUERR2_MORE_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YPERR" */ +/* protocol error register */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YPERR 0x0000000100033840 +#define SH_MD_DQLP_MMR_YPERR_MASK 0x7fffffffffffffff +#define SH_MD_DQLP_MMR_YPERR_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YPERR_DIR */ +/* Description: directory entry */ +#define SH_MD_DQLP_MMR_YPERR_DIR_SHFT 0 +#define SH_MD_DQLP_MMR_YPERR_DIR_MASK 0x0000000003ffffff + +/* SH_MD_DQLP_MMR_YPERR_CMD */ +/* Description: incoming command */ +#define SH_MD_DQLP_MMR_YPERR_CMD_SHFT 26 +#define SH_MD_DQLP_MMR_YPERR_CMD_MASK 0x00000003fc000000 + +/* SH_MD_DQLP_MMR_YPERR_SRC */ +/* Description: source node of dir operation */ +#define SH_MD_DQLP_MMR_YPERR_SRC_SHFT 34 +#define SH_MD_DQLP_MMR_YPERR_SRC_MASK 0x0000fffc00000000 + +/* SH_MD_DQLP_MMR_YPERR_PRIGE */ +/* Description: priority was greater-equal */ +#define SH_MD_DQLP_MMR_YPERR_PRIGE_SHFT 48 +#define SH_MD_DQLP_MMR_YPERR_PRIGE_MASK 0x0001000000000000 + +/* SH_MD_DQLP_MMR_YPERR_PRIV */ +/* Description: access privilege bit */ +#define SH_MD_DQLP_MMR_YPERR_PRIV_SHFT 49 +#define SH_MD_DQLP_MMR_YPERR_PRIV_MASK 0x0002000000000000 + +/* SH_MD_DQLP_MMR_YPERR_COR */ +/* Description: correctable ecc error */ +#define SH_MD_DQLP_MMR_YPERR_COR_SHFT 50 +#define SH_MD_DQLP_MMR_YPERR_COR_MASK 0x0004000000000000 + +/* SH_MD_DQLP_MMR_YPERR_UNC */ +/* Description: uncorrectable ecc error */ +#define SH_MD_DQLP_MMR_YPERR_UNC_SHFT 51 +#define SH_MD_DQLP_MMR_YPERR_UNC_MASK 0x0008000000000000 + +/* SH_MD_DQLP_MMR_YPERR_MYBIT */ +/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */ +#define SH_MD_DQLP_MMR_YPERR_MYBIT_SHFT 52 +#define SH_MD_DQLP_MMR_YPERR_MYBIT_MASK 0x0ff0000000000000 + +/* SH_MD_DQLP_MMR_YPERR_VAL */ +/* Description: protocol error info valid */ +#define SH_MD_DQLP_MMR_YPERR_VAL_SHFT 60 +#define SH_MD_DQLP_MMR_YPERR_VAL_MASK 0x1000000000000000 + +/* SH_MD_DQLP_MMR_YPERR_MORE */ +/* Description: more than one protocol error */ +#define SH_MD_DQLP_MMR_YPERR_MORE_SHFT 61 +#define SH_MD_DQLP_MMR_YPERR_MORE_MASK 0x2000000000000000 + +/* SH_MD_DQLP_MMR_YPERR_ARM */ +/* Description: writing 1 
arms error capture */ +#define SH_MD_DQLP_MMR_YPERR_ARM_SHFT 62 +#define SH_MD_DQLP_MMR_YPERR_ARM_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_CMDTRIG" */ +/* cmd triggers */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_CMDTRIG 0x0000000100034000 +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_MASK 0x00000000ffffffff +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD0 */ +/* Description: command trigger 0 */ +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD0_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD0_MASK 0x00000000000000ff + +/* SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD1 */ +/* Description: command trigger 1 */ +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD1_SHFT 8 +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD1_MASK 0x000000000000ff00 + +/* SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD2 */ +/* Description: command trigger 2 */ +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD2_SHFT 16 +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD2_MASK 0x0000000000ff0000 + +/* SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD3 */ +/* Description: command trigger 3 */ +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD3_SHFT 24 +#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD3_MASK 0x00000000ff000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_TBLTRIG" */ +/* dir table trigger */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_TBLTRIG 0x0000000100034010 +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_MASK 0x000003ffffffffff +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_TBLTRIG_SRC */ +/* Description: source of request */ +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_SRC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_SRC_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_TBLTRIG_CMD */ +/* Description: incoming request */ +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_CMD_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_CMD_MASK 0x00000000003fc000 + +/* SH_MD_DQLP_MMR_DIR_TBLTRIG_ACC */ +/* Description: uncorrectable error, privilege bit */ +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_ACC_SHFT 22 +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_ACC_MASK 0x0000000000c00000 + +/* SH_MD_DQLP_MMR_DIR_TBLTRIG_PRIGE */ +/* Description: priority greater-equal */ +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_PRIGE_SHFT 24 +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_PRIGE_MASK 0x0000000001000000 + +/* SH_MD_DQLP_MMR_DIR_TBLTRIG_DIRST */ +/* Description: shrd,sxro,sub-state */ +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_DIRST_SHFT 25 +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_DIRST_MASK 0x00000003fe000000 + +/* SH_MD_DQLP_MMR_DIR_TBLTRIG_MYBIT */ +/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */ +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_MYBIT_SHFT 34 +#define SH_MD_DQLP_MMR_DIR_TBLTRIG_MYBIT_MASK 0x000003fc00000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_TBLMASK" */ +/* dir table trigger mask */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_DIR_TBLMASK 0x0000000100034020 +#define SH_MD_DQLP_MMR_DIR_TBLMASK_MASK 0x000003ffffffffff +#define SH_MD_DQLP_MMR_DIR_TBLMASK_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_DIR_TBLMASK_SRC */ +/* Description: source of request */ +#define SH_MD_DQLP_MMR_DIR_TBLMASK_SRC_SHFT 0 +#define SH_MD_DQLP_MMR_DIR_TBLMASK_SRC_MASK 0x0000000000003fff + +/* SH_MD_DQLP_MMR_DIR_TBLMASK_CMD */ +/* Description: 
incoming request */ +#define SH_MD_DQLP_MMR_DIR_TBLMASK_CMD_SHFT 14 +#define SH_MD_DQLP_MMR_DIR_TBLMASK_CMD_MASK 0x00000000003fc000 + +/* SH_MD_DQLP_MMR_DIR_TBLMASK_ACC */ +/* Description: uncorrectable error, privilege bit */ +#define SH_MD_DQLP_MMR_DIR_TBLMASK_ACC_SHFT 22 +#define SH_MD_DQLP_MMR_DIR_TBLMASK_ACC_MASK 0x0000000000c00000 + +/* SH_MD_DQLP_MMR_DIR_TBLMASK_PRIGE */ +/* Description: priority greater-equal */ +#define SH_MD_DQLP_MMR_DIR_TBLMASK_PRIGE_SHFT 24 +#define SH_MD_DQLP_MMR_DIR_TBLMASK_PRIGE_MASK 0x0000000001000000 + +/* SH_MD_DQLP_MMR_DIR_TBLMASK_DIRST */ +/* Description: shrd,sxro,sub-state */ +#define SH_MD_DQLP_MMR_DIR_TBLMASK_DIRST_SHFT 25 +#define SH_MD_DQLP_MMR_DIR_TBLMASK_DIRST_MASK 0x00000003fe000000 + +/* SH_MD_DQLP_MMR_DIR_TBLMASK_MYBIT */ +/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */ +#define SH_MD_DQLP_MMR_DIR_TBLMASK_MYBIT_SHFT 34 +#define SH_MD_DQLP_MMR_DIR_TBLMASK_MYBIT_MASK 0x000003fc00000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XBIST_H 0x0000000100038000 +#define SH_MD_DQLP_MMR_XBIST_H_MASK 0x00000700ffffffff +#define SH_MD_DQLP_MMR_XBIST_H_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XBIST_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLP_MMR_XBIST_H_PAT_SHFT 0 +#define SH_MD_DQLP_MMR_XBIST_H_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQLP_MMR_XBIST_H_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQLP_MMR_XBIST_H_INV_SHFT 40 +#define SH_MD_DQLP_MMR_XBIST_H_INV_MASK 0x0000010000000000 + +/* SH_MD_DQLP_MMR_XBIST_H_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQLP_MMR_XBIST_H_ROT_SHFT 41 +#define SH_MD_DQLP_MMR_XBIST_H_ROT_MASK 0x0000020000000000 + +/* SH_MD_DQLP_MMR_XBIST_H_ARM */ +/* Description: writing 1 arms data miscompare capture */ +#define SH_MD_DQLP_MMR_XBIST_H_ARM_SHFT 42 +#define SH_MD_DQLP_MMR_XBIST_H_ARM_MASK 0x0000040000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XBIST_L 0x0000000100038010 +#define SH_MD_DQLP_MMR_XBIST_L_MASK 0x00000300ffffffff +#define SH_MD_DQLP_MMR_XBIST_L_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XBIST_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLP_MMR_XBIST_L_PAT_SHFT 0 +#define SH_MD_DQLP_MMR_XBIST_L_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQLP_MMR_XBIST_L_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQLP_MMR_XBIST_L_INV_SHFT 40 +#define SH_MD_DQLP_MMR_XBIST_L_INV_MASK 0x0000010000000000 + +/* SH_MD_DQLP_MMR_XBIST_L_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQLP_MMR_XBIST_L_ROT_SHFT 41 +#define SH_MD_DQLP_MMR_XBIST_L_ROT_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XBIST_ERR_H 0x0000000100038020 +#define SH_MD_DQLP_MMR_XBIST_ERR_H_MASK 0x00000300ffffffff +#define SH_MD_DQLP_MMR_XBIST_ERR_H_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XBIST_ERR_H_PAT */ +/* 
Description: data pattern */ +#define SH_MD_DQLP_MMR_XBIST_ERR_H_PAT_SHFT 0 +#define SH_MD_DQLP_MMR_XBIST_ERR_H_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQLP_MMR_XBIST_ERR_H_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQLP_MMR_XBIST_ERR_H_VAL_SHFT 40 +#define SH_MD_DQLP_MMR_XBIST_ERR_H_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQLP_MMR_XBIST_ERR_H_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQLP_MMR_XBIST_ERR_H_MORE_SHFT 41 +#define SH_MD_DQLP_MMR_XBIST_ERR_H_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_XBIST_ERR_L 0x0000000100038030 +#define SH_MD_DQLP_MMR_XBIST_ERR_L_MASK 0x00000300ffffffff +#define SH_MD_DQLP_MMR_XBIST_ERR_L_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_XBIST_ERR_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLP_MMR_XBIST_ERR_L_PAT_SHFT 0 +#define SH_MD_DQLP_MMR_XBIST_ERR_L_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQLP_MMR_XBIST_ERR_L_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQLP_MMR_XBIST_ERR_L_VAL_SHFT 40 +#define SH_MD_DQLP_MMR_XBIST_ERR_L_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQLP_MMR_XBIST_ERR_L_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQLP_MMR_XBIST_ERR_L_MORE_SHFT 41 +#define SH_MD_DQLP_MMR_XBIST_ERR_L_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YBIST_H 0x0000000100038800 +#define SH_MD_DQLP_MMR_YBIST_H_MASK 0x00000700ffffffff +#define SH_MD_DQLP_MMR_YBIST_H_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YBIST_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLP_MMR_YBIST_H_PAT_SHFT 0 +#define SH_MD_DQLP_MMR_YBIST_H_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQLP_MMR_YBIST_H_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQLP_MMR_YBIST_H_INV_SHFT 40 +#define SH_MD_DQLP_MMR_YBIST_H_INV_MASK 0x0000010000000000 + +/* SH_MD_DQLP_MMR_YBIST_H_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQLP_MMR_YBIST_H_ROT_SHFT 41 +#define SH_MD_DQLP_MMR_YBIST_H_ROT_MASK 0x0000020000000000 + +/* SH_MD_DQLP_MMR_YBIST_H_ARM */ +/* Description: writing 1 arms data miscompare capture */ +#define SH_MD_DQLP_MMR_YBIST_H_ARM_SHFT 42 +#define SH_MD_DQLP_MMR_YBIST_H_ARM_MASK 0x0000040000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YBIST_L 0x0000000100038810 +#define SH_MD_DQLP_MMR_YBIST_L_MASK 0x00000300ffffffff +#define SH_MD_DQLP_MMR_YBIST_L_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YBIST_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLP_MMR_YBIST_L_PAT_SHFT 0 +#define SH_MD_DQLP_MMR_YBIST_L_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQLP_MMR_YBIST_L_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQLP_MMR_YBIST_L_INV_SHFT 40 +#define SH_MD_DQLP_MMR_YBIST_L_INV_MASK 0x0000010000000000 + +/* SH_MD_DQLP_MMR_YBIST_L_ROT */ +/* Description: rotate left data pattern in 
next cycle */ +#define SH_MD_DQLP_MMR_YBIST_L_ROT_SHFT 41 +#define SH_MD_DQLP_MMR_YBIST_L_ROT_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YBIST_ERR_H 0x0000000100038820 +#define SH_MD_DQLP_MMR_YBIST_ERR_H_MASK 0x00000300ffffffff +#define SH_MD_DQLP_MMR_YBIST_ERR_H_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YBIST_ERR_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLP_MMR_YBIST_ERR_H_PAT_SHFT 0 +#define SH_MD_DQLP_MMR_YBIST_ERR_H_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQLP_MMR_YBIST_ERR_H_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQLP_MMR_YBIST_ERR_H_VAL_SHFT 40 +#define SH_MD_DQLP_MMR_YBIST_ERR_H_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQLP_MMR_YBIST_ERR_H_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQLP_MMR_YBIST_ERR_H_MORE_SHFT 41 +#define SH_MD_DQLP_MMR_YBIST_ERR_H_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLP_MMR_YBIST_ERR_L 0x0000000100038830 +#define SH_MD_DQLP_MMR_YBIST_ERR_L_MASK 0x00000300ffffffff +#define SH_MD_DQLP_MMR_YBIST_ERR_L_INIT 0x0000000000000000 + +/* SH_MD_DQLP_MMR_YBIST_ERR_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLP_MMR_YBIST_ERR_L_PAT_SHFT 0 +#define SH_MD_DQLP_MMR_YBIST_ERR_L_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQLP_MMR_YBIST_ERR_L_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQLP_MMR_YBIST_ERR_L_VAL_SHFT 40 +#define SH_MD_DQLP_MMR_YBIST_ERR_L_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQLP_MMR_YBIST_ERR_L_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQLP_MMR_YBIST_ERR_L_MORE_SHFT 41 +#define SH_MD_DQLP_MMR_YBIST_ERR_L_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_XBIST_H 0x0000000100048000 +#define SH_MD_DQLS_MMR_XBIST_H_MASK 0x000007ffffffffff +#define SH_MD_DQLS_MMR_XBIST_H_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_XBIST_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLS_MMR_XBIST_H_PAT_SHFT 0 +#define SH_MD_DQLS_MMR_XBIST_H_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQLS_MMR_XBIST_H_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQLS_MMR_XBIST_H_INV_SHFT 40 +#define SH_MD_DQLS_MMR_XBIST_H_INV_MASK 0x0000010000000000 + +/* SH_MD_DQLS_MMR_XBIST_H_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQLS_MMR_XBIST_H_ROT_SHFT 41 +#define SH_MD_DQLS_MMR_XBIST_H_ROT_MASK 0x0000020000000000 + +/* SH_MD_DQLS_MMR_XBIST_H_ARM */ +/* Description: writing 1 arms data miscompare capture */ +#define SH_MD_DQLS_MMR_XBIST_H_ARM_SHFT 42 +#define SH_MD_DQLS_MMR_XBIST_H_ARM_MASK 0x0000040000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_XBIST_L 
0x0000000100048010 +#define SH_MD_DQLS_MMR_XBIST_L_MASK 0x000003ffffffffff +#define SH_MD_DQLS_MMR_XBIST_L_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_XBIST_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLS_MMR_XBIST_L_PAT_SHFT 0 +#define SH_MD_DQLS_MMR_XBIST_L_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQLS_MMR_XBIST_L_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQLS_MMR_XBIST_L_INV_SHFT 40 +#define SH_MD_DQLS_MMR_XBIST_L_INV_MASK 0x0000010000000000 + +/* SH_MD_DQLS_MMR_XBIST_L_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQLS_MMR_XBIST_L_ROT_SHFT 41 +#define SH_MD_DQLS_MMR_XBIST_L_ROT_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_XBIST_ERR_H 0x0000000100048020 +#define SH_MD_DQLS_MMR_XBIST_ERR_H_MASK 0x000003ffffffffff +#define SH_MD_DQLS_MMR_XBIST_ERR_H_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_XBIST_ERR_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLS_MMR_XBIST_ERR_H_PAT_SHFT 0 +#define SH_MD_DQLS_MMR_XBIST_ERR_H_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQLS_MMR_XBIST_ERR_H_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQLS_MMR_XBIST_ERR_H_VAL_SHFT 40 +#define SH_MD_DQLS_MMR_XBIST_ERR_H_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQLS_MMR_XBIST_ERR_H_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQLS_MMR_XBIST_ERR_H_MORE_SHFT 41 +#define SH_MD_DQLS_MMR_XBIST_ERR_H_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_XBIST_ERR_L 0x0000000100048030 +#define SH_MD_DQLS_MMR_XBIST_ERR_L_MASK 0x000003ffffffffff +#define SH_MD_DQLS_MMR_XBIST_ERR_L_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_XBIST_ERR_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLS_MMR_XBIST_ERR_L_PAT_SHFT 0 +#define SH_MD_DQLS_MMR_XBIST_ERR_L_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQLS_MMR_XBIST_ERR_L_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQLS_MMR_XBIST_ERR_L_VAL_SHFT 40 +#define SH_MD_DQLS_MMR_XBIST_ERR_L_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQLS_MMR_XBIST_ERR_L_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQLS_MMR_XBIST_ERR_L_MORE_SHFT 41 +#define SH_MD_DQLS_MMR_XBIST_ERR_L_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_YBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_YBIST_H 0x0000000100048800 +#define SH_MD_DQLS_MMR_YBIST_H_MASK 0x000007ffffffffff +#define SH_MD_DQLS_MMR_YBIST_H_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_YBIST_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLS_MMR_YBIST_H_PAT_SHFT 0 +#define SH_MD_DQLS_MMR_YBIST_H_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQLS_MMR_YBIST_H_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQLS_MMR_YBIST_H_INV_SHFT 40 +#define SH_MD_DQLS_MMR_YBIST_H_INV_MASK 0x0000010000000000 + +/* SH_MD_DQLS_MMR_YBIST_H_ROT */ +/* Description: rotate left data pattern in next cycle 
*/ +#define SH_MD_DQLS_MMR_YBIST_H_ROT_SHFT 41 +#define SH_MD_DQLS_MMR_YBIST_H_ROT_MASK 0x0000020000000000 + +/* SH_MD_DQLS_MMR_YBIST_H_ARM */ +/* Description: writing 1 arms data miscompare capture */ +#define SH_MD_DQLS_MMR_YBIST_H_ARM_SHFT 42 +#define SH_MD_DQLS_MMR_YBIST_H_ARM_MASK 0x0000040000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_YBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_YBIST_L 0x0000000100048810 +#define SH_MD_DQLS_MMR_YBIST_L_MASK 0x000003ffffffffff +#define SH_MD_DQLS_MMR_YBIST_L_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_YBIST_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLS_MMR_YBIST_L_PAT_SHFT 0 +#define SH_MD_DQLS_MMR_YBIST_L_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQLS_MMR_YBIST_L_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQLS_MMR_YBIST_L_INV_SHFT 40 +#define SH_MD_DQLS_MMR_YBIST_L_INV_MASK 0x0000010000000000 + +/* SH_MD_DQLS_MMR_YBIST_L_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQLS_MMR_YBIST_L_ROT_SHFT 41 +#define SH_MD_DQLS_MMR_YBIST_L_ROT_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_YBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_YBIST_ERR_H 0x0000000100048820 +#define SH_MD_DQLS_MMR_YBIST_ERR_H_MASK 0x000003ffffffffff +#define SH_MD_DQLS_MMR_YBIST_ERR_H_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_YBIST_ERR_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLS_MMR_YBIST_ERR_H_PAT_SHFT 0 +#define SH_MD_DQLS_MMR_YBIST_ERR_H_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQLS_MMR_YBIST_ERR_H_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQLS_MMR_YBIST_ERR_H_VAL_SHFT 40 +#define SH_MD_DQLS_MMR_YBIST_ERR_H_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQLS_MMR_YBIST_ERR_H_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQLS_MMR_YBIST_ERR_H_MORE_SHFT 41 +#define SH_MD_DQLS_MMR_YBIST_ERR_H_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_YBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_YBIST_ERR_L 0x0000000100048830 +#define SH_MD_DQLS_MMR_YBIST_ERR_L_MASK 0x000003ffffffffff +#define SH_MD_DQLS_MMR_YBIST_ERR_L_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_YBIST_ERR_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQLS_MMR_YBIST_ERR_L_PAT_SHFT 0 +#define SH_MD_DQLS_MMR_YBIST_ERR_L_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQLS_MMR_YBIST_ERR_L_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQLS_MMR_YBIST_ERR_L_VAL_SHFT 40 +#define SH_MD_DQLS_MMR_YBIST_ERR_L_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQLS_MMR_YBIST_ERR_L_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQLS_MMR_YBIST_ERR_L_MORE_SHFT 41 +#define SH_MD_DQLS_MMR_YBIST_ERR_L_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_JNR_DEBUG" */ +/* joiner/fct debug configuration */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_JNR_DEBUG 
0x0000000100049000 +#define SH_MD_DQLS_MMR_JNR_DEBUG_MASK 0x0000000000000003 +#define SH_MD_DQLS_MMR_JNR_DEBUG_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_JNR_DEBUG_PX */ +/* Description: select 0=pi 1=xn side */ +#define SH_MD_DQLS_MMR_JNR_DEBUG_PX_SHFT 0 +#define SH_MD_DQLS_MMR_JNR_DEBUG_PX_MASK 0x0000000000000001 + +/* SH_MD_DQLS_MMR_JNR_DEBUG_RW */ +/* Description: select 0=read 1=write side */ +#define SH_MD_DQLS_MMR_JNR_DEBUG_RW_SHFT 1 +#define SH_MD_DQLS_MMR_JNR_DEBUG_RW_MASK 0x0000000000000002 + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XAMOPW_ERR" */ +/* amo/partial rmw ecc error register */ +/* ==================================================================== */ + +#define SH_MD_DQLS_MMR_XAMOPW_ERR 0x000000010004a000 +#define SH_MD_DQLS_MMR_XAMOPW_ERR_MASK 0x0000000103ff03ff +#define SH_MD_DQLS_MMR_XAMOPW_ERR_INIT 0x0000000000000000 + +/* SH_MD_DQLS_MMR_XAMOPW_ERR_SSYN */ +/* Description: store data syndrome */ +#define SH_MD_DQLS_MMR_XAMOPW_ERR_SSYN_SHFT 0 +#define SH_MD_DQLS_MMR_XAMOPW_ERR_SSYN_MASK 0x00000000000000ff + +/* SH_MD_DQLS_MMR_XAMOPW_ERR_SCOR */ +/* Description: correctable ecc error on store data */ +#define SH_MD_DQLS_MMR_XAMOPW_ERR_SCOR_SHFT 8 +#define SH_MD_DQLS_MMR_XAMOPW_ERR_SCOR_MASK 0x0000000000000100 + +/* SH_MD_DQLS_MMR_XAMOPW_ERR_SUNC */ +/* Description: uncorrectable ecc error on store data */ +#define SH_MD_DQLS_MMR_XAMOPW_ERR_SUNC_SHFT 9 +#define SH_MD_DQLS_MMR_XAMOPW_ERR_SUNC_MASK 0x0000000000000200 + +/* SH_MD_DQLS_MMR_XAMOPW_ERR_RSYN */ +/* Description: memory read data syndrome */ +#define SH_MD_DQLS_MMR_XAMOPW_ERR_RSYN_SHFT 16 +#define SH_MD_DQLS_MMR_XAMOPW_ERR_RSYN_MASK 0x0000000000ff0000 + +/* SH_MD_DQLS_MMR_XAMOPW_ERR_RCOR */ +/* Description: correctable ecc error on read data */ +#define SH_MD_DQLS_MMR_XAMOPW_ERR_RCOR_SHFT 24 +#define SH_MD_DQLS_MMR_XAMOPW_ERR_RCOR_MASK 0x0000000001000000 + +/* SH_MD_DQLS_MMR_XAMOPW_ERR_RUNC */ +/* Description: uncorrectable ecc error on read data */ +#define SH_MD_DQLS_MMR_XAMOPW_ERR_RUNC_SHFT 25 +#define SH_MD_DQLS_MMR_XAMOPW_ERR_RUNC_MASK 0x0000000002000000 + +/* SH_MD_DQLS_MMR_XAMOPW_ERR_ARM */ +/* Description: writing 1 arms ecc error capture */ +#define SH_MD_DQLS_MMR_XAMOPW_ERR_ARM_SHFT 32 +#define SH_MD_DQLS_MMR_XAMOPW_ERR_ARM_MASK 0x0000000100000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_CONFIG" */ +/* DQ directory config register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_CONFIG 0x0000000100050000 +#define SH_MD_DQRP_MMR_DIR_CONFIG_MASK 0x000000000000001f +#define SH_MD_DQRP_MMR_DIR_CONFIG_INIT 0x0000000000000010 + +/* SH_MD_DQRP_MMR_DIR_CONFIG_SYS_SIZE */ +/* Description: system size code */ +#define SH_MD_DQRP_MMR_DIR_CONFIG_SYS_SIZE_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_CONFIG_SYS_SIZE_MASK 0x0000000000000007 + +/* SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRECC */ +/* Description: enable directory ecc correction */ +#define SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRECC_SHFT 3 +#define SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRECC_MASK 0x0000000000000008 + +/* SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRPOIS */ +/* Description: enable local poisoning for dir table fall-through */ +#define SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRPOIS_SHFT 4 +#define SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRPOIS_MASK 0x0000000000000010 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC0" */ +/* node [63:0]
presence bits */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRESVEC0 0x0000000100050100 +#define SH_MD_DQRP_MMR_DIR_PRESVEC0_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_PRESVEC0_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRESVEC0_VEC */ +/* Description: node presence bits, 1=present */ +#define SH_MD_DQRP_MMR_DIR_PRESVEC0_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRESVEC0_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC1" */ +/* node [127:64] presence bits */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRESVEC1 0x0000000100050110 +#define SH_MD_DQRP_MMR_DIR_PRESVEC1_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_PRESVEC1_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRESVEC1_VEC */ +/* Description: node presence bits, 1=present */ +#define SH_MD_DQRP_MMR_DIR_PRESVEC1_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRESVEC1_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC2" */ +/* node [191:128] presence bits */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRESVEC2 0x0000000100050120 +#define SH_MD_DQRP_MMR_DIR_PRESVEC2_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_PRESVEC2_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRESVEC2_VEC */ +/* Description: node presence bits, 1=present */ +#define SH_MD_DQRP_MMR_DIR_PRESVEC2_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRESVEC2_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC3" */ +/* node [255:192] presence bits */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRESVEC3 0x0000000100050130 +#define SH_MD_DQRP_MMR_DIR_PRESVEC3_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_PRESVEC3_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRESVEC3_VEC */ +/* Description: node presence bits, 1=present */ +#define SH_MD_DQRP_MMR_DIR_PRESVEC3_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRESVEC3_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC0" */ +/* local vector for acc=0 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_LOCVEC0 0x0000000100050200 +#define SH_MD_DQRP_MMR_DIR_LOCVEC0_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_LOCVEC0_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_LOCVEC0_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQRP_MMR_DIR_LOCVEC0_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_LOCVEC0_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC1" */ +/* local vector for acc=1 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_LOCVEC1 0x0000000100050210 +#define SH_MD_DQRP_MMR_DIR_LOCVEC1_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_LOCVEC1_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_LOCVEC1_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQRP_MMR_DIR_LOCVEC1_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_LOCVEC1_VEC_MASK 0xffffffffffffffff + +/* 
==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC2" */ +/* local vector for acc=2 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_LOCVEC2 0x0000000100050220 +#define SH_MD_DQRP_MMR_DIR_LOCVEC2_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_LOCVEC2_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_LOCVEC2_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQRP_MMR_DIR_LOCVEC2_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_LOCVEC2_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC3" */ +/* local vector for acc=3 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_LOCVEC3 0x0000000100050230 +#define SH_MD_DQRP_MMR_DIR_LOCVEC3_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_LOCVEC3_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_LOCVEC3_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQRP_MMR_DIR_LOCVEC3_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_LOCVEC3_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC4" */ +/* local vector for acc=4 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_LOCVEC4 0x0000000100050240 +#define SH_MD_DQRP_MMR_DIR_LOCVEC4_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_LOCVEC4_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_LOCVEC4_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQRP_MMR_DIR_LOCVEC4_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_LOCVEC4_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC5" */ +/* local vector for acc=5 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_LOCVEC5 0x0000000100050250 +#define SH_MD_DQRP_MMR_DIR_LOCVEC5_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_LOCVEC5_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_LOCVEC5_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQRP_MMR_DIR_LOCVEC5_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_LOCVEC5_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC6" */ +/* local vector for acc=6 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_LOCVEC6 0x0000000100050260 +#define SH_MD_DQRP_MMR_DIR_LOCVEC6_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_LOCVEC6_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_LOCVEC6_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQRP_MMR_DIR_LOCVEC6_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_LOCVEC6_VEC_MASK 0xffffffffffffffff + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC7" */ +/* local vector for acc=7 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_LOCVEC7 0x0000000100050270 +#define SH_MD_DQRP_MMR_DIR_LOCVEC7_MASK 0xffffffffffffffff +#define SH_MD_DQRP_MMR_DIR_LOCVEC7_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_LOCVEC7_VEC */ +/* Description: 1 node is local */ +#define SH_MD_DQRP_MMR_DIR_LOCVEC7_VEC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_LOCVEC7_VEC_MASK 0xffffffffffffffff + 
+/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC0" */ +/* privilege vector for acc=0 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRIVEC0 0x0000000100050300 +#define SH_MD_DQRP_MMR_DIR_PRIVEC0_MASK 0x000000000fffffff +#define SH_MD_DQRP_MMR_DIR_PRIVEC0_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRIVEC0_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC0_IN_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRIVEC0_IN_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_PRIVEC0_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC0_OUT_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_PRIVEC0_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC1" */ +/* privilege vector for acc=1 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRIVEC1 0x0000000100050310 +#define SH_MD_DQRP_MMR_DIR_PRIVEC1_MASK 0x000000000fffffff +#define SH_MD_DQRP_MMR_DIR_PRIVEC1_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRIVEC1_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC1_IN_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRIVEC1_IN_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_PRIVEC1_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC1_OUT_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_PRIVEC1_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC2" */ +/* privilege vector for acc=2 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRIVEC2 0x0000000100050320 +#define SH_MD_DQRP_MMR_DIR_PRIVEC2_MASK 0x000000000fffffff +#define SH_MD_DQRP_MMR_DIR_PRIVEC2_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRIVEC2_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC2_IN_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRIVEC2_IN_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_PRIVEC2_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC2_OUT_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_PRIVEC2_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC3" */ +/* privilege vector for acc=3 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRIVEC3 0x0000000100050330 +#define SH_MD_DQRP_MMR_DIR_PRIVEC3_MASK 0x000000000fffffff +#define SH_MD_DQRP_MMR_DIR_PRIVEC3_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRIVEC3_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC3_IN_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRIVEC3_IN_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_PRIVEC3_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC3_OUT_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_PRIVEC3_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC4" */ +/* privilege vector for acc=4 */ +/* ==================================================================== */ + 
+#define SH_MD_DQRP_MMR_DIR_PRIVEC4 0x0000000100050340 +#define SH_MD_DQRP_MMR_DIR_PRIVEC4_MASK 0x000000000fffffff +#define SH_MD_DQRP_MMR_DIR_PRIVEC4_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRIVEC4_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC4_IN_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRIVEC4_IN_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_PRIVEC4_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC4_OUT_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_PRIVEC4_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC5" */ +/* privilege vector for acc=5 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRIVEC5 0x0000000100050350 +#define SH_MD_DQRP_MMR_DIR_PRIVEC5_MASK 0x000000000fffffff +#define SH_MD_DQRP_MMR_DIR_PRIVEC5_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRIVEC5_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC5_IN_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRIVEC5_IN_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_PRIVEC5_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC5_OUT_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_PRIVEC5_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC6" */ +/* privilege vector for acc=6 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRIVEC6 0x0000000100050360 +#define SH_MD_DQRP_MMR_DIR_PRIVEC6_MASK 0x000000000fffffff +#define SH_MD_DQRP_MMR_DIR_PRIVEC6_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRIVEC6_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC6_IN_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRIVEC6_IN_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_PRIVEC6_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC6_OUT_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_PRIVEC6_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC7" */ +/* privilege vector for acc=7 */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_PRIVEC7 0x0000000100050370 +#define SH_MD_DQRP_MMR_DIR_PRIVEC7_MASK 0x000000000fffffff +#define SH_MD_DQRP_MMR_DIR_PRIVEC7_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_PRIVEC7_IN */ +/* Description: in partition privileges, locvec bit=1 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC7_IN_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_PRIVEC7_IN_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_PRIVEC7_OUT */ +/* Description: out of partition privileges, locvec bit=0 */ +#define SH_MD_DQRP_MMR_DIR_PRIVEC7_OUT_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_PRIVEC7_OUT_MASK 0x000000000fffc000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_TIMER" */ +/* MD SXRO timer */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_TIMER 0x0000000100050400 +#define SH_MD_DQRP_MMR_DIR_TIMER_MASK 0x00000000003fffff +#define SH_MD_DQRP_MMR_DIR_TIMER_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_TIMER_TIMER_DIV */ +/* Description: timer divide 
register */ +#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_DIV_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_DIV_MASK 0x0000000000000fff + +/* SH_MD_DQRP_MMR_DIR_TIMER_TIMER_EN */ +/* Description: timer enable */ +#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_EN_SHFT 12 +#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_EN_MASK 0x0000000000001000 + +/* SH_MD_DQRP_MMR_DIR_TIMER_TIMER_CUR */ +/* Description: value of current timer */ +#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_CUR_SHFT 13 +#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_CUR_MASK 0x00000000003fe000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY" */ +/* directory pio write data */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY 0x0000000100051000 +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_MASK 0x03ffffffffffffff +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRA */ +/* Description: directory entry A */ +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRA_SHFT 0 +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRA_MASK 0x0000000003ffffff + +/* SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRB */ +/* Description: directory entry B */ +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRB_SHFT 26 +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRB_MASK 0x000ffffffc000000 + +/* SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_PRI */ +/* Description: directory priority */ +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_PRI_SHFT 52 +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_PRI_MASK 0x0070000000000000 + +/* SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_ACC */ +/* Description: directory access bits */ +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_ACC_SHFT 55 +#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_ACC_MASK 0x0380000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_PIOWD_DIR_ECC" */ +/* directory ecc register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC 0x0000000100051010 +#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_MASK 0x0000000000003fff +#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCA */ +/* Description: XOR bits for directory ECC group 1 */ +#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCA_SHFT 0 +#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCA_MASK 0x000000000000007f + +/* SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCB */ +/* Description: XOR bits for directory ECC group 2 */ +#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCB_SHFT 7 +#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCB_MASK 0x0000000000003f80 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY" */ +/* x directory pio read data */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY 0x0000000100052000 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_MASK 0x0fffffffffffffff +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRA */ +/* Description: directory entry A */ +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRA_SHFT 0 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRA_MASK 0x0000000003ffffff + +/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRB */ +/* Description: directory entry B */ +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRB_SHFT 26 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRB_MASK 0x000ffffffc000000 + +/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_PRI */ +/* Description: directory priority 
*/ +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_PRI_SHFT 52 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_PRI_MASK 0x0070000000000000 + +/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_ACC */ +/* Description: directory access bits */ +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_ACC_SHFT 55 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_ACC_MASK 0x0380000000000000 + +/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_COR */ +/* Description: correctable ecc error */ +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_COR_SHFT 58 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_COR_MASK 0x0400000000000000 + +/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_UNC */ +/* Description: uncorrectable ecc error */ +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_UNC_SHFT 59 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_UNC_MASK 0x0800000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XPIORD_XDIR_ECC" */ +/* x directory ecc */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC 0x0000000100052010 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_MASK 0x0000000000003fff +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCA */ +/* Description: group 1 ecc */ +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCA_SHFT 0 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCA_MASK 0x000000000000007f + +/* SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCB */ +/* Description: group 2 ecc */ +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCB_SHFT 7 +#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCB_MASK 0x0000000000003f80 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY" */ +/* y directory pio read data */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY 0x0000000100052800 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_MASK 0x0fffffffffffffff +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRA */ +/* Description: directory entry A */ +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRA_SHFT 0 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRA_MASK 0x0000000003ffffff + +/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRB */ +/* Description: directory entry B */ +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRB_SHFT 26 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRB_MASK 0x000ffffffc000000 + +/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_PRI */ +/* Description: directory priority */ +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_PRI_SHFT 52 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_PRI_MASK 0x0070000000000000 + +/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_ACC */ +/* Description: directory access bits */ +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_ACC_SHFT 55 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_ACC_MASK 0x0380000000000000 + +/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_COR */ +/* Description: correctable ecc error */ +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_COR_SHFT 58 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_COR_MASK 0x0400000000000000 + +/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_UNC */ +/* Description: uncorrectable ecc error */ +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_UNC_SHFT 59 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_UNC_MASK 0x0800000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YPIORD_YDIR_ECC" */ +/* y directory ecc */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC 
0x0000000100052810 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_MASK 0x0000000000003fff +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCA */ +/* Description: group 1 ecc */ +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCA_SHFT 0 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCA_MASK 0x000000000000007f + +/* SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCB */ +/* Description: group 2 ecc */ +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCB_SHFT 7 +#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCB_MASK 0x0000000000003f80 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XCERR1" */ +/* correctable dir ecc group 1 error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XCERR1 0x0000000100053000 +#define SH_MD_DQRP_MMR_XCERR1_MASK 0x0000007fffffffff +#define SH_MD_DQRP_MMR_XCERR1_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XCERR1_GRP1 */ +/* Description: ecc group 1 bits */ +#define SH_MD_DQRP_MMR_XCERR1_GRP1_SHFT 0 +#define SH_MD_DQRP_MMR_XCERR1_GRP1_MASK 0x0000000fffffffff + +/* SH_MD_DQRP_MMR_XCERR1_VAL */ +/* Description: correctable ecc error in group 1 bits */ +#define SH_MD_DQRP_MMR_XCERR1_VAL_SHFT 36 +#define SH_MD_DQRP_MMR_XCERR1_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQRP_MMR_XCERR1_MORE */ +/* Description: more than one correctable ecc error in group 1 */ +#define SH_MD_DQRP_MMR_XCERR1_MORE_SHFT 37 +#define SH_MD_DQRP_MMR_XCERR1_MORE_MASK 0x0000002000000000 + +/* SH_MD_DQRP_MMR_XCERR1_ARM */ +/* Description: writing 1 arms uncorrectable ecc error capture */ +#define SH_MD_DQRP_MMR_XCERR1_ARM_SHFT 38 +#define SH_MD_DQRP_MMR_XCERR1_ARM_MASK 0x0000004000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XCERR2" */ +/* correctable dir ecc group 2 error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XCERR2 0x0000000100053010 +#define SH_MD_DQRP_MMR_XCERR2_MASK 0x0000003fffffffff +#define SH_MD_DQRP_MMR_XCERR2_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XCERR2_GRP2 */ +/* Description: ecc group 2 bits */ +#define SH_MD_DQRP_MMR_XCERR2_GRP2_SHFT 0 +#define SH_MD_DQRP_MMR_XCERR2_GRP2_MASK 0x0000000fffffffff + +/* SH_MD_DQRP_MMR_XCERR2_VAL */ +/* Description: correctable ecc error in group 2 bits */ +#define SH_MD_DQRP_MMR_XCERR2_VAL_SHFT 36 +#define SH_MD_DQRP_MMR_XCERR2_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQRP_MMR_XCERR2_MORE */ +/* Description: more than one correctable ecc error in group 2 */ +#define SH_MD_DQRP_MMR_XCERR2_MORE_SHFT 37 +#define SH_MD_DQRP_MMR_XCERR2_MORE_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XUERR1" */ +/* uncorrectable dir ecc group 1 error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XUERR1 0x0000000100053020 +#define SH_MD_DQRP_MMR_XUERR1_MASK 0x0000007fffffffff +#define SH_MD_DQRP_MMR_XUERR1_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XUERR1_GRP1 */ +/* Description: ecc group 1 bits */ +#define SH_MD_DQRP_MMR_XUERR1_GRP1_SHFT 0 +#define SH_MD_DQRP_MMR_XUERR1_GRP1_MASK 0x0000000fffffffff + +/* SH_MD_DQRP_MMR_XUERR1_VAL */ +/* Description: uncorrectable ecc error in group 1 bits */ +#define SH_MD_DQRP_MMR_XUERR1_VAL_SHFT 36 +#define SH_MD_DQRP_MMR_XUERR1_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQRP_MMR_XUERR1_MORE */ +/* Description: more 
than one uncorrectable ecc error in group 1 */ +#define SH_MD_DQRP_MMR_XUERR1_MORE_SHFT 37 +#define SH_MD_DQRP_MMR_XUERR1_MORE_MASK 0x0000002000000000 + +/* SH_MD_DQRP_MMR_XUERR1_ARM */ +/* Description: writing 1 arms uncorrectable ecc error capture */ +#define SH_MD_DQRP_MMR_XUERR1_ARM_SHFT 38 +#define SH_MD_DQRP_MMR_XUERR1_ARM_MASK 0x0000004000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XUERR2" */ +/* uncorrectable dir ecc group 2 error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XUERR2 0x0000000100053030 +#define SH_MD_DQRP_MMR_XUERR2_MASK 0x0000003fffffffff +#define SH_MD_DQRP_MMR_XUERR2_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XUERR2_GRP2 */ +/* Description: ecc group 2 bits */ +#define SH_MD_DQRP_MMR_XUERR2_GRP2_SHFT 0 +#define SH_MD_DQRP_MMR_XUERR2_GRP2_MASK 0x0000000fffffffff + +/* SH_MD_DQRP_MMR_XUERR2_VAL */ +/* Description: uncorrectable ecc error in group 2 bits */ +#define SH_MD_DQRP_MMR_XUERR2_VAL_SHFT 36 +#define SH_MD_DQRP_MMR_XUERR2_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQRP_MMR_XUERR2_MORE */ +/* Description: more than one uncorrectable ecc error in group 2 */ +#define SH_MD_DQRP_MMR_XUERR2_MORE_SHFT 37 +#define SH_MD_DQRP_MMR_XUERR2_MORE_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XPERR" */ +/* protocol error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XPERR 0x0000000100053040 +#define SH_MD_DQRP_MMR_XPERR_MASK 0x7fffffffffffffff +#define SH_MD_DQRP_MMR_XPERR_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XPERR_DIR */ +/* Description: directory entry */ +#define SH_MD_DQRP_MMR_XPERR_DIR_SHFT 0 +#define SH_MD_DQRP_MMR_XPERR_DIR_MASK 0x0000000003ffffff + +/* SH_MD_DQRP_MMR_XPERR_CMD */ +/* Description: incoming command */ +#define SH_MD_DQRP_MMR_XPERR_CMD_SHFT 26 +#define SH_MD_DQRP_MMR_XPERR_CMD_MASK 0x00000003fc000000 + +/* SH_MD_DQRP_MMR_XPERR_SRC */ +/* Description: source node of dir operation */ +#define SH_MD_DQRP_MMR_XPERR_SRC_SHFT 34 +#define SH_MD_DQRP_MMR_XPERR_SRC_MASK 0x0000fffc00000000 + +/* SH_MD_DQRP_MMR_XPERR_PRIGE */ +/* Description: priority was greater-equal */ +#define SH_MD_DQRP_MMR_XPERR_PRIGE_SHFT 48 +#define SH_MD_DQRP_MMR_XPERR_PRIGE_MASK 0x0001000000000000 + +/* SH_MD_DQRP_MMR_XPERR_PRIV */ +/* Description: access privilege bit */ +#define SH_MD_DQRP_MMR_XPERR_PRIV_SHFT 49 +#define SH_MD_DQRP_MMR_XPERR_PRIV_MASK 0x0002000000000000 + +/* SH_MD_DQRP_MMR_XPERR_COR */ +/* Description: correctable ecc error */ +#define SH_MD_DQRP_MMR_XPERR_COR_SHFT 50 +#define SH_MD_DQRP_MMR_XPERR_COR_MASK 0x0004000000000000 + +/* SH_MD_DQRP_MMR_XPERR_UNC */ +/* Description: uncorrectable ecc error */ +#define SH_MD_DQRP_MMR_XPERR_UNC_SHFT 51 +#define SH_MD_DQRP_MMR_XPERR_UNC_MASK 0x0008000000000000 + +/* SH_MD_DQRP_MMR_XPERR_MYBIT */ +/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */ +#define SH_MD_DQRP_MMR_XPERR_MYBIT_SHFT 52 +#define SH_MD_DQRP_MMR_XPERR_MYBIT_MASK 0x0ff0000000000000 + +/* SH_MD_DQRP_MMR_XPERR_VAL */ +/* Description: protocol error info valid */ +#define SH_MD_DQRP_MMR_XPERR_VAL_SHFT 60 +#define SH_MD_DQRP_MMR_XPERR_VAL_MASK 0x1000000000000000 + +/* SH_MD_DQRP_MMR_XPERR_MORE */ +/* Description: more than one protocol error */ +#define SH_MD_DQRP_MMR_XPERR_MORE_SHFT 61 +#define SH_MD_DQRP_MMR_XPERR_MORE_MASK 0x2000000000000000 + +/* 
SH_MD_DQRP_MMR_XPERR_ARM */ +/* Description: writing 1 arms error capture */ +#define SH_MD_DQRP_MMR_XPERR_ARM_SHFT 62 +#define SH_MD_DQRP_MMR_XPERR_ARM_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YCERR1" */ +/* correctable dir ecc group 1 error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YCERR1 0x0000000100053800 +#define SH_MD_DQRP_MMR_YCERR1_MASK 0x0000007fffffffff +#define SH_MD_DQRP_MMR_YCERR1_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YCERR1_GRP1 */ +/* Description: ecc group 1 bits */ +#define SH_MD_DQRP_MMR_YCERR1_GRP1_SHFT 0 +#define SH_MD_DQRP_MMR_YCERR1_GRP1_MASK 0x0000000fffffffff + +/* SH_MD_DQRP_MMR_YCERR1_VAL */ +/* Description: correctable ecc error in group 1 bits */ +#define SH_MD_DQRP_MMR_YCERR1_VAL_SHFT 36 +#define SH_MD_DQRP_MMR_YCERR1_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQRP_MMR_YCERR1_MORE */ +/* Description: more than one correctable ecc error in group 1 */ +#define SH_MD_DQRP_MMR_YCERR1_MORE_SHFT 37 +#define SH_MD_DQRP_MMR_YCERR1_MORE_MASK 0x0000002000000000 + +/* SH_MD_DQRP_MMR_YCERR1_ARM */ +/* Description: writing 1 arms uncorrectable ecc error capture */ +#define SH_MD_DQRP_MMR_YCERR1_ARM_SHFT 38 +#define SH_MD_DQRP_MMR_YCERR1_ARM_MASK 0x0000004000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YCERR2" */ +/* correctable dir ecc group 2 error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YCERR2 0x0000000100053810 +#define SH_MD_DQRP_MMR_YCERR2_MASK 0x0000003fffffffff +#define SH_MD_DQRP_MMR_YCERR2_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YCERR2_GRP2 */ +/* Description: ecc group 2 bits */ +#define SH_MD_DQRP_MMR_YCERR2_GRP2_SHFT 0 +#define SH_MD_DQRP_MMR_YCERR2_GRP2_MASK 0x0000000fffffffff + +/* SH_MD_DQRP_MMR_YCERR2_VAL */ +/* Description: correctable ecc error in group 2 bits */ +#define SH_MD_DQRP_MMR_YCERR2_VAL_SHFT 36 +#define SH_MD_DQRP_MMR_YCERR2_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQRP_MMR_YCERR2_MORE */ +/* Description: more than one correctable ecc error in group 2 */ +#define SH_MD_DQRP_MMR_YCERR2_MORE_SHFT 37 +#define SH_MD_DQRP_MMR_YCERR2_MORE_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YUERR1" */ +/* uncorrectable dir ecc group 1 error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YUERR1 0x0000000100053820 +#define SH_MD_DQRP_MMR_YUERR1_MASK 0x0000007fffffffff +#define SH_MD_DQRP_MMR_YUERR1_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YUERR1_GRP1 */ +/* Description: ecc group 1 bits */ +#define SH_MD_DQRP_MMR_YUERR1_GRP1_SHFT 0 +#define SH_MD_DQRP_MMR_YUERR1_GRP1_MASK 0x0000000fffffffff + +/* SH_MD_DQRP_MMR_YUERR1_VAL */ +/* Description: uncorrectable ecc error in group 1 bits */ +#define SH_MD_DQRP_MMR_YUERR1_VAL_SHFT 36 +#define SH_MD_DQRP_MMR_YUERR1_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQRP_MMR_YUERR1_MORE */ +/* Description: more than one uncorrectable ecc error in group 1 */ +#define SH_MD_DQRP_MMR_YUERR1_MORE_SHFT 37 +#define SH_MD_DQRP_MMR_YUERR1_MORE_MASK 0x0000002000000000 + +/* SH_MD_DQRP_MMR_YUERR1_ARM */ +/* Description: writing 1 arms uncorrectable ecc error capture */ +#define SH_MD_DQRP_MMR_YUERR1_ARM_SHFT 38 +#define SH_MD_DQRP_MMR_YUERR1_ARM_MASK 0x0000004000000000 + +/* 
==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YUERR2" */ +/* uncorrectable dir ecc group 2 error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YUERR2 0x0000000100053830 +#define SH_MD_DQRP_MMR_YUERR2_MASK 0x0000003fffffffff +#define SH_MD_DQRP_MMR_YUERR2_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YUERR2_GRP2 */ +/* Description: ecc group 2 bits */ +#define SH_MD_DQRP_MMR_YUERR2_GRP2_SHFT 0 +#define SH_MD_DQRP_MMR_YUERR2_GRP2_MASK 0x0000000fffffffff + +/* SH_MD_DQRP_MMR_YUERR2_VAL */ +/* Description: uncorrectable ecc error in group 2 bits */ +#define SH_MD_DQRP_MMR_YUERR2_VAL_SHFT 36 +#define SH_MD_DQRP_MMR_YUERR2_VAL_MASK 0x0000001000000000 + +/* SH_MD_DQRP_MMR_YUERR2_MORE */ +/* Description: more than one uncorrectable ecc error in group 2 */ +#define SH_MD_DQRP_MMR_YUERR2_MORE_SHFT 37 +#define SH_MD_DQRP_MMR_YUERR2_MORE_MASK 0x0000002000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YPERR" */ +/* protocol error register */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YPERR 0x0000000100053840 +#define SH_MD_DQRP_MMR_YPERR_MASK 0x7fffffffffffffff +#define SH_MD_DQRP_MMR_YPERR_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YPERR_DIR */ +/* Description: directory entry */ +#define SH_MD_DQRP_MMR_YPERR_DIR_SHFT 0 +#define SH_MD_DQRP_MMR_YPERR_DIR_MASK 0x0000000003ffffff + +/* SH_MD_DQRP_MMR_YPERR_CMD */ +/* Description: incoming command */ +#define SH_MD_DQRP_MMR_YPERR_CMD_SHFT 26 +#define SH_MD_DQRP_MMR_YPERR_CMD_MASK 0x00000003fc000000 + +/* SH_MD_DQRP_MMR_YPERR_SRC */ +/* Description: source node of dir operation */ +#define SH_MD_DQRP_MMR_YPERR_SRC_SHFT 34 +#define SH_MD_DQRP_MMR_YPERR_SRC_MASK 0x0000fffc00000000 + +/* SH_MD_DQRP_MMR_YPERR_PRIGE */ +/* Description: priority was greater-equal */ +#define SH_MD_DQRP_MMR_YPERR_PRIGE_SHFT 48 +#define SH_MD_DQRP_MMR_YPERR_PRIGE_MASK 0x0001000000000000 + +/* SH_MD_DQRP_MMR_YPERR_PRIV */ +/* Description: access privilege bit */ +#define SH_MD_DQRP_MMR_YPERR_PRIV_SHFT 49 +#define SH_MD_DQRP_MMR_YPERR_PRIV_MASK 0x0002000000000000 + +/* SH_MD_DQRP_MMR_YPERR_COR */ +/* Description: correctable ecc error */ +#define SH_MD_DQRP_MMR_YPERR_COR_SHFT 50 +#define SH_MD_DQRP_MMR_YPERR_COR_MASK 0x0004000000000000 + +/* SH_MD_DQRP_MMR_YPERR_UNC */ +/* Description: uncorrectable ecc error */ +#define SH_MD_DQRP_MMR_YPERR_UNC_SHFT 51 +#define SH_MD_DQRP_MMR_YPERR_UNC_MASK 0x0008000000000000 + +/* SH_MD_DQRP_MMR_YPERR_MYBIT */ +/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */ +#define SH_MD_DQRP_MMR_YPERR_MYBIT_SHFT 52 +#define SH_MD_DQRP_MMR_YPERR_MYBIT_MASK 0x0ff0000000000000 + +/* SH_MD_DQRP_MMR_YPERR_VAL */ +/* Description: protocol error info valid */ +#define SH_MD_DQRP_MMR_YPERR_VAL_SHFT 60 +#define SH_MD_DQRP_MMR_YPERR_VAL_MASK 0x1000000000000000 + +/* SH_MD_DQRP_MMR_YPERR_MORE */ +/* Description: more than one protocol error */ +#define SH_MD_DQRP_MMR_YPERR_MORE_SHFT 61 +#define SH_MD_DQRP_MMR_YPERR_MORE_MASK 0x2000000000000000 + +/* SH_MD_DQRP_MMR_YPERR_ARM */ +/* Description: writing 1 arms error capture */ +#define SH_MD_DQRP_MMR_YPERR_ARM_SHFT 62 +#define SH_MD_DQRP_MMR_YPERR_ARM_MASK 0x4000000000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_CMDTRIG" */ +/* cmd triggers */ +/* 
==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_CMDTRIG 0x0000000100054000 +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_MASK 0x00000000ffffffff +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD0 */ +/* Description: command trigger 0 */ +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD0_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD0_MASK 0x00000000000000ff + +/* SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD1 */ +/* Description: command trigger 1 */ +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD1_SHFT 8 +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD1_MASK 0x000000000000ff00 + +/* SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD2 */ +/* Description: command trigger 2 */ +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD2_SHFT 16 +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD2_MASK 0x0000000000ff0000 + +/* SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD3 */ +/* Description: command trigger 3 */ +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD3_SHFT 24 +#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD3_MASK 0x00000000ff000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_TBLTRIG" */ +/* dir table trigger */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_TBLTRIG 0x0000000100054010 +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_MASK 0x000003ffffffffff +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_TBLTRIG_SRC */ +/* Description: source of request */ +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_SRC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_SRC_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_TBLTRIG_CMD */ +/* Description: incoming request */ +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_CMD_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_CMD_MASK 0x00000000003fc000 + +/* SH_MD_DQRP_MMR_DIR_TBLTRIG_ACC */ +/* Description: uncorrectable error, privilege bit */ +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_ACC_SHFT 22 +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_ACC_MASK 0x0000000000c00000 + +/* SH_MD_DQRP_MMR_DIR_TBLTRIG_PRIGE */ +/* Description: priority greater-equal */ +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_PRIGE_SHFT 24 +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_PRIGE_MASK 0x0000000001000000 + +/* SH_MD_DQRP_MMR_DIR_TBLTRIG_DIRST */ +/* Description: shrd,sxro,sub-state */ +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_DIRST_SHFT 25 +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_DIRST_MASK 0x00000003fe000000 + +/* SH_MD_DQRP_MMR_DIR_TBLTRIG_MYBIT */ +/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */ +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_MYBIT_SHFT 34 +#define SH_MD_DQRP_MMR_DIR_TBLTRIG_MYBIT_MASK 0x000003fc00000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_TBLMASK" */ +/* dir table trigger mask */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_DIR_TBLMASK 0x0000000100054020 +#define SH_MD_DQRP_MMR_DIR_TBLMASK_MASK 0x000003ffffffffff +#define SH_MD_DQRP_MMR_DIR_TBLMASK_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_DIR_TBLMASK_SRC */ +/* Description: source of request */ +#define SH_MD_DQRP_MMR_DIR_TBLMASK_SRC_SHFT 0 +#define SH_MD_DQRP_MMR_DIR_TBLMASK_SRC_MASK 0x0000000000003fff + +/* SH_MD_DQRP_MMR_DIR_TBLMASK_CMD */ +/* Description: incoming request */ +#define SH_MD_DQRP_MMR_DIR_TBLMASK_CMD_SHFT 14 +#define SH_MD_DQRP_MMR_DIR_TBLMASK_CMD_MASK 0x00000000003fc000 + +/* SH_MD_DQRP_MMR_DIR_TBLMASK_ACC */ +/* Description: uncorrectable error, privilege bit */ +#define 
SH_MD_DQRP_MMR_DIR_TBLMASK_ACC_SHFT 22 +#define SH_MD_DQRP_MMR_DIR_TBLMASK_ACC_MASK 0x0000000000c00000 + +/* SH_MD_DQRP_MMR_DIR_TBLMASK_PRIGE */ +/* Description: priority greater-equal */ +#define SH_MD_DQRP_MMR_DIR_TBLMASK_PRIGE_SHFT 24 +#define SH_MD_DQRP_MMR_DIR_TBLMASK_PRIGE_MASK 0x0000000001000000 + +/* SH_MD_DQRP_MMR_DIR_TBLMASK_DIRST */ +/* Description: shrd,sxro,sub-state */ +#define SH_MD_DQRP_MMR_DIR_TBLMASK_DIRST_SHFT 25 +#define SH_MD_DQRP_MMR_DIR_TBLMASK_DIRST_MASK 0x00000003fe000000 + +/* SH_MD_DQRP_MMR_DIR_TBLMASK_MYBIT */ +/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */ +#define SH_MD_DQRP_MMR_DIR_TBLMASK_MYBIT_SHFT 34 +#define SH_MD_DQRP_MMR_DIR_TBLMASK_MYBIT_MASK 0x000003fc00000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XBIST_H 0x0000000100058000 +#define SH_MD_DQRP_MMR_XBIST_H_MASK 0x00000700ffffffff +#define SH_MD_DQRP_MMR_XBIST_H_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XBIST_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRP_MMR_XBIST_H_PAT_SHFT 0 +#define SH_MD_DQRP_MMR_XBIST_H_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQRP_MMR_XBIST_H_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQRP_MMR_XBIST_H_INV_SHFT 40 +#define SH_MD_DQRP_MMR_XBIST_H_INV_MASK 0x0000010000000000 + +/* SH_MD_DQRP_MMR_XBIST_H_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQRP_MMR_XBIST_H_ROT_SHFT 41 +#define SH_MD_DQRP_MMR_XBIST_H_ROT_MASK 0x0000020000000000 + +/* SH_MD_DQRP_MMR_XBIST_H_ARM */ +/* Description: writing 1 arms data miscompare capture */ +#define SH_MD_DQRP_MMR_XBIST_H_ARM_SHFT 42 +#define SH_MD_DQRP_MMR_XBIST_H_ARM_MASK 0x0000040000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XBIST_L 0x0000000100058010 +#define SH_MD_DQRP_MMR_XBIST_L_MASK 0x00000300ffffffff +#define SH_MD_DQRP_MMR_XBIST_L_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XBIST_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRP_MMR_XBIST_L_PAT_SHFT 0 +#define SH_MD_DQRP_MMR_XBIST_L_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQRP_MMR_XBIST_L_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQRP_MMR_XBIST_L_INV_SHFT 40 +#define SH_MD_DQRP_MMR_XBIST_L_INV_MASK 0x0000010000000000 + +/* SH_MD_DQRP_MMR_XBIST_L_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQRP_MMR_XBIST_L_ROT_SHFT 41 +#define SH_MD_DQRP_MMR_XBIST_L_ROT_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XBIST_ERR_H 0x0000000100058020 +#define SH_MD_DQRP_MMR_XBIST_ERR_H_MASK 0x00000300ffffffff +#define SH_MD_DQRP_MMR_XBIST_ERR_H_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XBIST_ERR_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRP_MMR_XBIST_ERR_H_PAT_SHFT 0 +#define SH_MD_DQRP_MMR_XBIST_ERR_H_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQRP_MMR_XBIST_ERR_H_VAL */ +/* Description: bist data miscompare */ +#define 
SH_MD_DQRP_MMR_XBIST_ERR_H_VAL_SHFT 40 +#define SH_MD_DQRP_MMR_XBIST_ERR_H_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQRP_MMR_XBIST_ERR_H_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQRP_MMR_XBIST_ERR_H_MORE_SHFT 41 +#define SH_MD_DQRP_MMR_XBIST_ERR_H_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_XBIST_ERR_L 0x0000000100058030 +#define SH_MD_DQRP_MMR_XBIST_ERR_L_MASK 0x00000300ffffffff +#define SH_MD_DQRP_MMR_XBIST_ERR_L_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_XBIST_ERR_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRP_MMR_XBIST_ERR_L_PAT_SHFT 0 +#define SH_MD_DQRP_MMR_XBIST_ERR_L_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQRP_MMR_XBIST_ERR_L_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQRP_MMR_XBIST_ERR_L_VAL_SHFT 40 +#define SH_MD_DQRP_MMR_XBIST_ERR_L_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQRP_MMR_XBIST_ERR_L_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQRP_MMR_XBIST_ERR_L_MORE_SHFT 41 +#define SH_MD_DQRP_MMR_XBIST_ERR_L_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YBIST_H 0x0000000100058800 +#define SH_MD_DQRP_MMR_YBIST_H_MASK 0x00000700ffffffff +#define SH_MD_DQRP_MMR_YBIST_H_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YBIST_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRP_MMR_YBIST_H_PAT_SHFT 0 +#define SH_MD_DQRP_MMR_YBIST_H_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQRP_MMR_YBIST_H_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQRP_MMR_YBIST_H_INV_SHFT 40 +#define SH_MD_DQRP_MMR_YBIST_H_INV_MASK 0x0000010000000000 + +/* SH_MD_DQRP_MMR_YBIST_H_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQRP_MMR_YBIST_H_ROT_SHFT 41 +#define SH_MD_DQRP_MMR_YBIST_H_ROT_MASK 0x0000020000000000 + +/* SH_MD_DQRP_MMR_YBIST_H_ARM */ +/* Description: writing 1 arms data miscompare capture */ +#define SH_MD_DQRP_MMR_YBIST_H_ARM_SHFT 42 +#define SH_MD_DQRP_MMR_YBIST_H_ARM_MASK 0x0000040000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YBIST_L 0x0000000100058810 +#define SH_MD_DQRP_MMR_YBIST_L_MASK 0x00000300ffffffff +#define SH_MD_DQRP_MMR_YBIST_L_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YBIST_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRP_MMR_YBIST_L_PAT_SHFT 0 +#define SH_MD_DQRP_MMR_YBIST_L_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQRP_MMR_YBIST_L_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQRP_MMR_YBIST_L_INV_SHFT 40 +#define SH_MD_DQRP_MMR_YBIST_L_INV_MASK 0x0000010000000000 + +/* SH_MD_DQRP_MMR_YBIST_L_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQRP_MMR_YBIST_L_ROT_SHFT 41 +#define SH_MD_DQRP_MMR_YBIST_L_ROT_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register 
"SH_MD_DQRP_MMR_YBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YBIST_ERR_H 0x0000000100058820 +#define SH_MD_DQRP_MMR_YBIST_ERR_H_MASK 0x00000300ffffffff +#define SH_MD_DQRP_MMR_YBIST_ERR_H_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YBIST_ERR_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRP_MMR_YBIST_ERR_H_PAT_SHFT 0 +#define SH_MD_DQRP_MMR_YBIST_ERR_H_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQRP_MMR_YBIST_ERR_H_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQRP_MMR_YBIST_ERR_H_VAL_SHFT 40 +#define SH_MD_DQRP_MMR_YBIST_ERR_H_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQRP_MMR_YBIST_ERR_H_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQRP_MMR_YBIST_ERR_H_MORE_SHFT 41 +#define SH_MD_DQRP_MMR_YBIST_ERR_H_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRP_MMR_YBIST_ERR_L 0x0000000100058830 +#define SH_MD_DQRP_MMR_YBIST_ERR_L_MASK 0x00000300ffffffff +#define SH_MD_DQRP_MMR_YBIST_ERR_L_INIT 0x0000000000000000 + +/* SH_MD_DQRP_MMR_YBIST_ERR_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRP_MMR_YBIST_ERR_L_PAT_SHFT 0 +#define SH_MD_DQRP_MMR_YBIST_ERR_L_PAT_MASK 0x00000000ffffffff + +/* SH_MD_DQRP_MMR_YBIST_ERR_L_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQRP_MMR_YBIST_ERR_L_VAL_SHFT 40 +#define SH_MD_DQRP_MMR_YBIST_ERR_L_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQRP_MMR_YBIST_ERR_L_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQRP_MMR_YBIST_ERR_L_MORE_SHFT 41 +#define SH_MD_DQRP_MMR_YBIST_ERR_L_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_XBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_XBIST_H 0x0000000100068000 +#define SH_MD_DQRS_MMR_XBIST_H_MASK 0x000007ffffffffff +#define SH_MD_DQRS_MMR_XBIST_H_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_XBIST_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRS_MMR_XBIST_H_PAT_SHFT 0 +#define SH_MD_DQRS_MMR_XBIST_H_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQRS_MMR_XBIST_H_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQRS_MMR_XBIST_H_INV_SHFT 40 +#define SH_MD_DQRS_MMR_XBIST_H_INV_MASK 0x0000010000000000 + +/* SH_MD_DQRS_MMR_XBIST_H_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQRS_MMR_XBIST_H_ROT_SHFT 41 +#define SH_MD_DQRS_MMR_XBIST_H_ROT_MASK 0x0000020000000000 + +/* SH_MD_DQRS_MMR_XBIST_H_ARM */ +/* Description: writing 1 arms data miscompare capture */ +#define SH_MD_DQRS_MMR_XBIST_H_ARM_SHFT 42 +#define SH_MD_DQRS_MMR_XBIST_H_ARM_MASK 0x0000040000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_XBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_XBIST_L 0x0000000100068010 +#define SH_MD_DQRS_MMR_XBIST_L_MASK 0x000003ffffffffff +#define SH_MD_DQRS_MMR_XBIST_L_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_XBIST_L_PAT */ +/* Description: data pattern */ +#define 
SH_MD_DQRS_MMR_XBIST_L_PAT_SHFT 0 +#define SH_MD_DQRS_MMR_XBIST_L_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQRS_MMR_XBIST_L_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQRS_MMR_XBIST_L_INV_SHFT 40 +#define SH_MD_DQRS_MMR_XBIST_L_INV_MASK 0x0000010000000000 + +/* SH_MD_DQRS_MMR_XBIST_L_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQRS_MMR_XBIST_L_ROT_SHFT 41 +#define SH_MD_DQRS_MMR_XBIST_L_ROT_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_XBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_XBIST_ERR_H 0x0000000100068020 +#define SH_MD_DQRS_MMR_XBIST_ERR_H_MASK 0x000003ffffffffff +#define SH_MD_DQRS_MMR_XBIST_ERR_H_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_XBIST_ERR_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRS_MMR_XBIST_ERR_H_PAT_SHFT 0 +#define SH_MD_DQRS_MMR_XBIST_ERR_H_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQRS_MMR_XBIST_ERR_H_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQRS_MMR_XBIST_ERR_H_VAL_SHFT 40 +#define SH_MD_DQRS_MMR_XBIST_ERR_H_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQRS_MMR_XBIST_ERR_H_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQRS_MMR_XBIST_ERR_H_MORE_SHFT 41 +#define SH_MD_DQRS_MMR_XBIST_ERR_H_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_XBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_XBIST_ERR_L 0x0000000100068030 +#define SH_MD_DQRS_MMR_XBIST_ERR_L_MASK 0x000003ffffffffff +#define SH_MD_DQRS_MMR_XBIST_ERR_L_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_XBIST_ERR_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRS_MMR_XBIST_ERR_L_PAT_SHFT 0 +#define SH_MD_DQRS_MMR_XBIST_ERR_L_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQRS_MMR_XBIST_ERR_L_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQRS_MMR_XBIST_ERR_L_VAL_SHFT 40 +#define SH_MD_DQRS_MMR_XBIST_ERR_L_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQRS_MMR_XBIST_ERR_L_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQRS_MMR_XBIST_ERR_L_MORE_SHFT 41 +#define SH_MD_DQRS_MMR_XBIST_ERR_L_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_YBIST_H 0x0000000100068800 +#define SH_MD_DQRS_MMR_YBIST_H_MASK 0x000007ffffffffff +#define SH_MD_DQRS_MMR_YBIST_H_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_YBIST_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRS_MMR_YBIST_H_PAT_SHFT 0 +#define SH_MD_DQRS_MMR_YBIST_H_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQRS_MMR_YBIST_H_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQRS_MMR_YBIST_H_INV_SHFT 40 +#define SH_MD_DQRS_MMR_YBIST_H_INV_MASK 0x0000010000000000 + +/* SH_MD_DQRS_MMR_YBIST_H_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQRS_MMR_YBIST_H_ROT_SHFT 41 +#define SH_MD_DQRS_MMR_YBIST_H_ROT_MASK 0x0000020000000000 + +/* SH_MD_DQRS_MMR_YBIST_H_ARM */ +/* Description: writing 1 arms data miscompare capture */ 
+#define SH_MD_DQRS_MMR_YBIST_H_ARM_SHFT 42 +#define SH_MD_DQRS_MMR_YBIST_H_ARM_MASK 0x0000040000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_YBIST_L 0x0000000100068810 +#define SH_MD_DQRS_MMR_YBIST_L_MASK 0x000003ffffffffff +#define SH_MD_DQRS_MMR_YBIST_L_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_YBIST_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRS_MMR_YBIST_L_PAT_SHFT 0 +#define SH_MD_DQRS_MMR_YBIST_L_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQRS_MMR_YBIST_L_INV */ +/* Description: invert data pattern in next cycle */ +#define SH_MD_DQRS_MMR_YBIST_L_INV_SHFT 40 +#define SH_MD_DQRS_MMR_YBIST_L_INV_MASK 0x0000010000000000 + +/* SH_MD_DQRS_MMR_YBIST_L_ROT */ +/* Description: rotate left data pattern in next cycle */ +#define SH_MD_DQRS_MMR_YBIST_L_ROT_SHFT 41 +#define SH_MD_DQRS_MMR_YBIST_L_ROT_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_YBIST_ERR_H 0x0000000100068820 +#define SH_MD_DQRS_MMR_YBIST_ERR_H_MASK 0x000003ffffffffff +#define SH_MD_DQRS_MMR_YBIST_ERR_H_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_YBIST_ERR_H_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRS_MMR_YBIST_ERR_H_PAT_SHFT 0 +#define SH_MD_DQRS_MMR_YBIST_ERR_H_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQRS_MMR_YBIST_ERR_H_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQRS_MMR_YBIST_ERR_H_VAL_SHFT 40 +#define SH_MD_DQRS_MMR_YBIST_ERR_H_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQRS_MMR_YBIST_ERR_H_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQRS_MMR_YBIST_ERR_H_MORE_SHFT 41 +#define SH_MD_DQRS_MMR_YBIST_ERR_H_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_YBIST_ERR_L 0x0000000100068830 +#define SH_MD_DQRS_MMR_YBIST_ERR_L_MASK 0x000003ffffffffff +#define SH_MD_DQRS_MMR_YBIST_ERR_L_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_YBIST_ERR_L_PAT */ +/* Description: data pattern */ +#define SH_MD_DQRS_MMR_YBIST_ERR_L_PAT_SHFT 0 +#define SH_MD_DQRS_MMR_YBIST_ERR_L_PAT_MASK 0x000000ffffffffff + +/* SH_MD_DQRS_MMR_YBIST_ERR_L_VAL */ +/* Description: bist data miscompare */ +#define SH_MD_DQRS_MMR_YBIST_ERR_L_VAL_SHFT 40 +#define SH_MD_DQRS_MMR_YBIST_ERR_L_VAL_MASK 0x0000010000000000 + +/* SH_MD_DQRS_MMR_YBIST_ERR_L_MORE */ +/* Description: more than one bist data miscompare */ +#define SH_MD_DQRS_MMR_YBIST_ERR_L_MORE_SHFT 41 +#define SH_MD_DQRS_MMR_YBIST_ERR_L_MORE_MASK 0x0000020000000000 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_JNR_DEBUG" */ +/* joiner/fct debug configuration */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_JNR_DEBUG 0x0000000100069000 +#define SH_MD_DQRS_MMR_JNR_DEBUG_MASK 0x0000000000000003 +#define SH_MD_DQRS_MMR_JNR_DEBUG_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_JNR_DEBUG_PX */ +/* Description: select 0=pi 1=xn side */ 
+#define SH_MD_DQRS_MMR_JNR_DEBUG_PX_SHFT 0 +#define SH_MD_DQRS_MMR_JNR_DEBUG_PX_MASK 0x0000000000000001 + +/* SH_MD_DQRS_MMR_JNR_DEBUG_RW */ +/* Description: select 0=read 1=write side */ +#define SH_MD_DQRS_MMR_JNR_DEBUG_RW_SHFT 1 +#define SH_MD_DQRS_MMR_JNR_DEBUG_RW_MASK 0x0000000000000002 + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YAMOPW_ERR" */ +/* amo/partial rmw ecc error register */ +/* ==================================================================== */ + +#define SH_MD_DQRS_MMR_YAMOPW_ERR 0x000000010006a000 +#define SH_MD_DQRS_MMR_YAMOPW_ERR_MASK 0x0000000103ff03ff +#define SH_MD_DQRS_MMR_YAMOPW_ERR_INIT 0x0000000000000000 + +/* SH_MD_DQRS_MMR_YAMOPW_ERR_SSYN */ +/* Description: store data syndrome */ +#define SH_MD_DQRS_MMR_YAMOPW_ERR_SSYN_SHFT 0 +#define SH_MD_DQRS_MMR_YAMOPW_ERR_SSYN_MASK 0x00000000000000ff + +/* SH_MD_DQRS_MMR_YAMOPW_ERR_SCOR */ +/* Description: correctable ecc error on store data */ +#define SH_MD_DQRS_MMR_YAMOPW_ERR_SCOR_SHFT 8 +#define SH_MD_DQRS_MMR_YAMOPW_ERR_SCOR_MASK 0x0000000000000100 + +/* SH_MD_DQRS_MMR_YAMOPW_ERR_SUNC */ +/* Description: uncorrectable ecc error on store data */ +#define SH_MD_DQRS_MMR_YAMOPW_ERR_SUNC_SHFT 9 +#define SH_MD_DQRS_MMR_YAMOPW_ERR_SUNC_MASK 0x0000000000000200 + +/* SH_MD_DQRS_MMR_YAMOPW_ERR_RSYN */ +/* Description: memory read data syndrome */ +#define SH_MD_DQRS_MMR_YAMOPW_ERR_RSYN_SHFT 16 +#define SH_MD_DQRS_MMR_YAMOPW_ERR_RSYN_MASK 0x0000000000ff0000 + +/* SH_MD_DQRS_MMR_YAMOPW_ERR_RCOR */ +/* Description: correctable ecc error on read data */ +#define SH_MD_DQRS_MMR_YAMOPW_ERR_RCOR_SHFT 24 +#define SH_MD_DQRS_MMR_YAMOPW_ERR_RCOR_MASK 0x0000000001000000 + +/* SH_MD_DQRS_MMR_YAMOPW_ERR_RUNC */ +/* Description: uncorrectable ecc error on read data */ +#define SH_MD_DQRS_MMR_YAMOPW_ERR_RUNC_SHFT 25 +#define SH_MD_DQRS_MMR_YAMOPW_ERR_RUNC_MASK 0x0000000002000000 + +/* SH_MD_DQRS_MMR_YAMOPW_ERR_ARM */ +/* Description: writing 1 arms ecc error capture */ +#define SH_MD_DQRS_MMR_YAMOPW_ERR_ARM_SHFT 32 +#define SH_MD_DQRS_MMR_YAMOPW_ERR_ARM_MASK 0x0000000100000000 + + +#endif /* _ASM_IA64_SN_SN2_SHUB_MMR_H */ diff -Nru a/include/asm-ia64/sn/sn2/shub_mmr_t.h b/include/asm-ia64/sn/sn2/shub_mmr_t.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/shub_mmr_t.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,27385 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 2001-2002 Silicon Graphics, Inc. All rights reserved.
+ */ + + + +#ifndef _ASM_IA64_SN_SN2_SHUB_MMR_T_H +#define _ASM_IA64_SN_SN2_SHUB_MMR_T_H + +#include + +/* ==================================================================== */ +/* Register "SH_FSB_BINIT_CONTROL" */ +/* FSB BINIT# Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_fsb_binit_control_u { + mmr_t sh_fsb_binit_control_regval; + struct { + mmr_t binit : 1; + mmr_t reserved_0 : 63; + } sh_fsb_binit_control_s; +} sh_fsb_binit_control_u_t; +#else +typedef union sh_fsb_binit_control_u { + mmr_t sh_fsb_binit_control_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t binit : 1; + } sh_fsb_binit_control_s; +} sh_fsb_binit_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_FSB_RESET_CONTROL" */ +/* FSB Reset Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_fsb_reset_control_u { + mmr_t sh_fsb_reset_control_regval; + struct { + mmr_t reset : 1; + mmr_t reserved_0 : 63; + } sh_fsb_reset_control_s; +} sh_fsb_reset_control_u_t; +#else +typedef union sh_fsb_reset_control_u { + mmr_t sh_fsb_reset_control_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t reset : 1; + } sh_fsb_reset_control_s; +} sh_fsb_reset_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_FSB_SYSTEM_AGENT_CONFIG" */ +/* FSB System Agent Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_fsb_system_agent_config_u { + mmr_t sh_fsb_system_agent_config_regval; + struct { + mmr_t rcnt_scnt_en : 1; + mmr_t reserved_0 : 2; + mmr_t berr_assert_en : 1; + mmr_t berr_sampling_en : 1; + mmr_t binit_assert_en : 1; + mmr_t bnr_throttling_en : 1; + mmr_t short_hang_en : 1; + mmr_t inta_rsp_data : 8; + mmr_t io_trans_rsp : 1; + mmr_t xtpr_trans_rsp : 1; + mmr_t inta_trans_rsp : 1; + mmr_t reserved_1 : 4; + mmr_t tdot : 1; + mmr_t serialize_fsb_en : 1; + mmr_t reserved_2 : 7; + mmr_t binit_event_enables : 14; + mmr_t reserved_3 : 18; + } sh_fsb_system_agent_config_s; +} sh_fsb_system_agent_config_u_t; +#else +typedef union sh_fsb_system_agent_config_u { + mmr_t sh_fsb_system_agent_config_regval; + struct { + mmr_t reserved_3 : 18; + mmr_t binit_event_enables : 14; + mmr_t reserved_2 : 7; + mmr_t serialize_fsb_en : 1; + mmr_t tdot : 1; + mmr_t reserved_1 : 4; + mmr_t inta_trans_rsp : 1; + mmr_t xtpr_trans_rsp : 1; + mmr_t io_trans_rsp : 1; + mmr_t inta_rsp_data : 8; + mmr_t short_hang_en : 1; + mmr_t bnr_throttling_en : 1; + mmr_t binit_assert_en : 1; + mmr_t berr_sampling_en : 1; + mmr_t berr_assert_en : 1; + mmr_t reserved_0 : 2; + mmr_t rcnt_scnt_en : 1; + } sh_fsb_system_agent_config_s; +} sh_fsb_system_agent_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_FSB_VGA_REMAP" */ +/* FSB VGA Address Space Remap */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_fsb_vga_remap_u { + mmr_t sh_fsb_vga_remap_regval; + struct { + mmr_t reserved_0 : 17; + mmr_t offset : 19; + mmr_t asid : 2; + mmr_t nid : 11; + mmr_t reserved_1 : 13; + mmr_t vga_remapping_enabled : 1; + mmr_t reserved_2 : 1; + } sh_fsb_vga_remap_s; +} sh_fsb_vga_remap_u_t; +#else +typedef union sh_fsb_vga_remap_u { + mmr_t sh_fsb_vga_remap_regval; + struct { + mmr_t reserved_2 : 
1; + mmr_t vga_remapping_enabled : 1; + mmr_t reserved_1 : 13; + mmr_t nid : 11; + mmr_t asid : 2; + mmr_t offset : 19; + mmr_t reserved_0 : 17; + } sh_fsb_vga_remap_s; +} sh_fsb_vga_remap_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_FSB_RESET_STATUS" */ +/* FSB Reset Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_fsb_reset_status_u { + mmr_t sh_fsb_reset_status_regval; + struct { + mmr_t reset_in_progress : 1; + mmr_t reserved_0 : 63; + } sh_fsb_reset_status_s; +} sh_fsb_reset_status_u_t; +#else +typedef union sh_fsb_reset_status_u { + mmr_t sh_fsb_reset_status_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t reset_in_progress : 1; + } sh_fsb_reset_status_s; +} sh_fsb_reset_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_FSB_SYMMETRIC_AGENT_STATUS" */ +/* FSB Symmetric Agent Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_fsb_symmetric_agent_status_u { + mmr_t sh_fsb_symmetric_agent_status_regval; + struct { + mmr_t cpu_0_active : 1; + mmr_t cpu_1_active : 1; + mmr_t cpus_ready : 1; + mmr_t reserved_0 : 61; + } sh_fsb_symmetric_agent_status_s; +} sh_fsb_symmetric_agent_status_u_t; +#else +typedef union sh_fsb_symmetric_agent_status_u { + mmr_t sh_fsb_symmetric_agent_status_regval; + struct { + mmr_t reserved_0 : 61; + mmr_t cpus_ready : 1; + mmr_t cpu_1_active : 1; + mmr_t cpu_0_active : 1; + } sh_fsb_symmetric_agent_status_s; +} sh_fsb_symmetric_agent_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_CREDIT_COUNT_0" */ +/* Graphics-write Credit Count for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_credit_count_0_u { + mmr_t sh_gfx_credit_count_0_regval; + struct { + mmr_t count : 20; + mmr_t reserved_0 : 43; + mmr_t reset_gfx_state : 1; + } sh_gfx_credit_count_0_s; +} sh_gfx_credit_count_0_u_t; +#else +typedef union sh_gfx_credit_count_0_u { + mmr_t sh_gfx_credit_count_0_regval; + struct { + mmr_t reset_gfx_state : 1; + mmr_t reserved_0 : 43; + mmr_t count : 20; + } sh_gfx_credit_count_0_s; +} sh_gfx_credit_count_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_CREDIT_COUNT_1" */ +/* Graphics-write Credit Count for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_credit_count_1_u { + mmr_t sh_gfx_credit_count_1_regval; + struct { + mmr_t count : 20; + mmr_t reserved_0 : 43; + mmr_t reset_gfx_state : 1; + } sh_gfx_credit_count_1_s; +} sh_gfx_credit_count_1_u_t; +#else +typedef union sh_gfx_credit_count_1_u { + mmr_t sh_gfx_credit_count_1_regval; + struct { + mmr_t reset_gfx_state : 1; + mmr_t reserved_0 : 43; + mmr_t count : 20; + } sh_gfx_credit_count_1_s; +} sh_gfx_credit_count_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_MODE_CNTRL_0" */ +/* Graphics credit mode and message ordering for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_mode_cntrl_0_u { + mmr_t sh_gfx_mode_cntrl_0_regval; + struct { + mmr_t dword_credits : 1; + mmr_t mixed_mode_credits :
1; + mmr_t relaxed_ordering : 1; + mmr_t reserved_0 : 61; + } sh_gfx_mode_cntrl_0_s; +} sh_gfx_mode_cntrl_0_u_t; +#else +typedef union sh_gfx_mode_cntrl_0_u { + mmr_t sh_gfx_mode_cntrl_0_regval; + struct { + mmr_t reserved_0 : 61; + mmr_t relaxed_ordering : 1; + mmr_t mixed_mode_credits : 1; + mmr_t dword_credits : 1; + } sh_gfx_mode_cntrl_0_s; +} sh_gfx_mode_cntrl_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_MODE_CNTRL_1" */ +/* Graphics credit mode and message ordering for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_mode_cntrl_1_u { + mmr_t sh_gfx_mode_cntrl_1_regval; + struct { + mmr_t dword_credits : 1; + mmr_t mixed_mode_credits : 1; + mmr_t relaxed_ordering : 1; + mmr_t reserved_0 : 61; + } sh_gfx_mode_cntrl_1_s; +} sh_gfx_mode_cntrl_1_u_t; +#else +typedef union sh_gfx_mode_cntrl_1_u { + mmr_t sh_gfx_mode_cntrl_1_regval; + struct { + mmr_t reserved_0 : 61; + mmr_t relaxed_ordering : 1; + mmr_t mixed_mode_credits : 1; + mmr_t dword_credits : 1; + } sh_gfx_mode_cntrl_1_s; +} sh_gfx_mode_cntrl_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_SKID_CREDIT_COUNT_0" */ +/* Graphics-write Skid Credit Count for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_skid_credit_count_0_u { + mmr_t sh_gfx_skid_credit_count_0_regval; + struct { + mmr_t skid : 20; + mmr_t reserved_0 : 44; + } sh_gfx_skid_credit_count_0_s; +} sh_gfx_skid_credit_count_0_u_t; +#else +typedef union sh_gfx_skid_credit_count_0_u { + mmr_t sh_gfx_skid_credit_count_0_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t skid : 20; + } sh_gfx_skid_credit_count_0_s; +} sh_gfx_skid_credit_count_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_SKID_CREDIT_COUNT_1" */ +/* Graphics-write Skid Credit Count for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_skid_credit_count_1_u { + mmr_t sh_gfx_skid_credit_count_1_regval; + struct { + mmr_t skid : 20; + mmr_t reserved_0 : 44; + } sh_gfx_skid_credit_count_1_s; +} sh_gfx_skid_credit_count_1_u_t; +#else +typedef union sh_gfx_skid_credit_count_1_u { + mmr_t sh_gfx_skid_credit_count_1_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t skid : 20; + } sh_gfx_skid_credit_count_1_s; +} sh_gfx_skid_credit_count_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_STALL_LIMIT_0" */ +/* Graphics-write Stall Limit for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_stall_limit_0_u { + mmr_t sh_gfx_stall_limit_0_regval; + struct { + mmr_t limit : 26; + mmr_t reserved_0 : 38; + } sh_gfx_stall_limit_0_s; +} sh_gfx_stall_limit_0_u_t; +#else +typedef union sh_gfx_stall_limit_0_u { + mmr_t sh_gfx_stall_limit_0_regval; + struct { + mmr_t reserved_0 : 38; + mmr_t limit : 26; + } sh_gfx_stall_limit_0_s; +} sh_gfx_stall_limit_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_STALL_LIMIT_1" */ +/* Graphics-write Stall Limit for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union
sh_gfx_stall_limit_1_u { + mmr_t sh_gfx_stall_limit_1_regval; + struct { + mmr_t limit : 26; + mmr_t reserved_0 : 38; + } sh_gfx_stall_limit_1_s; +} sh_gfx_stall_limit_1_u_t; +#else +typedef union sh_gfx_stall_limit_1_u { + mmr_t sh_gfx_stall_limit_1_regval; + struct { + mmr_t reserved_0 : 38; + mmr_t limit : 26; + } sh_gfx_stall_limit_1_s; +} sh_gfx_stall_limit_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_STALL_TIMER_0" */ +/* Graphics-write Stall Timer for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_stall_timer_0_u { + mmr_t sh_gfx_stall_timer_0_regval; + struct { + mmr_t timer_value : 26; + mmr_t reserved_0 : 38; + } sh_gfx_stall_timer_0_s; +} sh_gfx_stall_timer_0_u_t; +#else +typedef union sh_gfx_stall_timer_0_u { + mmr_t sh_gfx_stall_timer_0_regval; + struct { + mmr_t reserved_0 : 38; + mmr_t timer_value : 26; + } sh_gfx_stall_timer_0_s; +} sh_gfx_stall_timer_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_STALL_TIMER_1" */ +/* Graphics-write Stall Timer for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_stall_timer_1_u { + mmr_t sh_gfx_stall_timer_1_regval; + struct { + mmr_t timer_value : 26; + mmr_t reserved_0 : 38; + } sh_gfx_stall_timer_1_s; +} sh_gfx_stall_timer_1_u_t; +#else +typedef union sh_gfx_stall_timer_1_u { + mmr_t sh_gfx_stall_timer_1_regval; + struct { + mmr_t reserved_0 : 38; + mmr_t timer_value : 26; + } sh_gfx_stall_timer_1_s; +} sh_gfx_stall_timer_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_WINDOW_0" */ +/* Graphics-write Window for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_window_0_u { + mmr_t sh_gfx_window_0_regval; + struct { + mmr_t reserved_0 : 24; + mmr_t base_addr : 12; + mmr_t reserved_1 : 27; + mmr_t gfx_window_en : 1; + } sh_gfx_window_0_s; +} sh_gfx_window_0_u_t; +#else +typedef union sh_gfx_window_0_u { + mmr_t sh_gfx_window_0_regval; + struct { + mmr_t gfx_window_en : 1; + mmr_t reserved_1 : 27; + mmr_t base_addr : 12; + mmr_t reserved_0 : 24; + } sh_gfx_window_0_s; +} sh_gfx_window_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_WINDOW_1" */ +/* Graphics-write Window for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_window_1_u { + mmr_t sh_gfx_window_1_regval; + struct { + mmr_t reserved_0 : 24; + mmr_t base_addr : 12; + mmr_t reserved_1 : 27; + mmr_t gfx_window_en : 1; + } sh_gfx_window_1_s; +} sh_gfx_window_1_u_t; +#else +typedef union sh_gfx_window_1_u { + mmr_t sh_gfx_window_1_regval; + struct { + mmr_t gfx_window_en : 1; + mmr_t reserved_1 : 27; + mmr_t base_addr : 12; + mmr_t reserved_0 : 24; + } sh_gfx_window_1_s; +} sh_gfx_window_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_INTERRUPT_TIMER_LIMIT_0" */ +/* Graphics-write Interrupt Limit for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_interrupt_timer_limit_0_u { + mmr_t sh_gfx_interrupt_timer_limit_0_regval; + struct { + mmr_t 
interrupt_timer_limit : 8; + mmr_t reserved_0 : 56; + } sh_gfx_interrupt_timer_limit_0_s; +} sh_gfx_interrupt_timer_limit_0_u_t; +#else +typedef union sh_gfx_interrupt_timer_limit_0_u { + mmr_t sh_gfx_interrupt_timer_limit_0_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t interrupt_timer_limit : 8; + } sh_gfx_interrupt_timer_limit_0_s; +} sh_gfx_interrupt_timer_limit_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_INTERRUPT_TIMER_LIMIT_1" */ +/* Graphics-write Interrupt Limit for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_interrupt_timer_limit_1_u { + mmr_t sh_gfx_interrupt_timer_limit_1_regval; + struct { + mmr_t interrupt_timer_limit : 8; + mmr_t reserved_0 : 56; + } sh_gfx_interrupt_timer_limit_1_s; +} sh_gfx_interrupt_timer_limit_1_u_t; +#else +typedef union sh_gfx_interrupt_timer_limit_1_u { + mmr_t sh_gfx_interrupt_timer_limit_1_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t interrupt_timer_limit : 8; + } sh_gfx_interrupt_timer_limit_1_s; +} sh_gfx_interrupt_timer_limit_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_WRITE_STATUS_0" */ +/* Graphics Write Status for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_write_status_0_u { + mmr_t sh_gfx_write_status_0_regval; + struct { + mmr_t busy : 1; + mmr_t reserved_0 : 62; + mmr_t re_enable_gfx_stall : 1; + } sh_gfx_write_status_0_s; +} sh_gfx_write_status_0_u_t; +#else +typedef union sh_gfx_write_status_0_u { + mmr_t sh_gfx_write_status_0_regval; + struct { + mmr_t re_enable_gfx_stall : 1; + mmr_t reserved_0 : 62; + mmr_t busy : 1; + } sh_gfx_write_status_0_s; +} sh_gfx_write_status_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GFX_WRITE_STATUS_1" */ +/* Graphics Write Status for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gfx_write_status_1_u { + mmr_t sh_gfx_write_status_1_regval; + struct { + mmr_t busy : 1; + mmr_t reserved_0 : 62; + mmr_t re_enable_gfx_stall : 1; + } sh_gfx_write_status_1_s; +} sh_gfx_write_status_1_u_t; +#else +typedef union sh_gfx_write_status_1_u { + mmr_t sh_gfx_write_status_1_regval; + struct { + mmr_t re_enable_gfx_stall : 1; + mmr_t reserved_0 : 62; + mmr_t busy : 1; + } sh_gfx_write_status_1_s; +} sh_gfx_write_status_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_INT0" */ +/* SHub II Interrupt 0 Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_int0_u { + mmr_t sh_ii_int0_regval; + struct { + mmr_t idx : 8; + mmr_t send : 1; + mmr_t reserved_0 : 55; + } sh_ii_int0_s; +} sh_ii_int0_u_t; +#else +typedef union sh_ii_int0_u { + mmr_t sh_ii_int0_regval; + struct { + mmr_t reserved_0 : 55; + mmr_t send : 1; + mmr_t idx : 8; + } sh_ii_int0_s; +} sh_ii_int0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_INT0_CONFIG" */ +/* SHub II Interrupt 0 Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_int0_config_u { + mmr_t sh_ii_int0_config_regval; + struct { + mmr_t type 
: 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 14; + } sh_ii_int0_config_s; +} sh_ii_int0_config_u_t; +#else +typedef union sh_ii_int0_config_u { + mmr_t sh_ii_int0_config_regval; + struct { + mmr_t reserved_1 : 14; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_ii_int0_config_s; +} sh_ii_int0_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_INT0_ENABLE" */ +/* SHub II Interrupt 0 Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_int0_enable_u { + mmr_t sh_ii_int0_enable_regval; + struct { + mmr_t ii_enable : 1; + mmr_t reserved_0 : 63; + } sh_ii_int0_enable_s; +} sh_ii_int0_enable_u_t; +#else +typedef union sh_ii_int0_enable_u { + mmr_t sh_ii_int0_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t ii_enable : 1; + } sh_ii_int0_enable_s; +} sh_ii_int0_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_INT1" */ +/* SHub II Interrupt 1 Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_int1_u { + mmr_t sh_ii_int1_regval; + struct { + mmr_t idx : 8; + mmr_t send : 1; + mmr_t reserved_0 : 55; + } sh_ii_int1_s; +} sh_ii_int1_u_t; +#else +typedef union sh_ii_int1_u { + mmr_t sh_ii_int1_regval; + struct { + mmr_t reserved_0 : 55; + mmr_t send : 1; + mmr_t idx : 8; + } sh_ii_int1_s; +} sh_ii_int1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_INT1_CONFIG" */ +/* SHub II Interrupt 1 Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_int1_config_u { + mmr_t sh_ii_int1_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 14; + } sh_ii_int1_config_s; +} sh_ii_int1_config_u_t; +#else +typedef union sh_ii_int1_config_u { + mmr_t sh_ii_int1_config_regval; + struct { + mmr_t reserved_1 : 14; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_ii_int1_config_s; +} sh_ii_int1_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_INT1_ENABLE" */ +/* SHub II Interrupt 1 Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_int1_enable_u { + mmr_t sh_ii_int1_enable_regval; + struct { + mmr_t ii_enable : 1; + mmr_t reserved_0 : 63; + } sh_ii_int1_enable_s; +} sh_ii_int1_enable_u_t; +#else +typedef union sh_ii_int1_enable_u { + mmr_t sh_ii_int1_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t ii_enable : 1; + } sh_ii_int1_enable_s; +} sh_ii_int1_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_INT_NODE_ID_CONFIG" */ +/* SHub Interrupt Node ID Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_int_node_id_config_u { + mmr_t sh_int_node_id_config_regval; + struct { + mmr_t node_id : 11; + mmr_t id_sel : 1; + mmr_t reserved_0 : 52; + } sh_int_node_id_config_s; +} sh_int_node_id_config_u_t; +#else 
+typedef union sh_int_node_id_config_u { + mmr_t sh_int_node_id_config_regval; + struct { + mmr_t reserved_0 : 52; + mmr_t id_sel : 1; + mmr_t node_id : 11; + } sh_int_node_id_config_s; +} sh_int_node_id_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_IPI_INT" */ +/* SHub Inter-Processor Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ipi_int_u { + mmr_t sh_ipi_int_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 3; + mmr_t send : 1; + } sh_ipi_int_s; +} sh_ipi_int_u_t; +#else +typedef union sh_ipi_int_u { + mmr_t sh_ipi_int_regval; + struct { + mmr_t send : 1; + mmr_t reserved_2 : 3; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_ipi_int_s; +} sh_ipi_int_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_IPI_INT_ENABLE" */ +/* SHub Inter-Processor Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ipi_int_enable_u { + mmr_t sh_ipi_int_enable_regval; + struct { + mmr_t pio_enable : 1; + mmr_t reserved_0 : 63; + } sh_ipi_int_enable_s; +} sh_ipi_int_enable_u_t; +#else +typedef union sh_ipi_int_enable_u { + mmr_t sh_ipi_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t pio_enable : 1; + } sh_ipi_int_enable_s; +} sh_ipi_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT0_CONFIG" */ +/* SHub Local Interrupt 0 Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int0_config_u { + mmr_t sh_local_int0_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_local_int0_config_s; +} sh_local_int0_config_u_t; +#else +typedef union sh_local_int0_config_u { + mmr_t sh_local_int0_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_local_int0_config_s; +} sh_local_int0_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT0_ENABLE" */ +/* SHub Local Interrupt 0 Enable */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int0_enable_u { + mmr_t sh_local_int0_enable_regval; + struct { + mmr_t pi_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t ii_hw_int : 1; + mmr_t pi_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t xn_uce_int : 1; + mmr_t reserved_0 : 1; + mmr_t system_shutdown_int : 1; + mmr_t uart_int : 1; + mmr_t l1_nmi_int : 1; + mmr_t stop_clock : 1; + mmr_t reserved_1 : 48; + } sh_local_int0_enable_s; +} sh_local_int0_enable_u_t; +#else +typedef union sh_local_int0_enable_u { + mmr_t sh_local_int0_enable_regval; + struct { + mmr_t reserved_1 : 48; + mmr_t stop_clock : 1; + mmr_t 
l1_nmi_int : 1; + mmr_t uart_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t reserved_0 : 1; + mmr_t xn_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t pi_ce_int : 1; + mmr_t ii_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t pi_hw_int : 1; + } sh_local_int0_enable_s; +} sh_local_int0_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT1_CONFIG" */ +/* SHub Local Interrupt 1 Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int1_config_u { + mmr_t sh_local_int1_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_local_int1_config_s; +} sh_local_int1_config_u_t; +#else +typedef union sh_local_int1_config_u { + mmr_t sh_local_int1_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_local_int1_config_s; +} sh_local_int1_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT1_ENABLE" */ +/* SHub Local Interrupt 1 Enable */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int1_enable_u { + mmr_t sh_local_int1_enable_regval; + struct { + mmr_t pi_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t ii_hw_int : 1; + mmr_t pi_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t xn_uce_int : 1; + mmr_t reserved_0 : 1; + mmr_t system_shutdown_int : 1; + mmr_t uart_int : 1; + mmr_t l1_nmi_int : 1; + mmr_t stop_clock : 1; + mmr_t reserved_1 : 48; + } sh_local_int1_enable_s; +} sh_local_int1_enable_u_t; +#else +typedef union sh_local_int1_enable_u { + mmr_t sh_local_int1_enable_regval; + struct { + mmr_t reserved_1 : 48; + mmr_t stop_clock : 1; + mmr_t l1_nmi_int : 1; + mmr_t uart_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t reserved_0 : 1; + mmr_t xn_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t pi_ce_int : 1; + mmr_t ii_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t pi_hw_int : 1; + } sh_local_int1_enable_s; +} sh_local_int1_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT2_CONFIG" */ +/* SHub Local Interrupt 2 Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int2_config_u { + mmr_t sh_local_int2_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_local_int2_config_s; +} sh_local_int2_config_u_t; +#else +typedef union sh_local_int2_config_u { + mmr_t sh_local_int2_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_local_int2_config_s; +} sh_local_int2_config_u_t; 
+#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT2_ENABLE" */ +/* SHub Local Interrupt 2 Enable */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int2_enable_u { + mmr_t sh_local_int2_enable_regval; + struct { + mmr_t pi_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t ii_hw_int : 1; + mmr_t pi_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t xn_uce_int : 1; + mmr_t reserved_0 : 1; + mmr_t system_shutdown_int : 1; + mmr_t uart_int : 1; + mmr_t l1_nmi_int : 1; + mmr_t stop_clock : 1; + mmr_t reserved_1 : 48; + } sh_local_int2_enable_s; +} sh_local_int2_enable_u_t; +#else +typedef union sh_local_int2_enable_u { + mmr_t sh_local_int2_enable_regval; + struct { + mmr_t reserved_1 : 48; + mmr_t stop_clock : 1; + mmr_t l1_nmi_int : 1; + mmr_t uart_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t reserved_0 : 1; + mmr_t xn_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t pi_ce_int : 1; + mmr_t ii_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t pi_hw_int : 1; + } sh_local_int2_enable_s; +} sh_local_int2_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT3_CONFIG" */ +/* SHub Local Interrupt 3 Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int3_config_u { + mmr_t sh_local_int3_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_local_int3_config_s; +} sh_local_int3_config_u_t; +#else +typedef union sh_local_int3_config_u { + mmr_t sh_local_int3_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_local_int3_config_s; +} sh_local_int3_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT3_ENABLE" */ +/* SHub Local Interrupt 3 Enable */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int3_enable_u { + mmr_t sh_local_int3_enable_regval; + struct { + mmr_t pi_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t ii_hw_int : 1; + mmr_t pi_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t xn_uce_int : 1; + mmr_t reserved_0 : 1; + mmr_t system_shutdown_int : 1; + mmr_t uart_int : 1; + mmr_t l1_nmi_int : 1; + mmr_t stop_clock : 1; + mmr_t reserved_1 : 48; + } sh_local_int3_enable_s; +} sh_local_int3_enable_u_t; +#else +typedef union sh_local_int3_enable_u { + mmr_t sh_local_int3_enable_regval; + struct { + mmr_t reserved_1 : 48; + mmr_t stop_clock : 1; + mmr_t l1_nmi_int : 1; + mmr_t uart_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t reserved_0 : 1; + mmr_t xn_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t pi_ce_int : 1; + mmr_t ii_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t md_hw_int : 1; 
+ mmr_t pi_hw_int : 1; + } sh_local_int3_enable_s; +} sh_local_int3_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT4_CONFIG" */ +/* SHub Local Interrupt 4 Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int4_config_u { + mmr_t sh_local_int4_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_local_int4_config_s; +} sh_local_int4_config_u_t; +#else +typedef union sh_local_int4_config_u { + mmr_t sh_local_int4_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_local_int4_config_s; +} sh_local_int4_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT4_ENABLE" */ +/* SHub Local Interrupt 4 Enable */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int4_enable_u { + mmr_t sh_local_int4_enable_regval; + struct { + mmr_t pi_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t ii_hw_int : 1; + mmr_t pi_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t xn_uce_int : 1; + mmr_t reserved_0 : 1; + mmr_t system_shutdown_int : 1; + mmr_t uart_int : 1; + mmr_t l1_nmi_int : 1; + mmr_t stop_clock : 1; + mmr_t reserved_1 : 48; + } sh_local_int4_enable_s; +} sh_local_int4_enable_u_t; +#else +typedef union sh_local_int4_enable_u { + mmr_t sh_local_int4_enable_regval; + struct { + mmr_t reserved_1 : 48; + mmr_t stop_clock : 1; + mmr_t l1_nmi_int : 1; + mmr_t uart_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t reserved_0 : 1; + mmr_t xn_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t pi_ce_int : 1; + mmr_t ii_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t pi_hw_int : 1; + } sh_local_int4_enable_s; +} sh_local_int4_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT5_CONFIG" */ +/* SHub Local Interrupt 5 Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int5_config_u { + mmr_t sh_local_int5_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_local_int5_config_s; +} sh_local_int5_config_u_t; +#else +typedef union sh_local_int5_config_u { + mmr_t sh_local_int5_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_local_int5_config_s; +} sh_local_int5_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LOCAL_INT5_ENABLE" */ +/* SHub Local Interrupt 5 Enable */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_local_int5_enable_u { + mmr_t 
sh_local_int5_enable_regval; + struct { + mmr_t pi_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t ii_hw_int : 1; + mmr_t pi_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t xn_uce_int : 1; + mmr_t reserved_0 : 1; + mmr_t system_shutdown_int : 1; + mmr_t uart_int : 1; + mmr_t l1_nmi_int : 1; + mmr_t stop_clock : 1; + mmr_t reserved_1 : 48; + } sh_local_int5_enable_s; +} sh_local_int5_enable_u_t; +#else +typedef union sh_local_int5_enable_u { + mmr_t sh_local_int5_enable_regval; + struct { + mmr_t reserved_1 : 48; + mmr_t stop_clock : 1; + mmr_t l1_nmi_int : 1; + mmr_t uart_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t reserved_0 : 1; + mmr_t xn_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t pi_ce_int : 1; + mmr_t ii_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t pi_hw_int : 1; + } sh_local_int5_enable_s; +} sh_local_int5_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC0_ERR_INT_CONFIG" */ +/* SHub Processor 0 Error Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc0_err_int_config_u { + mmr_t sh_proc0_err_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_proc0_err_int_config_s; +} sh_proc0_err_int_config_u_t; +#else +typedef union sh_proc0_err_int_config_u { + mmr_t sh_proc0_err_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_proc0_err_int_config_s; +} sh_proc0_err_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC1_ERR_INT_CONFIG" */ +/* SHub Processor 1 Error Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc1_err_int_config_u { + mmr_t sh_proc1_err_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_proc1_err_int_config_s; +} sh_proc1_err_int_config_u_t; +#else +typedef union sh_proc1_err_int_config_u { + mmr_t sh_proc1_err_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_proc1_err_int_config_s; +} sh_proc1_err_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC2_ERR_INT_CONFIG" */ +/* SHub Processor 2 Error Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc2_err_int_config_u { + mmr_t sh_proc2_err_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_proc2_err_int_config_s; +} sh_proc2_err_int_config_u_t; +#else +typedef union sh_proc2_err_int_config_u { + 
mmr_t sh_proc2_err_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_proc2_err_int_config_s; +} sh_proc2_err_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC3_ERR_INT_CONFIG" */ +/* SHub Processor 3 Error Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc3_err_int_config_u { + mmr_t sh_proc3_err_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_proc3_err_int_config_s; +} sh_proc3_err_int_config_u_t; +#else +typedef union sh_proc3_err_int_config_u { + mmr_t sh_proc3_err_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_proc3_err_int_config_s; +} sh_proc3_err_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC0_ADV_INT_CONFIG" */ +/* SHub Processor 0 Advisory Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc0_adv_int_config_u { + mmr_t sh_proc0_adv_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_proc0_adv_int_config_s; +} sh_proc0_adv_int_config_u_t; +#else +typedef union sh_proc0_adv_int_config_u { + mmr_t sh_proc0_adv_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_proc0_adv_int_config_s; +} sh_proc0_adv_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC1_ADV_INT_CONFIG" */ +/* SHub Processor 1 Advisory Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc1_adv_int_config_u { + mmr_t sh_proc1_adv_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_proc1_adv_int_config_s; +} sh_proc1_adv_int_config_u_t; +#else +typedef union sh_proc1_adv_int_config_u { + mmr_t sh_proc1_adv_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_proc1_adv_int_config_s; +} sh_proc1_adv_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC2_ADV_INT_CONFIG" */ +/* SHub Processor 2 Advisory Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc2_adv_int_config_u { + mmr_t sh_proc2_adv_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; 
+ mmr_t reserved_2 : 4; + } sh_proc2_adv_int_config_s; +} sh_proc2_adv_int_config_u_t; +#else +typedef union sh_proc2_adv_int_config_u { + mmr_t sh_proc2_adv_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_proc2_adv_int_config_s; +} sh_proc2_adv_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC3_ADV_INT_CONFIG" */ +/* SHub Processor 3 Advisory Interrupt Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc3_adv_int_config_u { + mmr_t sh_proc3_adv_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_proc3_adv_int_config_s; +} sh_proc3_adv_int_config_u_t; +#else +typedef union sh_proc3_adv_int_config_u { + mmr_t sh_proc3_adv_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_proc3_adv_int_config_s; +} sh_proc3_adv_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC0_ERR_INT_ENABLE" */ +/* SHub Processor 0 Error Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc0_err_int_enable_u { + mmr_t sh_proc0_err_int_enable_regval; + struct { + mmr_t proc0_err_enable : 1; + mmr_t reserved_0 : 63; + } sh_proc0_err_int_enable_s; +} sh_proc0_err_int_enable_u_t; +#else +typedef union sh_proc0_err_int_enable_u { + mmr_t sh_proc0_err_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t proc0_err_enable : 1; + } sh_proc0_err_int_enable_s; +} sh_proc0_err_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC1_ERR_INT_ENABLE" */ +/* SHub Processor 1 Error Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc1_err_int_enable_u { + mmr_t sh_proc1_err_int_enable_regval; + struct { + mmr_t proc1_err_enable : 1; + mmr_t reserved_0 : 63; + } sh_proc1_err_int_enable_s; +} sh_proc1_err_int_enable_u_t; +#else +typedef union sh_proc1_err_int_enable_u { + mmr_t sh_proc1_err_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t proc1_err_enable : 1; + } sh_proc1_err_int_enable_s; +} sh_proc1_err_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC2_ERR_INT_ENABLE" */ +/* SHub Processor 2 Error Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc2_err_int_enable_u { + mmr_t sh_proc2_err_int_enable_regval; + struct { + mmr_t proc2_err_enable : 1; + mmr_t reserved_0 : 63; + } sh_proc2_err_int_enable_s; +} sh_proc2_err_int_enable_u_t; +#else +typedef union sh_proc2_err_int_enable_u { + mmr_t sh_proc2_err_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t proc2_err_enable : 1; + } sh_proc2_err_int_enable_s; +} sh_proc2_err_int_enable_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_PROC3_ERR_INT_ENABLE" */ +/* SHub Processor 3 Error Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc3_err_int_enable_u { + mmr_t sh_proc3_err_int_enable_regval; + struct { + mmr_t proc3_err_enable : 1; + mmr_t reserved_0 : 63; + } sh_proc3_err_int_enable_s; +} sh_proc3_err_int_enable_u_t; +#else +typedef union sh_proc3_err_int_enable_u { + mmr_t sh_proc3_err_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t proc3_err_enable : 1; + } sh_proc3_err_int_enable_s; +} sh_proc3_err_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC0_ADV_INT_ENABLE" */ +/* SHub Processor 0 Advisory Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc0_adv_int_enable_u { + mmr_t sh_proc0_adv_int_enable_regval; + struct { + mmr_t proc0_adv_enable : 1; + mmr_t reserved_0 : 63; + } sh_proc0_adv_int_enable_s; +} sh_proc0_adv_int_enable_u_t; +#else +typedef union sh_proc0_adv_int_enable_u { + mmr_t sh_proc0_adv_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t proc0_adv_enable : 1; + } sh_proc0_adv_int_enable_s; +} sh_proc0_adv_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC1_ADV_INT_ENABLE" */ +/* SHub Processor 1 Advisory Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc1_adv_int_enable_u { + mmr_t sh_proc1_adv_int_enable_regval; + struct { + mmr_t proc1_adv_enable : 1; + mmr_t reserved_0 : 63; + } sh_proc1_adv_int_enable_s; +} sh_proc1_adv_int_enable_u_t; +#else +typedef union sh_proc1_adv_int_enable_u { + mmr_t sh_proc1_adv_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t proc1_adv_enable : 1; + } sh_proc1_adv_int_enable_s; +} sh_proc1_adv_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC2_ADV_INT_ENABLE" */ +/* SHub Processor 2 Advisory Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc2_adv_int_enable_u { + mmr_t sh_proc2_adv_int_enable_regval; + struct { + mmr_t proc2_adv_enable : 1; + mmr_t reserved_0 : 63; + } sh_proc2_adv_int_enable_s; +} sh_proc2_adv_int_enable_u_t; +#else +typedef union sh_proc2_adv_int_enable_u { + mmr_t sh_proc2_adv_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t proc2_adv_enable : 1; + } sh_proc2_adv_int_enable_s; +} sh_proc2_adv_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC3_ADV_INT_ENABLE" */ +/* SHub Processor 3 Advisory Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc3_adv_int_enable_u { + mmr_t sh_proc3_adv_int_enable_regval; + struct { + mmr_t proc3_adv_enable : 1; + mmr_t reserved_0 : 63; + } sh_proc3_adv_int_enable_s; +} sh_proc3_adv_int_enable_u_t; +#else +typedef union sh_proc3_adv_int_enable_u { + mmr_t sh_proc3_adv_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t proc3_adv_enable : 1; + } sh_proc3_adv_int_enable_s; +} 
sh_proc3_adv_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROFILE_INT_CONFIG" */ +/* SHub Profile Interrupt Configuration Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_profile_int_config_u { + mmr_t sh_profile_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_profile_int_config_s; +} sh_profile_int_config_u_t; +#else +typedef union sh_profile_int_config_u { + mmr_t sh_profile_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_profile_int_config_s; +} sh_profile_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROFILE_INT_ENABLE" */ +/* SHub Profile Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_profile_int_enable_u { + mmr_t sh_profile_int_enable_regval; + struct { + mmr_t profile_enable : 1; + mmr_t reserved_0 : 63; + } sh_profile_int_enable_s; +} sh_profile_int_enable_u_t; +#else +typedef union sh_profile_int_enable_u { + mmr_t sh_profile_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t profile_enable : 1; + } sh_profile_int_enable_s; +} sh_profile_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_RTC0_INT_CONFIG" */ +/* SHub RTC 0 Interrupt Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc0_int_config_u { + mmr_t sh_rtc0_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_rtc0_int_config_s; +} sh_rtc0_int_config_u_t; +#else +typedef union sh_rtc0_int_config_u { + mmr_t sh_rtc0_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_rtc0_int_config_s; +} sh_rtc0_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_RTC0_INT_ENABLE" */ +/* SHub RTC 0 Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc0_int_enable_u { + mmr_t sh_rtc0_int_enable_regval; + struct { + mmr_t rtc0_enable : 1; + mmr_t reserved_0 : 63; + } sh_rtc0_int_enable_s; +} sh_rtc0_int_enable_u_t; +#else +typedef union sh_rtc0_int_enable_u { + mmr_t sh_rtc0_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t rtc0_enable : 1; + } sh_rtc0_int_enable_s; +} sh_rtc0_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_RTC1_INT_CONFIG" */ +/* SHub RTC 1 Interrupt Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc1_int_config_u { + mmr_t sh_rtc1_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t 
reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_rtc1_int_config_s; +} sh_rtc1_int_config_u_t; +#else +typedef union sh_rtc1_int_config_u { + mmr_t sh_rtc1_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_rtc1_int_config_s; +} sh_rtc1_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_RTC1_INT_ENABLE" */ +/* SHub RTC 1 Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc1_int_enable_u { + mmr_t sh_rtc1_int_enable_regval; + struct { + mmr_t rtc1_enable : 1; + mmr_t reserved_0 : 63; + } sh_rtc1_int_enable_s; +} sh_rtc1_int_enable_u_t; +#else +typedef union sh_rtc1_int_enable_u { + mmr_t sh_rtc1_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t rtc1_enable : 1; + } sh_rtc1_int_enable_s; +} sh_rtc1_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_RTC2_INT_CONFIG" */ +/* SHub RTC 2 Interrupt Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc2_int_config_u { + mmr_t sh_rtc2_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_rtc2_int_config_s; +} sh_rtc2_int_config_u_t; +#else +typedef union sh_rtc2_int_config_u { + mmr_t sh_rtc2_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_rtc2_int_config_s; +} sh_rtc2_int_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_RTC2_INT_ENABLE" */ +/* SHub RTC 2 Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc2_int_enable_u { + mmr_t sh_rtc2_int_enable_regval; + struct { + mmr_t rtc2_enable : 1; + mmr_t reserved_0 : 63; + } sh_rtc2_int_enable_s; +} sh_rtc2_int_enable_u_t; +#else +typedef union sh_rtc2_int_enable_u { + mmr_t sh_rtc2_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t rtc2_enable : 1; + } sh_rtc2_int_enable_s; +} sh_rtc2_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_RTC3_INT_CONFIG" */ +/* SHub RTC 3 Interrupt Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc3_int_config_u { + mmr_t sh_rtc3_int_config_regval; + struct { + mmr_t type : 3; + mmr_t agt : 1; + mmr_t pid : 16; + mmr_t reserved_0 : 1; + mmr_t base : 29; + mmr_t reserved_1 : 2; + mmr_t idx : 8; + mmr_t reserved_2 : 4; + } sh_rtc3_int_config_s; +} sh_rtc3_int_config_u_t; +#else +typedef union sh_rtc3_int_config_u { + mmr_t sh_rtc3_int_config_regval; + struct { + mmr_t reserved_2 : 4; + mmr_t idx : 8; + mmr_t reserved_1 : 2; + mmr_t base : 29; + mmr_t reserved_0 : 1; + mmr_t pid : 16; + mmr_t agt : 1; + mmr_t type : 3; + } sh_rtc3_int_config_s; +} sh_rtc3_int_config_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_RTC3_INT_ENABLE" */ +/* SHub RTC 3 Interrupt Enable Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc3_int_enable_u { + mmr_t sh_rtc3_int_enable_regval; + struct { + mmr_t rtc3_enable : 1; + mmr_t reserved_0 : 63; + } sh_rtc3_int_enable_s; +} sh_rtc3_int_enable_u_t; +#else +typedef union sh_rtc3_int_enable_u { + mmr_t sh_rtc3_int_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t rtc3_enable : 1; + } sh_rtc3_int_enable_s; +} sh_rtc3_int_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_EVENT_OCCURRED" */ +/* SHub Interrupt Event Occurred */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_event_occurred_u { + mmr_t sh_event_occurred_regval; + struct { + mmr_t pi_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t ii_hw_int : 1; + mmr_t pi_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t xn_uce_int : 1; + mmr_t proc0_adv_int : 1; + mmr_t proc1_adv_int : 1; + mmr_t proc2_adv_int : 1; + mmr_t proc3_adv_int : 1; + mmr_t proc0_err_int : 1; + mmr_t proc1_err_int : 1; + mmr_t proc2_err_int : 1; + mmr_t proc3_err_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t uart_int : 1; + mmr_t l1_nmi_int : 1; + mmr_t stop_clock : 1; + mmr_t rtc0_int : 1; + mmr_t rtc1_int : 1; + mmr_t rtc2_int : 1; + mmr_t rtc3_int : 1; + mmr_t profile_int : 1; + mmr_t ipi_int : 1; + mmr_t ii_int0 : 1; + mmr_t ii_int1 : 1; + mmr_t reserved_0 : 33; + } sh_event_occurred_s; +} sh_event_occurred_u_t; +#else +typedef union sh_event_occurred_u { + mmr_t sh_event_occurred_regval; + struct { + mmr_t reserved_0 : 33; + mmr_t ii_int1 : 1; + mmr_t ii_int0 : 1; + mmr_t ipi_int : 1; + mmr_t profile_int : 1; + mmr_t rtc3_int : 1; + mmr_t rtc2_int : 1; + mmr_t rtc1_int : 1; + mmr_t rtc0_int : 1; + mmr_t stop_clock : 1; + mmr_t l1_nmi_int : 1; + mmr_t uart_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t proc3_err_int : 1; + mmr_t proc2_err_int : 1; + mmr_t proc1_err_int : 1; + mmr_t proc0_err_int : 1; + mmr_t proc3_adv_int : 1; + mmr_t proc2_adv_int : 1; + mmr_t proc1_adv_int : 1; + mmr_t proc0_adv_int : 1; + mmr_t xn_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t pi_ce_int : 1; + mmr_t ii_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t pi_hw_int : 1; + } sh_event_occurred_s; +} sh_event_occurred_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_EVENT_OVERFLOW" */ +/* SHub Interrupt Event Occurred Overflow */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_event_overflow_u { + mmr_t sh_event_overflow_regval; + struct { + mmr_t pi_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t ii_hw_int : 1; + mmr_t pi_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t xn_uce_int : 1; + mmr_t proc0_adv_int : 1; + mmr_t proc1_adv_int : 1; + mmr_t proc2_adv_int : 1; + mmr_t proc3_adv_int : 1; + mmr_t proc0_err_int : 1; + mmr_t proc1_err_int : 1; + mmr_t proc2_err_int : 1; + mmr_t proc3_err_int : 1; + mmr_t 
system_shutdown_int : 1; + mmr_t uart_int : 1; + mmr_t l1_nmi_int : 1; + mmr_t stop_clock : 1; + mmr_t rtc0_int : 1; + mmr_t rtc1_int : 1; + mmr_t rtc2_int : 1; + mmr_t rtc3_int : 1; + mmr_t profile_int : 1; + mmr_t reserved_0 : 36; + } sh_event_overflow_s; +} sh_event_overflow_u_t; +#else +typedef union sh_event_overflow_u { + mmr_t sh_event_overflow_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t profile_int : 1; + mmr_t rtc3_int : 1; + mmr_t rtc2_int : 1; + mmr_t rtc1_int : 1; + mmr_t rtc0_int : 1; + mmr_t stop_clock : 1; + mmr_t l1_nmi_int : 1; + mmr_t uart_int : 1; + mmr_t system_shutdown_int : 1; + mmr_t proc3_err_int : 1; + mmr_t proc2_err_int : 1; + mmr_t proc1_err_int : 1; + mmr_t proc0_err_int : 1; + mmr_t proc3_adv_int : 1; + mmr_t proc2_adv_int : 1; + mmr_t proc1_adv_int : 1; + mmr_t proc0_adv_int : 1; + mmr_t xn_uce_int : 1; + mmr_t md_uce_int : 1; + mmr_t pi_uce_int : 1; + mmr_t xn_ce_int : 1; + mmr_t md_ce_int : 1; + mmr_t pi_ce_int : 1; + mmr_t ii_hw_int : 1; + mmr_t lb_hw_int : 1; + mmr_t xn_hw_int : 1; + mmr_t md_hw_int : 1; + mmr_t pi_hw_int : 1; + } sh_event_overflow_s; +} sh_event_overflow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_TIME" */ +/* Junk Bus Timing */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_junk_bus_time_u { + mmr_t sh_junk_bus_time_regval; + struct { + mmr_t fprom_setup_hold : 8; + mmr_t fprom_enable : 8; + mmr_t uart_setup_hold : 8; + mmr_t uart_enable : 8; + mmr_t reserved_0 : 32; + } sh_junk_bus_time_s; +} sh_junk_bus_time_u_t; +#else +typedef union sh_junk_bus_time_u { + mmr_t sh_junk_bus_time_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t uart_enable : 8; + mmr_t uart_setup_hold : 8; + mmr_t fprom_enable : 8; + mmr_t fprom_setup_hold : 8; + } sh_junk_bus_time_s; +} sh_junk_bus_time_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JUNK_LATCH_TIME" */ +/* Junk Bus Latch Timing */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_junk_latch_time_u { + mmr_t sh_junk_latch_time_regval; + struct { + mmr_t setup_hold : 3; + mmr_t reserved_0 : 61; + } sh_junk_latch_time_s; +} sh_junk_latch_time_u_t; +#else +typedef union sh_junk_latch_time_u { + mmr_t sh_junk_latch_time_regval; + struct { + mmr_t reserved_0 : 61; + mmr_t setup_hold : 3; + } sh_junk_latch_time_s; +} sh_junk_latch_time_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JUNK_NACK_RESET" */ +/* Junk Bus Nack Counter Reset */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_junk_nack_reset_u { + mmr_t sh_junk_nack_reset_regval; + struct { + mmr_t pulse : 1; + mmr_t reserved_0 : 63; + } sh_junk_nack_reset_s; +} sh_junk_nack_reset_u_t; +#else +typedef union sh_junk_nack_reset_u { + mmr_t sh_junk_nack_reset_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t pulse : 1; + } sh_junk_nack_reset_s; +} sh_junk_nack_reset_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_LED0" */ +/* Junk Bus LED0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_junk_bus_led0_u { + mmr_t sh_junk_bus_led0_regval; + struct { + mmr_t led0_data : 8; + mmr_t reserved_0 : 56; + } 
sh_junk_bus_led0_s; +} sh_junk_bus_led0_u_t; +#else +typedef union sh_junk_bus_led0_u { + mmr_t sh_junk_bus_led0_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t led0_data : 8; + } sh_junk_bus_led0_s; +} sh_junk_bus_led0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_LED1" */ +/* Junk Bus LED1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_junk_bus_led1_u { + mmr_t sh_junk_bus_led1_regval; + struct { + mmr_t led1_data : 8; + mmr_t reserved_0 : 56; + } sh_junk_bus_led1_s; +} sh_junk_bus_led1_u_t; +#else +typedef union sh_junk_bus_led1_u { + mmr_t sh_junk_bus_led1_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t led1_data : 8; + } sh_junk_bus_led1_s; +} sh_junk_bus_led1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_LED2" */ +/* Junk Bus LED2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_junk_bus_led2_u { + mmr_t sh_junk_bus_led2_regval; + struct { + mmr_t led2_data : 8; + mmr_t reserved_0 : 56; + } sh_junk_bus_led2_s; +} sh_junk_bus_led2_u_t; +#else +typedef union sh_junk_bus_led2_u { + mmr_t sh_junk_bus_led2_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t led2_data : 8; + } sh_junk_bus_led2_s; +} sh_junk_bus_led2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JUNK_BUS_LED3" */ +/* Junk Bus LED3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_junk_bus_led3_u { + mmr_t sh_junk_bus_led3_regval; + struct { + mmr_t led3_data : 8; + mmr_t reserved_0 : 56; + } sh_junk_bus_led3_s; +} sh_junk_bus_led3_u_t; +#else +typedef union sh_junk_bus_led3_u { + mmr_t sh_junk_bus_led3_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t led3_data : 8; + } sh_junk_bus_led3_s; +} sh_junk_bus_led3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JUNK_ERROR_STATUS" */ +/* Junk Bus Error Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_junk_error_status_u { + mmr_t sh_junk_error_status_regval; + struct { + mmr_t address : 47; + mmr_t reserved_0 : 1; + mmr_t cmd : 8; + mmr_t mode : 1; + mmr_t status : 4; + mmr_t reserved_1 : 3; + } sh_junk_error_status_s; +} sh_junk_error_status_u_t; +#else +typedef union sh_junk_error_status_u { + mmr_t sh_junk_error_status_regval; + struct { + mmr_t reserved_1 : 3; + mmr_t status : 4; + mmr_t mode : 1; + mmr_t cmd : 8; + mmr_t reserved_0 : 1; + mmr_t address : 47; + } sh_junk_error_status_s; +} sh_junk_error_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_STAT" */ +/* This register describes the LLP status. 
*/ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_stat_u { + mmr_t sh_ni0_llp_stat_regval; + struct { + mmr_t link_reset_state : 4; + mmr_t reserved_0 : 60; + } sh_ni0_llp_stat_s; +} sh_ni0_llp_stat_u_t; +#else +typedef union sh_ni0_llp_stat_u { + mmr_t sh_ni0_llp_stat_regval; + struct { + mmr_t reserved_0 : 60; + mmr_t link_reset_state : 4; + } sh_ni0_llp_stat_s; +} sh_ni0_llp_stat_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_RESET" */ +/* Writing issues a reset to the network interface */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_reset_u { + mmr_t sh_ni0_llp_reset_regval; + struct { + mmr_t link : 1; + mmr_t warm : 1; + mmr_t reserved_0 : 62; + } sh_ni0_llp_reset_s; +} sh_ni0_llp_reset_u_t; +#else +typedef union sh_ni0_llp_reset_u { + mmr_t sh_ni0_llp_reset_regval; + struct { + mmr_t reserved_0 : 62; + mmr_t warm : 1; + mmr_t link : 1; + } sh_ni0_llp_reset_s; +} sh_ni0_llp_reset_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_RESET_EN" */ +/* Controls LLP warm reset propagation */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_reset_en_u { + mmr_t sh_ni0_llp_reset_en_regval; + struct { + mmr_t ok : 1; + mmr_t reserved_0 : 63; + } sh_ni0_llp_reset_en_s; +} sh_ni0_llp_reset_en_u_t; +#else +typedef union sh_ni0_llp_reset_en_u { + mmr_t sh_ni0_llp_reset_en_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t ok : 1; + } sh_ni0_llp_reset_en_s; +} sh_ni0_llp_reset_en_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CHAN_MODE" */ +/* Sets the signaling mode of LLP and channel */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_chan_mode_u { + mmr_t sh_ni0_llp_chan_mode_regval; + struct { + mmr_t bitmode32 : 1; + mmr_t ac_encode : 1; + mmr_t enable_tuning : 1; + mmr_t enable_rmt_ft_upd : 1; + mmr_t enable_clkquad : 1; + mmr_t reserved_0 : 59; + } sh_ni0_llp_chan_mode_s; +} sh_ni0_llp_chan_mode_u_t; +#else +typedef union sh_ni0_llp_chan_mode_u { + mmr_t sh_ni0_llp_chan_mode_regval; + struct { + mmr_t reserved_0 : 59; + mmr_t enable_clkquad : 1; + mmr_t enable_rmt_ft_upd : 1; + mmr_t enable_tuning : 1; + mmr_t ac_encode : 1; + mmr_t bitmode32 : 1; + } sh_ni0_llp_chan_mode_s; +} sh_ni0_llp_chan_mode_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CONFIG" */ +/* Sets the configuration of LLP and channel */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_config_u { + mmr_t sh_ni0_llp_config_regval; + struct { + mmr_t maxburst : 10; + mmr_t maxretry : 10; + mmr_t nulltimeout : 6; + mmr_t ftu_time : 12; + mmr_t reserved_0 : 26; + } sh_ni0_llp_config_s; +} sh_ni0_llp_config_u_t; +#else +typedef union sh_ni0_llp_config_u { + mmr_t sh_ni0_llp_config_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t ftu_time : 12; + mmr_t nulltimeout : 6; + mmr_t maxretry : 10; + mmr_t maxburst : 10; + } sh_ni0_llp_config_s; +} sh_ni0_llp_config_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_NI0_LLP_TEST_CTL" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_test_ctl_u { + mmr_t sh_ni0_llp_test_ctl_regval; + struct { + mmr_t pattern : 40; + mmr_t send_test_mode : 2; + mmr_t reserved_0 : 2; + mmr_t wire_sel : 6; + mmr_t reserved_1 : 2; + mmr_t lfsr_mode : 2; + mmr_t noise_mode : 2; + mmr_t armcapture : 1; + mmr_t capturecbonly : 1; + mmr_t sendcberror : 1; + mmr_t sendsnerror : 1; + mmr_t fakesnerror : 1; + mmr_t captured : 1; + mmr_t cberror : 1; + mmr_t reserved_2 : 1; + } sh_ni0_llp_test_ctl_s; +} sh_ni0_llp_test_ctl_u_t; +#else +typedef union sh_ni0_llp_test_ctl_u { + mmr_t sh_ni0_llp_test_ctl_regval; + struct { + mmr_t reserved_2 : 1; + mmr_t cberror : 1; + mmr_t captured : 1; + mmr_t fakesnerror : 1; + mmr_t sendsnerror : 1; + mmr_t sendcberror : 1; + mmr_t capturecbonly : 1; + mmr_t armcapture : 1; + mmr_t noise_mode : 2; + mmr_t lfsr_mode : 2; + mmr_t reserved_1 : 2; + mmr_t wire_sel : 6; + mmr_t reserved_0 : 2; + mmr_t send_test_mode : 2; + mmr_t pattern : 40; + } sh_ni0_llp_test_ctl_s; +} sh_ni0_llp_test_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CAPT_WD1" */ +/* low order 64-bit captured word */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_capt_wd1_u { + mmr_t sh_ni0_llp_capt_wd1_regval; + struct { + mmr_t data : 64; + } sh_ni0_llp_capt_wd1_s; +} sh_ni0_llp_capt_wd1_u_t; +#else +typedef union sh_ni0_llp_capt_wd1_u { + mmr_t sh_ni0_llp_capt_wd1_regval; + struct { + mmr_t data : 64; + } sh_ni0_llp_capt_wd1_s; +} sh_ni0_llp_capt_wd1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CAPT_WD2" */ +/* high order 64-bit captured word */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_capt_wd2_u { + mmr_t sh_ni0_llp_capt_wd2_regval; + struct { + mmr_t data : 64; + } sh_ni0_llp_capt_wd2_s; +} sh_ni0_llp_capt_wd2_u_t; +#else +typedef union sh_ni0_llp_capt_wd2_u { + mmr_t sh_ni0_llp_capt_wd2_regval; + struct { + mmr_t data : 64; + } sh_ni0_llp_capt_wd2_s; +} sh_ni0_llp_capt_wd2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_CAPT_SBCB" */ +/* captured sideband, sequence, and CRC */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_llp_capt_sbcb_u { + mmr_t sh_ni0_llp_capt_sbcb_regval; + struct { + mmr_t capturedrcvsbsn : 16; + mmr_t capturedrcvcrc : 16; + mmr_t sentallcberrors : 1; + mmr_t sentallsnerrors : 1; + mmr_t fakedallsnerrors : 1; + mmr_t chargeoverflow : 1; + mmr_t chargeunderflow : 1; + mmr_t reserved_0 : 27; + } sh_ni0_llp_capt_sbcb_s; +} sh_ni0_llp_capt_sbcb_u_t; +#else +typedef union sh_ni0_llp_capt_sbcb_u { + mmr_t sh_ni0_llp_capt_sbcb_regval; + struct { + mmr_t reserved_0 : 27; + mmr_t chargeunderflow : 1; + mmr_t chargeoverflow : 1; + mmr_t fakedallsnerrors : 1; + mmr_t sentallsnerrors : 1; + mmr_t sentallcberrors : 1; + mmr_t capturedrcvcrc : 16; + mmr_t capturedrcvsbsn : 16; + } sh_ni0_llp_capt_sbcb_s; +} sh_ni0_llp_capt_sbcb_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LLP_ERR" */ +/* ==================================================================== */ + +#ifdef 
LITTLE_ENDIAN +typedef union sh_ni0_llp_err_u { + mmr_t sh_ni0_llp_err_regval; + struct { + mmr_t rx_sn_err_count : 8; + mmr_t rx_cb_err_count : 8; + mmr_t retry_count : 8; + mmr_t retry_timeout : 1; + mmr_t rcv_link_reset : 1; + mmr_t squash : 1; + mmr_t power_not_ok : 1; + mmr_t wire_cnt : 24; + mmr_t wire_overflow : 1; + mmr_t reserved_0 : 11; + } sh_ni0_llp_err_s; +} sh_ni0_llp_err_u_t; +#else +typedef union sh_ni0_llp_err_u { + mmr_t sh_ni0_llp_err_regval; + struct { + mmr_t reserved_0 : 11; + mmr_t wire_overflow : 1; + mmr_t wire_cnt : 24; + mmr_t power_not_ok : 1; + mmr_t squash : 1; + mmr_t rcv_link_reset : 1; + mmr_t retry_timeout : 1; + mmr_t retry_count : 8; + mmr_t rx_cb_err_count : 8; + mmr_t rx_sn_err_count : 8; + } sh_ni0_llp_err_s; +} sh_ni0_llp_err_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_STAT" */ +/* This register describes the LLP status. */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_stat_u { + mmr_t sh_ni1_llp_stat_regval; + struct { + mmr_t link_reset_state : 4; + mmr_t reserved_0 : 60; + } sh_ni1_llp_stat_s; +} sh_ni1_llp_stat_u_t; +#else +typedef union sh_ni1_llp_stat_u { + mmr_t sh_ni1_llp_stat_regval; + struct { + mmr_t reserved_0 : 60; + mmr_t link_reset_state : 4; + } sh_ni1_llp_stat_s; +} sh_ni1_llp_stat_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_RESET" */ +/* Writing issues a reset to the network interface */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_reset_u { + mmr_t sh_ni1_llp_reset_regval; + struct { + mmr_t link : 1; + mmr_t warm : 1; + mmr_t reserved_0 : 62; + } sh_ni1_llp_reset_s; +} sh_ni1_llp_reset_u_t; +#else +typedef union sh_ni1_llp_reset_u { + mmr_t sh_ni1_llp_reset_regval; + struct { + mmr_t reserved_0 : 62; + mmr_t warm : 1; + mmr_t link : 1; + } sh_ni1_llp_reset_s; +} sh_ni1_llp_reset_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_RESET_EN" */ +/* Controls LLP warm reset propagation */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_reset_en_u { + mmr_t sh_ni1_llp_reset_en_regval; + struct { + mmr_t ok : 1; + mmr_t reserved_0 : 63; + } sh_ni1_llp_reset_en_s; +} sh_ni1_llp_reset_en_u_t; +#else +typedef union sh_ni1_llp_reset_en_u { + mmr_t sh_ni1_llp_reset_en_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t ok : 1; + } sh_ni1_llp_reset_en_s; +} sh_ni1_llp_reset_en_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CHAN_MODE" */ +/* Sets the signaling mode of LLP and channel */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_chan_mode_u { + mmr_t sh_ni1_llp_chan_mode_regval; + struct { + mmr_t bitmode32 : 1; + mmr_t ac_encode : 1; + mmr_t enable_tuning : 1; + mmr_t enable_rmt_ft_upd : 1; + mmr_t enable_clkquad : 1; + mmr_t reserved_0 : 59; + } sh_ni1_llp_chan_mode_s; +} sh_ni1_llp_chan_mode_u_t; +#else +typedef union sh_ni1_llp_chan_mode_u { + mmr_t sh_ni1_llp_chan_mode_regval; + struct { + mmr_t reserved_0 : 59; + mmr_t enable_clkquad : 1; + mmr_t enable_rmt_ft_upd : 1; + mmr_t enable_tuning : 1; + mmr_t ac_encode : 1; + mmr_t bitmode32 : 1; + } 
sh_ni1_llp_chan_mode_s; +} sh_ni1_llp_chan_mode_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CONFIG" */ +/* Sets the configuration of LLP and channel */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_config_u { + mmr_t sh_ni1_llp_config_regval; + struct { + mmr_t maxburst : 10; + mmr_t maxretry : 10; + mmr_t nulltimeout : 6; + mmr_t ftu_time : 12; + mmr_t reserved_0 : 26; + } sh_ni1_llp_config_s; +} sh_ni1_llp_config_u_t; +#else +typedef union sh_ni1_llp_config_u { + mmr_t sh_ni1_llp_config_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t ftu_time : 12; + mmr_t nulltimeout : 6; + mmr_t maxretry : 10; + mmr_t maxburst : 10; + } sh_ni1_llp_config_s; +} sh_ni1_llp_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_TEST_CTL" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_test_ctl_u { + mmr_t sh_ni1_llp_test_ctl_regval; + struct { + mmr_t pattern : 40; + mmr_t send_test_mode : 2; + mmr_t reserved_0 : 2; + mmr_t wire_sel : 6; + mmr_t reserved_1 : 2; + mmr_t lfsr_mode : 2; + mmr_t noise_mode : 2; + mmr_t armcapture : 1; + mmr_t capturecbonly : 1; + mmr_t sendcberror : 1; + mmr_t sendsnerror : 1; + mmr_t fakesnerror : 1; + mmr_t captured : 1; + mmr_t cberror : 1; + mmr_t reserved_2 : 1; + } sh_ni1_llp_test_ctl_s; +} sh_ni1_llp_test_ctl_u_t; +#else +typedef union sh_ni1_llp_test_ctl_u { + mmr_t sh_ni1_llp_test_ctl_regval; + struct { + mmr_t reserved_2 : 1; + mmr_t cberror : 1; + mmr_t captured : 1; + mmr_t fakesnerror : 1; + mmr_t sendsnerror : 1; + mmr_t sendcberror : 1; + mmr_t capturecbonly : 1; + mmr_t armcapture : 1; + mmr_t noise_mode : 2; + mmr_t lfsr_mode : 2; + mmr_t reserved_1 : 2; + mmr_t wire_sel : 6; + mmr_t reserved_0 : 2; + mmr_t send_test_mode : 2; + mmr_t pattern : 40; + } sh_ni1_llp_test_ctl_s; +} sh_ni1_llp_test_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CAPT_WD1" */ +/* low order 64-bit captured word */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_capt_wd1_u { + mmr_t sh_ni1_llp_capt_wd1_regval; + struct { + mmr_t data : 64; + } sh_ni1_llp_capt_wd1_s; +} sh_ni1_llp_capt_wd1_u_t; +#else +typedef union sh_ni1_llp_capt_wd1_u { + mmr_t sh_ni1_llp_capt_wd1_regval; + struct { + mmr_t data : 64; + } sh_ni1_llp_capt_wd1_s; +} sh_ni1_llp_capt_wd1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CAPT_WD2" */ +/* high order 64-bit captured word */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_capt_wd2_u { + mmr_t sh_ni1_llp_capt_wd2_regval; + struct { + mmr_t data : 64; + } sh_ni1_llp_capt_wd2_s; +} sh_ni1_llp_capt_wd2_u_t; +#else +typedef union sh_ni1_llp_capt_wd2_u { + mmr_t sh_ni1_llp_capt_wd2_regval; + struct { + mmr_t data : 64; + } sh_ni1_llp_capt_wd2_s; +} sh_ni1_llp_capt_wd2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_CAPT_SBCB" */ +/* captured sideband, sequence, and CRC */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_capt_sbcb_u 
{ + mmr_t sh_ni1_llp_capt_sbcb_regval; + struct { + mmr_t capturedrcvsbsn : 16; + mmr_t capturedrcvcrc : 16; + mmr_t sentallcberrors : 1; + mmr_t sentallsnerrors : 1; + mmr_t fakedallsnerrors : 1; + mmr_t chargeoverflow : 1; + mmr_t chargeunderflow : 1; + mmr_t reserved_0 : 27; + } sh_ni1_llp_capt_sbcb_s; +} sh_ni1_llp_capt_sbcb_u_t; +#else +typedef union sh_ni1_llp_capt_sbcb_u { + mmr_t sh_ni1_llp_capt_sbcb_regval; + struct { + mmr_t reserved_0 : 27; + mmr_t chargeunderflow : 1; + mmr_t chargeoverflow : 1; + mmr_t fakedallsnerrors : 1; + mmr_t sentallsnerrors : 1; + mmr_t sentallcberrors : 1; + mmr_t capturedrcvcrc : 16; + mmr_t capturedrcvsbsn : 16; + } sh_ni1_llp_capt_sbcb_s; +} sh_ni1_llp_capt_sbcb_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LLP_ERR" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_llp_err_u { + mmr_t sh_ni1_llp_err_regval; + struct { + mmr_t rx_sn_err_count : 8; + mmr_t rx_cb_err_count : 8; + mmr_t retry_count : 8; + mmr_t retry_timeout : 1; + mmr_t rcv_link_reset : 1; + mmr_t squash : 1; + mmr_t power_not_ok : 1; + mmr_t wire_cnt : 24; + mmr_t wire_overflow : 1; + mmr_t reserved_0 : 11; + } sh_ni1_llp_err_s; +} sh_ni1_llp_err_u_t; +#else +typedef union sh_ni1_llp_err_u { + mmr_t sh_ni1_llp_err_regval; + struct { + mmr_t reserved_0 : 11; + mmr_t wire_overflow : 1; + mmr_t wire_cnt : 24; + mmr_t power_not_ok : 1; + mmr_t squash : 1; + mmr_t rcv_link_reset : 1; + mmr_t retry_timeout : 1; + mmr_t retry_count : 8; + mmr_t rx_cb_err_count : 8; + mmr_t rx_sn_err_count : 8; + } sh_ni1_llp_err_s; +} sh_ni1_llp_err_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_LLP_TO_FIFO02_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_llp_to_fifo02_flow_u { + mmr_t sh_xnni0_llp_to_fifo02_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnni0_llp_to_fifo02_flow_s; +} sh_xnni0_llp_to_fifo02_flow_u_t; +#else +typedef union sh_xnni0_llp_to_fifo02_flow_u { + mmr_t sh_xnni0_llp_to_fifo02_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnni0_llp_to_fifo02_flow_s; +} sh_xnni0_llp_to_fifo02_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_LLP_TO_FIFO13_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_llp_to_fifo13_flow_u { + mmr_t sh_xnni0_llp_to_fifo13_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t 
debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnni0_llp_to_fifo13_flow_s; +} sh_xnni0_llp_to_fifo13_flow_u_t; +#else +typedef union sh_xnni0_llp_to_fifo13_flow_u { + mmr_t sh_xnni0_llp_to_fifo13_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnni0_llp_to_fifo13_flow_s; +} sh_xnni0_llp_to_fifo13_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_LLP_DEBIT_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_llp_debit_flow_u { + mmr_t sh_xnni0_llp_debit_flow_regval; + struct { + mmr_t debit_vc0_dyn : 5; + mmr_t reserved_0 : 3; + mmr_t debit_vc0_cap : 5; + mmr_t reserved_1 : 3; + mmr_t debit_vc1_dyn : 5; + mmr_t reserved_2 : 3; + mmr_t debit_vc1_cap : 5; + mmr_t reserved_3 : 3; + mmr_t debit_vc2_dyn : 5; + mmr_t reserved_4 : 3; + mmr_t debit_vc2_cap : 5; + mmr_t reserved_5 : 3; + mmr_t debit_vc3_dyn : 5; + mmr_t reserved_6 : 3; + mmr_t debit_vc3_cap : 5; + mmr_t reserved_7 : 3; + } sh_xnni0_llp_debit_flow_s; +} sh_xnni0_llp_debit_flow_u_t; +#else +typedef union sh_xnni0_llp_debit_flow_u { + mmr_t sh_xnni0_llp_debit_flow_regval; + struct { + mmr_t reserved_7 : 3; + mmr_t debit_vc3_cap : 5; + mmr_t reserved_6 : 3; + mmr_t debit_vc3_dyn : 5; + mmr_t reserved_5 : 3; + mmr_t debit_vc2_cap : 5; + mmr_t reserved_4 : 3; + mmr_t debit_vc2_dyn : 5; + mmr_t reserved_3 : 3; + mmr_t debit_vc1_cap : 5; + mmr_t reserved_2 : 3; + mmr_t debit_vc1_dyn : 5; + mmr_t reserved_1 : 3; + mmr_t debit_vc0_cap : 5; + mmr_t reserved_0 : 3; + mmr_t debit_vc0_dyn : 5; + } sh_xnni0_llp_debit_flow_s; +} sh_xnni0_llp_debit_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_LINK_0_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_link_0_flow_u { + mmr_t sh_xnni0_link_0_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t credit_vc0_test : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc0_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc0_cap : 7; + mmr_t reserved_3 : 33; + } sh_xnni0_link_0_flow_s; +} sh_xnni0_link_0_flow_u_t; +#else +typedef union sh_xnni0_link_0_flow_u { + mmr_t sh_xnni0_link_0_flow_regval; + struct { + mmr_t reserved_3 : 33; + mmr_t credit_vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc0_test : 7; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnni0_link_0_flow_s; +} sh_xnni0_link_0_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_LINK_1_FLOW" */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_link_1_flow_u { + mmr_t sh_xnni0_link_1_flow_regval; + struct { + mmr_t debit_vc1_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc1_force_cred : 1; + mmr_t credit_vc1_test : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc1_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc1_cap : 7; + mmr_t reserved_3 : 33; + } sh_xnni0_link_1_flow_s; +} sh_xnni0_link_1_flow_u_t; +#else +typedef union sh_xnni0_link_1_flow_u { + mmr_t sh_xnni0_link_1_flow_regval; + struct { + mmr_t reserved_3 : 33; + mmr_t credit_vc1_cap : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc1_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc1_test : 7; + mmr_t debit_vc1_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc1_withhold : 6; + } sh_xnni0_link_1_flow_s; +} sh_xnni0_link_1_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_LINK_2_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_link_2_flow_u { + mmr_t sh_xnni0_link_2_flow_regval; + struct { + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t credit_vc2_test : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc2_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc2_cap : 7; + mmr_t reserved_3 : 33; + } sh_xnni0_link_2_flow_s; +} sh_xnni0_link_2_flow_u_t; +#else +typedef union sh_xnni0_link_2_flow_u { + mmr_t sh_xnni0_link_2_flow_regval; + struct { + mmr_t reserved_3 : 33; + mmr_t credit_vc2_cap : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc2_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc2_test : 7; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc2_withhold : 6; + } sh_xnni0_link_2_flow_s; +} sh_xnni0_link_2_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_LINK_3_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_link_3_flow_u { + mmr_t sh_xnni0_link_3_flow_regval; + struct { + mmr_t debit_vc3_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc3_force_cred : 1; + mmr_t credit_vc3_test : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc3_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc3_cap : 7; + mmr_t reserved_3 : 33; + } sh_xnni0_link_3_flow_s; +} sh_xnni0_link_3_flow_u_t; +#else +typedef union sh_xnni0_link_3_flow_u { + mmr_t sh_xnni0_link_3_flow_regval; + struct { + mmr_t reserved_3 : 33; + mmr_t credit_vc3_cap : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc3_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc3_test : 7; + mmr_t debit_vc3_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc3_withhold : 6; + } sh_xnni0_link_3_flow_s; +} sh_xnni0_link_3_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_LLP_TO_FIFO02_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_llp_to_fifo02_flow_u { + mmr_t sh_xnni1_llp_to_fifo02_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t 
reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnni1_llp_to_fifo02_flow_s; +} sh_xnni1_llp_to_fifo02_flow_u_t; +#else +typedef union sh_xnni1_llp_to_fifo02_flow_u { + mmr_t sh_xnni1_llp_to_fifo02_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnni1_llp_to_fifo02_flow_s; +} sh_xnni1_llp_to_fifo02_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_LLP_TO_FIFO13_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_llp_to_fifo13_flow_u { + mmr_t sh_xnni1_llp_to_fifo13_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnni1_llp_to_fifo13_flow_s; +} sh_xnni1_llp_to_fifo13_flow_u_t; +#else +typedef union sh_xnni1_llp_to_fifo13_flow_u { + mmr_t sh_xnni1_llp_to_fifo13_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnni1_llp_to_fifo13_flow_s; +} sh_xnni1_llp_to_fifo13_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_LLP_DEBIT_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_llp_debit_flow_u { + mmr_t sh_xnni1_llp_debit_flow_regval; + struct { + mmr_t debit_vc0_dyn : 5; + mmr_t reserved_0 : 3; + mmr_t debit_vc0_cap : 5; + mmr_t reserved_1 : 3; + mmr_t debit_vc1_dyn : 5; + mmr_t reserved_2 : 3; + mmr_t debit_vc1_cap : 5; + mmr_t reserved_3 : 3; + mmr_t debit_vc2_dyn : 5; + mmr_t reserved_4 : 3; + mmr_t debit_vc2_cap : 5; + mmr_t reserved_5 : 3; + mmr_t debit_vc3_dyn : 5; + mmr_t reserved_6 : 3; + mmr_t debit_vc3_cap : 5; + mmr_t reserved_7 : 3; + } sh_xnni1_llp_debit_flow_s; +} sh_xnni1_llp_debit_flow_u_t; +#else +typedef union sh_xnni1_llp_debit_flow_u { + mmr_t sh_xnni1_llp_debit_flow_regval; + struct { + mmr_t reserved_7 : 3; + mmr_t debit_vc3_cap : 5; + mmr_t reserved_6 : 3; + mmr_t debit_vc3_dyn : 5; + mmr_t reserved_5 : 3; + mmr_t debit_vc2_cap : 5; + mmr_t reserved_4 : 3; + mmr_t debit_vc2_dyn : 5; + mmr_t reserved_3 : 3; + mmr_t debit_vc1_cap : 5; + mmr_t reserved_2 : 3; + mmr_t debit_vc1_dyn : 5; + mmr_t reserved_1 : 3; + mmr_t debit_vc0_cap : 5; + mmr_t reserved_0 : 3; + mmr_t debit_vc0_dyn : 5; + } sh_xnni1_llp_debit_flow_s; +} 
sh_xnni1_llp_debit_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_LINK_0_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_link_0_flow_u { + mmr_t sh_xnni1_link_0_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t credit_vc0_test : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc0_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc0_cap : 7; + mmr_t reserved_3 : 33; + } sh_xnni1_link_0_flow_s; +} sh_xnni1_link_0_flow_u_t; +#else +typedef union sh_xnni1_link_0_flow_u { + mmr_t sh_xnni1_link_0_flow_regval; + struct { + mmr_t reserved_3 : 33; + mmr_t credit_vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc0_test : 7; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnni1_link_0_flow_s; +} sh_xnni1_link_0_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_LINK_1_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_link_1_flow_u { + mmr_t sh_xnni1_link_1_flow_regval; + struct { + mmr_t debit_vc1_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc1_force_cred : 1; + mmr_t credit_vc1_test : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc1_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc1_cap : 7; + mmr_t reserved_3 : 33; + } sh_xnni1_link_1_flow_s; +} sh_xnni1_link_1_flow_u_t; +#else +typedef union sh_xnni1_link_1_flow_u { + mmr_t sh_xnni1_link_1_flow_regval; + struct { + mmr_t reserved_3 : 33; + mmr_t credit_vc1_cap : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc1_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc1_test : 7; + mmr_t debit_vc1_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc1_withhold : 6; + } sh_xnni1_link_1_flow_s; +} sh_xnni1_link_1_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_LINK_2_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_link_2_flow_u { + mmr_t sh_xnni1_link_2_flow_regval; + struct { + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t credit_vc2_test : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc2_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc2_cap : 7; + mmr_t reserved_3 : 33; + } sh_xnni1_link_2_flow_s; +} sh_xnni1_link_2_flow_u_t; +#else +typedef union sh_xnni1_link_2_flow_u { + mmr_t sh_xnni1_link_2_flow_regval; + struct { + mmr_t reserved_3 : 33; + mmr_t credit_vc2_cap : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc2_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc2_test : 7; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc2_withhold : 6; + } sh_xnni1_link_2_flow_s; +} sh_xnni1_link_2_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_LINK_3_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_link_3_flow_u { + mmr_t sh_xnni1_link_3_flow_regval; + struct { + mmr_t debit_vc3_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc3_force_cred : 1; + mmr_t credit_vc3_test : 7; + mmr_t reserved_1 : 1; + mmr_t 
credit_vc3_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc3_cap : 7; + mmr_t reserved_3 : 33; + } sh_xnni1_link_3_flow_s; +} sh_xnni1_link_3_flow_u_t; +#else +typedef union sh_xnni1_link_3_flow_u { + mmr_t sh_xnni1_link_3_flow_regval; + struct { + mmr_t reserved_3 : 33; + mmr_t credit_vc3_cap : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc3_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t credit_vc3_test : 7; + mmr_t debit_vc3_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc3_withhold : 6; + } sh_xnni1_link_3_flow_s; +} sh_xnni1_link_3_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_IILB_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_iilb_local_table_u { + mmr_t sh_iilb_local_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 57; + mmr_t valid : 1; + } sh_iilb_local_table_s; +} sh_iilb_local_table_u_t; +#else +typedef union sh_iilb_local_table_u { + mmr_t sh_iilb_local_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_0 : 57; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_iilb_local_table_s; +} sh_iilb_local_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_IILB_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_iilb_global_table_u { + mmr_t sh_iilb_global_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 57; + mmr_t valid : 1; + } sh_iilb_global_table_s; +} sh_iilb_global_table_u_t; +#else +typedef union sh_iilb_global_table_u { + mmr_t sh_iilb_global_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_0 : 57; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_iilb_global_table_s; +} sh_iilb_global_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_IILB_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_iilb_over_ride_table_u { + mmr_t sh_iilb_over_ride_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 57; + mmr_t enable : 1; + } sh_iilb_over_ride_table_s; +} sh_iilb_over_ride_table_u_t; +#else +typedef union sh_iilb_over_ride_table_u { + mmr_t sh_iilb_over_ride_table_regval; + struct { + mmr_t enable : 1; + mmr_t reserved_0 : 57; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_iilb_over_ride_table_s; +} sh_iilb_over_ride_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_IILB_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_iilb_rsp_plane_hint_u { + mmr_t sh_iilb_rsp_plane_hint_regval; + struct { + mmr_t reserved_0 : 64; + } sh_iilb_rsp_plane_hint_s; +} sh_iilb_rsp_plane_hint_u_t; +#else +typedef union sh_iilb_rsp_plane_hint_u { + mmr_t sh_iilb_rsp_plane_hint_regval; + struct { + mmr_t reserved_0 : 64; + } sh_iilb_rsp_plane_hint_s; +} sh_iilb_rsp_plane_hint_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_PI_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_local_table_u { + mmr_t sh_pi_local_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 2; + mmr_t dir1 : 4; + mmr_t v1 : 1; + mmr_t ni_sel1 : 1; + mmr_t reserved_1 : 49; + mmr_t valid : 1; + } sh_pi_local_table_s; +} sh_pi_local_table_u_t; +#else +typedef union sh_pi_local_table_u { + mmr_t sh_pi_local_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_1 : 49; + mmr_t ni_sel1 : 1; + mmr_t v1 : 1; + mmr_t dir1 : 4; + mmr_t reserved_0 : 2; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_pi_local_table_s; +} sh_pi_local_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_global_table_u { + mmr_t sh_pi_global_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 2; + mmr_t dir1 : 4; + mmr_t v1 : 1; + mmr_t ni_sel1 : 1; + mmr_t reserved_1 : 49; + mmr_t valid : 1; + } sh_pi_global_table_s; +} sh_pi_global_table_u_t; +#else +typedef union sh_pi_global_table_u { + mmr_t sh_pi_global_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_1 : 49; + mmr_t ni_sel1 : 1; + mmr_t v1 : 1; + mmr_t dir1 : 4; + mmr_t reserved_0 : 2; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_pi_global_table_s; +} sh_pi_global_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_over_ride_table_u { + mmr_t sh_pi_over_ride_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 2; + mmr_t dir1 : 4; + mmr_t v1 : 1; + mmr_t ni_sel1 : 1; + mmr_t reserved_1 : 49; + mmr_t enable : 1; + } sh_pi_over_ride_table_s; +} sh_pi_over_ride_table_u_t; +#else +typedef union sh_pi_over_ride_table_u { + mmr_t sh_pi_over_ride_table_regval; + struct { + mmr_t enable : 1; + mmr_t reserved_1 : 49; + mmr_t ni_sel1 : 1; + mmr_t v1 : 1; + mmr_t dir1 : 4; + mmr_t reserved_0 : 2; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_pi_over_ride_table_s; +} sh_pi_over_ride_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_rsp_plane_hint_u { + mmr_t sh_pi_rsp_plane_hint_regval; + struct { + mmr_t invert : 1; + mmr_t reserved_0 : 63; + } sh_pi_rsp_plane_hint_s; +} sh_pi_rsp_plane_hint_u_t; +#else +typedef union sh_pi_rsp_plane_hint_u { + mmr_t sh_pi_rsp_plane_hint_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t invert : 1; + } sh_pi_rsp_plane_hint_s; +} sh_pi_rsp_plane_hint_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_LOCAL_TABLE" */ +/* local lookup table */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_local_table_u { + mmr_t sh_ni0_local_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t reserved_0 : 58; + mmr_t valid : 1; + } sh_ni0_local_table_s; +} sh_ni0_local_table_u_t; +#else +typedef union sh_ni0_local_table_u { + mmr_t sh_ni0_local_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_0 : 58; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_ni0_local_table_s; +} sh_ni0_local_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_global_table_u { + mmr_t sh_ni0_global_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t reserved_0 : 58; + mmr_t valid : 1; + } sh_ni0_global_table_s; +} sh_ni0_global_table_u_t; +#else +typedef union sh_ni0_global_table_u { + mmr_t sh_ni0_global_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_0 : 58; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_ni0_global_table_s; +} sh_ni0_global_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_over_ride_table_u { + mmr_t sh_ni0_over_ride_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t reserved_0 : 58; + mmr_t enable : 1; + } sh_ni0_over_ride_table_s; +} sh_ni0_over_ride_table_u_t; +#else +typedef union sh_ni0_over_ride_table_u { + mmr_t sh_ni0_over_ride_table_regval; + struct { + mmr_t enable : 1; + mmr_t reserved_0 : 58; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_ni0_over_ride_table_s; +} sh_ni0_over_ride_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_rsp_plane_hint_u { + mmr_t sh_ni0_rsp_plane_hint_regval; + struct { + mmr_t reserved_0 : 64; + } sh_ni0_rsp_plane_hint_s; +} sh_ni0_rsp_plane_hint_u_t; +#else +typedef union sh_ni0_rsp_plane_hint_u { + mmr_t sh_ni0_rsp_plane_hint_regval; + struct { + mmr_t reserved_0 : 64; + } sh_ni0_rsp_plane_hint_s; +} sh_ni0_rsp_plane_hint_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_local_table_u { + mmr_t sh_ni1_local_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t reserved_0 : 58; + mmr_t valid : 1; + } sh_ni1_local_table_s; +} sh_ni1_local_table_u_t; +#else +typedef union sh_ni1_local_table_u { + mmr_t sh_ni1_local_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_0 : 58; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_ni1_local_table_s; +} sh_ni1_local_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#ifdef 
LITTLE_ENDIAN +typedef union sh_ni1_global_table_u { + mmr_t sh_ni1_global_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t reserved_0 : 58; + mmr_t valid : 1; + } sh_ni1_global_table_s; +} sh_ni1_global_table_u_t; +#else +typedef union sh_ni1_global_table_u { + mmr_t sh_ni1_global_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_0 : 58; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_ni1_global_table_s; +} sh_ni1_global_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_over_ride_table_u { + mmr_t sh_ni1_over_ride_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t reserved_0 : 58; + mmr_t enable : 1; + } sh_ni1_over_ride_table_s; +} sh_ni1_over_ride_table_u_t; +#else +typedef union sh_ni1_over_ride_table_u { + mmr_t sh_ni1_over_ride_table_regval; + struct { + mmr_t enable : 1; + mmr_t reserved_0 : 58; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_ni1_over_ride_table_s; +} sh_ni1_over_ride_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_rsp_plane_hint_u { + mmr_t sh_ni1_rsp_plane_hint_regval; + struct { + mmr_t reserved_0 : 64; + } sh_ni1_rsp_plane_hint_s; +} sh_ni1_rsp_plane_hint_u_t; +#else +typedef union sh_ni1_rsp_plane_hint_u { + mmr_t sh_ni1_rsp_plane_hint_regval; + struct { + mmr_t reserved_0 : 64; + } sh_ni1_rsp_plane_hint_s; +} sh_ni1_rsp_plane_hint_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_LOCAL_TABLE" */ +/* local lookup table */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_local_table_u { + mmr_t sh_md_local_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 2; + mmr_t dir1 : 4; + mmr_t v1 : 1; + mmr_t ni_sel1 : 1; + mmr_t reserved_1 : 49; + mmr_t valid : 1; + } sh_md_local_table_s; +} sh_md_local_table_u_t; +#else +typedef union sh_md_local_table_u { + mmr_t sh_md_local_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_1 : 49; + mmr_t ni_sel1 : 1; + mmr_t v1 : 1; + mmr_t dir1 : 4; + mmr_t reserved_0 : 2; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_md_local_table_s; +} sh_md_local_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_GLOBAL_TABLE" */ +/* global lookup table */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_global_table_u { + mmr_t sh_md_global_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 2; + mmr_t dir1 : 4; + mmr_t v1 : 1; + mmr_t ni_sel1 : 1; + mmr_t reserved_1 : 49; + mmr_t valid : 1; + } sh_md_global_table_s; +} sh_md_global_table_u_t; +#else +typedef union sh_md_global_table_u { + mmr_t sh_md_global_table_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_1 : 49; + mmr_t ni_sel1 : 1; + mmr_t v1 : 1; + mmr_t dir1 : 4; + mmr_t reserved_0 : 2; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; 
+ mmr_t dir0 : 4; + } sh_md_global_table_s; +} sh_md_global_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_OVER_RIDE_TABLE" */ +/* If enabled, bypass the Global/Local tables */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_over_ride_table_u { + mmr_t sh_md_over_ride_table_regval; + struct { + mmr_t dir0 : 4; + mmr_t v0 : 1; + mmr_t ni_sel0 : 1; + mmr_t reserved_0 : 2; + mmr_t dir1 : 4; + mmr_t v1 : 1; + mmr_t ni_sel1 : 1; + mmr_t reserved_1 : 49; + mmr_t enable : 1; + } sh_md_over_ride_table_s; +} sh_md_over_ride_table_u_t; +#else +typedef union sh_md_over_ride_table_u { + mmr_t sh_md_over_ride_table_regval; + struct { + mmr_t enable : 1; + mmr_t reserved_1 : 49; + mmr_t ni_sel1 : 1; + mmr_t v1 : 1; + mmr_t dir1 : 4; + mmr_t reserved_0 : 2; + mmr_t ni_sel0 : 1; + mmr_t v0 : 1; + mmr_t dir0 : 4; + } sh_md_over_ride_table_s; +} sh_md_over_ride_table_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_RSP_PLANE_HINT" */ +/* If enabled, invert incoming response only plane hint bit before lo */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_rsp_plane_hint_u { + mmr_t sh_md_rsp_plane_hint_regval; + struct { + mmr_t invert : 1; + mmr_t reserved_0 : 63; + } sh_md_rsp_plane_hint_s; +} sh_md_rsp_plane_hint_u_t; +#else +typedef union sh_md_rsp_plane_hint_u { + mmr_t sh_md_rsp_plane_hint_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t invert : 1; + } sh_md_rsp_plane_hint_s; +} sh_md_rsp_plane_hint_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_LIQ_CTL" */ +/* Local Block LIQ Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_liq_ctl_u { + mmr_t sh_lb_liq_ctl_regval; + struct { + mmr_t liq_req_ctl : 5; + mmr_t reserved_0 : 3; + mmr_t liq_rpl_ctl : 4; + mmr_t reserved_1 : 4; + mmr_t force_rq_credit : 1; + mmr_t force_rp_credit : 1; + mmr_t force_linvv_credit : 1; + mmr_t reserved_2 : 45; + } sh_lb_liq_ctl_s; +} sh_lb_liq_ctl_u_t; +#else +typedef union sh_lb_liq_ctl_u { + mmr_t sh_lb_liq_ctl_regval; + struct { + mmr_t reserved_2 : 45; + mmr_t force_linvv_credit : 1; + mmr_t force_rp_credit : 1; + mmr_t force_rq_credit : 1; + mmr_t reserved_1 : 4; + mmr_t liq_rpl_ctl : 4; + mmr_t reserved_0 : 3; + mmr_t liq_req_ctl : 5; + } sh_lb_liq_ctl_s; +} sh_lb_liq_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_LOQ_CTL" */ +/* Local Block LOQ Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_loq_ctl_u { + mmr_t sh_lb_loq_ctl_regval; + struct { + mmr_t loq_req_ctl : 1; + mmr_t loq_rpl_ctl : 1; + mmr_t reserved_0 : 62; + } sh_lb_loq_ctl_s; +} sh_lb_loq_ctl_u_t; +#else +typedef union sh_lb_loq_ctl_u { + mmr_t sh_lb_loq_ctl_regval; + struct { + mmr_t reserved_0 : 62; + mmr_t loq_rpl_ctl : 1; + mmr_t loq_req_ctl : 1; + } sh_lb_loq_ctl_s; +} sh_lb_loq_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_MAX_REP_CREDIT_CNT" */ +/* Maximum number of reply credits from XN */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union 
sh_lb_max_rep_credit_cnt_u { + mmr_t sh_lb_max_rep_credit_cnt_regval; + struct { + mmr_t max_cnt : 5; + mmr_t reserved_0 : 59; + } sh_lb_max_rep_credit_cnt_s; +} sh_lb_max_rep_credit_cnt_u_t; +#else +typedef union sh_lb_max_rep_credit_cnt_u { + mmr_t sh_lb_max_rep_credit_cnt_regval; + struct { + mmr_t reserved_0 : 59; + mmr_t max_cnt : 5; + } sh_lb_max_rep_credit_cnt_s; +} sh_lb_max_rep_credit_cnt_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_MAX_REQ_CREDIT_CNT" */ +/* Maximum number of request credits from XN */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_max_req_credit_cnt_u { + mmr_t sh_lb_max_req_credit_cnt_regval; + struct { + mmr_t max_cnt : 5; + mmr_t reserved_0 : 59; + } sh_lb_max_req_credit_cnt_s; +} sh_lb_max_req_credit_cnt_u_t; +#else +typedef union sh_lb_max_req_credit_cnt_u { + mmr_t sh_lb_max_req_credit_cnt_regval; + struct { + mmr_t reserved_0 : 59; + mmr_t max_cnt : 5; + } sh_lb_max_req_credit_cnt_s; +} sh_lb_max_req_credit_cnt_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PIO_TIME_OUT" */ +/* Local Block PIO time out value */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pio_time_out_u { + mmr_t sh_pio_time_out_regval; + struct { + mmr_t value : 16; + mmr_t reserved_0 : 48; + } sh_pio_time_out_s; +} sh_pio_time_out_u_t; +#else +typedef union sh_pio_time_out_u { + mmr_t sh_pio_time_out_regval; + struct { + mmr_t reserved_0 : 48; + mmr_t value : 16; + } sh_pio_time_out_s; +} sh_pio_time_out_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PIO_NACK_RESET" */ +/* Local Block PIO Reset for nack counters */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pio_nack_reset_u { + mmr_t sh_pio_nack_reset_regval; + struct { + mmr_t pulse : 1; + mmr_t reserved_0 : 63; + } sh_pio_nack_reset_s; +} sh_pio_nack_reset_u_t; +#else +typedef union sh_pio_nack_reset_u { + mmr_t sh_pio_nack_reset_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t pulse : 1; + } sh_pio_nack_reset_s; +} sh_pio_nack_reset_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_CONVEYOR_BELT_TIME_OUT" */ +/* Local Block conveyor belt time out value */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_conveyor_belt_time_out_u { + mmr_t sh_conveyor_belt_time_out_regval; + struct { + mmr_t value : 12; + mmr_t reserved_0 : 52; + } sh_conveyor_belt_time_out_s; +} sh_conveyor_belt_time_out_u_t; +#else +typedef union sh_conveyor_belt_time_out_u { + mmr_t sh_conveyor_belt_time_out_regval; + struct { + mmr_t reserved_0 : 52; + mmr_t value : 12; + } sh_conveyor_belt_time_out_s; +} sh_conveyor_belt_time_out_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_CREDIT_STATUS" */ +/* Credit Counter Status Register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_credit_status_u { + mmr_t sh_lb_credit_status_regval; + struct { + mmr_t liq_rq_credit : 5; + mmr_t reserved_0 : 1; + mmr_t liq_rp_credit : 4; + mmr_t reserved_1 : 2; + mmr_t linvv_credit : 6; + mmr_t loq_rq_credit : 5; + 
mmr_t loq_rp_credit : 5; + mmr_t reserved_2 : 36; + } sh_lb_credit_status_s; +} sh_lb_credit_status_u_t; +#else +typedef union sh_lb_credit_status_u { + mmr_t sh_lb_credit_status_regval; + struct { + mmr_t reserved_2 : 36; + mmr_t loq_rp_credit : 5; + mmr_t loq_rq_credit : 5; + mmr_t linvv_credit : 6; + mmr_t reserved_1 : 2; + mmr_t liq_rp_credit : 4; + mmr_t reserved_0 : 1; + mmr_t liq_rq_credit : 5; + } sh_lb_credit_status_s; +} sh_lb_credit_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_DEBUG_LOCAL_SEL" */ +/* LB Debug Port Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_debug_local_sel_u { + mmr_t sh_lb_debug_local_sel_regval; + struct { + mmr_t nibble0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t trigger_enable : 1; + } sh_lb_debug_local_sel_s; +} sh_lb_debug_local_sel_u_t; +#else +typedef union sh_lb_debug_local_sel_u { + mmr_t sh_lb_debug_local_sel_regval; + struct { + mmr_t trigger_enable : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_chiplet_sel : 3; + } sh_lb_debug_local_sel_s; +} sh_lb_debug_local_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_DEBUG_PERF_SEL" */ +/* LB Debug Port Performance Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_debug_perf_sel_u { + mmr_t sh_lb_debug_perf_sel_regval; + struct { + mmr_t nibble0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t 
nibble3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_lb_debug_perf_sel_s; +} sh_lb_debug_perf_sel_u_t; +#else +typedef union sh_lb_debug_perf_sel_u { + mmr_t sh_lb_debug_perf_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_chiplet_sel : 3; + } sh_lb_debug_perf_sel_s; +} sh_lb_debug_perf_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_DEBUG_TRIG_SEL" */ +/* LB Debug Trigger Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_debug_trig_sel_u { + mmr_t sh_lb_debug_trig_sel_regval; + struct { + mmr_t trigger0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger1_chiplet_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_lb_debug_trig_sel_s; +} sh_lb_debug_trig_sel_u_t; +#else +typedef union sh_lb_debug_trig_sel_u { + mmr_t sh_lb_debug_trig_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t 
trigger4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_chiplet_sel : 3; + } sh_lb_debug_trig_sel_s; +} sh_lb_debug_trig_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_1" */ +/* LB Error capture information: HDR1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_error_detail_1_u { + mmr_t sh_lb_error_detail_1_regval; + struct { + mmr_t command : 8; + mmr_t suppl : 14; + mmr_t reserved_0 : 2; + mmr_t source : 14; + mmr_t reserved_1 : 2; + mmr_t dest : 3; + mmr_t reserved_2 : 5; + mmr_t hdr_err : 1; + mmr_t data_err : 1; + mmr_t reserved_3 : 13; + mmr_t valid : 1; + } sh_lb_error_detail_1_s; +} sh_lb_error_detail_1_u_t; +#else +typedef union sh_lb_error_detail_1_u { + mmr_t sh_lb_error_detail_1_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_3 : 13; + mmr_t data_err : 1; + mmr_t hdr_err : 1; + mmr_t reserved_2 : 5; + mmr_t dest : 3; + mmr_t reserved_1 : 2; + mmr_t source : 14; + mmr_t reserved_0 : 2; + mmr_t suppl : 14; + mmr_t command : 8; + } sh_lb_error_detail_1_s; +} sh_lb_error_detail_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_2" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_error_detail_2_u { + mmr_t sh_lb_error_detail_2_regval; + struct { + mmr_t address : 47; + mmr_t reserved_0 : 17; + } sh_lb_error_detail_2_s; +} sh_lb_error_detail_2_u_t; +#else +typedef union sh_lb_error_detail_2_u { + mmr_t sh_lb_error_detail_2_regval; + struct { + mmr_t reserved_0 : 17; + mmr_t address : 47; + } sh_lb_error_detail_2_s; +} sh_lb_error_detail_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_3" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_error_detail_3_u { + mmr_t sh_lb_error_detail_3_regval; + struct { + mmr_t data : 64; + } sh_lb_error_detail_3_s; +} sh_lb_error_detail_3_u_t; +#else +typedef union sh_lb_error_detail_3_u { + mmr_t sh_lb_error_detail_3_regval; + struct { + mmr_t data : 64; + } sh_lb_error_detail_3_s; +} sh_lb_error_detail_3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_4" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_error_detail_4_u { + mmr_t sh_lb_error_detail_4_regval; + struct { + mmr_t route : 64; + } sh_lb_error_detail_4_s; +} sh_lb_error_detail_4_u_t; +#else +typedef union sh_lb_error_detail_4_u { + mmr_t sh_lb_error_detail_4_regval; + struct { + mmr_t route : 64; + } sh_lb_error_detail_4_s; +} sh_lb_error_detail_4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_DETAIL_5" */ +/* LB Error Bits */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_error_detail_5_u { + mmr_t sh_lb_error_detail_5_regval; + struct { + mmr_t read_retry : 1; + mmr_t ptc1_write : 1; + mmr_t write_retry : 1; + mmr_t count_a_overflow : 1; + mmr_t count_b_overflow : 1; + mmr_t nack_a_timeout : 1; + mmr_t nack_b_timeout : 1; + mmr_t reserved_0 : 57; + } sh_lb_error_detail_5_s; +} sh_lb_error_detail_5_u_t; +#else +typedef union sh_lb_error_detail_5_u { + mmr_t sh_lb_error_detail_5_regval; + struct { + mmr_t reserved_0 : 57; + mmr_t nack_b_timeout : 1; + mmr_t nack_a_timeout : 1; + mmr_t count_b_overflow : 1; + mmr_t count_a_overflow : 1; + mmr_t write_retry : 1; + mmr_t ptc1_write : 1; + mmr_t read_retry : 1; + } sh_lb_error_detail_5_s; +} sh_lb_error_detail_5_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_MASK" */ +/* LB Error Mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_error_mask_u { + mmr_t sh_lb_error_mask_regval; + struct { + mmr_t rq_bad_cmd : 1; + mmr_t rp_bad_cmd : 1; + mmr_t rq_short : 1; + mmr_t rp_short : 1; + mmr_t rq_long : 1; + mmr_t rp_long : 1; + mmr_t rq_bad_data : 1; + mmr_t rp_bad_data : 1; + mmr_t rq_bad_addr : 1; + mmr_t rq_time_out : 1; + mmr_t linvv_overflow : 1; + mmr_t unexpected_linv : 1; + mmr_t ptc_1_timeout : 1; + mmr_t junk_bus_err : 1; + mmr_t pio_cb_err : 1; + mmr_t vector_rq_route_error : 1; + mmr_t vector_rp_route_error : 1; + mmr_t gclk_drop : 1; + mmr_t rq_fifo_error : 1; + mmr_t rp_fifo_error : 1; + mmr_t unexp_valid : 1; + mmr_t rq_credit_overflow : 1; + mmr_t rp_credit_overflow : 1; + mmr_t reserved_0 : 41; + } sh_lb_error_mask_s; +} sh_lb_error_mask_u_t; +#else +typedef union sh_lb_error_mask_u { + mmr_t sh_lb_error_mask_regval; + struct { + mmr_t reserved_0 : 41; + mmr_t rp_credit_overflow : 1; + mmr_t rq_credit_overflow : 1; + mmr_t unexp_valid : 1; + mmr_t rp_fifo_error : 1; + mmr_t rq_fifo_error : 1; + mmr_t gclk_drop : 1; + mmr_t vector_rp_route_error : 1; + mmr_t vector_rq_route_error : 1; + mmr_t pio_cb_err : 1; + mmr_t junk_bus_err : 1; + mmr_t ptc_1_timeout : 1; + mmr_t unexpected_linv : 1; + mmr_t linvv_overflow : 1; + mmr_t rq_time_out : 1; + mmr_t rq_bad_addr : 1; + mmr_t rp_bad_data : 1; + mmr_t rq_bad_data : 1; + mmr_t rp_long : 1; + mmr_t rq_long : 1; + mmr_t rp_short : 1; + mmr_t rq_short : 1; + mmr_t rp_bad_cmd : 1; + mmr_t rq_bad_cmd : 1; + } sh_lb_error_mask_s; +} sh_lb_error_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_OVERFLOW" */ +/* LB Error Overflow */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_error_overflow_u { + mmr_t sh_lb_error_overflow_regval; + struct { + mmr_t rq_bad_cmd_ovrfl : 1; + mmr_t rp_bad_cmd_ovrfl : 1; + mmr_t rq_short_ovrfl : 1; + mmr_t rp_short_ovrfl : 1; + mmr_t rq_long_ovrfl : 1; + mmr_t rp_long_ovrfl : 1; + mmr_t rq_bad_data_ovrfl : 1; + mmr_t rp_bad_data_ovrfl : 1; + mmr_t rq_bad_addr_ovrfl : 1; + mmr_t rq_time_out_ovrfl : 1; + mmr_t linvv_overflow_ovrfl : 1; + mmr_t unexpected_linv_ovrfl : 1; + mmr_t ptc_1_timeout_ovrfl : 1; + mmr_t junk_bus_err_ovrfl : 1; + mmr_t pio_cb_err_ovrfl : 1; + mmr_t vector_rq_route_error_ovrfl : 1; + mmr_t vector_rp_route_error_ovrfl : 1; + mmr_t gclk_drop_ovrfl : 1; + mmr_t rq_fifo_error_ovrfl : 1; + mmr_t rp_fifo_error_ovrfl 
: 1; + mmr_t unexp_valid_ovrfl : 1; + mmr_t rq_credit_overflow_ovrfl : 1; + mmr_t rp_credit_overflow_ovrfl : 1; + mmr_t reserved_0 : 41; + } sh_lb_error_overflow_s; +} sh_lb_error_overflow_u_t; +#else +typedef union sh_lb_error_overflow_u { + mmr_t sh_lb_error_overflow_regval; + struct { + mmr_t reserved_0 : 41; + mmr_t rp_credit_overflow_ovrfl : 1; + mmr_t rq_credit_overflow_ovrfl : 1; + mmr_t unexp_valid_ovrfl : 1; + mmr_t rp_fifo_error_ovrfl : 1; + mmr_t rq_fifo_error_ovrfl : 1; + mmr_t gclk_drop_ovrfl : 1; + mmr_t vector_rp_route_error_ovrfl : 1; + mmr_t vector_rq_route_error_ovrfl : 1; + mmr_t pio_cb_err_ovrfl : 1; + mmr_t junk_bus_err_ovrfl : 1; + mmr_t ptc_1_timeout_ovrfl : 1; + mmr_t unexpected_linv_ovrfl : 1; + mmr_t linvv_overflow_ovrfl : 1; + mmr_t rq_time_out_ovrfl : 1; + mmr_t rq_bad_addr_ovrfl : 1; + mmr_t rp_bad_data_ovrfl : 1; + mmr_t rq_bad_data_ovrfl : 1; + mmr_t rp_long_ovrfl : 1; + mmr_t rq_long_ovrfl : 1; + mmr_t rp_short_ovrfl : 1; + mmr_t rq_short_ovrfl : 1; + mmr_t rp_bad_cmd_ovrfl : 1; + mmr_t rq_bad_cmd_ovrfl : 1; + } sh_lb_error_overflow_s; +} sh_lb_error_overflow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_ERROR_SUMMARY" */ +/* LB Error Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_error_summary_u { + mmr_t sh_lb_error_summary_regval; + struct { + mmr_t rq_bad_cmd : 1; + mmr_t rp_bad_cmd : 1; + mmr_t rq_short : 1; + mmr_t rp_short : 1; + mmr_t rq_long : 1; + mmr_t rp_long : 1; + mmr_t rq_bad_data : 1; + mmr_t rp_bad_data : 1; + mmr_t rq_bad_addr : 1; + mmr_t rq_time_out : 1; + mmr_t linvv_overflow : 1; + mmr_t unexpected_linv : 1; + mmr_t ptc_1_timeout : 1; + mmr_t junk_bus_err : 1; + mmr_t pio_cb_err : 1; + mmr_t vector_rq_route_error : 1; + mmr_t vector_rp_route_error : 1; + mmr_t gclk_drop : 1; + mmr_t rq_fifo_error : 1; + mmr_t rp_fifo_error : 1; + mmr_t unexp_valid : 1; + mmr_t rq_credit_overflow : 1; + mmr_t rp_credit_overflow : 1; + mmr_t reserved_0 : 41; + } sh_lb_error_summary_s; +} sh_lb_error_summary_u_t; +#else +typedef union sh_lb_error_summary_u { + mmr_t sh_lb_error_summary_regval; + struct { + mmr_t reserved_0 : 41; + mmr_t rp_credit_overflow : 1; + mmr_t rq_credit_overflow : 1; + mmr_t unexp_valid : 1; + mmr_t rp_fifo_error : 1; + mmr_t rq_fifo_error : 1; + mmr_t gclk_drop : 1; + mmr_t vector_rp_route_error : 1; + mmr_t vector_rq_route_error : 1; + mmr_t pio_cb_err : 1; + mmr_t junk_bus_err : 1; + mmr_t ptc_1_timeout : 1; + mmr_t unexpected_linv : 1; + mmr_t linvv_overflow : 1; + mmr_t rq_time_out : 1; + mmr_t rq_bad_addr : 1; + mmr_t rp_bad_data : 1; + mmr_t rq_bad_data : 1; + mmr_t rp_long : 1; + mmr_t rq_long : 1; + mmr_t rp_short : 1; + mmr_t rq_short : 1; + mmr_t rp_bad_cmd : 1; + mmr_t rq_bad_cmd : 1; + } sh_lb_error_summary_s; +} sh_lb_error_summary_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_FIRST_ERROR" */ +/* LB First Error */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_first_error_u { + mmr_t sh_lb_first_error_regval; + struct { + mmr_t rq_bad_cmd : 1; + mmr_t rp_bad_cmd : 1; + mmr_t rq_short : 1; + mmr_t rp_short : 1; + mmr_t rq_long : 1; + mmr_t rp_long : 1; + mmr_t rq_bad_data : 1; + mmr_t rp_bad_data : 1; + mmr_t rq_bad_addr : 1; + mmr_t rq_time_out : 1; + mmr_t linvv_overflow : 1; + mmr_t unexpected_linv : 1; + mmr_t ptc_1_timeout : 1; + 
mmr_t junk_bus_err : 1; + mmr_t pio_cb_err : 1; + mmr_t vector_rq_route_error : 1; + mmr_t vector_rp_route_error : 1; + mmr_t gclk_drop : 1; + mmr_t rq_fifo_error : 1; + mmr_t rp_fifo_error : 1; + mmr_t unexp_valid : 1; + mmr_t rq_credit_overflow : 1; + mmr_t rp_credit_overflow : 1; + mmr_t reserved_0 : 41; + } sh_lb_first_error_s; +} sh_lb_first_error_u_t; +#else +typedef union sh_lb_first_error_u { + mmr_t sh_lb_first_error_regval; + struct { + mmr_t reserved_0 : 41; + mmr_t rp_credit_overflow : 1; + mmr_t rq_credit_overflow : 1; + mmr_t unexp_valid : 1; + mmr_t rp_fifo_error : 1; + mmr_t rq_fifo_error : 1; + mmr_t gclk_drop : 1; + mmr_t vector_rp_route_error : 1; + mmr_t vector_rq_route_error : 1; + mmr_t pio_cb_err : 1; + mmr_t junk_bus_err : 1; + mmr_t ptc_1_timeout : 1; + mmr_t unexpected_linv : 1; + mmr_t linvv_overflow : 1; + mmr_t rq_time_out : 1; + mmr_t rq_bad_addr : 1; + mmr_t rp_bad_data : 1; + mmr_t rq_bad_data : 1; + mmr_t rp_long : 1; + mmr_t rq_long : 1; + mmr_t rp_short : 1; + mmr_t rq_short : 1; + mmr_t rp_bad_cmd : 1; + mmr_t rq_bad_cmd : 1; + } sh_lb_first_error_s; +} sh_lb_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_LAST_CREDIT" */ +/* Credit counter status register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_last_credit_u { + mmr_t sh_lb_last_credit_regval; + struct { + mmr_t liq_rq_credit : 5; + mmr_t reserved_0 : 1; + mmr_t liq_rp_credit : 4; + mmr_t reserved_1 : 2; + mmr_t linvv_credit : 6; + mmr_t loq_rq_credit : 5; + mmr_t loq_rp_credit : 5; + mmr_t reserved_2 : 36; + } sh_lb_last_credit_s; +} sh_lb_last_credit_u_t; +#else +typedef union sh_lb_last_credit_u { + mmr_t sh_lb_last_credit_regval; + struct { + mmr_t reserved_2 : 36; + mmr_t loq_rp_credit : 5; + mmr_t loq_rq_credit : 5; + mmr_t linvv_credit : 6; + mmr_t reserved_1 : 2; + mmr_t liq_rp_credit : 4; + mmr_t reserved_0 : 1; + mmr_t liq_rq_credit : 5; + } sh_lb_last_credit_s; +} sh_lb_last_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_NACK_STATUS" */ +/* Nack Counter Status Register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_nack_status_u { + mmr_t sh_lb_nack_status_regval; + struct { + mmr_t pio_nack_a : 12; + mmr_t reserved_0 : 4; + mmr_t pio_nack_b : 12; + mmr_t reserved_1 : 4; + mmr_t junk_nack : 16; + mmr_t cb_timeout_count : 12; + mmr_t cb_state : 2; + mmr_t reserved_2 : 2; + } sh_lb_nack_status_s; +} sh_lb_nack_status_u_t; +#else +typedef union sh_lb_nack_status_u { + mmr_t sh_lb_nack_status_regval; + struct { + mmr_t reserved_2 : 2; + mmr_t cb_state : 2; + mmr_t cb_timeout_count : 12; + mmr_t junk_nack : 16; + mmr_t reserved_1 : 4; + mmr_t pio_nack_b : 12; + mmr_t reserved_0 : 4; + mmr_t pio_nack_a : 12; + } sh_lb_nack_status_s; +} sh_lb_nack_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_TRIGGER_COMPARE" */ +/* LB Test-point Trigger Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_trigger_compare_u { + mmr_t sh_lb_trigger_compare_regval; + struct { + mmr_t mask : 32; + mmr_t reserved_0 : 32; + } sh_lb_trigger_compare_s; +} sh_lb_trigger_compare_u_t; +#else +typedef union sh_lb_trigger_compare_u { + mmr_t sh_lb_trigger_compare_regval; + 
struct { + mmr_t reserved_0 : 32; + mmr_t mask : 32; + } sh_lb_trigger_compare_s; +} sh_lb_trigger_compare_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_TRIGGER_DATA" */ +/* LB Test-point Trigger Compare Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_trigger_data_u { + mmr_t sh_lb_trigger_data_regval; + struct { + mmr_t compare_pattern : 32; + mmr_t reserved_0 : 32; + } sh_lb_trigger_data_s; +} sh_lb_trigger_data_u_t; +#else +typedef union sh_lb_trigger_data_u { + mmr_t sh_lb_trigger_data_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t compare_pattern : 32; + } sh_lb_trigger_data_s; +} sh_lb_trigger_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_AEC_CONFIG" */ +/* PI Adaptive Error Correction Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_aec_config_u { + mmr_t sh_pi_aec_config_regval; + struct { + mmr_t mode : 3; + mmr_t reserved_0 : 61; + } sh_pi_aec_config_s; +} sh_pi_aec_config_u_t; +#else +typedef union sh_pi_aec_config_u { + mmr_t sh_pi_aec_config_regval; + struct { + mmr_t reserved_0 : 61; + mmr_t mode : 3; + } sh_pi_aec_config_s; +} sh_pi_aec_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_AFI_ERROR_MASK" */ +/* PI AFI Error Mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_afi_error_mask_u { + mmr_t sh_pi_afi_error_mask_regval; + struct { + mmr_t reserved_0 : 21; + mmr_t hung_bus : 1; + mmr_t rsp_parity : 1; + mmr_t ioq_overrun : 1; + mmr_t req_format : 1; + mmr_t addr_access : 1; + mmr_t req_parity : 1; + mmr_t addr_parity : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_ce : 1; + mmr_t livelock : 1; + mmr_t bad_snoop : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t msg_len : 1; + mmr_t reserved_1 : 29; + } sh_pi_afi_error_mask_s; +} sh_pi_afi_error_mask_u_t; +#else +typedef union sh_pi_afi_error_mask_u { + mmr_t sh_pi_afi_error_mask_regval; + struct { + mmr_t reserved_1 : 29; + mmr_t msg_len : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t bad_snoop : 1; + mmr_t livelock : 1; + mmr_t shub_fsb_ce : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t addr_parity : 1; + mmr_t req_parity : 1; + mmr_t addr_access : 1; + mmr_t req_format : 1; + mmr_t ioq_overrun : 1; + mmr_t rsp_parity : 1; + mmr_t hung_bus : 1; + mmr_t reserved_0 : 21; + } sh_pi_afi_error_mask_s; +} sh_pi_afi_error_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_AFI_TEST_POINT_COMPARE" */ +/* PI AFI Test Point Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_afi_test_point_compare_u { + mmr_t sh_pi_afi_test_point_compare_regval; + struct { + mmr_t compare_mask : 32; + mmr_t compare_pattern : 32; + } sh_pi_afi_test_point_compare_s; +} sh_pi_afi_test_point_compare_u_t; +#else +typedef union sh_pi_afi_test_point_compare_u { + mmr_t sh_pi_afi_test_point_compare_regval; + struct { + mmr_t compare_pattern : 32; + mmr_t compare_mask : 32; + } sh_pi_afi_test_point_compare_s; +} sh_pi_afi_test_point_compare_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_PI_AFI_TEST_POINT_SELECT" */ +/* PI AFI Test Point Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_afi_test_point_select_u { + mmr_t sh_pi_afi_test_point_select_regval; + struct { + mmr_t nibble0_chiplet_sel : 4; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble1_chiplet_sel : 4; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble2_chiplet_sel : 4; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble3_chiplet_sel : 4; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble4_chiplet_sel : 4; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble5_chiplet_sel : 4; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble6_chiplet_sel : 4; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble7_chiplet_sel : 4; + mmr_t nibble7_nibble_sel : 3; + mmr_t trigger_enable : 1; + } sh_pi_afi_test_point_select_s; +} sh_pi_afi_test_point_select_u_t; +#else +typedef union sh_pi_afi_test_point_select_u { + mmr_t sh_pi_afi_test_point_select_regval; + struct { + mmr_t trigger_enable : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t nibble7_chiplet_sel : 4; + mmr_t reserved_6 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t nibble6_chiplet_sel : 4; + mmr_t reserved_5 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t nibble5_chiplet_sel : 4; + mmr_t reserved_4 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t nibble4_chiplet_sel : 4; + mmr_t reserved_3 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t nibble3_chiplet_sel : 4; + mmr_t reserved_2 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t nibble2_chiplet_sel : 4; + mmr_t reserved_1 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t nibble1_chiplet_sel : 4; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t nibble0_chiplet_sel : 4; + } sh_pi_afi_test_point_select_s; +} sh_pi_afi_test_point_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_AFI_TEST_POINT_TRIGGER_SELECT" */ +/* PI CRBC Test Point Trigger Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_afi_test_point_trigger_select_u { + mmr_t sh_pi_afi_test_point_trigger_select_regval; + struct { + mmr_t trigger0_chiplet_sel : 4; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger1_chiplet_sel : 4; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger2_chiplet_sel : 4; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger3_chiplet_sel : 4; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger4_chiplet_sel : 4; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger5_chiplet_sel : 4; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger6_chiplet_sel : 4; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger7_chiplet_sel : 4; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_7 : 1; + } sh_pi_afi_test_point_trigger_select_s; +} sh_pi_afi_test_point_trigger_select_u_t; +#else +typedef union sh_pi_afi_test_point_trigger_select_u { + mmr_t sh_pi_afi_test_point_trigger_select_regval; + struct { + mmr_t reserved_7 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t trigger7_chiplet_sel : 4; + mmr_t reserved_6 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t trigger6_chiplet_sel : 4; + mmr_t reserved_5 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t 
trigger5_chiplet_sel : 4; + mmr_t reserved_4 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t trigger4_chiplet_sel : 4; + mmr_t reserved_3 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t trigger3_chiplet_sel : 4; + mmr_t reserved_2 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t trigger2_chiplet_sel : 4; + mmr_t reserved_1 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t trigger1_chiplet_sel : 4; + mmr_t reserved_0 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t trigger0_chiplet_sel : 4; + } sh_pi_afi_test_point_trigger_select_s; +} sh_pi_afi_test_point_trigger_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_AUTO_REPLY_ENABLE" */ +/* PI Auto Reply Enable */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_auto_reply_enable_u { + mmr_t sh_pi_auto_reply_enable_regval; + struct { + mmr_t auto_reply_enable : 1; + mmr_t reserved_0 : 63; + } sh_pi_auto_reply_enable_s; +} sh_pi_auto_reply_enable_u_t; +#else +typedef union sh_pi_auto_reply_enable_u { + mmr_t sh_pi_auto_reply_enable_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t auto_reply_enable : 1; + } sh_pi_auto_reply_enable_s; +} sh_pi_auto_reply_enable_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CAM_CONTROL" */ +/* CRB CAM MMR Access Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_cam_control_u { + mmr_t sh_pi_cam_control_regval; + struct { + mmr_t cam_indx : 7; + mmr_t reserved_0 : 1; + mmr_t cam_write : 1; + mmr_t rrb_rd_xfer_clear : 1; + mmr_t reserved_1 : 53; + mmr_t start : 1; + } sh_pi_cam_control_s; +} sh_pi_cam_control_u_t; +#else +typedef union sh_pi_cam_control_u { + mmr_t sh_pi_cam_control_regval; + struct { + mmr_t start : 1; + mmr_t reserved_1 : 53; + mmr_t rrb_rd_xfer_clear : 1; + mmr_t cam_write : 1; + mmr_t reserved_0 : 1; + mmr_t cam_indx : 7; + } sh_pi_cam_control_s; +} sh_pi_cam_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBC_TEST_POINT_COMPARE" */ +/* PI CRBC Test Point Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbc_test_point_compare_u { + mmr_t sh_pi_crbc_test_point_compare_regval; + struct { + mmr_t compare_mask : 32; + mmr_t compare_pattern : 32; + } sh_pi_crbc_test_point_compare_s; +} sh_pi_crbc_test_point_compare_u_t; +#else +typedef union sh_pi_crbc_test_point_compare_u { + mmr_t sh_pi_crbc_test_point_compare_regval; + struct { + mmr_t compare_pattern : 32; + mmr_t compare_mask : 32; + } sh_pi_crbc_test_point_compare_s; +} sh_pi_crbc_test_point_compare_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBC_TEST_POINT_SELECT" */ +/* PI CRBC Test Point Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbc_test_point_select_u { + mmr_t sh_pi_crbc_test_point_select_regval; + struct { + mmr_t nibble0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t 
nibble3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t trigger_enable : 1; + } sh_pi_crbc_test_point_select_s; +} sh_pi_crbc_test_point_select_u_t; +#else +typedef union sh_pi_crbc_test_point_select_u { + mmr_t sh_pi_crbc_test_point_select_regval; + struct { + mmr_t trigger_enable : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_chiplet_sel : 3; + } sh_pi_crbc_test_point_select_s; +} sh_pi_crbc_test_point_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT" */ +/* PI CRBC Test Point Trigger Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbc_test_point_trigger_select_u { + mmr_t sh_pi_crbc_test_point_trigger_select_regval; + struct { + mmr_t trigger0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger1_chiplet_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_pi_crbc_test_point_trigger_select_s; +} sh_pi_crbc_test_point_trigger_select_u_t; +#else +typedef union sh_pi_crbc_test_point_trigger_select_u { + mmr_t sh_pi_crbc_test_point_trigger_select_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t 
trigger5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_chiplet_sel : 3; + } sh_pi_crbc_test_point_trigger_select_s; +} sh_pi_crbc_test_point_trigger_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_ERROR_MASK" */ +/* PI CRBP Error Mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_error_mask_u { + mmr_t sh_pi_crbp_error_mask_regval; + struct { + mmr_t fsb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t mem_rp_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_to_err : 1; + mmr_t fsb_shub_uce : 1; + mmr_t fsb_shub_ce : 1; + mmr_t msg_color_err : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t nack_oflow : 1; + mmr_t gfx_int_0 : 1; + mmr_t gfx_int_1 : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t reserved_0 : 43; + } sh_pi_crbp_error_mask_s; +} sh_pi_crbp_error_mask_u_t; +#else +typedef union sh_pi_crbp_error_mask_u { + mmr_t sh_pi_crbp_error_mask_regval; + struct { + mmr_t reserved_0 : 43; + mmr_t xn_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t gfx_int_1 : 1; + mmr_t gfx_int_0 : 1; + mmr_t nack_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t msg_color_err : 1; + mmr_t fsb_shub_ce : 1; + mmr_t fsb_shub_uce : 1; + mmr_t pio_to_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t fsb_proto_err : 1; + } sh_pi_crbp_error_mask_s; +} sh_pi_crbp_error_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_FSB_PIPE_COMPARE" */ +/* CRBP FSB Pipe Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_fsb_pipe_compare_u { + mmr_t sh_pi_crbp_fsb_pipe_compare_regval; + struct { + mmr_t compare_address : 47; + mmr_t compare_req : 6; + mmr_t reserved_0 : 11; + } sh_pi_crbp_fsb_pipe_compare_s; +} sh_pi_crbp_fsb_pipe_compare_u_t; +#else +typedef union sh_pi_crbp_fsb_pipe_compare_u { + mmr_t sh_pi_crbp_fsb_pipe_compare_regval; + struct { + mmr_t reserved_0 : 11; + mmr_t compare_req : 6; + mmr_t compare_address : 47; + } sh_pi_crbp_fsb_pipe_compare_s; +} sh_pi_crbp_fsb_pipe_compare_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_FSB_PIPE_MASK" */ +/* CRBP Compare Mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_fsb_pipe_mask_u { + mmr_t 
sh_pi_crbp_fsb_pipe_mask_regval; + struct { + mmr_t compare_address_mask : 47; + mmr_t compare_req_mask : 6; + mmr_t reserved_0 : 11; + } sh_pi_crbp_fsb_pipe_mask_s; +} sh_pi_crbp_fsb_pipe_mask_u_t; +#else +typedef union sh_pi_crbp_fsb_pipe_mask_u { + mmr_t sh_pi_crbp_fsb_pipe_mask_regval; + struct { + mmr_t reserved_0 : 11; + mmr_t compare_req_mask : 6; + mmr_t compare_address_mask : 47; + } sh_pi_crbp_fsb_pipe_mask_s; +} sh_pi_crbp_fsb_pipe_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_TEST_POINT_COMPARE" */ +/* PI CRBP Test Point Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_test_point_compare_u { + mmr_t sh_pi_crbp_test_point_compare_regval; + struct { + mmr_t compare_mask : 32; + mmr_t compare_pattern : 32; + } sh_pi_crbp_test_point_compare_s; +} sh_pi_crbp_test_point_compare_u_t; +#else +typedef union sh_pi_crbp_test_point_compare_u { + mmr_t sh_pi_crbp_test_point_compare_regval; + struct { + mmr_t compare_pattern : 32; + mmr_t compare_mask : 32; + } sh_pi_crbp_test_point_compare_s; +} sh_pi_crbp_test_point_compare_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_TEST_POINT_SELECT" */ +/* PI CRBP Test Point Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_test_point_select_u { + mmr_t sh_pi_crbp_test_point_select_regval; + struct { + mmr_t nibble0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t trigger_enable : 1; + } sh_pi_crbp_test_point_select_s; +} sh_pi_crbp_test_point_select_u_t; +#else +typedef union sh_pi_crbp_test_point_select_u { + mmr_t sh_pi_crbp_test_point_select_regval; + struct { + mmr_t trigger_enable : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; 
+ mmr_t nibble0_chiplet_sel : 3; + } sh_pi_crbp_test_point_select_s; +} sh_pi_crbp_test_point_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT" */ +/* PI CRBP Test Point Trigger Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_test_point_trigger_select_u { + mmr_t sh_pi_crbp_test_point_trigger_select_regval; + struct { + mmr_t trigger0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger1_chiplet_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_pi_crbp_test_point_trigger_select_s; +} sh_pi_crbp_test_point_trigger_select_u_t; +#else +typedef union sh_pi_crbp_test_point_trigger_select_u { + mmr_t sh_pi_crbp_test_point_trigger_select_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_chiplet_sel : 3; + } sh_pi_crbp_test_point_trigger_select_s; +} sh_pi_crbp_test_point_trigger_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_XB_PIPE_COMPARE_0" */ +/* CRBP XB Pipe Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_xb_pipe_compare_0_u { + mmr_t sh_pi_crbp_xb_pipe_compare_0_regval; + struct { + mmr_t compare_address : 47; + mmr_t compare_command : 8; + mmr_t reserved_0 : 9; + } sh_pi_crbp_xb_pipe_compare_0_s; +} sh_pi_crbp_xb_pipe_compare_0_u_t; +#else +typedef union sh_pi_crbp_xb_pipe_compare_0_u { + mmr_t sh_pi_crbp_xb_pipe_compare_0_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t compare_command : 8; + mmr_t compare_address : 47; + } sh_pi_crbp_xb_pipe_compare_0_s; +} sh_pi_crbp_xb_pipe_compare_0_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_PI_CRBP_XB_PIPE_COMPARE_1" */ +/* CRBP XB Pipe Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_xb_pipe_compare_1_u { + mmr_t sh_pi_crbp_xb_pipe_compare_1_regval; + struct { + mmr_t compare_source : 14; + mmr_t reserved_0 : 2; + mmr_t compare_supplemental : 14; + mmr_t reserved_1 : 2; + mmr_t compare_echo : 9; + mmr_t reserved_2 : 23; + } sh_pi_crbp_xb_pipe_compare_1_s; +} sh_pi_crbp_xb_pipe_compare_1_u_t; +#else +typedef union sh_pi_crbp_xb_pipe_compare_1_u { + mmr_t sh_pi_crbp_xb_pipe_compare_1_regval; + struct { + mmr_t reserved_2 : 23; + mmr_t compare_echo : 9; + mmr_t reserved_1 : 2; + mmr_t compare_supplemental : 14; + mmr_t reserved_0 : 2; + mmr_t compare_source : 14; + } sh_pi_crbp_xb_pipe_compare_1_s; +} sh_pi_crbp_xb_pipe_compare_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_XB_PIPE_MASK_0" */ +/* CRBP Compare Mask Register 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_xb_pipe_mask_0_u { + mmr_t sh_pi_crbp_xb_pipe_mask_0_regval; + struct { + mmr_t compare_address_mask : 47; + mmr_t compare_command_mask : 8; + mmr_t reserved_0 : 9; + } sh_pi_crbp_xb_pipe_mask_0_s; +} sh_pi_crbp_xb_pipe_mask_0_u_t; +#else +typedef union sh_pi_crbp_xb_pipe_mask_0_u { + mmr_t sh_pi_crbp_xb_pipe_mask_0_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t compare_command_mask : 8; + mmr_t compare_address_mask : 47; + } sh_pi_crbp_xb_pipe_mask_0_s; +} sh_pi_crbp_xb_pipe_mask_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_XB_PIPE_MASK_1" */ +/* CRBP XB Pipe Compare Mask Register 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_xb_pipe_mask_1_u { + mmr_t sh_pi_crbp_xb_pipe_mask_1_regval; + struct { + mmr_t compare_source_mask : 14; + mmr_t reserved_0 : 2; + mmr_t compare_supplemental_mask : 14; + mmr_t reserved_1 : 2; + mmr_t compare_echo_mask : 9; + mmr_t reserved_2 : 23; + } sh_pi_crbp_xb_pipe_mask_1_s; +} sh_pi_crbp_xb_pipe_mask_1_u_t; +#else +typedef union sh_pi_crbp_xb_pipe_mask_1_u { + mmr_t sh_pi_crbp_xb_pipe_mask_1_regval; + struct { + mmr_t reserved_2 : 23; + mmr_t compare_echo_mask : 9; + mmr_t reserved_1 : 2; + mmr_t compare_supplemental_mask : 14; + mmr_t reserved_0 : 2; + mmr_t compare_source_mask : 14; + } sh_pi_crbp_xb_pipe_mask_1_s; +} sh_pi_crbp_xb_pipe_mask_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_DPC_QUEUE_CONFIG" */ +/* DPC Queue Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_dpc_queue_config_u { + mmr_t sh_pi_dpc_queue_config_regval; + struct { + mmr_t dwcq_ae_level : 5; + mmr_t reserved_0 : 3; + mmr_t dwcq_af_thresh : 5; + mmr_t reserved_1 : 3; + mmr_t fwcq_ae_level : 5; + mmr_t reserved_2 : 3; + mmr_t fwcq_af_thresh : 5; + mmr_t reserved_3 : 35; + } sh_pi_dpc_queue_config_s; +} sh_pi_dpc_queue_config_u_t; +#else +typedef union sh_pi_dpc_queue_config_u { + mmr_t sh_pi_dpc_queue_config_regval; + struct { + mmr_t reserved_3 : 35; + mmr_t fwcq_af_thresh : 5; + mmr_t reserved_2 : 3; + mmr_t fwcq_ae_level : 5; + mmr_t reserved_1 : 3; + mmr_t dwcq_af_thresh : 5; + mmr_t reserved_0 : 3; + mmr_t dwcq_ae_level : 5; + } 
sh_pi_dpc_queue_config_s; +} sh_pi_dpc_queue_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_MASK" */ +/* PI Error Mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_error_mask_u { + mmr_t sh_pi_error_mask_regval; + struct { + mmr_t fsb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t mem_rp_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_to_err : 1; + mmr_t fsb_shub_uce : 1; + mmr_t fsb_shub_ce : 1; + mmr_t msg_color_err : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t nack_oflow : 1; + mmr_t gfx_int_0 : 1; + mmr_t gfx_int_1 : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t hung_bus : 1; + mmr_t rsp_parity : 1; + mmr_t ioq_overrun : 1; + mmr_t req_format : 1; + mmr_t addr_access : 1; + mmr_t req_parity : 1; + mmr_t addr_parity : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_ce : 1; + mmr_t livelock : 1; + mmr_t bad_snoop : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t msg_length : 1; + mmr_t reserved_0 : 29; + } sh_pi_error_mask_s; +} sh_pi_error_mask_u_t; +#else +typedef union sh_pi_error_mask_u { + mmr_t sh_pi_error_mask_regval; + struct { + mmr_t reserved_0 : 29; + mmr_t msg_length : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t bad_snoop : 1; + mmr_t livelock : 1; + mmr_t shub_fsb_ce : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t addr_parity : 1; + mmr_t req_parity : 1; + mmr_t addr_access : 1; + mmr_t req_format : 1; + mmr_t ioq_overrun : 1; + mmr_t rsp_parity : 1; + mmr_t hung_bus : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t gfx_int_1 : 1; + mmr_t gfx_int_0 : 1; + mmr_t nack_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t msg_color_err : 1; + mmr_t fsb_shub_ce : 1; + mmr_t fsb_shub_uce : 1; + mmr_t pio_to_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t fsb_proto_err : 1; + } sh_pi_error_mask_s; +} sh_pi_error_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_EXPRESS_REPLY_CONFIG" */ +/* PI Express Reply Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_express_reply_config_u { + mmr_t sh_pi_express_reply_config_regval; + struct { + mmr_t mode : 3; + mmr_t reserved_0 : 61; + } sh_pi_express_reply_config_s; +} sh_pi_express_reply_config_u_t; +#else +typedef union sh_pi_express_reply_config_u { + mmr_t sh_pi_express_reply_config_regval; + struct { + mmr_t reserved_0 : 61; + mmr_t mode : 3; + } sh_pi_express_reply_config_s; +} sh_pi_express_reply_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_FSB_COMPARE_VALUE" */ +/* FSB Compare Value */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_fsb_compare_value_u { + mmr_t sh_pi_fsb_compare_value_regval; + struct { + mmr_t compare_value : 64; + } sh_pi_fsb_compare_value_s; +} sh_pi_fsb_compare_value_u_t; +#else +typedef union 
sh_pi_fsb_compare_value_u { + mmr_t sh_pi_fsb_compare_value_regval; + struct { + mmr_t compare_value : 64; + } sh_pi_fsb_compare_value_s; +} sh_pi_fsb_compare_value_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_FSB_COMPARE_MASK" */ +/* FSB Compare Mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_fsb_compare_mask_u { + mmr_t sh_pi_fsb_compare_mask_regval; + struct { + mmr_t mask_value : 64; + } sh_pi_fsb_compare_mask_s; +} sh_pi_fsb_compare_mask_u_t; +#else +typedef union sh_pi_fsb_compare_mask_u { + mmr_t sh_pi_fsb_compare_mask_regval; + struct { + mmr_t mask_value : 64; + } sh_pi_fsb_compare_mask_s; +} sh_pi_fsb_compare_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_FSB_ERROR_INJECTION" */ +/* Inject an Error onto the FSB */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_fsb_error_injection_u { + mmr_t sh_pi_fsb_error_injection_regval; + struct { + mmr_t rp_pe_to_fsb : 1; + mmr_t ap0_pe_to_fsb : 1; + mmr_t ap1_pe_to_fsb : 1; + mmr_t rsp_pe_to_fsb : 1; + mmr_t dw0_ce_to_fsb : 1; + mmr_t dw0_uce_to_fsb : 1; + mmr_t dw1_ce_to_fsb : 1; + mmr_t dw1_uce_to_fsb : 1; + mmr_t ip0_pe_to_fsb : 1; + mmr_t ip1_pe_to_fsb : 1; + mmr_t reserved_0 : 6; + mmr_t rp_pe_from_fsb : 1; + mmr_t ap0_pe_from_fsb : 1; + mmr_t ap1_pe_from_fsb : 1; + mmr_t rsp_pe_from_fsb : 1; + mmr_t dw0_ce_from_fsb : 1; + mmr_t dw0_uce_from_fsb : 1; + mmr_t dw1_ce_from_fsb : 1; + mmr_t dw1_uce_from_fsb : 1; + mmr_t dw2_ce_from_fsb : 1; + mmr_t dw2_uce_from_fsb : 1; + mmr_t dw3_ce_from_fsb : 1; + mmr_t dw3_uce_from_fsb : 1; + mmr_t reserved_1 : 4; + mmr_t ioq_overrun : 1; + mmr_t livelock : 1; + mmr_t bus_hang : 1; + mmr_t reserved_2 : 29; + } sh_pi_fsb_error_injection_s; +} sh_pi_fsb_error_injection_u_t; +#else +typedef union sh_pi_fsb_error_injection_u { + mmr_t sh_pi_fsb_error_injection_regval; + struct { + mmr_t reserved_2 : 29; + mmr_t bus_hang : 1; + mmr_t livelock : 1; + mmr_t ioq_overrun : 1; + mmr_t reserved_1 : 4; + mmr_t dw3_uce_from_fsb : 1; + mmr_t dw3_ce_from_fsb : 1; + mmr_t dw2_uce_from_fsb : 1; + mmr_t dw2_ce_from_fsb : 1; + mmr_t dw1_uce_from_fsb : 1; + mmr_t dw1_ce_from_fsb : 1; + mmr_t dw0_uce_from_fsb : 1; + mmr_t dw0_ce_from_fsb : 1; + mmr_t rsp_pe_from_fsb : 1; + mmr_t ap1_pe_from_fsb : 1; + mmr_t ap0_pe_from_fsb : 1; + mmr_t rp_pe_from_fsb : 1; + mmr_t reserved_0 : 6; + mmr_t ip1_pe_to_fsb : 1; + mmr_t ip0_pe_to_fsb : 1; + mmr_t dw1_uce_to_fsb : 1; + mmr_t dw1_ce_to_fsb : 1; + mmr_t dw0_uce_to_fsb : 1; + mmr_t dw0_ce_to_fsb : 1; + mmr_t rsp_pe_to_fsb : 1; + mmr_t ap1_pe_to_fsb : 1; + mmr_t ap0_pe_to_fsb : 1; + mmr_t rp_pe_to_fsb : 1; + } sh_pi_fsb_error_injection_s; +} sh_pi_fsb_error_injection_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_MD2PI_REPLY_VC_CONFIG" */ +/* MD-to-PI Reply Virtual Channel Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_md2pi_reply_vc_config_u { + mmr_t sh_pi_md2pi_reply_vc_config_regval; + struct { + mmr_t hdr_depth : 4; + mmr_t data_depth : 4; + mmr_t max_credits : 6; + mmr_t reserved_0 : 48; + mmr_t force_credit : 1; + mmr_t capture_credit_status : 1; + } sh_pi_md2pi_reply_vc_config_s; +} sh_pi_md2pi_reply_vc_config_u_t; +#else +typedef union 
sh_pi_md2pi_reply_vc_config_u { + mmr_t sh_pi_md2pi_reply_vc_config_regval; + struct { + mmr_t capture_credit_status : 1; + mmr_t force_credit : 1; + mmr_t reserved_0 : 48; + mmr_t max_credits : 6; + mmr_t data_depth : 4; + mmr_t hdr_depth : 4; + } sh_pi_md2pi_reply_vc_config_s; +} sh_pi_md2pi_reply_vc_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_MD2PI_REQUEST_VC_CONFIG" */ +/* MD-to-PI Request Virtual Channel Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_md2pi_request_vc_config_u { + mmr_t sh_pi_md2pi_request_vc_config_regval; + struct { + mmr_t hdr_depth : 4; + mmr_t data_depth : 4; + mmr_t max_credits : 6; + mmr_t reserved_0 : 48; + mmr_t force_credit : 1; + mmr_t capture_credit_status : 1; + } sh_pi_md2pi_request_vc_config_s; +} sh_pi_md2pi_request_vc_config_u_t; +#else +typedef union sh_pi_md2pi_request_vc_config_u { + mmr_t sh_pi_md2pi_request_vc_config_regval; + struct { + mmr_t capture_credit_status : 1; + mmr_t force_credit : 1; + mmr_t reserved_0 : 48; + mmr_t max_credits : 6; + mmr_t data_depth : 4; + mmr_t hdr_depth : 4; + } sh_pi_md2pi_request_vc_config_s; +} sh_pi_md2pi_request_vc_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_QUEUE_ERROR_INJECTION" */ +/* PI Queue Error Injection */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_queue_error_injection_u { + mmr_t sh_pi_queue_error_injection_regval; + struct { + mmr_t dat_dfr_q : 1; + mmr_t dxb_wtl_cmnd_q : 1; + mmr_t fsb_wtl_cmnd_q : 1; + mmr_t mdpi_rpy_bfr : 1; + mmr_t ptc_intr : 1; + mmr_t rxl_kill_q : 1; + mmr_t rxl_rdy_q : 1; + mmr_t xnpi_rpy_bfr : 1; + mmr_t reserved_0 : 56; + } sh_pi_queue_error_injection_s; +} sh_pi_queue_error_injection_u_t; +#else +typedef union sh_pi_queue_error_injection_u { + mmr_t sh_pi_queue_error_injection_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t xnpi_rpy_bfr : 1; + mmr_t rxl_rdy_q : 1; + mmr_t rxl_kill_q : 1; + mmr_t ptc_intr : 1; + mmr_t mdpi_rpy_bfr : 1; + mmr_t fsb_wtl_cmnd_q : 1; + mmr_t dxb_wtl_cmnd_q : 1; + mmr_t dat_dfr_q : 1; + } sh_pi_queue_error_injection_s; +} sh_pi_queue_error_injection_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_TEST_POINT_COMPARE" */ +/* PI Test Point Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_test_point_compare_u { + mmr_t sh_pi_test_point_compare_regval; + struct { + mmr_t compare_mask : 32; + mmr_t compare_pattern : 32; + } sh_pi_test_point_compare_s; +} sh_pi_test_point_compare_u_t; +#else +typedef union sh_pi_test_point_compare_u { + mmr_t sh_pi_test_point_compare_regval; + struct { + mmr_t compare_pattern : 32; + mmr_t compare_mask : 32; + } sh_pi_test_point_compare_s; +} sh_pi_test_point_compare_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_TEST_POINT_SELECT" */ +/* PI Test Point Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_test_point_select_u { + mmr_t sh_pi_test_point_select_regval; + struct { + mmr_t nibble0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_chiplet_sel : 3; + 
mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t trigger_enable : 1; + } sh_pi_test_point_select_s; +} sh_pi_test_point_select_u_t; +#else +typedef union sh_pi_test_point_select_u { + mmr_t sh_pi_test_point_select_regval; + struct { + mmr_t trigger_enable : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_chiplet_sel : 3; + } sh_pi_test_point_select_s; +} sh_pi_test_point_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_TEST_POINT_TRIGGER_SELECT" */ +/* PI Test Point Trigger Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_test_point_trigger_select_u { + mmr_t sh_pi_test_point_trigger_select_regval; + struct { + mmr_t trigger0_chiplet_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger1_chiplet_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger2_chiplet_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger3_chiplet_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger4_chiplet_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger5_chiplet_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger6_chiplet_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger7_chiplet_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_pi_test_point_trigger_select_s; +} sh_pi_test_point_trigger_select_u_t; +#else +typedef union sh_pi_test_point_trigger_select_u { + mmr_t sh_pi_test_point_trigger_select_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_chiplet_sel : 3; + mmr_t reserved_13 : 1; + 
mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_chiplet_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_chiplet_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_chiplet_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_chiplet_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_chiplet_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_chiplet_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_chiplet_sel : 3; + } sh_pi_test_point_trigger_select_s; +} sh_pi_test_point_trigger_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_XN2PI_REPLY_VC_CONFIG" */ +/* XN-to-PI Reply Virtual Channel Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_xn2pi_reply_vc_config_u { + mmr_t sh_pi_xn2pi_reply_vc_config_regval; + struct { + mmr_t hdr_depth : 4; + mmr_t data_depth : 4; + mmr_t max_credits : 6; + mmr_t reserved_0 : 48; + mmr_t force_credit : 1; + mmr_t capture_credit_status : 1; + } sh_pi_xn2pi_reply_vc_config_s; +} sh_pi_xn2pi_reply_vc_config_u_t; +#else +typedef union sh_pi_xn2pi_reply_vc_config_u { + mmr_t sh_pi_xn2pi_reply_vc_config_regval; + struct { + mmr_t capture_credit_status : 1; + mmr_t force_credit : 1; + mmr_t reserved_0 : 48; + mmr_t max_credits : 6; + mmr_t data_depth : 4; + mmr_t hdr_depth : 4; + } sh_pi_xn2pi_reply_vc_config_s; +} sh_pi_xn2pi_reply_vc_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_XN2PI_REQUEST_VC_CONFIG" */ +/* XN-to-PI Request Virtual Channel Configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_xn2pi_request_vc_config_u { + mmr_t sh_pi_xn2pi_request_vc_config_regval; + struct { + mmr_t hdr_depth : 4; + mmr_t data_depth : 4; + mmr_t max_credits : 6; + mmr_t reserved_0 : 48; + mmr_t force_credit : 1; + mmr_t capture_credit_status : 1; + } sh_pi_xn2pi_request_vc_config_s; +} sh_pi_xn2pi_request_vc_config_u_t; +#else +typedef union sh_pi_xn2pi_request_vc_config_u { + mmr_t sh_pi_xn2pi_request_vc_config_regval; + struct { + mmr_t capture_credit_status : 1; + mmr_t force_credit : 1; + mmr_t reserved_0 : 48; + mmr_t max_credits : 6; + mmr_t data_depth : 4; + mmr_t hdr_depth : 4; + } sh_pi_xn2pi_request_vc_config_s; +} sh_pi_xn2pi_request_vc_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_AEC_STATUS" */ +/* PI Adaptive Error Correction Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_aec_status_u { + mmr_t sh_pi_aec_status_regval; + struct { + mmr_t state : 3; + mmr_t reserved_0 : 61; + } sh_pi_aec_status_s; +} sh_pi_aec_status_u_t; +#else +typedef union sh_pi_aec_status_u { + mmr_t sh_pi_aec_status_regval; + struct { + mmr_t reserved_0 : 61; + mmr_t state : 3; + } sh_pi_aec_status_s; +} sh_pi_aec_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_AFI_FIRST_ERROR" */ +/* PI AFI First Error 
*/ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_afi_first_error_u { + mmr_t sh_pi_afi_first_error_regval; + struct { + mmr_t reserved_0 : 7; + mmr_t fsb_shub_uce : 1; + mmr_t fsb_shub_ce : 1; + mmr_t reserved_1 : 12; + mmr_t hung_bus : 1; + mmr_t rsp_parity : 1; + mmr_t ioq_overrun : 1; + mmr_t req_format : 1; + mmr_t addr_access : 1; + mmr_t req_parity : 1; + mmr_t addr_parity : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_ce : 1; + mmr_t livelock : 1; + mmr_t bad_snoop : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t msg_len : 1; + mmr_t reserved_2 : 29; + } sh_pi_afi_first_error_s; +} sh_pi_afi_first_error_u_t; +#else +typedef union sh_pi_afi_first_error_u { + mmr_t sh_pi_afi_first_error_regval; + struct { + mmr_t reserved_2 : 29; + mmr_t msg_len : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t bad_snoop : 1; + mmr_t livelock : 1; + mmr_t shub_fsb_ce : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t addr_parity : 1; + mmr_t req_parity : 1; + mmr_t addr_access : 1; + mmr_t req_format : 1; + mmr_t ioq_overrun : 1; + mmr_t rsp_parity : 1; + mmr_t hung_bus : 1; + mmr_t reserved_1 : 12; + mmr_t fsb_shub_ce : 1; + mmr_t fsb_shub_uce : 1; + mmr_t reserved_0 : 7; + } sh_pi_afi_first_error_s; +} sh_pi_afi_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CAM_ADDRESS_READ_DATA" */ +/* CRB CAM MMR Address Read Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_cam_address_read_data_u { + mmr_t sh_pi_cam_address_read_data_regval; + struct { + mmr_t cam_addr : 48; + mmr_t reserved_0 : 15; + mmr_t cam_addr_val : 1; + } sh_pi_cam_address_read_data_s; +} sh_pi_cam_address_read_data_u_t; +#else +typedef union sh_pi_cam_address_read_data_u { + mmr_t sh_pi_cam_address_read_data_regval; + struct { + mmr_t cam_addr_val : 1; + mmr_t reserved_0 : 15; + mmr_t cam_addr : 48; + } sh_pi_cam_address_read_data_s; +} sh_pi_cam_address_read_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CAM_LPRA_READ_DATA" */ +/* CRB CAM MMR LPRA Read Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_cam_lpra_read_data_u { + mmr_t sh_pi_cam_lpra_read_data_regval; + struct { + mmr_t cam_lpra : 64; + } sh_pi_cam_lpra_read_data_s; +} sh_pi_cam_lpra_read_data_u_t; +#else +typedef union sh_pi_cam_lpra_read_data_u { + mmr_t sh_pi_cam_lpra_read_data_regval; + struct { + mmr_t cam_lpra : 64; + } sh_pi_cam_lpra_read_data_s; +} sh_pi_cam_lpra_read_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CAM_STATE_READ_DATA" */ +/* CRB CAM MMR State Read Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_cam_state_read_data_u { + mmr_t sh_pi_cam_state_read_data_regval; + struct { + mmr_t cam_state : 4; + mmr_t cam_to : 1; + mmr_t cam_state_rd_pend : 1; + mmr_t reserved_0 : 26; + mmr_t cam_lpra : 18; + mmr_t reserved_1 : 13; + mmr_t cam_rd_data_val : 1; + } sh_pi_cam_state_read_data_s; +} sh_pi_cam_state_read_data_u_t; +#else +typedef union sh_pi_cam_state_read_data_u { + mmr_t sh_pi_cam_state_read_data_regval; + struct { + mmr_t cam_rd_data_val : 1; + mmr_t reserved_1 : 13; + mmr_t cam_lpra : 18; + mmr_t reserved_0 : 26; 
+ mmr_t cam_state_rd_pend : 1; + mmr_t cam_to : 1; + mmr_t cam_state : 4; + } sh_pi_cam_state_read_data_s; +} sh_pi_cam_state_read_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CORRECTED_DETAIL_1" */ +/* PI Corrected Error Detail */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_corrected_detail_1_u { + mmr_t sh_pi_corrected_detail_1_regval; + struct { + mmr_t address : 48; + mmr_t syndrome : 8; + mmr_t dep : 8; + } sh_pi_corrected_detail_1_s; +} sh_pi_corrected_detail_1_u_t; +#else +typedef union sh_pi_corrected_detail_1_u { + mmr_t sh_pi_corrected_detail_1_regval; + struct { + mmr_t dep : 8; + mmr_t syndrome : 8; + mmr_t address : 48; + } sh_pi_corrected_detail_1_s; +} sh_pi_corrected_detail_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CORRECTED_DETAIL_2" */ +/* PI Corrected Error Detail 2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_corrected_detail_2_u { + mmr_t sh_pi_corrected_detail_2_regval; + struct { + mmr_t data : 64; + } sh_pi_corrected_detail_2_s; +} sh_pi_corrected_detail_2_u_t; +#else +typedef union sh_pi_corrected_detail_2_u { + mmr_t sh_pi_corrected_detail_2_regval; + struct { + mmr_t data : 64; + } sh_pi_corrected_detail_2_s; +} sh_pi_corrected_detail_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CORRECTED_DETAIL_3" */ +/* PI Corrected Error Detail 3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_corrected_detail_3_u { + mmr_t sh_pi_corrected_detail_3_regval; + struct { + mmr_t address : 48; + mmr_t syndrome : 8; + mmr_t dep : 8; + } sh_pi_corrected_detail_3_s; +} sh_pi_corrected_detail_3_u_t; +#else +typedef union sh_pi_corrected_detail_3_u { + mmr_t sh_pi_corrected_detail_3_regval; + struct { + mmr_t dep : 8; + mmr_t syndrome : 8; + mmr_t address : 48; + } sh_pi_corrected_detail_3_s; +} sh_pi_corrected_detail_3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CORRECTED_DETAIL_4" */ +/* PI Corrected Error Detail 4 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_corrected_detail_4_u { + mmr_t sh_pi_corrected_detail_4_regval; + struct { + mmr_t data : 64; + } sh_pi_corrected_detail_4_s; +} sh_pi_corrected_detail_4_u_t; +#else +typedef union sh_pi_corrected_detail_4_u { + mmr_t sh_pi_corrected_detail_4_regval; + struct { + mmr_t data : 64; + } sh_pi_corrected_detail_4_s; +} sh_pi_corrected_detail_4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_CRBP_FIRST_ERROR" */ +/* PI CRBP First Error */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_crbp_first_error_u { + mmr_t sh_pi_crbp_first_error_regval; + struct { + mmr_t fsb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t mem_rp_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_to_err : 1; + mmr_t fsb_shub_uce : 1; + mmr_t fsb_shub_ce : 1; + mmr_t msg_color_err : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t nack_oflow 
: 1; + mmr_t gfx_int_0 : 1; + mmr_t gfx_int_1 : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t reserved_0 : 43; + } sh_pi_crbp_first_error_s; +} sh_pi_crbp_first_error_u_t; +#else +typedef union sh_pi_crbp_first_error_u { + mmr_t sh_pi_crbp_first_error_regval; + struct { + mmr_t reserved_0 : 43; + mmr_t xn_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t gfx_int_1 : 1; + mmr_t gfx_int_0 : 1; + mmr_t nack_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t msg_color_err : 1; + mmr_t fsb_shub_ce : 1; + mmr_t fsb_shub_uce : 1; + mmr_t pio_to_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t fsb_proto_err : 1; + } sh_pi_crbp_first_error_s; +} sh_pi_crbp_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_DETAIL_1" */ +/* PI Error Detail 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_error_detail_1_u { + mmr_t sh_pi_error_detail_1_regval; + struct { + mmr_t status : 64; + } sh_pi_error_detail_1_s; +} sh_pi_error_detail_1_u_t; +#else +typedef union sh_pi_error_detail_1_u { + mmr_t sh_pi_error_detail_1_regval; + struct { + mmr_t status : 64; + } sh_pi_error_detail_1_s; +} sh_pi_error_detail_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_DETAIL_2" */ +/* PI Error Detail 2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_error_detail_2_u { + mmr_t sh_pi_error_detail_2_regval; + struct { + mmr_t status : 64; + } sh_pi_error_detail_2_s; +} sh_pi_error_detail_2_u_t; +#else +typedef union sh_pi_error_detail_2_u { + mmr_t sh_pi_error_detail_2_regval; + struct { + mmr_t status : 64; + } sh_pi_error_detail_2_s; +} sh_pi_error_detail_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_OVERFLOW" */ +/* PI Error Overflow */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_error_overflow_u { + mmr_t sh_pi_error_overflow_regval; + struct { + mmr_t fsb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t mem_rp_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_to_err : 1; + mmr_t fsb_shub_uce : 1; + mmr_t fsb_shub_ce : 1; + mmr_t msg_color_err : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t nack_oflow : 1; + mmr_t gfx_int_0 : 1; + mmr_t gfx_int_1 : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t hung_bus : 1; + mmr_t rsp_parity : 1; + mmr_t ioq_overrun : 1; + mmr_t req_format : 1; + mmr_t addr_access : 1; + mmr_t req_parity : 1; + mmr_t addr_parity : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_ce : 1; + mmr_t livelock : 1; + mmr_t bad_snoop : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t msg_length : 1; + mmr_t reserved_0 : 29; + } sh_pi_error_overflow_s; +} sh_pi_error_overflow_u_t; +#else +typedef union sh_pi_error_overflow_u { + mmr_t sh_pi_error_overflow_regval; + 
struct { + mmr_t reserved_0 : 29; + mmr_t msg_length : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t bad_snoop : 1; + mmr_t livelock : 1; + mmr_t shub_fsb_ce : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t addr_parity : 1; + mmr_t req_parity : 1; + mmr_t addr_access : 1; + mmr_t req_format : 1; + mmr_t ioq_overrun : 1; + mmr_t rsp_parity : 1; + mmr_t hung_bus : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t gfx_int_1 : 1; + mmr_t gfx_int_0 : 1; + mmr_t nack_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t msg_color_err : 1; + mmr_t fsb_shub_ce : 1; + mmr_t fsb_shub_uce : 1; + mmr_t pio_to_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t fsb_proto_err : 1; + } sh_pi_error_overflow_s; +} sh_pi_error_overflow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_ERROR_SUMMARY" */ +/* PI Error Summary */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_error_summary_u { + mmr_t sh_pi_error_summary_regval; + struct { + mmr_t fsb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t mem_rp_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_to_err : 1; + mmr_t fsb_shub_uce : 1; + mmr_t fsb_shub_ce : 1; + mmr_t msg_color_err : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t nack_oflow : 1; + mmr_t gfx_int_0 : 1; + mmr_t gfx_int_1 : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t hung_bus : 1; + mmr_t rsp_parity : 1; + mmr_t ioq_overrun : 1; + mmr_t req_format : 1; + mmr_t addr_access : 1; + mmr_t req_parity : 1; + mmr_t addr_parity : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_ce : 1; + mmr_t livelock : 1; + mmr_t bad_snoop : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t msg_length : 1; + mmr_t reserved_0 : 29; + } sh_pi_error_summary_s; +} sh_pi_error_summary_u_t; +#else +typedef union sh_pi_error_summary_u { + mmr_t sh_pi_error_summary_regval; + struct { + mmr_t reserved_0 : 29; + mmr_t msg_length : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t bad_snoop : 1; + mmr_t livelock : 1; + mmr_t shub_fsb_ce : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t addr_parity : 1; + mmr_t req_parity : 1; + mmr_t addr_access : 1; + mmr_t req_format : 1; + mmr_t ioq_overrun : 1; + mmr_t rsp_parity : 1; + mmr_t hung_bus : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t gfx_int_1 : 1; + mmr_t gfx_int_0 : 1; + mmr_t nack_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t msg_color_err : 1; + mmr_t fsb_shub_ce : 1; + mmr_t fsb_shub_uce : 1; + mmr_t pio_to_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t fsb_proto_err : 1; + } sh_pi_error_summary_s; +} sh_pi_error_summary_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_EXPRESS_REPLY_STATUS" */ +/* PI Express Reply Status */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_express_reply_status_u { + mmr_t sh_pi_express_reply_status_regval; + struct { + mmr_t state : 3; + mmr_t reserved_0 : 61; + } sh_pi_express_reply_status_s; +} sh_pi_express_reply_status_u_t; +#else +typedef union sh_pi_express_reply_status_u { + mmr_t sh_pi_express_reply_status_regval; + struct { + mmr_t reserved_0 : 61; + mmr_t state : 3; + } sh_pi_express_reply_status_s; +} sh_pi_express_reply_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_FIRST_ERROR" */ +/* PI First Error */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_first_error_u { + mmr_t sh_pi_first_error_regval; + struct { + mmr_t fsb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t mem_rp_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_to_err : 1; + mmr_t fsb_shub_uce : 1; + mmr_t fsb_shub_ce : 1; + mmr_t msg_color_err : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t nack_oflow : 1; + mmr_t gfx_int_0 : 1; + mmr_t gfx_int_1 : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t hung_bus : 1; + mmr_t rsp_parity : 1; + mmr_t ioq_overrun : 1; + mmr_t req_format : 1; + mmr_t addr_access : 1; + mmr_t req_parity : 1; + mmr_t addr_parity : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_ce : 1; + mmr_t livelock : 1; + mmr_t bad_snoop : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t msg_length : 1; + mmr_t reserved_0 : 29; + } sh_pi_first_error_s; +} sh_pi_first_error_u_t; +#else +typedef union sh_pi_first_error_u { + mmr_t sh_pi_first_error_regval; + struct { + mmr_t reserved_0 : 29; + mmr_t msg_length : 1; + mmr_t fsb_tbl_miss : 1; + mmr_t bad_snoop : 1; + mmr_t livelock : 1; + mmr_t shub_fsb_ce : 1; + mmr_t shub_fsb_uce : 1; + mmr_t shub_fsb_dqe : 1; + mmr_t addr_parity : 1; + mmr_t req_parity : 1; + mmr_t addr_access : 1; + mmr_t req_format : 1; + mmr_t ioq_overrun : 1; + mmr_t rsp_parity : 1; + mmr_t hung_bus : 1; + mmr_t xn_rp_crd_oflow : 1; + mmr_t xn_rq_crd_oflow : 1; + mmr_t md_rp_crd_oflow : 1; + mmr_t md_rq_crd_oflow : 1; + mmr_t gfx_int_1 : 1; + mmr_t gfx_int_0 : 1; + mmr_t nack_oflow : 1; + mmr_t xn_rp_q_oflow : 1; + mmr_t xn_rq_q_oflow : 1; + mmr_t md_rp_q_oflow : 1; + mmr_t md_rq_q_oflow : 1; + mmr_t msg_color_err : 1; + mmr_t fsb_shub_ce : 1; + mmr_t fsb_shub_uce : 1; + mmr_t pio_to_err : 1; + mmr_t mem_to_err : 1; + mmr_t pio_rp_err : 1; + mmr_t mem_rp_err : 1; + mmr_t xb_proto_err : 1; + mmr_t gfx_rp_err : 1; + mmr_t fsb_proto_err : 1; + } sh_pi_first_error_s; +} sh_pi_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_PI2MD_REPLY_VC_STATUS" */ +/* PI-to-MD Reply Virtual Channel Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_pi2md_reply_vc_status_u { + mmr_t sh_pi_pi2md_reply_vc_status_regval; + struct { + mmr_t output_crd_stat : 6; + mmr_t reserved_0 : 58; + } sh_pi_pi2md_reply_vc_status_s; +} sh_pi_pi2md_reply_vc_status_u_t; +#else +typedef union sh_pi_pi2md_reply_vc_status_u { + mmr_t sh_pi_pi2md_reply_vc_status_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t output_crd_stat : 6; + } 
sh_pi_pi2md_reply_vc_status_s; +} sh_pi_pi2md_reply_vc_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_PI2MD_REQUEST_VC_STATUS" */ +/* PI-to-MD Request Virtual Channel Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_pi2md_request_vc_status_u { + mmr_t sh_pi_pi2md_request_vc_status_regval; + struct { + mmr_t output_crd_stat : 6; + mmr_t reserved_0 : 58; + } sh_pi_pi2md_request_vc_status_s; +} sh_pi_pi2md_request_vc_status_u_t; +#else +typedef union sh_pi_pi2md_request_vc_status_u { + mmr_t sh_pi_pi2md_request_vc_status_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t output_crd_stat : 6; + } sh_pi_pi2md_request_vc_status_s; +} sh_pi_pi2md_request_vc_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_PI2XN_REPLY_VC_STATUS" */ +/* PI-to-XN Reply Virtual Channel Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_pi2xn_reply_vc_status_u { + mmr_t sh_pi_pi2xn_reply_vc_status_regval; + struct { + mmr_t output_crd_stat : 6; + mmr_t reserved_0 : 58; + } sh_pi_pi2xn_reply_vc_status_s; +} sh_pi_pi2xn_reply_vc_status_u_t; +#else +typedef union sh_pi_pi2xn_reply_vc_status_u { + mmr_t sh_pi_pi2xn_reply_vc_status_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t output_crd_stat : 6; + } sh_pi_pi2xn_reply_vc_status_s; +} sh_pi_pi2xn_reply_vc_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_PI2XN_REQUEST_VC_STATUS" */ +/* PI-to-XN Request Virtual Channel Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_pi2xn_request_vc_status_u { + mmr_t sh_pi_pi2xn_request_vc_status_regval; + struct { + mmr_t output_crd_stat : 6; + mmr_t reserved_0 : 58; + } sh_pi_pi2xn_request_vc_status_s; +} sh_pi_pi2xn_request_vc_status_u_t; +#else +typedef union sh_pi_pi2xn_request_vc_status_u { + mmr_t sh_pi_pi2xn_request_vc_status_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t output_crd_stat : 6; + } sh_pi_pi2xn_request_vc_status_s; +} sh_pi_pi2xn_request_vc_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_UNCORRECTED_DETAIL_1" */ +/* PI Uncorrected Error Detail 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_uncorrected_detail_1_u { + mmr_t sh_pi_uncorrected_detail_1_regval; + struct { + mmr_t address : 48; + mmr_t syndrome : 8; + mmr_t dep : 8; + } sh_pi_uncorrected_detail_1_s; +} sh_pi_uncorrected_detail_1_u_t; +#else +typedef union sh_pi_uncorrected_detail_1_u { + mmr_t sh_pi_uncorrected_detail_1_regval; + struct { + mmr_t dep : 8; + mmr_t syndrome : 8; + mmr_t address : 48; + } sh_pi_uncorrected_detail_1_s; +} sh_pi_uncorrected_detail_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_UNCORRECTED_DETAIL_2" */ +/* PI Uncorrected Error Detail 2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_uncorrected_detail_2_u { + mmr_t sh_pi_uncorrected_detail_2_regval; + struct { + mmr_t data : 64; + } sh_pi_uncorrected_detail_2_s; +} sh_pi_uncorrected_detail_2_u_t; +#else +typedef union 
sh_pi_uncorrected_detail_2_u { + mmr_t sh_pi_uncorrected_detail_2_regval; + struct { + mmr_t data : 64; + } sh_pi_uncorrected_detail_2_s; +} sh_pi_uncorrected_detail_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_UNCORRECTED_DETAIL_3" */ +/* PI Uncorrected Error Detail 3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_uncorrected_detail_3_u { + mmr_t sh_pi_uncorrected_detail_3_regval; + struct { + mmr_t address : 48; + mmr_t syndrome : 8; + mmr_t dep : 8; + } sh_pi_uncorrected_detail_3_s; +} sh_pi_uncorrected_detail_3_u_t; +#else +typedef union sh_pi_uncorrected_detail_3_u { + mmr_t sh_pi_uncorrected_detail_3_regval; + struct { + mmr_t dep : 8; + mmr_t syndrome : 8; + mmr_t address : 48; + } sh_pi_uncorrected_detail_3_s; +} sh_pi_uncorrected_detail_3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_UNCORRECTED_DETAIL_4" */ +/* PI Uncorrected Error Detail 4 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_uncorrected_detail_4_u { + mmr_t sh_pi_uncorrected_detail_4_regval; + struct { + mmr_t data : 64; + } sh_pi_uncorrected_detail_4_s; +} sh_pi_uncorrected_detail_4_u_t; +#else +typedef union sh_pi_uncorrected_detail_4_u { + mmr_t sh_pi_uncorrected_detail_4_regval; + struct { + mmr_t data : 64; + } sh_pi_uncorrected_detail_4_s; +} sh_pi_uncorrected_detail_4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_MD2PI_REPLY_VC_STATUS" */ +/* MD-to-PI Reply Virtual Channel Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_md2pi_reply_vc_status_u { + mmr_t sh_pi_md2pi_reply_vc_status_regval; + struct { + mmr_t input_hdr_crd_stat : 4; + mmr_t input_dat_crd_stat : 4; + mmr_t input_queue_stat : 4; + mmr_t reserved_0 : 52; + } sh_pi_md2pi_reply_vc_status_s; +} sh_pi_md2pi_reply_vc_status_u_t; +#else +typedef union sh_pi_md2pi_reply_vc_status_u { + mmr_t sh_pi_md2pi_reply_vc_status_regval; + struct { + mmr_t reserved_0 : 52; + mmr_t input_queue_stat : 4; + mmr_t input_dat_crd_stat : 4; + mmr_t input_hdr_crd_stat : 4; + } sh_pi_md2pi_reply_vc_status_s; +} sh_pi_md2pi_reply_vc_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_MD2PI_REQUEST_VC_STATUS" */ +/* MD-to-PI Request Virtual Channel Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_md2pi_request_vc_status_u { + mmr_t sh_pi_md2pi_request_vc_status_regval; + struct { + mmr_t input_hdr_crd_stat : 4; + mmr_t input_dat_crd_stat : 4; + mmr_t input_queue_stat : 4; + mmr_t reserved_0 : 52; + } sh_pi_md2pi_request_vc_status_s; +} sh_pi_md2pi_request_vc_status_u_t; +#else +typedef union sh_pi_md2pi_request_vc_status_u { + mmr_t sh_pi_md2pi_request_vc_status_regval; + struct { + mmr_t reserved_0 : 52; + mmr_t input_queue_stat : 4; + mmr_t input_dat_crd_stat : 4; + mmr_t input_hdr_crd_stat : 4; + } sh_pi_md2pi_request_vc_status_s; +} sh_pi_md2pi_request_vc_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_XN2PI_REPLY_VC_STATUS" */ +/* XN-to-PI Reply Virtual Channel Status */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_xn2pi_reply_vc_status_u { + mmr_t sh_pi_xn2pi_reply_vc_status_regval; + struct { + mmr_t input_hdr_crd_stat : 4; + mmr_t input_dat_crd_stat : 4; + mmr_t input_queue_stat : 4; + mmr_t reserved_0 : 52; + } sh_pi_xn2pi_reply_vc_status_s; +} sh_pi_xn2pi_reply_vc_status_u_t; +#else +typedef union sh_pi_xn2pi_reply_vc_status_u { + mmr_t sh_pi_xn2pi_reply_vc_status_regval; + struct { + mmr_t reserved_0 : 52; + mmr_t input_queue_stat : 4; + mmr_t input_dat_crd_stat : 4; + mmr_t input_hdr_crd_stat : 4; + } sh_pi_xn2pi_reply_vc_status_s; +} sh_pi_xn2pi_reply_vc_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_XN2PI_REQUEST_VC_STATUS" */ +/* XN-to-PI Request Virtual Channel Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_xn2pi_request_vc_status_u { + mmr_t sh_pi_xn2pi_request_vc_status_regval; + struct { + mmr_t input_hdr_crd_stat : 4; + mmr_t input_dat_crd_stat : 4; + mmr_t input_queue_stat : 4; + mmr_t reserved_0 : 52; + } sh_pi_xn2pi_request_vc_status_s; +} sh_pi_xn2pi_request_vc_status_u_t; +#else +typedef union sh_pi_xn2pi_request_vc_status_u { + mmr_t sh_pi_xn2pi_request_vc_status_regval; + struct { + mmr_t reserved_0 : 52; + mmr_t input_queue_stat : 4; + mmr_t input_dat_crd_stat : 4; + mmr_t input_hdr_crd_stat : 4; + } sh_pi_xn2pi_request_vc_status_s; +} sh_pi_xn2pi_request_vc_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_SIC_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_sic_flow_u { + mmr_t sh_xnpi_sic_flow_regval; + struct { + mmr_t debit_vc0_withhold : 5; + mmr_t reserved_0 : 2; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 5; + mmr_t reserved_1 : 2; + mmr_t debit_vc2_force_cred : 1; + mmr_t credit_vc0_test : 5; + mmr_t reserved_2 : 3; + mmr_t credit_vc0_dyn : 5; + mmr_t reserved_3 : 3; + mmr_t credit_vc0_cap : 5; + mmr_t reserved_4 : 3; + mmr_t credit_vc2_test : 5; + mmr_t reserved_5 : 3; + mmr_t credit_vc2_dyn : 5; + mmr_t reserved_6 : 3; + mmr_t credit_vc2_cap : 5; + mmr_t reserved_7 : 2; + mmr_t disable_bypass_out : 1; + } sh_xnpi_sic_flow_s; +} sh_xnpi_sic_flow_u_t; +#else +typedef union sh_xnpi_sic_flow_u { + mmr_t sh_xnpi_sic_flow_regval; + struct { + mmr_t disable_bypass_out : 1; + mmr_t reserved_7 : 2; + mmr_t credit_vc2_cap : 5; + mmr_t reserved_6 : 3; + mmr_t credit_vc2_dyn : 5; + mmr_t reserved_5 : 3; + mmr_t credit_vc2_test : 5; + mmr_t reserved_4 : 3; + mmr_t credit_vc0_cap : 5; + mmr_t reserved_3 : 3; + mmr_t credit_vc0_dyn : 5; + mmr_t reserved_2 : 3; + mmr_t credit_vc0_test : 5; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 2; + mmr_t debit_vc2_withhold : 5; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 2; + mmr_t debit_vc0_withhold : 5; + } sh_xnpi_sic_flow_s; +} sh_xnpi_sic_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_TO_NI0_PORT_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_to_ni0_port_flow_u { + mmr_t sh_xnpi_to_ni0_port_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold 
: 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnpi_to_ni0_port_flow_s; +} sh_xnpi_to_ni0_port_flow_u_t; +#else +typedef union sh_xnpi_to_ni0_port_flow_u { + mmr_t sh_xnpi_to_ni0_port_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnpi_to_ni0_port_flow_s; +} sh_xnpi_to_ni0_port_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_TO_NI1_PORT_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_to_ni1_port_flow_u { + mmr_t sh_xnpi_to_ni1_port_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnpi_to_ni1_port_flow_s; +} sh_xnpi_to_ni1_port_flow_u_t; +#else +typedef union sh_xnpi_to_ni1_port_flow_u { + mmr_t sh_xnpi_to_ni1_port_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnpi_to_ni1_port_flow_s; +} sh_xnpi_to_ni1_port_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_TO_IILB_PORT_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_to_iilb_port_flow_u { + mmr_t sh_xnpi_to_iilb_port_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnpi_to_iilb_port_flow_s; +} sh_xnpi_to_iilb_port_flow_u_t; +#else +typedef union sh_xnpi_to_iilb_port_flow_u { + mmr_t sh_xnpi_to_iilb_port_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t 
debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnpi_to_iilb_port_flow_s; +} sh_xnpi_to_iilb_port_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_FR_NI0_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_fr_ni0_port_flow_fifo_u { + mmr_t sh_xnpi_fr_ni0_port_flow_fifo_regval; + struct { + mmr_t entry_vc0_dyn : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_3 : 2; + mmr_t entry_vc0_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc2_test : 5; + mmr_t reserved_5 : 19; + } sh_xnpi_fr_ni0_port_flow_fifo_s; +} sh_xnpi_fr_ni0_port_flow_fifo_u_t; +#else +typedef union sh_xnpi_fr_ni0_port_flow_fifo_u { + mmr_t sh_xnpi_fr_ni0_port_flow_fifo_regval; + struct { + mmr_t reserved_5 : 19; + mmr_t entry_vc2_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc0_test : 5; + mmr_t reserved_3 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_dyn : 6; + } sh_xnpi_fr_ni0_port_flow_fifo_s; +} sh_xnpi_fr_ni0_port_flow_fifo_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_FR_NI1_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_fr_ni1_port_flow_fifo_u { + mmr_t sh_xnpi_fr_ni1_port_flow_fifo_regval; + struct { + mmr_t entry_vc0_dyn : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_3 : 2; + mmr_t entry_vc0_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc2_test : 5; + mmr_t reserved_5 : 19; + } sh_xnpi_fr_ni1_port_flow_fifo_s; +} sh_xnpi_fr_ni1_port_flow_fifo_u_t; +#else +typedef union sh_xnpi_fr_ni1_port_flow_fifo_u { + mmr_t sh_xnpi_fr_ni1_port_flow_fifo_regval; + struct { + mmr_t reserved_5 : 19; + mmr_t entry_vc2_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc0_test : 5; + mmr_t reserved_3 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_dyn : 6; + } sh_xnpi_fr_ni1_port_flow_fifo_s; +} sh_xnpi_fr_ni1_port_flow_fifo_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_FR_IILB_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_fr_iilb_port_flow_fifo_u { + mmr_t sh_xnpi_fr_iilb_port_flow_fifo_regval; + struct { + mmr_t entry_vc0_dyn : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_3 : 2; + mmr_t entry_vc0_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc2_test : 5; + mmr_t reserved_5 : 19; + } sh_xnpi_fr_iilb_port_flow_fifo_s; +} sh_xnpi_fr_iilb_port_flow_fifo_u_t; +#else +typedef union sh_xnpi_fr_iilb_port_flow_fifo_u { + mmr_t sh_xnpi_fr_iilb_port_flow_fifo_regval; + struct { + mmr_t reserved_5 : 19; + mmr_t entry_vc2_test : 5; + mmr_t reserved_4 : 3; + mmr_t 
entry_vc0_test : 5; + mmr_t reserved_3 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_dyn : 6; + } sh_xnpi_fr_iilb_port_flow_fifo_s; +} sh_xnpi_fr_iilb_port_flow_fifo_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_SIC_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_sic_flow_u { + mmr_t sh_xnmd_sic_flow_regval; + struct { + mmr_t debit_vc0_withhold : 5; + mmr_t reserved_0 : 2; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 5; + mmr_t reserved_1 : 2; + mmr_t debit_vc2_force_cred : 1; + mmr_t credit_vc0_test : 5; + mmr_t reserved_2 : 3; + mmr_t credit_vc0_dyn : 5; + mmr_t reserved_3 : 3; + mmr_t credit_vc0_cap : 5; + mmr_t reserved_4 : 3; + mmr_t credit_vc2_test : 5; + mmr_t reserved_5 : 3; + mmr_t credit_vc2_dyn : 5; + mmr_t reserved_6 : 3; + mmr_t credit_vc2_cap : 5; + mmr_t reserved_7 : 2; + mmr_t disable_bypass_out : 1; + } sh_xnmd_sic_flow_s; +} sh_xnmd_sic_flow_u_t; +#else +typedef union sh_xnmd_sic_flow_u { + mmr_t sh_xnmd_sic_flow_regval; + struct { + mmr_t disable_bypass_out : 1; + mmr_t reserved_7 : 2; + mmr_t credit_vc2_cap : 5; + mmr_t reserved_6 : 3; + mmr_t credit_vc2_dyn : 5; + mmr_t reserved_5 : 3; + mmr_t credit_vc2_test : 5; + mmr_t reserved_4 : 3; + mmr_t credit_vc0_cap : 5; + mmr_t reserved_3 : 3; + mmr_t credit_vc0_dyn : 5; + mmr_t reserved_2 : 3; + mmr_t credit_vc0_test : 5; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 2; + mmr_t debit_vc2_withhold : 5; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 2; + mmr_t debit_vc0_withhold : 5; + } sh_xnmd_sic_flow_s; +} sh_xnmd_sic_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_TO_NI0_PORT_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_to_ni0_port_flow_u { + mmr_t sh_xnmd_to_ni0_port_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnmd_to_ni0_port_flow_s; +} sh_xnmd_to_ni0_port_flow_u_t; +#else +typedef union sh_xnmd_to_ni0_port_flow_u { + mmr_t sh_xnmd_to_ni0_port_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnmd_to_ni0_port_flow_s; +} sh_xnmd_to_ni0_port_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_TO_NI1_PORT_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_to_ni1_port_flow_u { + mmr_t sh_xnmd_to_ni1_port_flow_regval; + struct { + 
mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnmd_to_ni1_port_flow_s; +} sh_xnmd_to_ni1_port_flow_u_t; +#else +typedef union sh_xnmd_to_ni1_port_flow_u { + mmr_t sh_xnmd_to_ni1_port_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnmd_to_ni1_port_flow_s; +} sh_xnmd_to_ni1_port_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_TO_IILB_PORT_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_to_iilb_port_flow_u { + mmr_t sh_xnmd_to_iilb_port_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_6 : 2; + } sh_xnmd_to_iilb_port_flow_s; +} sh_xnmd_to_iilb_port_flow_u_t; +#else +typedef union sh_xnmd_to_iilb_port_flow_u { + mmr_t sh_xnmd_to_iilb_port_flow_regval; + struct { + mmr_t reserved_6 : 2; + mmr_t credit_vc2_cap : 6; + mmr_t reserved_5 : 2; + mmr_t credit_vc2_dyn : 6; + mmr_t reserved_4 : 10; + mmr_t credit_vc0_cap : 6; + mmr_t reserved_3 : 2; + mmr_t credit_vc0_dyn : 6; + mmr_t reserved_2 : 8; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnmd_to_iilb_port_flow_s; +} sh_xnmd_to_iilb_port_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_FR_NI0_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_fr_ni0_port_flow_fifo_u { + mmr_t sh_xnmd_fr_ni0_port_flow_fifo_regval; + struct { + mmr_t entry_vc0_dyn : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_3 : 2; + mmr_t entry_vc0_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc2_test : 5; + mmr_t reserved_5 : 19; + } sh_xnmd_fr_ni0_port_flow_fifo_s; +} sh_xnmd_fr_ni0_port_flow_fifo_u_t; +#else +typedef union sh_xnmd_fr_ni0_port_flow_fifo_u { + mmr_t sh_xnmd_fr_ni0_port_flow_fifo_regval; + struct { + mmr_t reserved_5 : 19; + mmr_t entry_vc2_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc0_test : 5; + mmr_t reserved_3 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t 
reserved_0 : 2; + mmr_t entry_vc0_dyn : 6; + } sh_xnmd_fr_ni0_port_flow_fifo_s; +} sh_xnmd_fr_ni0_port_flow_fifo_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_FR_NI1_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_fr_ni1_port_flow_fifo_u { + mmr_t sh_xnmd_fr_ni1_port_flow_fifo_regval; + struct { + mmr_t entry_vc0_dyn : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_3 : 2; + mmr_t entry_vc0_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc2_test : 5; + mmr_t reserved_5 : 19; + } sh_xnmd_fr_ni1_port_flow_fifo_s; +} sh_xnmd_fr_ni1_port_flow_fifo_u_t; +#else +typedef union sh_xnmd_fr_ni1_port_flow_fifo_u { + mmr_t sh_xnmd_fr_ni1_port_flow_fifo_regval; + struct { + mmr_t reserved_5 : 19; + mmr_t entry_vc2_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc0_test : 5; + mmr_t reserved_3 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_dyn : 6; + } sh_xnmd_fr_ni1_port_flow_fifo_s; +} sh_xnmd_fr_ni1_port_flow_fifo_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_FR_IILB_PORT_FLOW_FIFO" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_fr_iilb_port_flow_fifo_u { + mmr_t sh_xnmd_fr_iilb_port_flow_fifo_regval; + struct { + mmr_t entry_vc0_dyn : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_3 : 2; + mmr_t entry_vc0_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc2_test : 5; + mmr_t reserved_5 : 19; + } sh_xnmd_fr_iilb_port_flow_fifo_s; +} sh_xnmd_fr_iilb_port_flow_fifo_u_t; +#else +typedef union sh_xnmd_fr_iilb_port_flow_fifo_u { + mmr_t sh_xnmd_fr_iilb_port_flow_fifo_regval; + struct { + mmr_t reserved_5 : 19; + mmr_t entry_vc2_test : 5; + mmr_t reserved_4 : 3; + mmr_t entry_vc0_test : 5; + mmr_t reserved_3 : 2; + mmr_t entry_vc2_cap : 6; + mmr_t reserved_2 : 2; + mmr_t entry_vc2_dyn : 6; + mmr_t reserved_1 : 2; + mmr_t entry_vc0_cap : 6; + mmr_t reserved_0 : 2; + mmr_t entry_vc0_dyn : 6; + } sh_xnmd_fr_iilb_port_flow_fifo_s; +} sh_xnmd_fr_iilb_port_flow_fifo_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNII_INTRA_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnii_intra_flow_u { + mmr_t sh_xnii_intra_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t credit_vc0_test : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t credit_vc0_cap : 7; + mmr_t reserved_4 : 1; + mmr_t credit_vc2_test : 7; + mmr_t reserved_5 : 1; + mmr_t credit_vc2_dyn : 7; + mmr_t reserved_6 : 1; + mmr_t credit_vc2_cap : 7; + mmr_t reserved_7 : 1; + } sh_xnii_intra_flow_s; +} sh_xnii_intra_flow_u_t; +#else +typedef union sh_xnii_intra_flow_u { + mmr_t sh_xnii_intra_flow_regval; + struct { + mmr_t reserved_7 : 1; + mmr_t 
credit_vc2_cap : 7; + mmr_t reserved_6 : 1; + mmr_t credit_vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t credit_vc2_test : 7; + mmr_t reserved_4 : 1; + mmr_t credit_vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t credit_vc0_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc0_test : 7; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnii_intra_flow_s; +} sh_xnii_intra_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNLB_INTRA_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnlb_intra_flow_u { + mmr_t sh_xnlb_intra_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t credit_vc0_test : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t credit_vc0_cap : 7; + mmr_t reserved_4 : 1; + mmr_t credit_vc2_test : 7; + mmr_t reserved_5 : 1; + mmr_t credit_vc2_dyn : 7; + mmr_t reserved_6 : 1; + mmr_t credit_vc2_cap : 7; + mmr_t disable_bypass_in : 1; + } sh_xnlb_intra_flow_s; +} sh_xnlb_intra_flow_u_t; +#else +typedef union sh_xnlb_intra_flow_u { + mmr_t sh_xnlb_intra_flow_regval; + struct { + mmr_t disable_bypass_in : 1; + mmr_t credit_vc2_cap : 7; + mmr_t reserved_6 : 1; + mmr_t credit_vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t credit_vc2_test : 7; + mmr_t reserved_4 : 1; + mmr_t credit_vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t credit_vc0_dyn : 7; + mmr_t reserved_2 : 1; + mmr_t credit_vc0_test : 7; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t debit_vc2_withhold : 6; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnlb_intra_flow_s; +} sh_xnlb_intra_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_to_ni0_intra_flow_debit_u { + mmr_t sh_xniilb_to_ni0_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xniilb_to_ni0_intra_flow_debit_s; +} sh_xniilb_to_ni0_intra_flow_debit_u_t; +#else +typedef union sh_xniilb_to_ni0_intra_flow_debit_u { + mmr_t sh_xniilb_to_ni0_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xniilb_to_ni0_intra_flow_debit_s; +} sh_xniilb_to_ni0_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT" */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_to_ni1_intra_flow_debit_u { + mmr_t sh_xniilb_to_ni1_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xniilb_to_ni1_intra_flow_debit_s; +} sh_xniilb_to_ni1_intra_flow_debit_u_t; +#else +typedef union sh_xniilb_to_ni1_intra_flow_debit_u { + mmr_t sh_xniilb_to_ni1_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xniilb_to_ni1_intra_flow_debit_s; +} sh_xniilb_to_ni1_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_to_md_intra_flow_debit_u { + mmr_t sh_xniilb_to_md_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xniilb_to_md_intra_flow_debit_s; +} sh_xniilb_to_md_intra_flow_debit_u_t; +#else +typedef union sh_xniilb_to_md_intra_flow_debit_u { + mmr_t sh_xniilb_to_md_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xniilb_to_md_intra_flow_debit_s; +} sh_xniilb_to_md_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_to_iilb_intra_flow_debit_u { + mmr_t sh_xniilb_to_iilb_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xniilb_to_iilb_intra_flow_debit_s; +} sh_xniilb_to_iilb_intra_flow_debit_u_t; +#else +typedef union sh_xniilb_to_iilb_intra_flow_debit_u { + mmr_t sh_xniilb_to_iilb_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 
: 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xniilb_to_iilb_intra_flow_debit_s; +} sh_xniilb_to_iilb_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_to_pi_intra_flow_debit_u { + mmr_t sh_xniilb_to_pi_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xniilb_to_pi_intra_flow_debit_s; +} sh_xniilb_to_pi_intra_flow_debit_u_t; +#else +typedef union sh_xniilb_to_pi_intra_flow_debit_u { + mmr_t sh_xniilb_to_pi_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xniilb_to_pi_intra_flow_debit_s; +} sh_xniilb_to_pi_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_fr_ni0_intra_flow_credit_u { + mmr_t sh_xniilb_fr_ni0_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xniilb_fr_ni0_intra_flow_credit_s; +} sh_xniilb_fr_ni0_intra_flow_credit_u_t; +#else +typedef union sh_xniilb_fr_ni0_intra_flow_credit_u { + mmr_t sh_xniilb_fr_ni0_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xniilb_fr_ni0_intra_flow_credit_s; +} sh_xniilb_fr_ni0_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_fr_ni1_intra_flow_credit_u { + mmr_t sh_xniilb_fr_ni1_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + 
} sh_xniilb_fr_ni1_intra_flow_credit_s; +} sh_xniilb_fr_ni1_intra_flow_credit_u_t; +#else +typedef union sh_xniilb_fr_ni1_intra_flow_credit_u { + mmr_t sh_xniilb_fr_ni1_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xniilb_fr_ni1_intra_flow_credit_s; +} sh_xniilb_fr_ni1_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_fr_md_intra_flow_credit_u { + mmr_t sh_xniilb_fr_md_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xniilb_fr_md_intra_flow_credit_s; +} sh_xniilb_fr_md_intra_flow_credit_u_t; +#else +typedef union sh_xniilb_fr_md_intra_flow_credit_u { + mmr_t sh_xniilb_fr_md_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xniilb_fr_md_intra_flow_credit_s; +} sh_xniilb_fr_md_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_fr_iilb_intra_flow_credit_u { + mmr_t sh_xniilb_fr_iilb_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xniilb_fr_iilb_intra_flow_credit_s; +} sh_xniilb_fr_iilb_intra_flow_credit_u_t; +#else +typedef union sh_xniilb_fr_iilb_intra_flow_credit_u { + mmr_t sh_xniilb_fr_iilb_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xniilb_fr_iilb_intra_flow_credit_s; +} sh_xniilb_fr_iilb_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_fr_pi_intra_flow_credit_u { + mmr_t sh_xniilb_fr_pi_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + 
mmr_t reserved_5 : 17; + } sh_xniilb_fr_pi_intra_flow_credit_s; +} sh_xniilb_fr_pi_intra_flow_credit_u_t; +#else +typedef union sh_xniilb_fr_pi_intra_flow_credit_u { + mmr_t sh_xniilb_fr_pi_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xniilb_fr_pi_intra_flow_credit_s; +} sh_xniilb_fr_pi_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_to_pi_intra_flow_debit_u { + mmr_t sh_xnni0_to_pi_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xnni0_to_pi_intra_flow_debit_s; +} sh_xnni0_to_pi_intra_flow_debit_u_t; +#else +typedef union sh_xnni0_to_pi_intra_flow_debit_u { + mmr_t sh_xnni0_to_pi_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xnni0_to_pi_intra_flow_debit_s; +} sh_xnni0_to_pi_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_to_md_intra_flow_debit_u { + mmr_t sh_xnni0_to_md_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xnni0_to_md_intra_flow_debit_s; +} sh_xnni0_to_md_intra_flow_debit_u_t; +#else +typedef union sh_xnni0_to_md_intra_flow_debit_u { + mmr_t sh_xnni0_to_md_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xnni0_to_md_intra_flow_debit_s; +} sh_xnni0_to_md_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_to_iilb_intra_flow_debit_u 
{ + mmr_t sh_xnni0_to_iilb_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xnni0_to_iilb_intra_flow_debit_s; +} sh_xnni0_to_iilb_intra_flow_debit_u_t; +#else +typedef union sh_xnni0_to_iilb_intra_flow_debit_u { + mmr_t sh_xnni0_to_iilb_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xnni0_to_iilb_intra_flow_debit_s; +} sh_xnni0_to_iilb_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_fr_pi_intra_flow_credit_u { + mmr_t sh_xnni0_fr_pi_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xnni0_fr_pi_intra_flow_credit_s; +} sh_xnni0_fr_pi_intra_flow_credit_u_t; +#else +typedef union sh_xnni0_fr_pi_intra_flow_credit_u { + mmr_t sh_xnni0_fr_pi_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xnni0_fr_pi_intra_flow_credit_s; +} sh_xnni0_fr_pi_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_fr_md_intra_flow_credit_u { + mmr_t sh_xnni0_fr_md_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xnni0_fr_md_intra_flow_credit_s; +} sh_xnni0_fr_md_intra_flow_credit_u_t; +#else +typedef union sh_xnni0_fr_md_intra_flow_credit_u { + mmr_t sh_xnni0_fr_md_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xnni0_fr_md_intra_flow_credit_s; +} sh_xnni0_fr_md_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_fr_iilb_intra_flow_credit_u { + mmr_t sh_xnni0_fr_iilb_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xnni0_fr_iilb_intra_flow_credit_s; +} sh_xnni0_fr_iilb_intra_flow_credit_u_t; +#else +typedef union sh_xnni0_fr_iilb_intra_flow_credit_u { + mmr_t sh_xnni0_fr_iilb_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xnni0_fr_iilb_intra_flow_credit_s; +} sh_xnni0_fr_iilb_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_0_INTRANI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_0_intrani_flow_u { + mmr_t sh_xnni0_0_intrani_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_1 : 56; + } sh_xnni0_0_intrani_flow_s; +} sh_xnni0_0_intrani_flow_u_t; +#else +typedef union sh_xnni0_0_intrani_flow_u { + mmr_t sh_xnni0_0_intrani_flow_regval; + struct { + mmr_t reserved_1 : 56; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnni0_0_intrani_flow_s; +} sh_xnni0_0_intrani_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_1_INTRANI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_1_intrani_flow_u { + mmr_t sh_xnni0_1_intrani_flow_regval; + struct { + mmr_t debit_vc1_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc1_force_cred : 1; + mmr_t reserved_1 : 56; + } sh_xnni0_1_intrani_flow_s; +} sh_xnni0_1_intrani_flow_u_t; +#else +typedef union sh_xnni0_1_intrani_flow_u { + mmr_t sh_xnni0_1_intrani_flow_regval; + struct { + mmr_t reserved_1 : 56; + mmr_t debit_vc1_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc1_withhold : 6; + } sh_xnni0_1_intrani_flow_s; +} sh_xnni0_1_intrani_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_2_INTRANI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_2_intrani_flow_u { + mmr_t sh_xnni0_2_intrani_flow_regval; + struct { + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 56; + } sh_xnni0_2_intrani_flow_s; +} sh_xnni0_2_intrani_flow_u_t; +#else +typedef union sh_xnni0_2_intrani_flow_u { + mmr_t sh_xnni0_2_intrani_flow_regval; + struct { + mmr_t reserved_1 : 56; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc2_withhold : 6; + } sh_xnni0_2_intrani_flow_s; +} sh_xnni0_2_intrani_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_XNNI0_3_INTRANI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_3_intrani_flow_u { + mmr_t sh_xnni0_3_intrani_flow_regval; + struct { + mmr_t debit_vc3_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc3_force_cred : 1; + mmr_t reserved_1 : 56; + } sh_xnni0_3_intrani_flow_s; +} sh_xnni0_3_intrani_flow_u_t; +#else +typedef union sh_xnni0_3_intrani_flow_u { + mmr_t sh_xnni0_3_intrani_flow_regval; + struct { + mmr_t reserved_1 : 56; + mmr_t debit_vc3_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc3_withhold : 6; + } sh_xnni0_3_intrani_flow_s; +} sh_xnni0_3_intrani_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_VCSWITCH_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_vcswitch_flow_u { + mmr_t sh_xnni0_vcswitch_flow_regval; + struct { + mmr_t ni_vcfifo_dateline_switch : 1; + mmr_t reserved_0 : 7; + mmr_t pi_vcfifo_switch : 1; + mmr_t reserved_1 : 7; + mmr_t md_vcfifo_switch : 1; + mmr_t reserved_2 : 7; + mmr_t iilb_vcfifo_switch : 1; + mmr_t reserved_3 : 7; + mmr_t disable_sync_bypass_in : 1; + mmr_t disable_sync_bypass_out : 1; + mmr_t async_fifoes : 1; + mmr_t reserved_4 : 29; + } sh_xnni0_vcswitch_flow_s; +} sh_xnni0_vcswitch_flow_u_t; +#else +typedef union sh_xnni0_vcswitch_flow_u { + mmr_t sh_xnni0_vcswitch_flow_regval; + struct { + mmr_t reserved_4 : 29; + mmr_t async_fifoes : 1; + mmr_t disable_sync_bypass_out : 1; + mmr_t disable_sync_bypass_in : 1; + mmr_t reserved_3 : 7; + mmr_t iilb_vcfifo_switch : 1; + mmr_t reserved_2 : 7; + mmr_t md_vcfifo_switch : 1; + mmr_t reserved_1 : 7; + mmr_t pi_vcfifo_switch : 1; + mmr_t reserved_0 : 7; + mmr_t ni_vcfifo_dateline_switch : 1; + } sh_xnni0_vcswitch_flow_s; +} sh_xnni0_vcswitch_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_TIMER_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_timer_reg_u { + mmr_t sh_xnni0_timer_reg_regval; + struct { + mmr_t timeout_reg : 24; + mmr_t reserved_0 : 8; + mmr_t linkcleanup_reg : 1; + mmr_t reserved_1 : 31; + } sh_xnni0_timer_reg_s; +} sh_xnni0_timer_reg_u_t; +#else +typedef union sh_xnni0_timer_reg_u { + mmr_t sh_xnni0_timer_reg_regval; + struct { + mmr_t reserved_1 : 31; + mmr_t linkcleanup_reg : 1; + mmr_t reserved_0 : 8; + mmr_t timeout_reg : 24; + } sh_xnni0_timer_reg_s; +} sh_xnni0_timer_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_FIFO02_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_fifo02_flow_u { + mmr_t sh_xnni0_fifo02_flow_regval; + struct { + mmr_t count_vc0_limit : 4; + mmr_t reserved_0 : 4; + mmr_t count_vc0_dyn : 4; + mmr_t reserved_1 : 4; + mmr_t count_vc0_cap : 4; + mmr_t reserved_2 : 4; + mmr_t count_vc2_limit : 4; + mmr_t reserved_3 : 4; + mmr_t count_vc2_dyn : 4; + mmr_t reserved_4 : 4; + mmr_t count_vc2_cap : 4; + mmr_t reserved_5 : 20; + } sh_xnni0_fifo02_flow_s; +} sh_xnni0_fifo02_flow_u_t; +#else +typedef union sh_xnni0_fifo02_flow_u { + mmr_t sh_xnni0_fifo02_flow_regval; + struct { + mmr_t reserved_5 : 20; + mmr_t count_vc2_cap : 4; + mmr_t reserved_4 : 4; + mmr_t count_vc2_dyn : 4; + mmr_t reserved_3 : 4; + mmr_t 
count_vc2_limit : 4; + mmr_t reserved_2 : 4; + mmr_t count_vc0_cap : 4; + mmr_t reserved_1 : 4; + mmr_t count_vc0_dyn : 4; + mmr_t reserved_0 : 4; + mmr_t count_vc0_limit : 4; + } sh_xnni0_fifo02_flow_s; +} sh_xnni0_fifo02_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_FIFO13_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_fifo13_flow_u { + mmr_t sh_xnni0_fifo13_flow_regval; + struct { + mmr_t count_vc1_limit : 4; + mmr_t reserved_0 : 4; + mmr_t count_vc1_dyn : 4; + mmr_t reserved_1 : 4; + mmr_t count_vc1_cap : 4; + mmr_t reserved_2 : 4; + mmr_t count_vc3_limit : 4; + mmr_t reserved_3 : 4; + mmr_t count_vc3_dyn : 4; + mmr_t reserved_4 : 4; + mmr_t count_vc3_cap : 4; + mmr_t reserved_5 : 20; + } sh_xnni0_fifo13_flow_s; +} sh_xnni0_fifo13_flow_u_t; +#else +typedef union sh_xnni0_fifo13_flow_u { + mmr_t sh_xnni0_fifo13_flow_regval; + struct { + mmr_t reserved_5 : 20; + mmr_t count_vc3_cap : 4; + mmr_t reserved_4 : 4; + mmr_t count_vc3_dyn : 4; + mmr_t reserved_3 : 4; + mmr_t count_vc3_limit : 4; + mmr_t reserved_2 : 4; + mmr_t count_vc1_cap : 4; + mmr_t reserved_1 : 4; + mmr_t count_vc1_dyn : 4; + mmr_t reserved_0 : 4; + mmr_t count_vc1_limit : 4; + } sh_xnni0_fifo13_flow_s; +} sh_xnni0_fifo13_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_NI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_ni_flow_u { + mmr_t sh_xnni0_ni_flow_regval; + struct { + mmr_t vc0_limit : 4; + mmr_t reserved_0 : 4; + mmr_t vc0_dyn : 4; + mmr_t vc0_cap : 4; + mmr_t vc1_limit : 4; + mmr_t reserved_1 : 4; + mmr_t vc1_dyn : 4; + mmr_t vc1_cap : 4; + mmr_t vc2_limit : 4; + mmr_t reserved_2 : 4; + mmr_t vc2_dyn : 4; + mmr_t vc2_cap : 4; + mmr_t vc3_limit : 4; + mmr_t reserved_3 : 4; + mmr_t vc3_dyn : 4; + mmr_t vc3_cap : 4; + } sh_xnni0_ni_flow_s; +} sh_xnni0_ni_flow_u_t; +#else +typedef union sh_xnni0_ni_flow_u { + mmr_t sh_xnni0_ni_flow_regval; + struct { + mmr_t vc3_cap : 4; + mmr_t vc3_dyn : 4; + mmr_t reserved_3 : 4; + mmr_t vc3_limit : 4; + mmr_t vc2_cap : 4; + mmr_t vc2_dyn : 4; + mmr_t reserved_2 : 4; + mmr_t vc2_limit : 4; + mmr_t vc1_cap : 4; + mmr_t vc1_dyn : 4; + mmr_t reserved_1 : 4; + mmr_t vc1_limit : 4; + mmr_t vc0_cap : 4; + mmr_t vc0_dyn : 4; + mmr_t reserved_0 : 4; + mmr_t vc0_limit : 4; + } sh_xnni0_ni_flow_s; +} sh_xnni0_ni_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_DEAD_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_dead_flow_u { + mmr_t sh_xnni0_dead_flow_regval; + struct { + mmr_t vc0_limit : 4; + mmr_t reserved_0 : 4; + mmr_t vc0_dyn : 4; + mmr_t vc0_cap : 4; + mmr_t vc1_limit : 4; + mmr_t reserved_1 : 4; + mmr_t vc1_dyn : 4; + mmr_t vc1_cap : 4; + mmr_t vc2_limit : 4; + mmr_t reserved_2 : 4; + mmr_t vc2_dyn : 4; + mmr_t vc2_cap : 4; + mmr_t vc3_limit : 4; + mmr_t reserved_3 : 4; + mmr_t vc3_dyn : 4; + mmr_t vc3_cap : 4; + } sh_xnni0_dead_flow_s; +} sh_xnni0_dead_flow_u_t; +#else +typedef union sh_xnni0_dead_flow_u { + mmr_t sh_xnni0_dead_flow_regval; + struct { + mmr_t vc3_cap : 4; + mmr_t vc3_dyn : 4; + mmr_t reserved_3 : 4; + mmr_t vc3_limit : 4; + mmr_t vc2_cap : 4; + mmr_t vc2_dyn : 4; + mmr_t reserved_2 : 4; + mmr_t vc2_limit 
: 4; + mmr_t vc1_cap : 4; + mmr_t vc1_dyn : 4; + mmr_t reserved_1 : 4; + mmr_t vc1_limit : 4; + mmr_t vc0_cap : 4; + mmr_t vc0_dyn : 4; + mmr_t reserved_0 : 4; + mmr_t vc0_limit : 4; + } sh_xnni0_dead_flow_s; +} sh_xnni0_dead_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI0_INJECT_AGE" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni0_inject_age_u { + mmr_t sh_xnni0_inject_age_regval; + struct { + mmr_t request_inject : 8; + mmr_t reply_inject : 8; + mmr_t reserved_0 : 48; + } sh_xnni0_inject_age_s; +} sh_xnni0_inject_age_u_t; +#else +typedef union sh_xnni0_inject_age_u { + mmr_t sh_xnni0_inject_age_regval; + struct { + mmr_t reserved_0 : 48; + mmr_t reply_inject : 8; + mmr_t request_inject : 8; + } sh_xnni0_inject_age_s; +} sh_xnni0_inject_age_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_to_pi_intra_flow_debit_u { + mmr_t sh_xnni1_to_pi_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xnni1_to_pi_intra_flow_debit_s; +} sh_xnni1_to_pi_intra_flow_debit_u_t; +#else +typedef union sh_xnni1_to_pi_intra_flow_debit_u { + mmr_t sh_xnni1_to_pi_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xnni1_to_pi_intra_flow_debit_s; +} sh_xnni1_to_pi_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_to_md_intra_flow_debit_u { + mmr_t sh_xnni1_to_md_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xnni1_to_md_intra_flow_debit_s; +} sh_xnni1_to_md_intra_flow_debit_u_t; +#else +typedef union sh_xnni1_to_md_intra_flow_debit_u { + mmr_t sh_xnni1_to_md_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xnni1_to_md_intra_flow_debit_s; +} 
sh_xnni1_to_md_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_to_iilb_intra_flow_debit_u { + mmr_t sh_xnni1_to_iilb_intra_flow_debit_regval; + struct { + mmr_t vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t vc0_force_cred : 1; + mmr_t vc2_withhold : 6; + mmr_t reserved_1 : 1; + mmr_t vc2_force_cred : 1; + mmr_t reserved_2 : 8; + mmr_t vc0_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_4 : 9; + mmr_t vc2_dyn : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_6 : 1; + } sh_xnni1_to_iilb_intra_flow_debit_s; +} sh_xnni1_to_iilb_intra_flow_debit_u_t; +#else +typedef union sh_xnni1_to_iilb_intra_flow_debit_u { + mmr_t sh_xnni1_to_iilb_intra_flow_debit_regval; + struct { + mmr_t reserved_6 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 9; + mmr_t vc0_cap : 7; + mmr_t reserved_3 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_2 : 8; + mmr_t vc2_force_cred : 1; + mmr_t reserved_1 : 1; + mmr_t vc2_withhold : 6; + mmr_t vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t vc0_withhold : 6; + } sh_xnni1_to_iilb_intra_flow_debit_s; +} sh_xnni1_to_iilb_intra_flow_debit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_fr_pi_intra_flow_credit_u { + mmr_t sh_xnni1_fr_pi_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xnni1_fr_pi_intra_flow_credit_s; +} sh_xnni1_fr_pi_intra_flow_credit_u_t; +#else +typedef union sh_xnni1_fr_pi_intra_flow_credit_u { + mmr_t sh_xnni1_fr_pi_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xnni1_fr_pi_intra_flow_credit_s; +} sh_xnni1_fr_pi_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_fr_md_intra_flow_credit_u { + mmr_t sh_xnni1_fr_md_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xnni1_fr_md_intra_flow_credit_s; +} sh_xnni1_fr_md_intra_flow_credit_u_t; +#else +typedef union sh_xnni1_fr_md_intra_flow_credit_u { + mmr_t sh_xnni1_fr_md_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; 
+ mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xnni1_fr_md_intra_flow_credit_s; +} sh_xnni1_fr_md_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_fr_iilb_intra_flow_credit_u { + mmr_t sh_xnni1_fr_iilb_intra_flow_credit_regval; + struct { + mmr_t vc0_test : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_2 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_cap : 7; + mmr_t reserved_5 : 17; + } sh_xnni1_fr_iilb_intra_flow_credit_s; +} sh_xnni1_fr_iilb_intra_flow_credit_u_t; +#else +typedef union sh_xnni1_fr_iilb_intra_flow_credit_u { + mmr_t sh_xnni1_fr_iilb_intra_flow_credit_regval; + struct { + mmr_t reserved_5 : 17; + mmr_t vc2_cap : 7; + mmr_t reserved_4 : 1; + mmr_t vc2_dyn : 7; + mmr_t reserved_3 : 1; + mmr_t vc2_test : 7; + mmr_t reserved_2 : 1; + mmr_t vc0_cap : 7; + mmr_t reserved_1 : 1; + mmr_t vc0_dyn : 7; + mmr_t reserved_0 : 1; + mmr_t vc0_test : 7; + } sh_xnni1_fr_iilb_intra_flow_credit_s; +} sh_xnni1_fr_iilb_intra_flow_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_0_INTRANI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_0_intrani_flow_u { + mmr_t sh_xnni1_0_intrani_flow_regval; + struct { + mmr_t debit_vc0_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_1 : 56; + } sh_xnni1_0_intrani_flow_s; +} sh_xnni1_0_intrani_flow_u_t; +#else +typedef union sh_xnni1_0_intrani_flow_u { + mmr_t sh_xnni1_0_intrani_flow_regval; + struct { + mmr_t reserved_1 : 56; + mmr_t debit_vc0_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc0_withhold : 6; + } sh_xnni1_0_intrani_flow_s; +} sh_xnni1_0_intrani_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_1_INTRANI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_1_intrani_flow_u { + mmr_t sh_xnni1_1_intrani_flow_regval; + struct { + mmr_t debit_vc1_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc1_force_cred : 1; + mmr_t reserved_1 : 56; + } sh_xnni1_1_intrani_flow_s; +} sh_xnni1_1_intrani_flow_u_t; +#else +typedef union sh_xnni1_1_intrani_flow_u { + mmr_t sh_xnni1_1_intrani_flow_regval; + struct { + mmr_t reserved_1 : 56; + mmr_t debit_vc1_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc1_withhold : 6; + } sh_xnni1_1_intrani_flow_s; +} sh_xnni1_1_intrani_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_2_INTRANI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_2_intrani_flow_u { + mmr_t sh_xnni1_2_intrani_flow_regval; + struct { + mmr_t debit_vc2_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_1 : 56; + } sh_xnni1_2_intrani_flow_s; +} sh_xnni1_2_intrani_flow_u_t; +#else +typedef union sh_xnni1_2_intrani_flow_u { + mmr_t sh_xnni1_2_intrani_flow_regval; + struct { + mmr_t 
reserved_1 : 56; + mmr_t debit_vc2_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc2_withhold : 6; + } sh_xnni1_2_intrani_flow_s; +} sh_xnni1_2_intrani_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_3_INTRANI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_3_intrani_flow_u { + mmr_t sh_xnni1_3_intrani_flow_regval; + struct { + mmr_t debit_vc3_withhold : 6; + mmr_t reserved_0 : 1; + mmr_t debit_vc3_force_cred : 1; + mmr_t reserved_1 : 56; + } sh_xnni1_3_intrani_flow_s; +} sh_xnni1_3_intrani_flow_u_t; +#else +typedef union sh_xnni1_3_intrani_flow_u { + mmr_t sh_xnni1_3_intrani_flow_regval; + struct { + mmr_t reserved_1 : 56; + mmr_t debit_vc3_force_cred : 1; + mmr_t reserved_0 : 1; + mmr_t debit_vc3_withhold : 6; + } sh_xnni1_3_intrani_flow_s; +} sh_xnni1_3_intrani_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_VCSWITCH_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_vcswitch_flow_u { + mmr_t sh_xnni1_vcswitch_flow_regval; + struct { + mmr_t ni_vcfifo_dateline_switch : 1; + mmr_t reserved_0 : 7; + mmr_t pi_vcfifo_switch : 1; + mmr_t reserved_1 : 7; + mmr_t md_vcfifo_switch : 1; + mmr_t reserved_2 : 7; + mmr_t iilb_vcfifo_switch : 1; + mmr_t reserved_3 : 7; + mmr_t disable_sync_bypass_in : 1; + mmr_t disable_sync_bypass_out : 1; + mmr_t async_fifoes : 1; + mmr_t reserved_4 : 29; + } sh_xnni1_vcswitch_flow_s; +} sh_xnni1_vcswitch_flow_u_t; +#else +typedef union sh_xnni1_vcswitch_flow_u { + mmr_t sh_xnni1_vcswitch_flow_regval; + struct { + mmr_t reserved_4 : 29; + mmr_t async_fifoes : 1; + mmr_t disable_sync_bypass_out : 1; + mmr_t disable_sync_bypass_in : 1; + mmr_t reserved_3 : 7; + mmr_t iilb_vcfifo_switch : 1; + mmr_t reserved_2 : 7; + mmr_t md_vcfifo_switch : 1; + mmr_t reserved_1 : 7; + mmr_t pi_vcfifo_switch : 1; + mmr_t reserved_0 : 7; + mmr_t ni_vcfifo_dateline_switch : 1; + } sh_xnni1_vcswitch_flow_s; +} sh_xnni1_vcswitch_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_TIMER_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_timer_reg_u { + mmr_t sh_xnni1_timer_reg_regval; + struct { + mmr_t timeout_reg : 24; + mmr_t reserved_0 : 8; + mmr_t linkcleanup_reg : 1; + mmr_t reserved_1 : 31; + } sh_xnni1_timer_reg_s; +} sh_xnni1_timer_reg_u_t; +#else +typedef union sh_xnni1_timer_reg_u { + mmr_t sh_xnni1_timer_reg_regval; + struct { + mmr_t reserved_1 : 31; + mmr_t linkcleanup_reg : 1; + mmr_t reserved_0 : 8; + mmr_t timeout_reg : 24; + } sh_xnni1_timer_reg_s; +} sh_xnni1_timer_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_FIFO02_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_fifo02_flow_u { + mmr_t sh_xnni1_fifo02_flow_regval; + struct { + mmr_t count_vc0_limit : 4; + mmr_t reserved_0 : 4; + mmr_t count_vc0_dyn : 4; + mmr_t reserved_1 : 4; + mmr_t count_vc0_cap : 4; + mmr_t reserved_2 : 4; + mmr_t count_vc2_limit : 4; + mmr_t reserved_3 : 4; + mmr_t count_vc2_dyn : 4; + mmr_t reserved_4 : 4; + mmr_t count_vc2_cap : 4; + mmr_t reserved_5 : 20; + } sh_xnni1_fifo02_flow_s; 
+} sh_xnni1_fifo02_flow_u_t; +#else +typedef union sh_xnni1_fifo02_flow_u { + mmr_t sh_xnni1_fifo02_flow_regval; + struct { + mmr_t reserved_5 : 20; + mmr_t count_vc2_cap : 4; + mmr_t reserved_4 : 4; + mmr_t count_vc2_dyn : 4; + mmr_t reserved_3 : 4; + mmr_t count_vc2_limit : 4; + mmr_t reserved_2 : 4; + mmr_t count_vc0_cap : 4; + mmr_t reserved_1 : 4; + mmr_t count_vc0_dyn : 4; + mmr_t reserved_0 : 4; + mmr_t count_vc0_limit : 4; + } sh_xnni1_fifo02_flow_s; +} sh_xnni1_fifo02_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_FIFO13_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_fifo13_flow_u { + mmr_t sh_xnni1_fifo13_flow_regval; + struct { + mmr_t count_vc1_limit : 4; + mmr_t reserved_0 : 4; + mmr_t count_vc1_dyn : 4; + mmr_t reserved_1 : 4; + mmr_t count_vc1_cap : 4; + mmr_t reserved_2 : 4; + mmr_t count_vc3_limit : 4; + mmr_t reserved_3 : 4; + mmr_t count_vc3_dyn : 4; + mmr_t reserved_4 : 4; + mmr_t count_vc3_cap : 4; + mmr_t reserved_5 : 20; + } sh_xnni1_fifo13_flow_s; +} sh_xnni1_fifo13_flow_u_t; +#else +typedef union sh_xnni1_fifo13_flow_u { + mmr_t sh_xnni1_fifo13_flow_regval; + struct { + mmr_t reserved_5 : 20; + mmr_t count_vc3_cap : 4; + mmr_t reserved_4 : 4; + mmr_t count_vc3_dyn : 4; + mmr_t reserved_3 : 4; + mmr_t count_vc3_limit : 4; + mmr_t reserved_2 : 4; + mmr_t count_vc1_cap : 4; + mmr_t reserved_1 : 4; + mmr_t count_vc1_dyn : 4; + mmr_t reserved_0 : 4; + mmr_t count_vc1_limit : 4; + } sh_xnni1_fifo13_flow_s; +} sh_xnni1_fifo13_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_NI_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_ni_flow_u { + mmr_t sh_xnni1_ni_flow_regval; + struct { + mmr_t vc0_limit : 4; + mmr_t reserved_0 : 4; + mmr_t vc0_dyn : 4; + mmr_t vc0_cap : 4; + mmr_t vc1_limit : 4; + mmr_t reserved_1 : 4; + mmr_t vc1_dyn : 4; + mmr_t vc1_cap : 4; + mmr_t vc2_limit : 4; + mmr_t reserved_2 : 4; + mmr_t vc2_dyn : 4; + mmr_t vc2_cap : 4; + mmr_t vc3_limit : 4; + mmr_t reserved_3 : 4; + mmr_t vc3_dyn : 4; + mmr_t vc3_cap : 4; + } sh_xnni1_ni_flow_s; +} sh_xnni1_ni_flow_u_t; +#else +typedef union sh_xnni1_ni_flow_u { + mmr_t sh_xnni1_ni_flow_regval; + struct { + mmr_t vc3_cap : 4; + mmr_t vc3_dyn : 4; + mmr_t reserved_3 : 4; + mmr_t vc3_limit : 4; + mmr_t vc2_cap : 4; + mmr_t vc2_dyn : 4; + mmr_t reserved_2 : 4; + mmr_t vc2_limit : 4; + mmr_t vc1_cap : 4; + mmr_t vc1_dyn : 4; + mmr_t reserved_1 : 4; + mmr_t vc1_limit : 4; + mmr_t vc0_cap : 4; + mmr_t vc0_dyn : 4; + mmr_t reserved_0 : 4; + mmr_t vc0_limit : 4; + } sh_xnni1_ni_flow_s; +} sh_xnni1_ni_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_DEAD_FLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_dead_flow_u { + mmr_t sh_xnni1_dead_flow_regval; + struct { + mmr_t vc0_limit : 4; + mmr_t reserved_0 : 4; + mmr_t vc0_dyn : 4; + mmr_t vc0_cap : 4; + mmr_t vc1_limit : 4; + mmr_t reserved_1 : 4; + mmr_t vc1_dyn : 4; + mmr_t vc1_cap : 4; + mmr_t vc2_limit : 4; + mmr_t reserved_2 : 4; + mmr_t vc2_dyn : 4; + mmr_t vc2_cap : 4; + mmr_t vc3_limit : 4; + mmr_t reserved_3 : 4; + mmr_t vc3_dyn : 4; + mmr_t vc3_cap : 4; + } sh_xnni1_dead_flow_s; +} sh_xnni1_dead_flow_u_t; 
+#else +typedef union sh_xnni1_dead_flow_u { + mmr_t sh_xnni1_dead_flow_regval; + struct { + mmr_t vc3_cap : 4; + mmr_t vc3_dyn : 4; + mmr_t reserved_3 : 4; + mmr_t vc3_limit : 4; + mmr_t vc2_cap : 4; + mmr_t vc2_dyn : 4; + mmr_t reserved_2 : 4; + mmr_t vc2_limit : 4; + mmr_t vc1_cap : 4; + mmr_t vc1_dyn : 4; + mmr_t reserved_1 : 4; + mmr_t vc1_limit : 4; + mmr_t vc0_cap : 4; + mmr_t vc0_dyn : 4; + mmr_t reserved_0 : 4; + mmr_t vc0_limit : 4; + } sh_xnni1_dead_flow_s; +} sh_xnni1_dead_flow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNNI1_INJECT_AGE" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnni1_inject_age_u { + mmr_t sh_xnni1_inject_age_regval; + struct { + mmr_t request_inject : 8; + mmr_t reply_inject : 8; + mmr_t reserved_0 : 48; + } sh_xnni1_inject_age_s; +} sh_xnni1_inject_age_u_t; +#else +typedef union sh_xnni1_inject_age_u { + mmr_t sh_xnni1_inject_age_regval; + struct { + mmr_t reserved_0 : 48; + mmr_t reply_inject : 8; + mmr_t request_inject : 8; + } sh_xnni1_inject_age_s; +} sh_xnni1_inject_age_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_DEBUG_SEL" */ +/* XN Debug Port Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_debug_sel_u { + mmr_t sh_xn_debug_sel_regval; + struct { + mmr_t nibble0_rlm_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_rlm_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_rlm_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_rlm_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_rlm_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_rlm_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_rlm_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_rlm_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t trigger_enable : 1; + } sh_xn_debug_sel_s; +} sh_xn_debug_sel_u_t; +#else +typedef union sh_xn_debug_sel_u { + mmr_t sh_xn_debug_sel_regval; + struct { + mmr_t trigger_enable : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_rlm_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_rlm_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_rlm_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_rlm_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_rlm_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_rlm_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_rlm_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_rlm_sel : 3; + } sh_xn_debug_sel_s; +} sh_xn_debug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_XN_DEBUG_TRIG_SEL" */ +/* XN Debug trigger Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_debug_trig_sel_u { + mmr_t sh_xn_debug_trig_sel_regval; + struct { + mmr_t trigger0_rlm_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger1_rlm_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger2_rlm_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger3_rlm_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger4_rlm_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger5_rlm_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger6_rlm_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger7_rlm_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_xn_debug_trig_sel_s; +} sh_xn_debug_trig_sel_u_t; +#else +typedef union sh_xn_debug_trig_sel_u { + mmr_t sh_xn_debug_trig_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t trigger7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t trigger7_rlm_sel : 3; + mmr_t reserved_13 : 1; + mmr_t trigger6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t trigger6_rlm_sel : 3; + mmr_t reserved_11 : 1; + mmr_t trigger5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t trigger5_rlm_sel : 3; + mmr_t reserved_9 : 1; + mmr_t trigger4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t trigger4_rlm_sel : 3; + mmr_t reserved_7 : 1; + mmr_t trigger3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t trigger3_rlm_sel : 3; + mmr_t reserved_5 : 1; + mmr_t trigger2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t trigger2_rlm_sel : 3; + mmr_t reserved_3 : 1; + mmr_t trigger1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t trigger1_rlm_sel : 3; + mmr_t reserved_1 : 1; + mmr_t trigger0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t trigger0_rlm_sel : 3; + } sh_xn_debug_trig_sel_s; +} sh_xn_debug_trig_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_TRIGGER_COMPARE" */ +/* XN Debug Compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_trigger_compare_u { + mmr_t sh_xn_trigger_compare_regval; + struct { + mmr_t mask : 32; + mmr_t reserved_0 : 32; + } sh_xn_trigger_compare_s; +} sh_xn_trigger_compare_u_t; +#else +typedef union sh_xn_trigger_compare_u { + mmr_t sh_xn_trigger_compare_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t mask : 32; + } sh_xn_trigger_compare_s; +} sh_xn_trigger_compare_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_TRIGGER_DATA" */ +/* XN Debug Compare Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_trigger_data_u { + mmr_t sh_xn_trigger_data_regval; + struct { + mmr_t compare_pattern : 32; + mmr_t reserved_0 : 32; + } sh_xn_trigger_data_s; +} sh_xn_trigger_data_u_t; +#else +typedef union sh_xn_trigger_data_u { + mmr_t sh_xn_trigger_data_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t compare_pattern : 32; + } sh_xn_trigger_data_s; +} sh_xn_trigger_data_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_XN_IILB_DEBUG_SEL" */ +/* XN IILB Debug Port Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_debug_sel_u { + mmr_t sh_xn_iilb_debug_sel_regval; + struct { + mmr_t nibble0_input_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_xn_iilb_debug_sel_s; +} sh_xn_iilb_debug_sel_u_t; +#else +typedef union sh_xn_iilb_debug_sel_u { + mmr_t sh_xn_iilb_debug_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_input_sel : 3; + } sh_xn_iilb_debug_sel_s; +} sh_xn_iilb_debug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_DEBUG_SEL" */ +/* XN PI Debug Port Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_debug_sel_u { + mmr_t sh_xn_pi_debug_sel_regval; + struct { + mmr_t nibble0_input_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t 
reserved_15 : 1; + } sh_xn_pi_debug_sel_s; +} sh_xn_pi_debug_sel_u_t; +#else +typedef union sh_xn_pi_debug_sel_u { + mmr_t sh_xn_pi_debug_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_input_sel : 3; + } sh_xn_pi_debug_sel_s; +} sh_xn_pi_debug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_DEBUG_SEL" */ +/* XN MD Debug Port Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_debug_sel_u { + mmr_t sh_xn_md_debug_sel_regval; + struct { + mmr_t nibble0_input_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_xn_md_debug_sel_s; +} sh_xn_md_debug_sel_u_t; +#else +typedef union sh_xn_md_debug_sel_u { + mmr_t sh_xn_md_debug_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_input_sel : 3; + } sh_xn_md_debug_sel_s; +} sh_xn_md_debug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_DEBUG_SEL" */ 
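The *_DEBUG_SEL registers above split the 64-bit value into eight (input_sel, nibble_sel) pairs, one for each nibble driven onto the debug port. C bitfields cannot be indexed, so per-nibble routing has to name the fields explicitly; a sketch for nibble 0 of the MD debug select follows (the register pointer is an assumed mapping, not something this patch provides):

	/* Illustrative only: route nibble 'nib' of input block 'input'
	 * to debug-port nibble 0 of SH_XN_MD_DEBUG_SEL.  Both selects
	 * are 3-bit fields; 'reg' points at the mapped MMR.
	 */
	static void xn_md_debug_route_nibble0(volatile mmr_t *reg,
					      unsigned int input,
					      unsigned int nib)
	{
		sh_xn_md_debug_sel_u_t sel;

		sel.sh_xn_md_debug_sel_regval = *reg;
		sel.sh_xn_md_debug_sel_s.nibble0_input_sel = input;
		sel.sh_xn_md_debug_sel_s.nibble0_nibble_sel = nib;
		*reg = sel.sh_xn_md_debug_sel_regval;
	}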
+/* XN NI0 Debug Port Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_debug_sel_u { + mmr_t sh_xn_ni0_debug_sel_regval; + struct { + mmr_t nibble0_input_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_xn_ni0_debug_sel_s; +} sh_xn_ni0_debug_sel_u_t; +#else +typedef union sh_xn_ni0_debug_sel_u { + mmr_t sh_xn_ni0_debug_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_input_sel : 3; + } sh_xn_ni0_debug_sel_s; +} sh_xn_ni0_debug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_DEBUG_SEL" */ +/* XN NI1 Debug Port Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_debug_sel_u { + mmr_t sh_xn_ni1_debug_sel_regval; + struct { + mmr_t nibble0_input_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_xn_ni1_debug_sel_s; +} sh_xn_ni1_debug_sel_u_t; +#else +typedef union sh_xn_ni1_debug_sel_u { + mmr_t 
sh_xn_ni1_debug_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_input_sel : 3; + } sh_xn_ni1_debug_sel_s; +} sh_xn_ni1_debug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_LB_CMP_EXP_DATA0" */ +/* IILB compare LB input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_lb_cmp_exp_data0_u { + mmr_t sh_xn_iilb_lb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_lb_cmp_exp_data0_s; +} sh_xn_iilb_lb_cmp_exp_data0_u_t; +#else +typedef union sh_xn_iilb_lb_cmp_exp_data0_u { + mmr_t sh_xn_iilb_lb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_lb_cmp_exp_data0_s; +} sh_xn_iilb_lb_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_LB_CMP_EXP_DATA1" */ +/* IILB compare LB input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_lb_cmp_exp_data1_u { + mmr_t sh_xn_iilb_lb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_lb_cmp_exp_data1_s; +} sh_xn_iilb_lb_cmp_exp_data1_u_t; +#else +typedef union sh_xn_iilb_lb_cmp_exp_data1_u { + mmr_t sh_xn_iilb_lb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_lb_cmp_exp_data1_s; +} sh_xn_iilb_lb_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_LB_CMP_ENABLE0" */ +/* IILB compare LB input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_lb_cmp_enable0_u { + mmr_t sh_xn_iilb_lb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_lb_cmp_enable0_s; +} sh_xn_iilb_lb_cmp_enable0_u_t; +#else +typedef union sh_xn_iilb_lb_cmp_enable0_u { + mmr_t sh_xn_iilb_lb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_lb_cmp_enable0_s; +} sh_xn_iilb_lb_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_LB_CMP_ENABLE1" */ +/* IILB compare LB input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_lb_cmp_enable1_u { + mmr_t sh_xn_iilb_lb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_lb_cmp_enable1_s; +} sh_xn_iilb_lb_cmp_enable1_u_t; +#else +typedef union sh_xn_iilb_lb_cmp_enable1_u { + mmr_t sh_xn_iilb_lb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } 
sh_xn_iilb_lb_cmp_enable1_s; +} sh_xn_iilb_lb_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_II_CMP_EXP_DATA0" */ +/* IILB compare II input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ii_cmp_exp_data0_u { + mmr_t sh_xn_iilb_ii_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ii_cmp_exp_data0_s; +} sh_xn_iilb_ii_cmp_exp_data0_u_t; +#else +typedef union sh_xn_iilb_ii_cmp_exp_data0_u { + mmr_t sh_xn_iilb_ii_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ii_cmp_exp_data0_s; +} sh_xn_iilb_ii_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_II_CMP_EXP_DATA1" */ +/* IILB compare II input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ii_cmp_exp_data1_u { + mmr_t sh_xn_iilb_ii_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ii_cmp_exp_data1_s; +} sh_xn_iilb_ii_cmp_exp_data1_u_t; +#else +typedef union sh_xn_iilb_ii_cmp_exp_data1_u { + mmr_t sh_xn_iilb_ii_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ii_cmp_exp_data1_s; +} sh_xn_iilb_ii_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_II_CMP_ENABLE0" */ +/* IILB compare II input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ii_cmp_enable0_u { + mmr_t sh_xn_iilb_ii_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ii_cmp_enable0_s; +} sh_xn_iilb_ii_cmp_enable0_u_t; +#else +typedef union sh_xn_iilb_ii_cmp_enable0_u { + mmr_t sh_xn_iilb_ii_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ii_cmp_enable0_s; +} sh_xn_iilb_ii_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_II_CMP_ENABLE1" */ +/* IILB compare II input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ii_cmp_enable1_u { + mmr_t sh_xn_iilb_ii_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ii_cmp_enable1_s; +} sh_xn_iilb_ii_cmp_enable1_u_t; +#else +typedef union sh_xn_iilb_ii_cmp_enable1_u { + mmr_t sh_xn_iilb_ii_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ii_cmp_enable1_s; +} sh_xn_iilb_ii_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_MD_CMP_EXP_DATA0" */ +/* IILB compare MD input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_md_cmp_exp_data0_u { + mmr_t sh_xn_iilb_md_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_md_cmp_exp_data0_s; +} sh_xn_iilb_md_cmp_exp_data0_u_t; +#else +typedef union sh_xn_iilb_md_cmp_exp_data0_u { + mmr_t sh_xn_iilb_md_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_md_cmp_exp_data0_s; +} sh_xn_iilb_md_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_MD_CMP_EXP_DATA1" */ +/* IILB compare MD input expected data1 */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_md_cmp_exp_data1_u { + mmr_t sh_xn_iilb_md_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_md_cmp_exp_data1_s; +} sh_xn_iilb_md_cmp_exp_data1_u_t; +#else +typedef union sh_xn_iilb_md_cmp_exp_data1_u { + mmr_t sh_xn_iilb_md_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_md_cmp_exp_data1_s; +} sh_xn_iilb_md_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_MD_CMP_ENABLE0" */ +/* IILB compare MD input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_md_cmp_enable0_u { + mmr_t sh_xn_iilb_md_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_md_cmp_enable0_s; +} sh_xn_iilb_md_cmp_enable0_u_t; +#else +typedef union sh_xn_iilb_md_cmp_enable0_u { + mmr_t sh_xn_iilb_md_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_md_cmp_enable0_s; +} sh_xn_iilb_md_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_MD_CMP_ENABLE1" */ +/* IILB compare MD input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_md_cmp_enable1_u { + mmr_t sh_xn_iilb_md_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_md_cmp_enable1_s; +} sh_xn_iilb_md_cmp_enable1_u_t; +#else +typedef union sh_xn_iilb_md_cmp_enable1_u { + mmr_t sh_xn_iilb_md_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_md_cmp_enable1_s; +} sh_xn_iilb_md_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_PI_CMP_EXP_DATA0" */ +/* IILB compare PI input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_pi_cmp_exp_data0_u { + mmr_t sh_xn_iilb_pi_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_pi_cmp_exp_data0_s; +} sh_xn_iilb_pi_cmp_exp_data0_u_t; +#else +typedef union sh_xn_iilb_pi_cmp_exp_data0_u { + mmr_t sh_xn_iilb_pi_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_pi_cmp_exp_data0_s; +} sh_xn_iilb_pi_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_PI_CMP_EXP_DATA1" */ +/* IILB compare PI input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_pi_cmp_exp_data1_u { + mmr_t sh_xn_iilb_pi_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_pi_cmp_exp_data1_s; +} sh_xn_iilb_pi_cmp_exp_data1_u_t; +#else +typedef union sh_xn_iilb_pi_cmp_exp_data1_u { + mmr_t sh_xn_iilb_pi_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_pi_cmp_exp_data1_s; +} sh_xn_iilb_pi_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_PI_CMP_ENABLE0" */ +/* IILB compare PI input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_pi_cmp_enable0_u { + mmr_t sh_xn_iilb_pi_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_pi_cmp_enable0_s; +} 
sh_xn_iilb_pi_cmp_enable0_u_t; +#else +typedef union sh_xn_iilb_pi_cmp_enable0_u { + mmr_t sh_xn_iilb_pi_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_pi_cmp_enable0_s; +} sh_xn_iilb_pi_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_PI_CMP_ENABLE1" */ +/* IILB compare PI input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_pi_cmp_enable1_u { + mmr_t sh_xn_iilb_pi_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_pi_cmp_enable1_s; +} sh_xn_iilb_pi_cmp_enable1_u_t; +#else +typedef union sh_xn_iilb_pi_cmp_enable1_u { + mmr_t sh_xn_iilb_pi_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_pi_cmp_enable1_s; +} sh_xn_iilb_pi_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI0_CMP_EXP_DATA0" */ +/* IILB compare NI0 input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ni0_cmp_exp_data0_u { + mmr_t sh_xn_iilb_ni0_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ni0_cmp_exp_data0_s; +} sh_xn_iilb_ni0_cmp_exp_data0_u_t; +#else +typedef union sh_xn_iilb_ni0_cmp_exp_data0_u { + mmr_t sh_xn_iilb_ni0_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ni0_cmp_exp_data0_s; +} sh_xn_iilb_ni0_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI0_CMP_EXP_DATA1" */ +/* IILB compare NI0 input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ni0_cmp_exp_data1_u { + mmr_t sh_xn_iilb_ni0_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ni0_cmp_exp_data1_s; +} sh_xn_iilb_ni0_cmp_exp_data1_u_t; +#else +typedef union sh_xn_iilb_ni0_cmp_exp_data1_u { + mmr_t sh_xn_iilb_ni0_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ni0_cmp_exp_data1_s; +} sh_xn_iilb_ni0_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI0_CMP_ENABLE0" */ +/* IILB compare NI0 input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ni0_cmp_enable0_u { + mmr_t sh_xn_iilb_ni0_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ni0_cmp_enable0_s; +} sh_xn_iilb_ni0_cmp_enable0_u_t; +#else +typedef union sh_xn_iilb_ni0_cmp_enable0_u { + mmr_t sh_xn_iilb_ni0_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ni0_cmp_enable0_s; +} sh_xn_iilb_ni0_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI0_CMP_ENABLE1" */ +/* IILB compare NI0 input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ni0_cmp_enable1_u { + mmr_t sh_xn_iilb_ni0_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ni0_cmp_enable1_s; +} sh_xn_iilb_ni0_cmp_enable1_u_t; +#else +typedef union sh_xn_iilb_ni0_cmp_enable1_u { + mmr_t sh_xn_iilb_ni0_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ni0_cmp_enable1_s; +} sh_xn_iilb_ni0_cmp_enable1_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_XN_IILB_NI1_CMP_EXP_DATA0" */ +/* IILB compare NI1 input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ni1_cmp_exp_data0_u { + mmr_t sh_xn_iilb_ni1_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ni1_cmp_exp_data0_s; +} sh_xn_iilb_ni1_cmp_exp_data0_u_t; +#else +typedef union sh_xn_iilb_ni1_cmp_exp_data0_u { + mmr_t sh_xn_iilb_ni1_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ni1_cmp_exp_data0_s; +} sh_xn_iilb_ni1_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI1_CMP_EXP_DATA1" */ +/* IILB compare NI1 input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ni1_cmp_exp_data1_u { + mmr_t sh_xn_iilb_ni1_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ni1_cmp_exp_data1_s; +} sh_xn_iilb_ni1_cmp_exp_data1_u_t; +#else +typedef union sh_xn_iilb_ni1_cmp_exp_data1_u { + mmr_t sh_xn_iilb_ni1_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_iilb_ni1_cmp_exp_data1_s; +} sh_xn_iilb_ni1_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI1_CMP_ENABLE0" */ +/* IILB compare NI1 input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ni1_cmp_enable0_u { + mmr_t sh_xn_iilb_ni1_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ni1_cmp_enable0_s; +} sh_xn_iilb_ni1_cmp_enable0_u_t; +#else +typedef union sh_xn_iilb_ni1_cmp_enable0_u { + mmr_t sh_xn_iilb_ni1_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ni1_cmp_enable0_s; +} sh_xn_iilb_ni1_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_IILB_NI1_CMP_ENABLE1" */ +/* IILB compare NI1 input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_iilb_ni1_cmp_enable1_u { + mmr_t sh_xn_iilb_ni1_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ni1_cmp_enable1_s; +} sh_xn_iilb_ni1_cmp_enable1_u_t; +#else +typedef union sh_xn_iilb_ni1_cmp_enable1_u { + mmr_t sh_xn_iilb_ni1_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_iilb_ni1_cmp_enable1_s; +} sh_xn_iilb_ni1_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_IILB_CMP_EXP_DATA0" */ +/* MD compare IILB input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_iilb_cmp_exp_data0_u { + mmr_t sh_xn_md_iilb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_md_iilb_cmp_exp_data0_s; +} sh_xn_md_iilb_cmp_exp_data0_u_t; +#else +typedef union sh_xn_md_iilb_cmp_exp_data0_u { + mmr_t sh_xn_md_iilb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_md_iilb_cmp_exp_data0_s; +} sh_xn_md_iilb_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_IILB_CMP_EXP_DATA1" */ +/* MD compare IILB input expected data1 */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_iilb_cmp_exp_data1_u { + mmr_t sh_xn_md_iilb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_md_iilb_cmp_exp_data1_s; +} sh_xn_md_iilb_cmp_exp_data1_u_t; +#else +typedef union sh_xn_md_iilb_cmp_exp_data1_u { + mmr_t sh_xn_md_iilb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_md_iilb_cmp_exp_data1_s; +} sh_xn_md_iilb_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_IILB_CMP_ENABLE0" */ +/* MD compare IILB input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_iilb_cmp_enable0_u { + mmr_t sh_xn_md_iilb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_iilb_cmp_enable0_s; +} sh_xn_md_iilb_cmp_enable0_u_t; +#else +typedef union sh_xn_md_iilb_cmp_enable0_u { + mmr_t sh_xn_md_iilb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_iilb_cmp_enable0_s; +} sh_xn_md_iilb_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_IILB_CMP_ENABLE1" */ +/* MD compare IILB input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_iilb_cmp_enable1_u { + mmr_t sh_xn_md_iilb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_iilb_cmp_enable1_s; +} sh_xn_md_iilb_cmp_enable1_u_t; +#else +typedef union sh_xn_md_iilb_cmp_enable1_u { + mmr_t sh_xn_md_iilb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_iilb_cmp_enable1_s; +} sh_xn_md_iilb_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI0_CMP_EXP_DATA0" */ +/* MD compare NI0 input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_ni0_cmp_exp_data0_u { + mmr_t sh_xn_md_ni0_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_md_ni0_cmp_exp_data0_s; +} sh_xn_md_ni0_cmp_exp_data0_u_t; +#else +typedef union sh_xn_md_ni0_cmp_exp_data0_u { + mmr_t sh_xn_md_ni0_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_md_ni0_cmp_exp_data0_s; +} sh_xn_md_ni0_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI0_CMP_EXP_DATA1" */ +/* MD compare NI0 input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_ni0_cmp_exp_data1_u { + mmr_t sh_xn_md_ni0_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_md_ni0_cmp_exp_data1_s; +} sh_xn_md_ni0_cmp_exp_data1_u_t; +#else +typedef union sh_xn_md_ni0_cmp_exp_data1_u { + mmr_t sh_xn_md_ni0_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_md_ni0_cmp_exp_data1_s; +} sh_xn_md_ni0_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI0_CMP_ENABLE0" */ +/* MD compare NI0 input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_ni0_cmp_enable0_u { + mmr_t sh_xn_md_ni0_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_ni0_cmp_enable0_s; +} sh_xn_md_ni0_cmp_enable0_u_t; +#else 
+typedef union sh_xn_md_ni0_cmp_enable0_u { + mmr_t sh_xn_md_ni0_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_ni0_cmp_enable0_s; +} sh_xn_md_ni0_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI0_CMP_ENABLE1" */ +/* MD compare NI0 input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_ni0_cmp_enable1_u { + mmr_t sh_xn_md_ni0_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_ni0_cmp_enable1_s; +} sh_xn_md_ni0_cmp_enable1_u_t; +#else +typedef union sh_xn_md_ni0_cmp_enable1_u { + mmr_t sh_xn_md_ni0_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_ni0_cmp_enable1_s; +} sh_xn_md_ni0_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI1_CMP_EXP_DATA0" */ +/* MD compare NI1 input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_ni1_cmp_exp_data0_u { + mmr_t sh_xn_md_ni1_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_md_ni1_cmp_exp_data0_s; +} sh_xn_md_ni1_cmp_exp_data0_u_t; +#else +typedef union sh_xn_md_ni1_cmp_exp_data0_u { + mmr_t sh_xn_md_ni1_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_md_ni1_cmp_exp_data0_s; +} sh_xn_md_ni1_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI1_CMP_EXP_DATA1" */ +/* MD compare NI1 input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_ni1_cmp_exp_data1_u { + mmr_t sh_xn_md_ni1_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_md_ni1_cmp_exp_data1_s; +} sh_xn_md_ni1_cmp_exp_data1_u_t; +#else +typedef union sh_xn_md_ni1_cmp_exp_data1_u { + mmr_t sh_xn_md_ni1_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_md_ni1_cmp_exp_data1_s; +} sh_xn_md_ni1_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI1_CMP_ENABLE0" */ +/* MD compare NI1 input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_ni1_cmp_enable0_u { + mmr_t sh_xn_md_ni1_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_ni1_cmp_enable0_s; +} sh_xn_md_ni1_cmp_enable0_u_t; +#else +typedef union sh_xn_md_ni1_cmp_enable0_u { + mmr_t sh_xn_md_ni1_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_ni1_cmp_enable0_s; +} sh_xn_md_ni1_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_NI1_CMP_ENABLE1" */ +/* MD compare NI1 input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_ni1_cmp_enable1_u { + mmr_t sh_xn_md_ni1_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_ni1_cmp_enable1_s; +} sh_xn_md_ni1_cmp_enable1_u_t; +#else +typedef union sh_xn_md_ni1_cmp_enable1_u { + mmr_t sh_xn_md_ni1_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_ni1_cmp_enable1_s; +} sh_xn_md_ni1_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_EXP_HDR0" */ +/* MD compare SIC 
input expected header0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_exp_hdr0_u { + mmr_t sh_xn_md_sic_cmp_exp_hdr0_regval; + struct { + mmr_t data : 64; + } sh_xn_md_sic_cmp_exp_hdr0_s; +} sh_xn_md_sic_cmp_exp_hdr0_u_t; +#else +typedef union sh_xn_md_sic_cmp_exp_hdr0_u { + mmr_t sh_xn_md_sic_cmp_exp_hdr0_regval; + struct { + mmr_t data : 64; + } sh_xn_md_sic_cmp_exp_hdr0_s; +} sh_xn_md_sic_cmp_exp_hdr0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_EXP_HDR1" */ +/* MD compare SIC input expected header1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_exp_hdr1_u { + mmr_t sh_xn_md_sic_cmp_exp_hdr1_regval; + struct { + mmr_t data : 42; + mmr_t reserved_0 : 22; + } sh_xn_md_sic_cmp_exp_hdr1_s; +} sh_xn_md_sic_cmp_exp_hdr1_u_t; +#else +typedef union sh_xn_md_sic_cmp_exp_hdr1_u { + mmr_t sh_xn_md_sic_cmp_exp_hdr1_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t data : 42; + } sh_xn_md_sic_cmp_exp_hdr1_s; +} sh_xn_md_sic_cmp_exp_hdr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_HDR_ENABLE0" */ +/* MD compare SIC header enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_hdr_enable0_u { + mmr_t sh_xn_md_sic_cmp_hdr_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_sic_cmp_hdr_enable0_s; +} sh_xn_md_sic_cmp_hdr_enable0_u_t; +#else +typedef union sh_xn_md_sic_cmp_hdr_enable0_u { + mmr_t sh_xn_md_sic_cmp_hdr_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_md_sic_cmp_hdr_enable0_s; +} sh_xn_md_sic_cmp_hdr_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_HDR_ENABLE1" */ +/* MD compare SIC header enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_hdr_enable1_u { + mmr_t sh_xn_md_sic_cmp_hdr_enable1_regval; + struct { + mmr_t enable : 42; + mmr_t reserved_0 : 22; + } sh_xn_md_sic_cmp_hdr_enable1_s; +} sh_xn_md_sic_cmp_hdr_enable1_u_t; +#else +typedef union sh_xn_md_sic_cmp_hdr_enable1_u { + mmr_t sh_xn_md_sic_cmp_hdr_enable1_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t enable : 42; + } sh_xn_md_sic_cmp_hdr_enable1_s; +} sh_xn_md_sic_cmp_hdr_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA0" */ +/* MD compare SIC data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_data0_u { + mmr_t sh_xn_md_sic_cmp_data0_regval; + struct { + mmr_t data0 : 64; + } sh_xn_md_sic_cmp_data0_s; +} sh_xn_md_sic_cmp_data0_u_t; +#else +typedef union sh_xn_md_sic_cmp_data0_u { + mmr_t sh_xn_md_sic_cmp_data0_regval; + struct { + mmr_t data0 : 64; + } sh_xn_md_sic_cmp_data0_s; +} sh_xn_md_sic_cmp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA1" */ +/* MD compare SIC data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_data1_u { + mmr_t sh_xn_md_sic_cmp_data1_regval; + struct { + 
mmr_t data1 : 64; + } sh_xn_md_sic_cmp_data1_s; +} sh_xn_md_sic_cmp_data1_u_t; +#else +typedef union sh_xn_md_sic_cmp_data1_u { + mmr_t sh_xn_md_sic_cmp_data1_regval; + struct { + mmr_t data1 : 64; + } sh_xn_md_sic_cmp_data1_s; +} sh_xn_md_sic_cmp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA2" */ +/* MD compare SIC data2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_data2_u { + mmr_t sh_xn_md_sic_cmp_data2_regval; + struct { + mmr_t data2 : 64; + } sh_xn_md_sic_cmp_data2_s; +} sh_xn_md_sic_cmp_data2_u_t; +#else +typedef union sh_xn_md_sic_cmp_data2_u { + mmr_t sh_xn_md_sic_cmp_data2_regval; + struct { + mmr_t data2 : 64; + } sh_xn_md_sic_cmp_data2_s; +} sh_xn_md_sic_cmp_data2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA3" */ +/* MD compare SIC data3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_data3_u { + mmr_t sh_xn_md_sic_cmp_data3_regval; + struct { + mmr_t data3 : 64; + } sh_xn_md_sic_cmp_data3_s; +} sh_xn_md_sic_cmp_data3_u_t; +#else +typedef union sh_xn_md_sic_cmp_data3_u { + mmr_t sh_xn_md_sic_cmp_data3_regval; + struct { + mmr_t data3 : 64; + } sh_xn_md_sic_cmp_data3_s; +} sh_xn_md_sic_cmp_data3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE0" */ +/* MD enable compare SIC data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_data_enable0_u { + mmr_t sh_xn_md_sic_cmp_data_enable0_regval; + struct { + mmr_t data_enable0 : 64; + } sh_xn_md_sic_cmp_data_enable0_s; +} sh_xn_md_sic_cmp_data_enable0_u_t; +#else +typedef union sh_xn_md_sic_cmp_data_enable0_u { + mmr_t sh_xn_md_sic_cmp_data_enable0_regval; + struct { + mmr_t data_enable0 : 64; + } sh_xn_md_sic_cmp_data_enable0_s; +} sh_xn_md_sic_cmp_data_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE1" */ +/* MD enable compare SIC data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_data_enable1_u { + mmr_t sh_xn_md_sic_cmp_data_enable1_regval; + struct { + mmr_t data_enable1 : 64; + } sh_xn_md_sic_cmp_data_enable1_s; +} sh_xn_md_sic_cmp_data_enable1_u_t; +#else +typedef union sh_xn_md_sic_cmp_data_enable1_u { + mmr_t sh_xn_md_sic_cmp_data_enable1_regval; + struct { + mmr_t data_enable1 : 64; + } sh_xn_md_sic_cmp_data_enable1_s; +} sh_xn_md_sic_cmp_data_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE2" */ +/* MD enable compare SIC data2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_data_enable2_u { + mmr_t sh_xn_md_sic_cmp_data_enable2_regval; + struct { + mmr_t data_enable2 : 64; + } sh_xn_md_sic_cmp_data_enable2_s; +} sh_xn_md_sic_cmp_data_enable2_u_t; +#else +typedef union sh_xn_md_sic_cmp_data_enable2_u { + mmr_t sh_xn_md_sic_cmp_data_enable2_regval; + struct { + mmr_t data_enable2 : 64; + } sh_xn_md_sic_cmp_data_enable2_s; +} sh_xn_md_sic_cmp_data_enable2_u_t; 
+#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE3" */ +/* MD enable compare SIC data3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_sic_cmp_data_enable3_u { + mmr_t sh_xn_md_sic_cmp_data_enable3_regval; + struct { + mmr_t data_enable3 : 64; + } sh_xn_md_sic_cmp_data_enable3_s; +} sh_xn_md_sic_cmp_data_enable3_u_t; +#else +typedef union sh_xn_md_sic_cmp_data_enable3_u { + mmr_t sh_xn_md_sic_cmp_data_enable3_regval; + struct { + mmr_t data_enable3 : 64; + } sh_xn_md_sic_cmp_data_enable3_s; +} sh_xn_md_sic_cmp_data_enable3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_IILB_CMP_EXP_DATA0" */ +/* PI compare IILB input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_iilb_cmp_exp_data0_u { + mmr_t sh_xn_pi_iilb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_iilb_cmp_exp_data0_s; +} sh_xn_pi_iilb_cmp_exp_data0_u_t; +#else +typedef union sh_xn_pi_iilb_cmp_exp_data0_u { + mmr_t sh_xn_pi_iilb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_iilb_cmp_exp_data0_s; +} sh_xn_pi_iilb_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_IILB_CMP_EXP_DATA1" */ +/* PI compare IILB input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_iilb_cmp_exp_data1_u { + mmr_t sh_xn_pi_iilb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_iilb_cmp_exp_data1_s; +} sh_xn_pi_iilb_cmp_exp_data1_u_t; +#else +typedef union sh_xn_pi_iilb_cmp_exp_data1_u { + mmr_t sh_xn_pi_iilb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_iilb_cmp_exp_data1_s; +} sh_xn_pi_iilb_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_IILB_CMP_ENABLE0" */ +/* PI compare IILB input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_iilb_cmp_enable0_u { + mmr_t sh_xn_pi_iilb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_iilb_cmp_enable0_s; +} sh_xn_pi_iilb_cmp_enable0_u_t; +#else +typedef union sh_xn_pi_iilb_cmp_enable0_u { + mmr_t sh_xn_pi_iilb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_iilb_cmp_enable0_s; +} sh_xn_pi_iilb_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_IILB_CMP_ENABLE1" */ +/* PI compare IILB input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_iilb_cmp_enable1_u { + mmr_t sh_xn_pi_iilb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_iilb_cmp_enable1_s; +} sh_xn_pi_iilb_cmp_enable1_u_t; +#else +typedef union sh_xn_pi_iilb_cmp_enable1_u { + mmr_t sh_xn_pi_iilb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_iilb_cmp_enable1_s; +} sh_xn_pi_iilb_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI0_CMP_EXP_DATA0" */ +/* PI compare NI0 input expected data0 */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_ni0_cmp_exp_data0_u { + mmr_t sh_xn_pi_ni0_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_ni0_cmp_exp_data0_s; +} sh_xn_pi_ni0_cmp_exp_data0_u_t; +#else +typedef union sh_xn_pi_ni0_cmp_exp_data0_u { + mmr_t sh_xn_pi_ni0_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_ni0_cmp_exp_data0_s; +} sh_xn_pi_ni0_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI0_CMP_EXP_DATA1" */ +/* PI compare NI0 input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_ni0_cmp_exp_data1_u { + mmr_t sh_xn_pi_ni0_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_ni0_cmp_exp_data1_s; +} sh_xn_pi_ni0_cmp_exp_data1_u_t; +#else +typedef union sh_xn_pi_ni0_cmp_exp_data1_u { + mmr_t sh_xn_pi_ni0_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_ni0_cmp_exp_data1_s; +} sh_xn_pi_ni0_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI0_CMP_ENABLE0" */ +/* PI compare NI0 input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_ni0_cmp_enable0_u { + mmr_t sh_xn_pi_ni0_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_ni0_cmp_enable0_s; +} sh_xn_pi_ni0_cmp_enable0_u_t; +#else +typedef union sh_xn_pi_ni0_cmp_enable0_u { + mmr_t sh_xn_pi_ni0_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_ni0_cmp_enable0_s; +} sh_xn_pi_ni0_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI0_CMP_ENABLE1" */ +/* PI compare NI0 input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_ni0_cmp_enable1_u { + mmr_t sh_xn_pi_ni0_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_ni0_cmp_enable1_s; +} sh_xn_pi_ni0_cmp_enable1_u_t; +#else +typedef union sh_xn_pi_ni0_cmp_enable1_u { + mmr_t sh_xn_pi_ni0_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_ni0_cmp_enable1_s; +} sh_xn_pi_ni0_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI1_CMP_EXP_DATA0" */ +/* PI compare NI1 input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_ni1_cmp_exp_data0_u { + mmr_t sh_xn_pi_ni1_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_ni1_cmp_exp_data0_s; +} sh_xn_pi_ni1_cmp_exp_data0_u_t; +#else +typedef union sh_xn_pi_ni1_cmp_exp_data0_u { + mmr_t sh_xn_pi_ni1_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_ni1_cmp_exp_data0_s; +} sh_xn_pi_ni1_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI1_CMP_EXP_DATA1" */ +/* PI compare NI1 input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_ni1_cmp_exp_data1_u { + mmr_t sh_xn_pi_ni1_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_ni1_cmp_exp_data1_s; +} sh_xn_pi_ni1_cmp_exp_data1_u_t; +#else +typedef union 
sh_xn_pi_ni1_cmp_exp_data1_u { + mmr_t sh_xn_pi_ni1_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_ni1_cmp_exp_data1_s; +} sh_xn_pi_ni1_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI1_CMP_ENABLE0" */ +/* PI compare NI1 input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_ni1_cmp_enable0_u { + mmr_t sh_xn_pi_ni1_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_ni1_cmp_enable0_s; +} sh_xn_pi_ni1_cmp_enable0_u_t; +#else +typedef union sh_xn_pi_ni1_cmp_enable0_u { + mmr_t sh_xn_pi_ni1_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_ni1_cmp_enable0_s; +} sh_xn_pi_ni1_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_NI1_CMP_ENABLE1" */ +/* PI compare NI1 input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_ni1_cmp_enable1_u { + mmr_t sh_xn_pi_ni1_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_ni1_cmp_enable1_s; +} sh_xn_pi_ni1_cmp_enable1_u_t; +#else +typedef union sh_xn_pi_ni1_cmp_enable1_u { + mmr_t sh_xn_pi_ni1_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_ni1_cmp_enable1_s; +} sh_xn_pi_ni1_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_EXP_HDR0" */ +/* PI compare SIC input expected header0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_exp_hdr0_u { + mmr_t sh_xn_pi_sic_cmp_exp_hdr0_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_sic_cmp_exp_hdr0_s; +} sh_xn_pi_sic_cmp_exp_hdr0_u_t; +#else +typedef union sh_xn_pi_sic_cmp_exp_hdr0_u { + mmr_t sh_xn_pi_sic_cmp_exp_hdr0_regval; + struct { + mmr_t data : 64; + } sh_xn_pi_sic_cmp_exp_hdr0_s; +} sh_xn_pi_sic_cmp_exp_hdr0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_EXP_HDR1" */ +/* PI compare SIC input expected header1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_exp_hdr1_u { + mmr_t sh_xn_pi_sic_cmp_exp_hdr1_regval; + struct { + mmr_t data : 42; + mmr_t reserved_0 : 22; + } sh_xn_pi_sic_cmp_exp_hdr1_s; +} sh_xn_pi_sic_cmp_exp_hdr1_u_t; +#else +typedef union sh_xn_pi_sic_cmp_exp_hdr1_u { + mmr_t sh_xn_pi_sic_cmp_exp_hdr1_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t data : 42; + } sh_xn_pi_sic_cmp_exp_hdr1_s; +} sh_xn_pi_sic_cmp_exp_hdr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_HDR_ENABLE0" */ +/* PI compare SIC header enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_hdr_enable0_u { + mmr_t sh_xn_pi_sic_cmp_hdr_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_sic_cmp_hdr_enable0_s; +} sh_xn_pi_sic_cmp_hdr_enable0_u_t; +#else +typedef union sh_xn_pi_sic_cmp_hdr_enable0_u { + mmr_t sh_xn_pi_sic_cmp_hdr_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_pi_sic_cmp_hdr_enable0_s; +} sh_xn_pi_sic_cmp_hdr_enable0_u_t; +#endif + +/* ==================================================================== */ 
+/* Register "SH_XN_PI_SIC_CMP_HDR_ENABLE1" */ +/* PI compare SIC header enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_hdr_enable1_u { + mmr_t sh_xn_pi_sic_cmp_hdr_enable1_regval; + struct { + mmr_t enable : 42; + mmr_t reserved_0 : 22; + } sh_xn_pi_sic_cmp_hdr_enable1_s; +} sh_xn_pi_sic_cmp_hdr_enable1_u_t; +#else +typedef union sh_xn_pi_sic_cmp_hdr_enable1_u { + mmr_t sh_xn_pi_sic_cmp_hdr_enable1_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t enable : 42; + } sh_xn_pi_sic_cmp_hdr_enable1_s; +} sh_xn_pi_sic_cmp_hdr_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA0" */ +/* PI compare SIC data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_data0_u { + mmr_t sh_xn_pi_sic_cmp_data0_regval; + struct { + mmr_t data0 : 64; + } sh_xn_pi_sic_cmp_data0_s; +} sh_xn_pi_sic_cmp_data0_u_t; +#else +typedef union sh_xn_pi_sic_cmp_data0_u { + mmr_t sh_xn_pi_sic_cmp_data0_regval; + struct { + mmr_t data0 : 64; + } sh_xn_pi_sic_cmp_data0_s; +} sh_xn_pi_sic_cmp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA1" */ +/* PI compare SIC data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_data1_u { + mmr_t sh_xn_pi_sic_cmp_data1_regval; + struct { + mmr_t data1 : 64; + } sh_xn_pi_sic_cmp_data1_s; +} sh_xn_pi_sic_cmp_data1_u_t; +#else +typedef union sh_xn_pi_sic_cmp_data1_u { + mmr_t sh_xn_pi_sic_cmp_data1_regval; + struct { + mmr_t data1 : 64; + } sh_xn_pi_sic_cmp_data1_s; +} sh_xn_pi_sic_cmp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA2" */ +/* PI compare SIC data2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_data2_u { + mmr_t sh_xn_pi_sic_cmp_data2_regval; + struct { + mmr_t data2 : 64; + } sh_xn_pi_sic_cmp_data2_s; +} sh_xn_pi_sic_cmp_data2_u_t; +#else +typedef union sh_xn_pi_sic_cmp_data2_u { + mmr_t sh_xn_pi_sic_cmp_data2_regval; + struct { + mmr_t data2 : 64; + } sh_xn_pi_sic_cmp_data2_s; +} sh_xn_pi_sic_cmp_data2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA3" */ +/* PI compare SIC data3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_data3_u { + mmr_t sh_xn_pi_sic_cmp_data3_regval; + struct { + mmr_t data3 : 64; + } sh_xn_pi_sic_cmp_data3_s; +} sh_xn_pi_sic_cmp_data3_u_t; +#else +typedef union sh_xn_pi_sic_cmp_data3_u { + mmr_t sh_xn_pi_sic_cmp_data3_regval; + struct { + mmr_t data3 : 64; + } sh_xn_pi_sic_cmp_data3_s; +} sh_xn_pi_sic_cmp_data3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE0" */ +/* PI enable compare SIC data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_data_enable0_u { + mmr_t sh_xn_pi_sic_cmp_data_enable0_regval; + struct { + mmr_t data_enable0 : 64; + } sh_xn_pi_sic_cmp_data_enable0_s; +} sh_xn_pi_sic_cmp_data_enable0_u_t; +#else 
+typedef union sh_xn_pi_sic_cmp_data_enable0_u { + mmr_t sh_xn_pi_sic_cmp_data_enable0_regval; + struct { + mmr_t data_enable0 : 64; + } sh_xn_pi_sic_cmp_data_enable0_s; +} sh_xn_pi_sic_cmp_data_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE1" */ +/* PI enable compare SIC data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_data_enable1_u { + mmr_t sh_xn_pi_sic_cmp_data_enable1_regval; + struct { + mmr_t data_enable1 : 64; + } sh_xn_pi_sic_cmp_data_enable1_s; +} sh_xn_pi_sic_cmp_data_enable1_u_t; +#else +typedef union sh_xn_pi_sic_cmp_data_enable1_u { + mmr_t sh_xn_pi_sic_cmp_data_enable1_regval; + struct { + mmr_t data_enable1 : 64; + } sh_xn_pi_sic_cmp_data_enable1_s; +} sh_xn_pi_sic_cmp_data_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE2" */ +/* PI enable compare SIC data2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_data_enable2_u { + mmr_t sh_xn_pi_sic_cmp_data_enable2_regval; + struct { + mmr_t data_enable2 : 64; + } sh_xn_pi_sic_cmp_data_enable2_s; +} sh_xn_pi_sic_cmp_data_enable2_u_t; +#else +typedef union sh_xn_pi_sic_cmp_data_enable2_u { + mmr_t sh_xn_pi_sic_cmp_data_enable2_regval; + struct { + mmr_t data_enable2 : 64; + } sh_xn_pi_sic_cmp_data_enable2_s; +} sh_xn_pi_sic_cmp_data_enable2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE3" */ +/* PI enable compare SIC data3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_pi_sic_cmp_data_enable3_u { + mmr_t sh_xn_pi_sic_cmp_data_enable3_regval; + struct { + mmr_t data_enable3 : 64; + } sh_xn_pi_sic_cmp_data_enable3_s; +} sh_xn_pi_sic_cmp_data_enable3_u_t; +#else +typedef union sh_xn_pi_sic_cmp_data_enable3_u { + mmr_t sh_xn_pi_sic_cmp_data_enable3_regval; + struct { + mmr_t data_enable3 : 64; + } sh_xn_pi_sic_cmp_data_enable3_s; +} sh_xn_pi_sic_cmp_data_enable3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_IILB_CMP_EXP_DATA0" */ +/* NI0 compare IILB input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_iilb_cmp_exp_data0_u { + mmr_t sh_xn_ni0_iilb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_iilb_cmp_exp_data0_s; +} sh_xn_ni0_iilb_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni0_iilb_cmp_exp_data0_u { + mmr_t sh_xn_ni0_iilb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_iilb_cmp_exp_data0_s; +} sh_xn_ni0_iilb_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_IILB_CMP_EXP_DATA1" */ +/* NI0 compare IILB input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_iilb_cmp_exp_data1_u { + mmr_t sh_xn_ni0_iilb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_iilb_cmp_exp_data1_s; +} sh_xn_ni0_iilb_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni0_iilb_cmp_exp_data1_u { + mmr_t sh_xn_ni0_iilb_cmp_exp_data1_regval; + struct { + mmr_t 
data : 64; + } sh_xn_ni0_iilb_cmp_exp_data1_s; +} sh_xn_ni0_iilb_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_IILB_CMP_ENABLE0" */ +/* NI0 compare IILB input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_iilb_cmp_enable0_u { + mmr_t sh_xn_ni0_iilb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_iilb_cmp_enable0_s; +} sh_xn_ni0_iilb_cmp_enable0_u_t; +#else +typedef union sh_xn_ni0_iilb_cmp_enable0_u { + mmr_t sh_xn_ni0_iilb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_iilb_cmp_enable0_s; +} sh_xn_ni0_iilb_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_IILB_CMP_ENABLE1" */ +/* NI0 compare IILB input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_iilb_cmp_enable1_u { + mmr_t sh_xn_ni0_iilb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_iilb_cmp_enable1_s; +} sh_xn_ni0_iilb_cmp_enable1_u_t; +#else +typedef union sh_xn_ni0_iilb_cmp_enable1_u { + mmr_t sh_xn_ni0_iilb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_iilb_cmp_enable1_s; +} sh_xn_ni0_iilb_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_PI_CMP_EXP_DATA0" */ +/* NI0 compare PI input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_pi_cmp_exp_data0_u { + mmr_t sh_xn_ni0_pi_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_pi_cmp_exp_data0_s; +} sh_xn_ni0_pi_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni0_pi_cmp_exp_data0_u { + mmr_t sh_xn_ni0_pi_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_pi_cmp_exp_data0_s; +} sh_xn_ni0_pi_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_PI_CMP_EXP_DATA1" */ +/* NI0 compare PI input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_pi_cmp_exp_data1_u { + mmr_t sh_xn_ni0_pi_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_pi_cmp_exp_data1_s; +} sh_xn_ni0_pi_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni0_pi_cmp_exp_data1_u { + mmr_t sh_xn_ni0_pi_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_pi_cmp_exp_data1_s; +} sh_xn_ni0_pi_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_PI_CMP_ENABLE0" */ +/* NI0 compare PI input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_pi_cmp_enable0_u { + mmr_t sh_xn_ni0_pi_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_pi_cmp_enable0_s; +} sh_xn_ni0_pi_cmp_enable0_u_t; +#else +typedef union sh_xn_ni0_pi_cmp_enable0_u { + mmr_t sh_xn_ni0_pi_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_pi_cmp_enable0_s; +} sh_xn_ni0_pi_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_PI_CMP_ENABLE1" */ +/* NI0 compare PI input enable1 */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_pi_cmp_enable1_u { + mmr_t sh_xn_ni0_pi_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_pi_cmp_enable1_s; +} sh_xn_ni0_pi_cmp_enable1_u_t; +#else +typedef union sh_xn_ni0_pi_cmp_enable1_u { + mmr_t sh_xn_ni0_pi_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_pi_cmp_enable1_s; +} sh_xn_ni0_pi_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_MD_CMP_EXP_DATA0" */ +/* NI0 compare MD input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_md_cmp_exp_data0_u { + mmr_t sh_xn_ni0_md_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_md_cmp_exp_data0_s; +} sh_xn_ni0_md_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni0_md_cmp_exp_data0_u { + mmr_t sh_xn_ni0_md_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_md_cmp_exp_data0_s; +} sh_xn_ni0_md_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_MD_CMP_EXP_DATA1" */ +/* NI0 compare MD input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_md_cmp_exp_data1_u { + mmr_t sh_xn_ni0_md_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_md_cmp_exp_data1_s; +} sh_xn_ni0_md_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni0_md_cmp_exp_data1_u { + mmr_t sh_xn_ni0_md_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_md_cmp_exp_data1_s; +} sh_xn_ni0_md_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_MD_CMP_ENABLE0" */ +/* NI0 compare MD input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_md_cmp_enable0_u { + mmr_t sh_xn_ni0_md_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_md_cmp_enable0_s; +} sh_xn_ni0_md_cmp_enable0_u_t; +#else +typedef union sh_xn_ni0_md_cmp_enable0_u { + mmr_t sh_xn_ni0_md_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_md_cmp_enable0_s; +} sh_xn_ni0_md_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_MD_CMP_ENABLE1" */ +/* NI0 compare MD input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_md_cmp_enable1_u { + mmr_t sh_xn_ni0_md_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_md_cmp_enable1_s; +} sh_xn_ni0_md_cmp_enable1_u_t; +#else +typedef union sh_xn_ni0_md_cmp_enable1_u { + mmr_t sh_xn_ni0_md_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_md_cmp_enable1_s; +} sh_xn_ni0_md_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_NI_CMP_EXP_DATA0" */ +/* NI0 compare NI input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_ni_cmp_exp_data0_u { + mmr_t sh_xn_ni0_ni_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_ni_cmp_exp_data0_s; +} sh_xn_ni0_ni_cmp_exp_data0_u_t; +#else +typedef union 
sh_xn_ni0_ni_cmp_exp_data0_u { + mmr_t sh_xn_ni0_ni_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_ni_cmp_exp_data0_s; +} sh_xn_ni0_ni_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_NI_CMP_EXP_DATA1" */ +/* NI0 compare NI input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_ni_cmp_exp_data1_u { + mmr_t sh_xn_ni0_ni_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_ni_cmp_exp_data1_s; +} sh_xn_ni0_ni_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni0_ni_cmp_exp_data1_u { + mmr_t sh_xn_ni0_ni_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_ni_cmp_exp_data1_s; +} sh_xn_ni0_ni_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_NI_CMP_ENABLE0" */ +/* NI0 compare NI input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_ni_cmp_enable0_u { + mmr_t sh_xn_ni0_ni_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_ni_cmp_enable0_s; +} sh_xn_ni0_ni_cmp_enable0_u_t; +#else +typedef union sh_xn_ni0_ni_cmp_enable0_u { + mmr_t sh_xn_ni0_ni_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_ni_cmp_enable0_s; +} sh_xn_ni0_ni_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_NI_CMP_ENABLE1" */ +/* NI0 compare NI input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_ni_cmp_enable1_u { + mmr_t sh_xn_ni0_ni_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_ni_cmp_enable1_s; +} sh_xn_ni0_ni_cmp_enable1_u_t; +#else +typedef union sh_xn_ni0_ni_cmp_enable1_u { + mmr_t sh_xn_ni0_ni_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_ni_cmp_enable1_s; +} sh_xn_ni0_ni_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_LLP_CMP_EXP_DATA0" */ +/* NI0 compare LLP input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_llp_cmp_exp_data0_u { + mmr_t sh_xn_ni0_llp_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_llp_cmp_exp_data0_s; +} sh_xn_ni0_llp_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni0_llp_cmp_exp_data0_u { + mmr_t sh_xn_ni0_llp_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_llp_cmp_exp_data0_s; +} sh_xn_ni0_llp_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_LLP_CMP_EXP_DATA1" */ +/* NI0 compare LLP input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_llp_cmp_exp_data1_u { + mmr_t sh_xn_ni0_llp_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_llp_cmp_exp_data1_s; +} sh_xn_ni0_llp_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni0_llp_cmp_exp_data1_u { + mmr_t sh_xn_ni0_llp_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni0_llp_cmp_exp_data1_s; +} sh_xn_ni0_llp_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_XN_NI0_LLP_CMP_ENABLE0" */ +/* NI0 compare LLP input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_llp_cmp_enable0_u { + mmr_t sh_xn_ni0_llp_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_llp_cmp_enable0_s; +} sh_xn_ni0_llp_cmp_enable0_u_t; +#else +typedef union sh_xn_ni0_llp_cmp_enable0_u { + mmr_t sh_xn_ni0_llp_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_llp_cmp_enable0_s; +} sh_xn_ni0_llp_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI0_LLP_CMP_ENABLE1" */ +/* NI0 compare LLP input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni0_llp_cmp_enable1_u { + mmr_t sh_xn_ni0_llp_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_llp_cmp_enable1_s; +} sh_xn_ni0_llp_cmp_enable1_u_t; +#else +typedef union sh_xn_ni0_llp_cmp_enable1_u { + mmr_t sh_xn_ni0_llp_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni0_llp_cmp_enable1_s; +} sh_xn_ni0_llp_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_IILB_CMP_EXP_DATA0" */ +/* NI1 compare IILB input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_iilb_cmp_exp_data0_u { + mmr_t sh_xn_ni1_iilb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_iilb_cmp_exp_data0_s; +} sh_xn_ni1_iilb_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni1_iilb_cmp_exp_data0_u { + mmr_t sh_xn_ni1_iilb_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_iilb_cmp_exp_data0_s; +} sh_xn_ni1_iilb_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_IILB_CMP_EXP_DATA1" */ +/* NI1 compare IILB input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_iilb_cmp_exp_data1_u { + mmr_t sh_xn_ni1_iilb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_iilb_cmp_exp_data1_s; +} sh_xn_ni1_iilb_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni1_iilb_cmp_exp_data1_u { + mmr_t sh_xn_ni1_iilb_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_iilb_cmp_exp_data1_s; +} sh_xn_ni1_iilb_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_IILB_CMP_ENABLE0" */ +/* NI1 compare IILB input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_iilb_cmp_enable0_u { + mmr_t sh_xn_ni1_iilb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_iilb_cmp_enable0_s; +} sh_xn_ni1_iilb_cmp_enable0_u_t; +#else +typedef union sh_xn_ni1_iilb_cmp_enable0_u { + mmr_t sh_xn_ni1_iilb_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_iilb_cmp_enable0_s; +} sh_xn_ni1_iilb_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_IILB_CMP_ENABLE1" */ +/* NI1 compare IILB input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_iilb_cmp_enable1_u { + mmr_t 
sh_xn_ni1_iilb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_iilb_cmp_enable1_s; +} sh_xn_ni1_iilb_cmp_enable1_u_t; +#else +typedef union sh_xn_ni1_iilb_cmp_enable1_u { + mmr_t sh_xn_ni1_iilb_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_iilb_cmp_enable1_s; +} sh_xn_ni1_iilb_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_PI_CMP_EXP_DATA0" */ +/* NI1 compare PI input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_pi_cmp_exp_data0_u { + mmr_t sh_xn_ni1_pi_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_pi_cmp_exp_data0_s; +} sh_xn_ni1_pi_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni1_pi_cmp_exp_data0_u { + mmr_t sh_xn_ni1_pi_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_pi_cmp_exp_data0_s; +} sh_xn_ni1_pi_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_PI_CMP_EXP_DATA1" */ +/* NI1 compare PI input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_pi_cmp_exp_data1_u { + mmr_t sh_xn_ni1_pi_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_pi_cmp_exp_data1_s; +} sh_xn_ni1_pi_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni1_pi_cmp_exp_data1_u { + mmr_t sh_xn_ni1_pi_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_pi_cmp_exp_data1_s; +} sh_xn_ni1_pi_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_PI_CMP_ENABLE0" */ +/* NI1 compare PI input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_pi_cmp_enable0_u { + mmr_t sh_xn_ni1_pi_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_pi_cmp_enable0_s; +} sh_xn_ni1_pi_cmp_enable0_u_t; +#else +typedef union sh_xn_ni1_pi_cmp_enable0_u { + mmr_t sh_xn_ni1_pi_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_pi_cmp_enable0_s; +} sh_xn_ni1_pi_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_PI_CMP_ENABLE1" */ +/* NI1 compare PI input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_pi_cmp_enable1_u { + mmr_t sh_xn_ni1_pi_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_pi_cmp_enable1_s; +} sh_xn_ni1_pi_cmp_enable1_u_t; +#else +typedef union sh_xn_ni1_pi_cmp_enable1_u { + mmr_t sh_xn_ni1_pi_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_pi_cmp_enable1_s; +} sh_xn_ni1_pi_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_MD_CMP_EXP_DATA0" */ +/* NI1 compare MD input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_md_cmp_exp_data0_u { + mmr_t sh_xn_ni1_md_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_md_cmp_exp_data0_s; +} sh_xn_ni1_md_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni1_md_cmp_exp_data0_u { + mmr_t sh_xn_ni1_md_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_md_cmp_exp_data0_s; +} 
sh_xn_ni1_md_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_MD_CMP_EXP_DATA1" */ +/* NI1 compare MD input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_md_cmp_exp_data1_u { + mmr_t sh_xn_ni1_md_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_md_cmp_exp_data1_s; +} sh_xn_ni1_md_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni1_md_cmp_exp_data1_u { + mmr_t sh_xn_ni1_md_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_md_cmp_exp_data1_s; +} sh_xn_ni1_md_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_MD_CMP_ENABLE0" */ +/* NI1 compare MD input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_md_cmp_enable0_u { + mmr_t sh_xn_ni1_md_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_md_cmp_enable0_s; +} sh_xn_ni1_md_cmp_enable0_u_t; +#else +typedef union sh_xn_ni1_md_cmp_enable0_u { + mmr_t sh_xn_ni1_md_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_md_cmp_enable0_s; +} sh_xn_ni1_md_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_MD_CMP_ENABLE1" */ +/* NI1 compare MD input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_md_cmp_enable1_u { + mmr_t sh_xn_ni1_md_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_md_cmp_enable1_s; +} sh_xn_ni1_md_cmp_enable1_u_t; +#else +typedef union sh_xn_ni1_md_cmp_enable1_u { + mmr_t sh_xn_ni1_md_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_md_cmp_enable1_s; +} sh_xn_ni1_md_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_NI_CMP_EXP_DATA0" */ +/* NI1 compare NI input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_ni_cmp_exp_data0_u { + mmr_t sh_xn_ni1_ni_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_ni_cmp_exp_data0_s; +} sh_xn_ni1_ni_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni1_ni_cmp_exp_data0_u { + mmr_t sh_xn_ni1_ni_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_ni_cmp_exp_data0_s; +} sh_xn_ni1_ni_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_NI_CMP_EXP_DATA1" */ +/* NI1 compare NI input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_ni_cmp_exp_data1_u { + mmr_t sh_xn_ni1_ni_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_ni_cmp_exp_data1_s; +} sh_xn_ni1_ni_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni1_ni_cmp_exp_data1_u { + mmr_t sh_xn_ni1_ni_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_ni_cmp_exp_data1_s; +} sh_xn_ni1_ni_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_NI_CMP_ENABLE0" */ +/* NI1 compare NI input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union 
sh_xn_ni1_ni_cmp_enable0_u { + mmr_t sh_xn_ni1_ni_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_ni_cmp_enable0_s; +} sh_xn_ni1_ni_cmp_enable0_u_t; +#else +typedef union sh_xn_ni1_ni_cmp_enable0_u { + mmr_t sh_xn_ni1_ni_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_ni_cmp_enable0_s; +} sh_xn_ni1_ni_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_NI_CMP_ENABLE1" */ +/* NI1 compare NI input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_ni_cmp_enable1_u { + mmr_t sh_xn_ni1_ni_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_ni_cmp_enable1_s; +} sh_xn_ni1_ni_cmp_enable1_u_t; +#else +typedef union sh_xn_ni1_ni_cmp_enable1_u { + mmr_t sh_xn_ni1_ni_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_ni_cmp_enable1_s; +} sh_xn_ni1_ni_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_LLP_CMP_EXP_DATA0" */ +/* NI1 compare LLP input expected data0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_llp_cmp_exp_data0_u { + mmr_t sh_xn_ni1_llp_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_llp_cmp_exp_data0_s; +} sh_xn_ni1_llp_cmp_exp_data0_u_t; +#else +typedef union sh_xn_ni1_llp_cmp_exp_data0_u { + mmr_t sh_xn_ni1_llp_cmp_exp_data0_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_llp_cmp_exp_data0_s; +} sh_xn_ni1_llp_cmp_exp_data0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_LLP_CMP_EXP_DATA1" */ +/* NI1 compare LLP input expected data1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_llp_cmp_exp_data1_u { + mmr_t sh_xn_ni1_llp_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_llp_cmp_exp_data1_s; +} sh_xn_ni1_llp_cmp_exp_data1_u_t; +#else +typedef union sh_xn_ni1_llp_cmp_exp_data1_u { + mmr_t sh_xn_ni1_llp_cmp_exp_data1_regval; + struct { + mmr_t data : 64; + } sh_xn_ni1_llp_cmp_exp_data1_s; +} sh_xn_ni1_llp_cmp_exp_data1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_LLP_CMP_ENABLE0" */ +/* NI1 compare LLP input enable0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_llp_cmp_enable0_u { + mmr_t sh_xn_ni1_llp_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_llp_cmp_enable0_s; +} sh_xn_ni1_llp_cmp_enable0_u_t; +#else +typedef union sh_xn_ni1_llp_cmp_enable0_u { + mmr_t sh_xn_ni1_llp_cmp_enable0_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_llp_cmp_enable0_s; +} sh_xn_ni1_llp_cmp_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_NI1_LLP_CMP_ENABLE1" */ +/* NI1 compare LLP input enable1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_ni1_llp_cmp_enable1_u { + mmr_t sh_xn_ni1_llp_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } sh_xn_ni1_llp_cmp_enable1_s; +} sh_xn_ni1_llp_cmp_enable1_u_t; +#else +typedef union sh_xn_ni1_llp_cmp_enable1_u { + mmr_t sh_xn_ni1_llp_cmp_enable1_regval; + struct { + mmr_t enable : 64; + } 
sh_xn_ni1_llp_cmp_enable1_s; +} sh_xn_ni1_llp_cmp_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC_INJ_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_ecc_inj_reg_u { + mmr_t sh_xnpi_ecc_inj_reg_regval; + struct { + mmr_t byte0 : 8; + mmr_t reserved_0 : 4; + mmr_t data_1shot0 : 1; + mmr_t data_cont0 : 1; + mmr_t data_cb_1shot0 : 1; + mmr_t data_cb_cont0 : 1; + mmr_t byte1 : 8; + mmr_t reserved_1 : 4; + mmr_t data_1shot1 : 1; + mmr_t data_cont1 : 1; + mmr_t data_cb_1shot1 : 1; + mmr_t data_cb_cont1 : 1; + mmr_t byte2 : 8; + mmr_t reserved_2 : 4; + mmr_t data_1shot2 : 1; + mmr_t data_cont2 : 1; + mmr_t data_cb_1shot2 : 1; + mmr_t data_cb_cont2 : 1; + mmr_t byte3 : 8; + mmr_t reserved_3 : 4; + mmr_t data_1shot3 : 1; + mmr_t data_cont3 : 1; + mmr_t data_cb_1shot3 : 1; + mmr_t data_cb_cont3 : 1; + } sh_xnpi_ecc_inj_reg_s; +} sh_xnpi_ecc_inj_reg_u_t; +#else +typedef union sh_xnpi_ecc_inj_reg_u { + mmr_t sh_xnpi_ecc_inj_reg_regval; + struct { + mmr_t data_cb_cont3 : 1; + mmr_t data_cb_1shot3 : 1; + mmr_t data_cont3 : 1; + mmr_t data_1shot3 : 1; + mmr_t reserved_3 : 4; + mmr_t byte3 : 8; + mmr_t data_cb_cont2 : 1; + mmr_t data_cb_1shot2 : 1; + mmr_t data_cont2 : 1; + mmr_t data_1shot2 : 1; + mmr_t reserved_2 : 4; + mmr_t byte2 : 8; + mmr_t data_cb_cont1 : 1; + mmr_t data_cb_1shot1 : 1; + mmr_t data_cont1 : 1; + mmr_t data_1shot1 : 1; + mmr_t reserved_1 : 4; + mmr_t byte1 : 8; + mmr_t data_cb_cont0 : 1; + mmr_t data_cb_1shot0 : 1; + mmr_t data_cont0 : 1; + mmr_t data_1shot0 : 1; + mmr_t reserved_0 : 4; + mmr_t byte0 : 8; + } sh_xnpi_ecc_inj_reg_s; +} sh_xnpi_ecc_inj_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC0_INJ_MASK_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_ecc0_inj_mask_reg_u { + mmr_t sh_xnpi_ecc0_inj_mask_reg_regval; + struct { + mmr_t mask_ecc0 : 64; + } sh_xnpi_ecc0_inj_mask_reg_s; +} sh_xnpi_ecc0_inj_mask_reg_u_t; +#else +typedef union sh_xnpi_ecc0_inj_mask_reg_u { + mmr_t sh_xnpi_ecc0_inj_mask_reg_regval; + struct { + mmr_t mask_ecc0 : 64; + } sh_xnpi_ecc0_inj_mask_reg_s; +} sh_xnpi_ecc0_inj_mask_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC1_INJ_MASK_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_ecc1_inj_mask_reg_u { + mmr_t sh_xnpi_ecc1_inj_mask_reg_regval; + struct { + mmr_t mask_ecc1 : 64; + } sh_xnpi_ecc1_inj_mask_reg_s; +} sh_xnpi_ecc1_inj_mask_reg_u_t; +#else +typedef union sh_xnpi_ecc1_inj_mask_reg_u { + mmr_t sh_xnpi_ecc1_inj_mask_reg_regval; + struct { + mmr_t mask_ecc1 : 64; + } sh_xnpi_ecc1_inj_mask_reg_s; +} sh_xnpi_ecc1_inj_mask_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC2_INJ_MASK_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_ecc2_inj_mask_reg_u { + mmr_t sh_xnpi_ecc2_inj_mask_reg_regval; + struct { + mmr_t mask_ecc2 : 64; + } sh_xnpi_ecc2_inj_mask_reg_s; +} sh_xnpi_ecc2_inj_mask_reg_u_t; +#else +typedef union sh_xnpi_ecc2_inj_mask_reg_u { + mmr_t sh_xnpi_ecc2_inj_mask_reg_regval; + struct { + mmr_t mask_ecc2 : 64; + } 
sh_xnpi_ecc2_inj_mask_reg_s; +} sh_xnpi_ecc2_inj_mask_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_ECC3_INJ_MASK_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_ecc3_inj_mask_reg_u { + mmr_t sh_xnpi_ecc3_inj_mask_reg_regval; + struct { + mmr_t mask_ecc3 : 64; + } sh_xnpi_ecc3_inj_mask_reg_s; +} sh_xnpi_ecc3_inj_mask_reg_u_t; +#else +typedef union sh_xnpi_ecc3_inj_mask_reg_u { + mmr_t sh_xnpi_ecc3_inj_mask_reg_regval; + struct { + mmr_t mask_ecc3 : 64; + } sh_xnpi_ecc3_inj_mask_reg_s; +} sh_xnpi_ecc3_inj_mask_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC_INJ_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_ecc_inj_reg_u { + mmr_t sh_xnmd_ecc_inj_reg_regval; + struct { + mmr_t byte0 : 8; + mmr_t reserved_0 : 4; + mmr_t data_1shot0 : 1; + mmr_t data_cont0 : 1; + mmr_t data_cb_1shot0 : 1; + mmr_t data_cb_cont0 : 1; + mmr_t byte1 : 8; + mmr_t reserved_1 : 4; + mmr_t data_1shot1 : 1; + mmr_t data_cont1 : 1; + mmr_t data_cb_1shot1 : 1; + mmr_t data_cb_cont1 : 1; + mmr_t byte2 : 8; + mmr_t reserved_2 : 4; + mmr_t data_1shot2 : 1; + mmr_t data_cont2 : 1; + mmr_t data_cb_1shot2 : 1; + mmr_t data_cb_cont2 : 1; + mmr_t byte3 : 8; + mmr_t reserved_3 : 4; + mmr_t data_1shot3 : 1; + mmr_t data_cont3 : 1; + mmr_t data_cb_1shot3 : 1; + mmr_t data_cb_cont3 : 1; + } sh_xnmd_ecc_inj_reg_s; +} sh_xnmd_ecc_inj_reg_u_t; +#else +typedef union sh_xnmd_ecc_inj_reg_u { + mmr_t sh_xnmd_ecc_inj_reg_regval; + struct { + mmr_t data_cb_cont3 : 1; + mmr_t data_cb_1shot3 : 1; + mmr_t data_cont3 : 1; + mmr_t data_1shot3 : 1; + mmr_t reserved_3 : 4; + mmr_t byte3 : 8; + mmr_t data_cb_cont2 : 1; + mmr_t data_cb_1shot2 : 1; + mmr_t data_cont2 : 1; + mmr_t data_1shot2 : 1; + mmr_t reserved_2 : 4; + mmr_t byte2 : 8; + mmr_t data_cb_cont1 : 1; + mmr_t data_cb_1shot1 : 1; + mmr_t data_cont1 : 1; + mmr_t data_1shot1 : 1; + mmr_t reserved_1 : 4; + mmr_t byte1 : 8; + mmr_t data_cb_cont0 : 1; + mmr_t data_cb_1shot0 : 1; + mmr_t data_cont0 : 1; + mmr_t data_1shot0 : 1; + mmr_t reserved_0 : 4; + mmr_t byte0 : 8; + } sh_xnmd_ecc_inj_reg_s; +} sh_xnmd_ecc_inj_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC0_INJ_MASK_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_ecc0_inj_mask_reg_u { + mmr_t sh_xnmd_ecc0_inj_mask_reg_regval; + struct { + mmr_t mask_ecc0 : 64; + } sh_xnmd_ecc0_inj_mask_reg_s; +} sh_xnmd_ecc0_inj_mask_reg_u_t; +#else +typedef union sh_xnmd_ecc0_inj_mask_reg_u { + mmr_t sh_xnmd_ecc0_inj_mask_reg_regval; + struct { + mmr_t mask_ecc0 : 64; + } sh_xnmd_ecc0_inj_mask_reg_s; +} sh_xnmd_ecc0_inj_mask_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC1_INJ_MASK_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_ecc1_inj_mask_reg_u { + mmr_t sh_xnmd_ecc1_inj_mask_reg_regval; + struct { + mmr_t mask_ecc1 : 64; + } sh_xnmd_ecc1_inj_mask_reg_s; +} sh_xnmd_ecc1_inj_mask_reg_u_t; +#else +typedef union sh_xnmd_ecc1_inj_mask_reg_u { + mmr_t sh_xnmd_ecc1_inj_mask_reg_regval; + struct { + mmr_t mask_ecc1 : 64; + } 
sh_xnmd_ecc1_inj_mask_reg_s; +} sh_xnmd_ecc1_inj_mask_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC2_INJ_MASK_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_ecc2_inj_mask_reg_u { + mmr_t sh_xnmd_ecc2_inj_mask_reg_regval; + struct { + mmr_t mask_ecc2 : 64; + } sh_xnmd_ecc2_inj_mask_reg_s; +} sh_xnmd_ecc2_inj_mask_reg_u_t; +#else +typedef union sh_xnmd_ecc2_inj_mask_reg_u { + mmr_t sh_xnmd_ecc2_inj_mask_reg_regval; + struct { + mmr_t mask_ecc2 : 64; + } sh_xnmd_ecc2_inj_mask_reg_s; +} sh_xnmd_ecc2_inj_mask_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC3_INJ_MASK_REG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_ecc3_inj_mask_reg_u { + mmr_t sh_xnmd_ecc3_inj_mask_reg_regval; + struct { + mmr_t mask_ecc3 : 64; + } sh_xnmd_ecc3_inj_mask_reg_s; +} sh_xnmd_ecc3_inj_mask_reg_u_t; +#else +typedef union sh_xnmd_ecc3_inj_mask_reg_u { + mmr_t sh_xnmd_ecc3_inj_mask_reg_regval; + struct { + mmr_t mask_ecc3 : 64; + } sh_xnmd_ecc3_inj_mask_reg_s; +} sh_xnmd_ecc3_inj_mask_reg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ECC_ERR_REPORT" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_ecc_err_report_u { + mmr_t sh_xnmd_ecc_err_report_regval; + struct { + mmr_t ecc_disable0 : 1; + mmr_t reserved_0 : 15; + mmr_t ecc_disable1 : 1; + mmr_t reserved_1 : 15; + mmr_t ecc_disable2 : 1; + mmr_t reserved_2 : 15; + mmr_t ecc_disable3 : 1; + mmr_t reserved_3 : 15; + } sh_xnmd_ecc_err_report_s; +} sh_xnmd_ecc_err_report_u_t; +#else +typedef union sh_xnmd_ecc_err_report_u { + mmr_t sh_xnmd_ecc_err_report_regval; + struct { + mmr_t reserved_3 : 15; + mmr_t ecc_disable3 : 1; + mmr_t reserved_2 : 15; + mmr_t ecc_disable2 : 1; + mmr_t reserved_1 : 15; + mmr_t ecc_disable1 : 1; + mmr_t reserved_0 : 15; + mmr_t ecc_disable0 : 1; + } sh_xnmd_ecc_err_report_s; +} sh_xnmd_ecc_err_report_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_SUMMARY_1" */ +/* ni0 Error Summary Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_error_summary_1_u { + mmr_t sh_ni0_error_summary_1_regval; + struct { + mmr_t overflow_fifo02_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + 
mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc3 : 1; + } sh_ni0_error_summary_1_s; +} sh_ni0_error_summary_1_u_t; +#else +typedef union sh_ni0_error_summary_1_u { + mmr_t sh_ni0_error_summary_1_regval; + struct { + mmr_t tail_timeout_ni_vc3 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 
1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo02_debit0 : 1; + } sh_ni0_error_summary_1_s; +} sh_ni0_error_summary_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_SUMMARY_2" */ +/* ni0 Error Summary Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_error_summary_2_u { + mmr_t sh_ni0_error_summary_2_regval; + struct { + mmr_t illegal_vcni : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vciilb : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow2_vc2_credit : 1; + mmr_t reserved_0 : 10; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + mmr_t retry_timeout_error : 1; + mmr_t reserved_1 : 1; + } sh_ni0_error_summary_2_s; +} sh_ni0_error_summary_2_u_t; +#else +typedef union sh_ni0_error_summary_2_u { + mmr_t sh_ni0_error_summary_2_regval; + struct { + mmr_t reserved_1 : 1; + mmr_t retry_timeout_error : 1; + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t 
underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t reserved_0 : 10; + mmr_t underflow2_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t illegal_vciilb : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcni : 1; + } sh_ni0_error_summary_2_s; +} sh_ni0_error_summary_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_OVERFLOW_1" */ +/* ni0 Error Overflow Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_error_overflow_1_u { + mmr_t sh_ni0_error_overflow_1_regval; + struct { + mmr_t overflow_fifo02_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t 
overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc3 : 1; + } sh_ni0_error_overflow_1_s; +} sh_ni0_error_overflow_1_u_t; +#else +typedef union sh_ni0_error_overflow_1_u { + mmr_t sh_ni0_error_overflow_1_regval; + struct { + mmr_t tail_timeout_ni_vc3 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo02_debit0 : 1; + } sh_ni0_error_overflow_1_s; +} sh_ni0_error_overflow_1_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_NI0_ERROR_OVERFLOW_2" */ +/* ni0 Error Overflow Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_error_overflow_2_u { + mmr_t sh_ni0_error_overflow_2_regval; + struct { + mmr_t illegal_vcni : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vciilb : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow2_vc2_credit : 1; + mmr_t reserved_0 : 10; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + mmr_t retry_timeout_error : 1; + mmr_t reserved_1 : 1; + } sh_ni0_error_overflow_2_s; +} sh_ni0_error_overflow_2_u_t; +#else +typedef union sh_ni0_error_overflow_2_u { + mmr_t sh_ni0_error_overflow_2_regval; + struct { + mmr_t reserved_1 : 1; + mmr_t retry_timeout_error : 1; + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + 
mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t reserved_0 : 10; + mmr_t underflow2_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t illegal_vciilb : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcni : 1; + } sh_ni0_error_overflow_2_s; +} sh_ni0_error_overflow_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_MASK_1" */ +/* ni0 Error Mask Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_error_mask_1_u { + mmr_t sh_ni0_error_mask_1_regval; + struct { + mmr_t overflow_fifo02_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + 
mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc3 : 1; + } sh_ni0_error_mask_1_s; +} sh_ni0_error_mask_1_u_t; +#else +typedef union sh_ni0_error_mask_1_u { + mmr_t sh_ni0_error_mask_1_regval; + struct { + mmr_t tail_timeout_ni_vc3 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo02_debit0 : 1; + } sh_ni0_error_mask_1_s; +} sh_ni0_error_mask_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_MASK_2" */ +/* ni0 Error Mask Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_error_mask_2_u { + mmr_t sh_ni0_error_mask_2_regval; + struct { + mmr_t illegal_vcni : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vciilb : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t 
underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow2_vc2_credit : 1; + mmr_t reserved_0 : 10; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + mmr_t retry_timeout_error : 1; + mmr_t reserved_1 : 1; + } sh_ni0_error_mask_2_s; +} sh_ni0_error_mask_2_u_t; +#else +typedef union sh_ni0_error_mask_2_u { + mmr_t sh_ni0_error_mask_2_regval; + struct { + mmr_t reserved_1 : 1; + mmr_t retry_timeout_error : 1; + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t reserved_0 : 10; + mmr_t underflow2_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t 
underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t illegal_vciilb : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcni : 1; + } sh_ni0_error_mask_2_s; +} sh_ni0_error_mask_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_FIRST_ERROR_1" */ +/* ni0 First Error Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_first_error_1_u { + mmr_t sh_ni0_first_error_1_regval; + struct { + mmr_t overflow_fifo02_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc3 : 1; + } sh_ni0_first_error_1_s; +} sh_ni0_first_error_1_u_t; +#else +typedef union sh_ni0_first_error_1_u { + mmr_t sh_ni0_first_error_1_regval; + struct { + mmr_t tail_timeout_ni_vc3 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t 
tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo02_debit0 : 1; + } sh_ni0_first_error_1_s; +} sh_ni0_first_error_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_FIRST_ERROR_2" */ +/* ni0 First Error Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_first_error_2_u { + mmr_t sh_ni0_first_error_2_regval; + struct { + mmr_t illegal_vcni : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vciilb : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow2_vc2_credit : 1; + mmr_t reserved_0 : 10; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t 
underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + mmr_t retry_timeout_error : 1; + mmr_t reserved_1 : 1; + } sh_ni0_first_error_2_s; +} sh_ni0_first_error_2_u_t; +#else +typedef union sh_ni0_first_error_2_u { + mmr_t sh_ni0_first_error_2_regval; + struct { + mmr_t reserved_1 : 1; + mmr_t retry_timeout_error : 1; + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t reserved_0 : 10; + mmr_t underflow2_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t illegal_vciilb : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcni : 1; + } sh_ni0_first_error_2_s; +} sh_ni0_first_error_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_DETAIL_1" */ +/* ni0 Chiplet no match header bits 63:0 */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_error_detail_1_u { + mmr_t sh_ni0_error_detail_1_regval; + struct { + mmr_t header : 64; + } sh_ni0_error_detail_1_s; +} sh_ni0_error_detail_1_u_t; +#else +typedef union sh_ni0_error_detail_1_u { + mmr_t sh_ni0_error_detail_1_regval; + struct { + mmr_t header : 64; + } sh_ni0_error_detail_1_s; +} sh_ni0_error_detail_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI0_ERROR_DETAIL_2" */ +/* ni0 Chiplet no match header bits 127:64 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni0_error_detail_2_u { + mmr_t sh_ni0_error_detail_2_regval; + struct { + mmr_t header : 64; + } sh_ni0_error_detail_2_s; +} sh_ni0_error_detail_2_u_t; +#else +typedef union sh_ni0_error_detail_2_u { + mmr_t sh_ni0_error_detail_2_regval; + struct { + mmr_t header : 64; + } sh_ni0_error_detail_2_s; +} sh_ni0_error_detail_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_SUMMARY_1" */ +/* ni1 Error Summary Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_summary_1_u { + mmr_t sh_ni1_error_summary_1_regval; + struct { + mmr_t overflow_fifo02_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; 
+ mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc3 : 1; + } sh_ni1_error_summary_1_s; +} sh_ni1_error_summary_1_u_t; +#else +typedef union sh_ni1_error_summary_1_u { + mmr_t sh_ni1_error_summary_1_regval; + struct { + mmr_t tail_timeout_ni_vc3 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo02_debit0 : 1; + } sh_ni1_error_summary_1_s; +} sh_ni1_error_summary_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_SUMMARY_2" */ +/* ni1 Error Summary Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_summary_2_u { + mmr_t sh_ni1_error_summary_2_regval; + struct { + mmr_t illegal_vcni : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vciilb : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t 
underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow2_vc2_credit : 1; + mmr_t reserved_0 : 10; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + mmr_t retry_timeout_error : 1; + mmr_t reserved_1 : 1; + } sh_ni1_error_summary_2_s; +} sh_ni1_error_summary_2_u_t; +#else +typedef union sh_ni1_error_summary_2_u { + mmr_t sh_ni1_error_summary_2_regval; + struct { + mmr_t reserved_1 : 1; + mmr_t retry_timeout_error : 1; + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t reserved_0 : 10; + mmr_t underflow2_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t 
underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t illegal_vciilb : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcni : 1; + } sh_ni1_error_summary_2_s; +} sh_ni1_error_summary_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_OVERFLOW_1" */ +/* ni1 Error Overflow Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_overflow_1_u { + mmr_t sh_ni1_error_overflow_1_regval; + struct { + mmr_t overflow_fifo02_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc3 : 1; + } sh_ni1_error_overflow_1_s; +} sh_ni1_error_overflow_1_u_t; +#else +typedef union sh_ni1_error_overflow_1_u { + mmr_t sh_ni1_error_overflow_1_regval; + struct { + mmr_t tail_timeout_ni_vc3 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t 
tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo02_debit0 : 1; + } sh_ni1_error_overflow_1_s; +} sh_ni1_error_overflow_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_OVERFLOW_2" */ +/* ni1 Error Overflow Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_overflow_2_u { + mmr_t sh_ni1_error_overflow_2_regval; + struct { + mmr_t illegal_vcni : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vciilb : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow2_vc2_credit : 1; + mmr_t reserved_0 : 10; + mmr_t underflow_pi_fifo_vc0_pop : 1; + 
mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + mmr_t retry_timeout_error : 1; + mmr_t reserved_1 : 1; + } sh_ni1_error_overflow_2_s; +} sh_ni1_error_overflow_2_u_t; +#else +typedef union sh_ni1_error_overflow_2_u { + mmr_t sh_ni1_error_overflow_2_regval; + struct { + mmr_t reserved_1 : 1; + mmr_t retry_timeout_error : 1; + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t reserved_0 : 10; + mmr_t underflow2_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t illegal_vciilb : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcni : 1; + } sh_ni1_error_overflow_2_s; +} sh_ni1_error_overflow_2_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_NI1_ERROR_MASK_1" */ +/* ni1 Error Mask Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_mask_1_u { + mmr_t sh_ni1_error_mask_1_regval; + struct { + mmr_t overflow_fifo02_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc3 : 1; + } sh_ni1_error_mask_1_s; +} sh_ni1_error_mask_1_u_t; +#else +typedef union sh_ni1_error_mask_1_u { + mmr_t sh_ni1_error_mask_1_regval; + struct { + mmr_t tail_timeout_ni_vc3 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t 
overflow_md_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo02_debit0 : 1; + } sh_ni1_error_mask_1_s; +} sh_ni1_error_mask_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_MASK_2" */ +/* ni1 Error Mask Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_mask_2_u { + mmr_t sh_ni1_error_mask_2_regval; + struct { + mmr_t illegal_vcni : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vciilb : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow2_vc2_credit : 1; + mmr_t reserved_0 : 10; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t 
underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + mmr_t retry_timeout_error : 1; + mmr_t reserved_1 : 1; + } sh_ni1_error_mask_2_s; +} sh_ni1_error_mask_2_u_t; +#else +typedef union sh_ni1_error_mask_2_u { + mmr_t sh_ni1_error_mask_2_regval; + struct { + mmr_t reserved_1 : 1; + mmr_t retry_timeout_error : 1; + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t reserved_0 : 10; + mmr_t underflow2_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t illegal_vciilb : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcni : 1; + } sh_ni1_error_mask_2_s; +} sh_ni1_error_mask_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_FIRST_ERROR_1" */ +/* ni1 First Error Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_first_error_1_u { + mmr_t sh_ni1_first_error_1_regval; + struct { + mmr_t overflow_fifo02_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t 
overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc3 : 1; + } sh_ni1_first_error_1_s; +} sh_ni1_first_error_1_u_t; +#else +typedef union sh_ni1_first_error_1_u { + mmr_t sh_ni1_first_error_1_regval; + struct { + mmr_t tail_timeout_ni_vc3 : 1; + mmr_t tail_timeout_ni_vc2 : 1; + mmr_t tail_timeout_ni_vc1 : 1; + mmr_t tail_timeout_ni_vc0 : 1; + mmr_t tail_timeout_fifo13_vc3 : 1; + mmr_t tail_timeout_fifo13_vc1 : 1; + mmr_t tail_timeout_fifo02_vc2 : 1; + mmr_t tail_timeout_fifo02_vc0 : 1; + mmr_t overflow_ni_fifo_vc3_credit : 1; + mmr_t overflow_ni_fifo_vc2_credit : 1; + mmr_t overflow_ni_fifo_vc1_credit : 1; + mmr_t overflow_ni_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_credit : 1; + mmr_t overflow_md_fifo_vc0_credit : 1; + mmr_t overflow_iilb_fifo_vc2_credit : 1; + mmr_t overflow_iilb_fifo_vc0_credit : 1; + mmr_t overflow_pi_fifo_vc2_credit : 1; + mmr_t overflow_pi_fifo_vc0_credit : 1; + mmr_t overflow_md_fifo_vc2_push : 1; + mmr_t overflow_md_fifo_vc0_push : 1; + mmr_t overflow_iilb_fifo_vc2_push : 1; + mmr_t overflow_iilb_fifo_vc0_push : 1; + mmr_t overflow_pi_fifo_vc2_push : 1; + mmr_t overflow_pi_fifo_vc0_push : 1; + mmr_t overflow_ni_fifo_vc2_pop : 1; + mmr_t overflow_ni_fifo_vc0_pop : 1; + mmr_t overflow_md_fifo_vc2_pop : 1; + mmr_t overflow_md_fifo_vc0_pop : 1; + mmr_t overflow_iilb_fifo_vc2_pop : 1; + mmr_t overflow_iilb_fifo_vc0_pop : 1; + mmr_t overflow_pi_fifo_vc2_pop : 1; + mmr_t overflow_pi_fifo_vc0_pop : 1; + mmr_t overflow_ni_fifo_debit3 : 1; + mmr_t overflow_ni_fifo_debit2 : 1; + mmr_t overflow_ni_fifo_debit1 : 1; + mmr_t overflow_ni_fifo_debit0 : 1; + 
mmr_t overflow_md_fifo_debit2 : 1; + mmr_t overflow_md_fifo_debit0 : 1; + mmr_t overflow_iilb_fifo_debit2 : 1; + mmr_t overflow_iilb_fifo_debit0 : 1; + mmr_t overflow_pi_fifo_debit2 : 1; + mmr_t overflow_pi_fifo_debit0 : 1; + mmr_t overflow2_vc2_credit : 1; + mmr_t overflow1_vc2_credit : 1; + mmr_t overflow0_vc2_credit : 1; + mmr_t overflow2_vc0_credit : 1; + mmr_t overflow1_vc0_credit : 1; + mmr_t overflow0_vc0_credit : 1; + mmr_t overflow_fifo13_vc2_credit : 1; + mmr_t overflow_fifo13_vc0_credit : 1; + mmr_t overflow_fifo02_vc2_credit : 1; + mmr_t overflow_fifo02_vc0_credit : 1; + mmr_t overflow_fifo13_vc3_push : 1; + mmr_t overflow_fifo13_vc1_push : 1; + mmr_t overflow_fifo02_vc2_push : 1; + mmr_t overflow_fifo02_vc0_push : 1; + mmr_t overflow_fifo13_vc3_pop : 1; + mmr_t overflow_fifo13_vc1_pop : 1; + mmr_t overflow_fifo02_vc2_pop : 1; + mmr_t overflow_fifo02_vc0_pop : 1; + mmr_t overflow_fifo13_debit2 : 1; + mmr_t overflow_fifo13_debit0 : 1; + mmr_t overflow_fifo02_debit2 : 1; + mmr_t overflow_fifo02_debit0 : 1; + } sh_ni1_first_error_1_s; +} sh_ni1_first_error_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_FIRST_ERROR_2" */ +/* ni1 First Error Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_first_error_2_u { + mmr_t sh_ni1_first_error_2_regval; + struct { + mmr_t illegal_vcni : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vciilb : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow2_vc2_credit : 1; + mmr_t reserved_0 : 10; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + mmr_t retry_timeout_error : 1; + mmr_t reserved_1 : 1; + } sh_ni1_first_error_2_s; +} sh_ni1_first_error_2_u_t; +#else +typedef union 
sh_ni1_first_error_2_u { + mmr_t sh_ni1_first_error_2_regval; + struct { + mmr_t reserved_1 : 1; + mmr_t retry_timeout_error : 1; + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t llp_deadlock_vc3 : 1; + mmr_t llp_deadlock_vc2 : 1; + mmr_t llp_deadlock_vc1 : 1; + mmr_t llp_deadlock_vc0 : 1; + mmr_t underflow_ni_fifo_vc3_credit : 1; + mmr_t underflow_ni_fifo_vc2_credit : 1; + mmr_t underflow_ni_fifo_vc1_credit : 1; + mmr_t underflow_ni_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_credit : 1; + mmr_t underflow_md_fifo_vc0_credit : 1; + mmr_t underflow_iilb_fifo_vc2_credit : 1; + mmr_t underflow_iilb_fifo_vc0_credit : 1; + mmr_t underflow_pi_fifo_vc2_credit : 1; + mmr_t underflow_pi_fifo_vc0_credit : 1; + mmr_t underflow_md_fifo_vc2_push : 1; + mmr_t underflow_md_fifo_vc0_push : 1; + mmr_t underflow_iilb_fifo_vc2_push : 1; + mmr_t underflow_iilb_fifo_vc0_push : 1; + mmr_t underflow_pi_fifo_vc2_push : 1; + mmr_t underflow_pi_fifo_vc0_push : 1; + mmr_t underflow_ni_fifo_vc2_pop : 1; + mmr_t underflow_ni_fifo_vc0_pop : 1; + mmr_t underflow_md_fifo_vc2_pop : 1; + mmr_t underflow_md_fifo_vc0_pop : 1; + mmr_t underflow_iilb_fifo_vc2_pop : 1; + mmr_t underflow_iilb_fifo_vc0_pop : 1; + mmr_t underflow_pi_fifo_vc2_pop : 1; + mmr_t underflow_pi_fifo_vc0_pop : 1; + mmr_t reserved_0 : 10; + mmr_t underflow2_vc2_credit : 1; + mmr_t underflow1_vc2_credit : 1; + mmr_t underflow0_vc2_credit : 1; + mmr_t underflow2_vc0_credit : 1; + mmr_t underflow1_vc0_credit : 1; + mmr_t underflow0_vc0_credit : 1; + mmr_t underflow_fifo13_vc2_credit : 1; + mmr_t underflow_fifo13_vc0_credit : 1; + mmr_t underflow_fifo02_vc2_credit : 1; + mmr_t underflow_fifo02_vc0_credit : 1; + mmr_t underflow_fifo13_vc3_push : 1; + mmr_t underflow_fifo13_vc1_push : 1; + mmr_t underflow_fifo02_vc2_push : 1; + mmr_t underflow_fifo02_vc0_push : 1; + mmr_t underflow_fifo13_vc3_pop : 1; + mmr_t underflow_fifo13_vc1_pop : 1; + mmr_t underflow_fifo02_vc2_pop : 1; + mmr_t underflow_fifo02_vc0_pop : 1; + mmr_t illegal_vciilb : 1; + mmr_t illegal_vcmd : 1; + mmr_t illegal_vcpi : 1; + mmr_t illegal_vcni : 1; + } sh_ni1_first_error_2_s; +} sh_ni1_first_error_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_DETAIL_1" */ +/* ni1 Chiplet no match header bits 63:0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_detail_1_u { + mmr_t sh_ni1_error_detail_1_regval; + struct { + mmr_t header : 64; + } sh_ni1_error_detail_1_s; +} sh_ni1_error_detail_1_u_t; +#else +typedef union sh_ni1_error_detail_1_u { + mmr_t sh_ni1_error_detail_1_regval; + struct { + mmr_t header : 64; + } sh_ni1_error_detail_1_s; +} sh_ni1_error_detail_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_NI1_ERROR_DETAIL_2" */ +/* ni1 Chiplet no match header bits 127:64 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_detail_2_u { + mmr_t sh_ni1_error_detail_2_regval; + struct { + mmr_t header : 64; + } sh_ni1_error_detail_2_s; +} sh_ni1_error_detail_2_u_t; +#else +typedef union sh_ni1_error_detail_2_u { + mmr_t sh_ni1_error_detail_2_regval; + struct { + mmr_t header : 64; + } sh_ni1_error_detail_2_s; +} sh_ni1_error_detail_2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_CORRECTED_DETAIL_1" */ +/* Corrected error 
details */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xn_corrected_detail_1_u {
+    mmr_t  sh_xn_corrected_detail_1_regval;
+    struct {
+        mmr_t  ecc0_syndrome : 8;
+        mmr_t  ecc0_wc       : 2;
+        mmr_t  ecc0_vc       : 2;
+        mmr_t  reserved_0    : 4;
+        mmr_t  ecc1_syndrome : 8;
+        mmr_t  ecc1_wc       : 2;
+        mmr_t  ecc1_vc       : 2;
+        mmr_t  reserved_1    : 4;
+        mmr_t  ecc2_syndrome : 8;
+        mmr_t  ecc2_wc       : 2;
+        mmr_t  ecc2_vc       : 2;
+        mmr_t  reserved_2    : 4;
+        mmr_t  ecc3_syndrome : 8;
+        mmr_t  ecc3_wc       : 2;
+        mmr_t  ecc3_vc       : 2;
+        mmr_t  reserved_3    : 4;
+    } sh_xn_corrected_detail_1_s;
+} sh_xn_corrected_detail_1_u_t;
+#else
+typedef union sh_xn_corrected_detail_1_u {
+    mmr_t  sh_xn_corrected_detail_1_regval;
+    struct {
+        mmr_t  reserved_3    : 4;
+        mmr_t  ecc3_vc       : 2;
+        mmr_t  ecc3_wc       : 2;
+        mmr_t  ecc3_syndrome : 8;
+        mmr_t  reserved_2    : 4;
+        mmr_t  ecc2_vc       : 2;
+        mmr_t  ecc2_wc       : 2;
+        mmr_t  ecc2_syndrome : 8;
+        mmr_t  reserved_1    : 4;
+        mmr_t  ecc1_vc       : 2;
+        mmr_t  ecc1_wc       : 2;
+        mmr_t  ecc1_syndrome : 8;
+        mmr_t  reserved_0    : 4;
+        mmr_t  ecc0_vc       : 2;
+        mmr_t  ecc0_wc       : 2;
+        mmr_t  ecc0_syndrome : 8;
+    } sh_xn_corrected_detail_1_s;
+} sh_xn_corrected_detail_1_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_2" */
+/* Corrected error data */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xn_corrected_detail_2_u {
+    mmr_t  sh_xn_corrected_detail_2_regval;
+    struct {
+        mmr_t  data : 64;
+    } sh_xn_corrected_detail_2_s;
+} sh_xn_corrected_detail_2_u_t;
+#else
+typedef union sh_xn_corrected_detail_2_u {
+    mmr_t  sh_xn_corrected_detail_2_regval;
+    struct {
+        mmr_t  data : 64;
+    } sh_xn_corrected_detail_2_s;
+} sh_xn_corrected_detail_2_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_3" */
+/* Corrected error header0 */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xn_corrected_detail_3_u {
+    mmr_t  sh_xn_corrected_detail_3_regval;
+    struct {
+        mmr_t  header0 : 64;
+    } sh_xn_corrected_detail_3_s;
+} sh_xn_corrected_detail_3_u_t;
+#else
+typedef union sh_xn_corrected_detail_3_u {
+    mmr_t  sh_xn_corrected_detail_3_regval;
+    struct {
+        mmr_t  header0 : 64;
+    } sh_xn_corrected_detail_3_s;
+} sh_xn_corrected_detail_3_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_4" */
+/* Corrected error header1 */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xn_corrected_detail_4_u {
+    mmr_t  sh_xn_corrected_detail_4_regval;
+    struct {
+        mmr_t  header1    : 42;
+        mmr_t  reserved_0 : 20;
+        mmr_t  err_group  : 2;
+    } sh_xn_corrected_detail_4_s;
+} sh_xn_corrected_detail_4_u_t;
+#else
+typedef union sh_xn_corrected_detail_4_u {
+    mmr_t  sh_xn_corrected_detail_4_regval;
+    struct {
+        mmr_t  err_group  : 2;
+        mmr_t  reserved_0 : 20;
+        mmr_t  header1    : 42;
+    } sh_xn_corrected_detail_4_s;
+} sh_xn_corrected_detail_4_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_1" */
+/* Uncorrected error details */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xn_uncorrected_detail_1_u {
+    mmr_t  sh_xn_uncorrected_detail_1_regval;
+    struct {
+        mmr_t  ecc0_syndrome : 8;
+        mmr_t  ecc0_wc       : 2;
+        mmr_t  ecc0_vc       : 2;
+        mmr_t  reserved_0    : 4;
+        mmr_t  ecc1_syndrome : 8;
+        mmr_t  ecc1_wc       : 2;
+        mmr_t  ecc1_vc       : 2;
+        mmr_t  reserved_1    : 4;
+        mmr_t  ecc2_syndrome : 8;
+        mmr_t  ecc2_wc       : 2;
+        mmr_t  ecc2_vc       : 2;
+        mmr_t  reserved_2    : 4;
+        mmr_t  ecc3_syndrome : 8;
+        mmr_t  ecc3_wc       : 2;
+        mmr_t  ecc3_vc       : 2;
+        mmr_t  reserved_3    : 4;
+    } sh_xn_uncorrected_detail_1_s;
+} sh_xn_uncorrected_detail_1_u_t;
+#else
+typedef union sh_xn_uncorrected_detail_1_u {
+    mmr_t  sh_xn_uncorrected_detail_1_regval;
+    struct {
+        mmr_t  reserved_3    : 4;
+        mmr_t  ecc3_vc       : 2;
+        mmr_t  ecc3_wc       : 2;
+        mmr_t  ecc3_syndrome : 8;
+        mmr_t  reserved_2    : 4;
+        mmr_t  ecc2_vc       : 2;
+        mmr_t  ecc2_wc       : 2;
+        mmr_t  ecc2_syndrome : 8;
+        mmr_t  reserved_1    : 4;
+        mmr_t  ecc1_vc       : 2;
+        mmr_t  ecc1_wc       : 2;
+        mmr_t  ecc1_syndrome : 8;
+        mmr_t  reserved_0    : 4;
+        mmr_t  ecc0_vc       : 2;
+        mmr_t  ecc0_wc       : 2;
+        mmr_t  ecc0_syndrome : 8;
+    } sh_xn_uncorrected_detail_1_s;
+} sh_xn_uncorrected_detail_1_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_2" */
+/* Uncorrected error data */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xn_uncorrected_detail_2_u {
+    mmr_t  sh_xn_uncorrected_detail_2_regval;
+    struct {
+        mmr_t  data : 64;
+    } sh_xn_uncorrected_detail_2_s;
+} sh_xn_uncorrected_detail_2_u_t;
+#else
+typedef union sh_xn_uncorrected_detail_2_u {
+    mmr_t  sh_xn_uncorrected_detail_2_regval;
+    struct {
+        mmr_t  data : 64;
+    } sh_xn_uncorrected_detail_2_s;
+} sh_xn_uncorrected_detail_2_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_3" */
+/* Uncorrected error header0 */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xn_uncorrected_detail_3_u {
+    mmr_t  sh_xn_uncorrected_detail_3_regval;
+    struct {
+        mmr_t  header0 : 64;
+    } sh_xn_uncorrected_detail_3_s;
+} sh_xn_uncorrected_detail_3_u_t;
+#else
+typedef union sh_xn_uncorrected_detail_3_u {
+    mmr_t  sh_xn_uncorrected_detail_3_regval;
+    struct {
+        mmr_t  header0 : 64;
+    } sh_xn_uncorrected_detail_3_s;
+} sh_xn_uncorrected_detail_3_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_4" */
+/* Uncorrected error header1 */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xn_uncorrected_detail_4_u {
+    mmr_t  sh_xn_uncorrected_detail_4_regval;
+    struct {
+        mmr_t  header1    : 42;
+        mmr_t  reserved_0 : 20;
+        mmr_t  err_group  : 2;
+    } sh_xn_uncorrected_detail_4_s;
+} sh_xn_uncorrected_detail_4_u_t;
+#else
+typedef union sh_xn_uncorrected_detail_4_u {
+    mmr_t  sh_xn_uncorrected_detail_4_regval;
+    struct {
+        mmr_t  err_group  : 2;
+        mmr_t  reserved_0 : 20;
+        mmr_t  header1    : 42;
+    } sh_xn_uncorrected_detail_4_s;
+} sh_xn_uncorrected_detail_4_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_DETAIL_1" */
+/* Look Up Table Address (md) */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xnmd_error_detail_1_u {
+    mmr_t  sh_xnmd_error_detail_1_regval;
+    struct {
+        mmr_t  lut_addr   : 11;
+        mmr_t  reserved_0 : 53;
+    } sh_xnmd_error_detail_1_s;
+} sh_xnmd_error_detail_1_u_t;
+#else
+typedef union sh_xnmd_error_detail_1_u {
+    mmr_t  sh_xnmd_error_detail_1_regval;
+    struct {
+        mmr_t  reserved_0 : 53;
+        mmr_t  lut_addr   : 11;
+    } sh_xnmd_error_detail_1_s;
+} sh_xnmd_error_detail_1_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_DETAIL_1" */
+/* Look Up Table Address (pi) */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xnpi_error_detail_1_u {
+    mmr_t  sh_xnpi_error_detail_1_regval;
+    struct {
+        mmr_t  lut_addr   : 11;
+        mmr_t  reserved_0 : 53;
+    } sh_xnpi_error_detail_1_s;
+} sh_xnpi_error_detail_1_u_t;
+#else
+typedef union sh_xnpi_error_detail_1_u {
+    mmr_t  sh_xnpi_error_detail_1_regval;
+    struct {
+        mmr_t  reserved_0 : 53;
+        mmr_t  lut_addr   : 11;
+    } sh_xnpi_error_detail_1_s;
+} sh_xnpi_error_detail_1_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_1" */
+/* Chiplet NoMatch header [63:0] */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xniilb_error_detail_1_u {
+    mmr_t  sh_xniilb_error_detail_1_regval;
+    struct {
+        mmr_t  header : 64;
+    } sh_xniilb_error_detail_1_s;
+} sh_xniilb_error_detail_1_u_t;
+#else
+typedef union sh_xniilb_error_detail_1_u {
+    mmr_t  sh_xniilb_error_detail_1_regval;
+    struct {
+        mmr_t  header : 64;
+    } sh_xniilb_error_detail_1_s;
+} sh_xniilb_error_detail_1_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_2" */
+/* Chiplet NoMatch header [127:64] */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xniilb_error_detail_2_u {
+    mmr_t  sh_xniilb_error_detail_2_regval;
+    struct {
+        mmr_t  header : 64;
+    } sh_xniilb_error_detail_2_s;
+} sh_xniilb_error_detail_2_u_t;
+#else
+typedef union sh_xniilb_error_detail_2_u {
+    mmr_t  sh_xniilb_error_detail_2_regval;
+    struct {
+        mmr_t  header : 64;
+    } sh_xniilb_error_detail_2_s;
+} sh_xniilb_error_detail_2_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_3" */
+/* Look Up Table Address (iilb) */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_xniilb_error_detail_3_u {
+    mmr_t  sh_xniilb_error_detail_3_regval;
+    struct {
+        mmr_t  lut_addr   : 11;
+        mmr_t  reserved_0 : 53;
+    } sh_xniilb_error_detail_3_s;
+} sh_xniilb_error_detail_3_u_t;
+#else
+typedef union sh_xniilb_error_detail_3_u {
+    mmr_t  sh_xniilb_error_detail_3_regval;
+    struct {
+        mmr_t  reserved_0 : 53;
+        mmr_t  lut_addr   : 11;
+    } sh_xniilb_error_detail_3_s;
+} sh_xniilb_error_detail_3_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_DETAIL_3" */
+/* Look Up Table Address (ni0) */
+/* ==================================================================== */
+
+#ifdef LITTLE_ENDIAN
+typedef union sh_ni0_error_detail_3_u {
+    mmr_t  sh_ni0_error_detail_3_regval;
+    struct {
+        mmr_t  lut_addr   : 11;
+        mmr_t  reserved_0 : 53;
+    } sh_ni0_error_detail_3_s;
+} sh_ni0_error_detail_3_u_t;
+#else
+typedef union sh_ni0_error_detail_3_u {
+    mmr_t  sh_ni0_error_detail_3_regval;
+    struct {
+        mmr_t  reserved_0 : 53;
+        mmr_t  lut_addr   : 11;
+    } sh_ni0_error_detail_3_s;
+} sh_ni0_error_detail_3_u_t;
+#endif
+
+/* ==================================================================== */
+/* Register
"SH_NI1_ERROR_DETAIL_3" */ +/* Look Up Table Address (ni1) */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ni1_error_detail_3_u { + mmr_t sh_ni1_error_detail_3_regval; + struct { + mmr_t lut_addr : 11; + mmr_t reserved_0 : 53; + } sh_ni1_error_detail_3_s; +} sh_ni1_error_detail_3_u_t; +#else +typedef union sh_ni1_error_detail_3_u { + mmr_t sh_ni1_error_detail_3_regval; + struct { + mmr_t reserved_0 : 53; + mmr_t lut_addr : 11; + } sh_ni1_error_detail_3_s; +} sh_ni1_error_detail_3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_ERROR_SUMMARY" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_error_summary_u { + mmr_t sh_xn_error_summary_regval; + struct { + mmr_t ni0_pop_overflow : 1; + mmr_t ni0_push_overflow : 1; + mmr_t ni0_credit_overflow : 1; + mmr_t ni0_debit_overflow : 1; + mmr_t ni0_pop_underflow : 1; + mmr_t ni0_push_underflow : 1; + mmr_t ni0_credit_underflow : 1; + mmr_t ni0_llp_error : 1; + mmr_t ni0_pipe_error : 1; + mmr_t ni1_pop_overflow : 1; + mmr_t ni1_push_overflow : 1; + mmr_t ni1_credit_overflow : 1; + mmr_t ni1_debit_overflow : 1; + mmr_t ni1_pop_underflow : 1; + mmr_t ni1_push_underflow : 1; + mmr_t ni1_credit_underflow : 1; + mmr_t ni1_llp_error : 1; + mmr_t ni1_pipe_error : 1; + mmr_t xnmd_credit_overflow : 1; + mmr_t xnmd_debit_overflow : 1; + mmr_t xnmd_data_buff_overflow : 1; + mmr_t xnmd_credit_underflow : 1; + mmr_t xnmd_sbe_error : 1; + mmr_t xnmd_uce_error : 1; + mmr_t xnmd_lut_error : 1; + mmr_t xnpi_credit_overflow : 1; + mmr_t xnpi_debit_overflow : 1; + mmr_t xnpi_data_buff_overflow : 1; + mmr_t xnpi_credit_underflow : 1; + mmr_t xnpi_sbe_error : 1; + mmr_t xnpi_uce_error : 1; + mmr_t xnpi_lut_error : 1; + mmr_t iilb_debit_overflow : 1; + mmr_t iilb_credit_overflow : 1; + mmr_t iilb_fifo_overflow : 1; + mmr_t iilb_credit_underflow : 1; + mmr_t iilb_fifo_underflow : 1; + mmr_t iilb_chiplet_or_lut : 1; + mmr_t reserved_0 : 26; + } sh_xn_error_summary_s; +} sh_xn_error_summary_u_t; +#else +typedef union sh_xn_error_summary_u { + mmr_t sh_xn_error_summary_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t iilb_chiplet_or_lut : 1; + mmr_t iilb_fifo_underflow : 1; + mmr_t iilb_credit_underflow : 1; + mmr_t iilb_fifo_overflow : 1; + mmr_t iilb_credit_overflow : 1; + mmr_t iilb_debit_overflow : 1; + mmr_t xnpi_lut_error : 1; + mmr_t xnpi_uce_error : 1; + mmr_t xnpi_sbe_error : 1; + mmr_t xnpi_credit_underflow : 1; + mmr_t xnpi_data_buff_overflow : 1; + mmr_t xnpi_debit_overflow : 1; + mmr_t xnpi_credit_overflow : 1; + mmr_t xnmd_lut_error : 1; + mmr_t xnmd_uce_error : 1; + mmr_t xnmd_sbe_error : 1; + mmr_t xnmd_credit_underflow : 1; + mmr_t xnmd_data_buff_overflow : 1; + mmr_t xnmd_debit_overflow : 1; + mmr_t xnmd_credit_overflow : 1; + mmr_t ni1_pipe_error : 1; + mmr_t ni1_llp_error : 1; + mmr_t ni1_credit_underflow : 1; + mmr_t ni1_push_underflow : 1; + mmr_t ni1_pop_underflow : 1; + mmr_t ni1_debit_overflow : 1; + mmr_t ni1_credit_overflow : 1; + mmr_t ni1_push_overflow : 1; + mmr_t ni1_pop_overflow : 1; + mmr_t ni0_pipe_error : 1; + mmr_t ni0_llp_error : 1; + mmr_t ni0_credit_underflow : 1; + mmr_t ni0_push_underflow : 1; + mmr_t ni0_pop_underflow : 1; + mmr_t ni0_debit_overflow : 1; + mmr_t ni0_credit_overflow : 1; + mmr_t ni0_push_overflow : 1; + mmr_t ni0_pop_overflow : 1; + } sh_xn_error_summary_s; +} sh_xn_error_summary_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_XN_ERROR_OVERFLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_error_overflow_u { + mmr_t sh_xn_error_overflow_regval; + struct { + mmr_t ni0_pop_overflow : 1; + mmr_t ni0_push_overflow : 1; + mmr_t ni0_credit_overflow : 1; + mmr_t ni0_debit_overflow : 1; + mmr_t ni0_pop_underflow : 1; + mmr_t ni0_push_underflow : 1; + mmr_t ni0_credit_underflow : 1; + mmr_t ni0_llp_error : 1; + mmr_t ni0_pipe_error : 1; + mmr_t ni1_pop_overflow : 1; + mmr_t ni1_push_overflow : 1; + mmr_t ni1_credit_overflow : 1; + mmr_t ni1_debit_overflow : 1; + mmr_t ni1_pop_underflow : 1; + mmr_t ni1_push_underflow : 1; + mmr_t ni1_credit_underflow : 1; + mmr_t ni1_llp_error : 1; + mmr_t ni1_pipe_error : 1; + mmr_t xnmd_credit_overflow : 1; + mmr_t xnmd_debit_overflow : 1; + mmr_t xnmd_data_buff_overflow : 1; + mmr_t xnmd_credit_underflow : 1; + mmr_t xnmd_sbe_error : 1; + mmr_t xnmd_uce_error : 1; + mmr_t xnmd_lut_error : 1; + mmr_t xnpi_credit_overflow : 1; + mmr_t xnpi_debit_overflow : 1; + mmr_t xnpi_data_buff_overflow : 1; + mmr_t xnpi_credit_underflow : 1; + mmr_t xnpi_sbe_error : 1; + mmr_t xnpi_uce_error : 1; + mmr_t xnpi_lut_error : 1; + mmr_t iilb_debit_overflow : 1; + mmr_t iilb_credit_overflow : 1; + mmr_t iilb_fifo_overflow : 1; + mmr_t iilb_credit_underflow : 1; + mmr_t iilb_fifo_underflow : 1; + mmr_t iilb_chiplet_or_lut : 1; + mmr_t reserved_0 : 26; + } sh_xn_error_overflow_s; +} sh_xn_error_overflow_u_t; +#else +typedef union sh_xn_error_overflow_u { + mmr_t sh_xn_error_overflow_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t iilb_chiplet_or_lut : 1; + mmr_t iilb_fifo_underflow : 1; + mmr_t iilb_credit_underflow : 1; + mmr_t iilb_fifo_overflow : 1; + mmr_t iilb_credit_overflow : 1; + mmr_t iilb_debit_overflow : 1; + mmr_t xnpi_lut_error : 1; + mmr_t xnpi_uce_error : 1; + mmr_t xnpi_sbe_error : 1; + mmr_t xnpi_credit_underflow : 1; + mmr_t xnpi_data_buff_overflow : 1; + mmr_t xnpi_debit_overflow : 1; + mmr_t xnpi_credit_overflow : 1; + mmr_t xnmd_lut_error : 1; + mmr_t xnmd_uce_error : 1; + mmr_t xnmd_sbe_error : 1; + mmr_t xnmd_credit_underflow : 1; + mmr_t xnmd_data_buff_overflow : 1; + mmr_t xnmd_debit_overflow : 1; + mmr_t xnmd_credit_overflow : 1; + mmr_t ni1_pipe_error : 1; + mmr_t ni1_llp_error : 1; + mmr_t ni1_credit_underflow : 1; + mmr_t ni1_push_underflow : 1; + mmr_t ni1_pop_underflow : 1; + mmr_t ni1_debit_overflow : 1; + mmr_t ni1_credit_overflow : 1; + mmr_t ni1_push_overflow : 1; + mmr_t ni1_pop_overflow : 1; + mmr_t ni0_pipe_error : 1; + mmr_t ni0_llp_error : 1; + mmr_t ni0_credit_underflow : 1; + mmr_t ni0_push_underflow : 1; + mmr_t ni0_pop_underflow : 1; + mmr_t ni0_debit_overflow : 1; + mmr_t ni0_credit_overflow : 1; + mmr_t ni0_push_overflow : 1; + mmr_t ni0_pop_overflow : 1; + } sh_xn_error_overflow_s; +} sh_xn_error_overflow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_ERROR_MASK" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_error_mask_u { + mmr_t sh_xn_error_mask_regval; + struct { + mmr_t ni0_pop_overflow : 1; + mmr_t ni0_push_overflow : 1; + mmr_t ni0_credit_overflow : 1; + mmr_t ni0_debit_overflow : 1; + mmr_t ni0_pop_underflow : 1; + mmr_t ni0_push_underflow : 1; + mmr_t ni0_credit_underflow : 1; + mmr_t ni0_llp_error : 1; + mmr_t ni0_pipe_error : 1; + mmr_t 
ni1_pop_overflow : 1; + mmr_t ni1_push_overflow : 1; + mmr_t ni1_credit_overflow : 1; + mmr_t ni1_debit_overflow : 1; + mmr_t ni1_pop_underflow : 1; + mmr_t ni1_push_underflow : 1; + mmr_t ni1_credit_underflow : 1; + mmr_t ni1_llp_error : 1; + mmr_t ni1_pipe_error : 1; + mmr_t xnmd_credit_overflow : 1; + mmr_t xnmd_debit_overflow : 1; + mmr_t xnmd_data_buff_overflow : 1; + mmr_t xnmd_credit_underflow : 1; + mmr_t xnmd_sbe_error : 1; + mmr_t xnmd_uce_error : 1; + mmr_t xnmd_lut_error : 1; + mmr_t xnpi_credit_overflow : 1; + mmr_t xnpi_debit_overflow : 1; + mmr_t xnpi_data_buff_overflow : 1; + mmr_t xnpi_credit_underflow : 1; + mmr_t xnpi_sbe_error : 1; + mmr_t xnpi_uce_error : 1; + mmr_t xnpi_lut_error : 1; + mmr_t iilb_debit_overflow : 1; + mmr_t iilb_credit_overflow : 1; + mmr_t iilb_fifo_overflow : 1; + mmr_t iilb_credit_underflow : 1; + mmr_t iilb_fifo_underflow : 1; + mmr_t iilb_chiplet_or_lut : 1; + mmr_t reserved_0 : 26; + } sh_xn_error_mask_s; +} sh_xn_error_mask_u_t; +#else +typedef union sh_xn_error_mask_u { + mmr_t sh_xn_error_mask_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t iilb_chiplet_or_lut : 1; + mmr_t iilb_fifo_underflow : 1; + mmr_t iilb_credit_underflow : 1; + mmr_t iilb_fifo_overflow : 1; + mmr_t iilb_credit_overflow : 1; + mmr_t iilb_debit_overflow : 1; + mmr_t xnpi_lut_error : 1; + mmr_t xnpi_uce_error : 1; + mmr_t xnpi_sbe_error : 1; + mmr_t xnpi_credit_underflow : 1; + mmr_t xnpi_data_buff_overflow : 1; + mmr_t xnpi_debit_overflow : 1; + mmr_t xnpi_credit_overflow : 1; + mmr_t xnmd_lut_error : 1; + mmr_t xnmd_uce_error : 1; + mmr_t xnmd_sbe_error : 1; + mmr_t xnmd_credit_underflow : 1; + mmr_t xnmd_data_buff_overflow : 1; + mmr_t xnmd_debit_overflow : 1; + mmr_t xnmd_credit_overflow : 1; + mmr_t ni1_pipe_error : 1; + mmr_t ni1_llp_error : 1; + mmr_t ni1_credit_underflow : 1; + mmr_t ni1_push_underflow : 1; + mmr_t ni1_pop_underflow : 1; + mmr_t ni1_debit_overflow : 1; + mmr_t ni1_credit_overflow : 1; + mmr_t ni1_push_overflow : 1; + mmr_t ni1_pop_overflow : 1; + mmr_t ni0_pipe_error : 1; + mmr_t ni0_llp_error : 1; + mmr_t ni0_credit_underflow : 1; + mmr_t ni0_push_underflow : 1; + mmr_t ni0_pop_underflow : 1; + mmr_t ni0_debit_overflow : 1; + mmr_t ni0_credit_overflow : 1; + mmr_t ni0_push_overflow : 1; + mmr_t ni0_pop_overflow : 1; + } sh_xn_error_mask_s; +} sh_xn_error_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_FIRST_ERROR" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_first_error_u { + mmr_t sh_xn_first_error_regval; + struct { + mmr_t ni0_pop_overflow : 1; + mmr_t ni0_push_overflow : 1; + mmr_t ni0_credit_overflow : 1; + mmr_t ni0_debit_overflow : 1; + mmr_t ni0_pop_underflow : 1; + mmr_t ni0_push_underflow : 1; + mmr_t ni0_credit_underflow : 1; + mmr_t ni0_llp_error : 1; + mmr_t ni0_pipe_error : 1; + mmr_t ni1_pop_overflow : 1; + mmr_t ni1_push_overflow : 1; + mmr_t ni1_credit_overflow : 1; + mmr_t ni1_debit_overflow : 1; + mmr_t ni1_pop_underflow : 1; + mmr_t ni1_push_underflow : 1; + mmr_t ni1_credit_underflow : 1; + mmr_t ni1_llp_error : 1; + mmr_t ni1_pipe_error : 1; + mmr_t xnmd_credit_overflow : 1; + mmr_t xnmd_debit_overflow : 1; + mmr_t xnmd_data_buff_overflow : 1; + mmr_t xnmd_credit_underflow : 1; + mmr_t xnmd_sbe_error : 1; + mmr_t xnmd_uce_error : 1; + mmr_t xnmd_lut_error : 1; + mmr_t xnpi_credit_overflow : 1; + mmr_t xnpi_debit_overflow : 1; + mmr_t xnpi_data_buff_overflow : 1; + mmr_t 
xnpi_credit_underflow : 1; + mmr_t xnpi_sbe_error : 1; + mmr_t xnpi_uce_error : 1; + mmr_t xnpi_lut_error : 1; + mmr_t iilb_debit_overflow : 1; + mmr_t iilb_credit_overflow : 1; + mmr_t iilb_fifo_overflow : 1; + mmr_t iilb_credit_underflow : 1; + mmr_t iilb_fifo_underflow : 1; + mmr_t iilb_chiplet_or_lut : 1; + mmr_t reserved_0 : 26; + } sh_xn_first_error_s; +} sh_xn_first_error_u_t; +#else +typedef union sh_xn_first_error_u { + mmr_t sh_xn_first_error_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t iilb_chiplet_or_lut : 1; + mmr_t iilb_fifo_underflow : 1; + mmr_t iilb_credit_underflow : 1; + mmr_t iilb_fifo_overflow : 1; + mmr_t iilb_credit_overflow : 1; + mmr_t iilb_debit_overflow : 1; + mmr_t xnpi_lut_error : 1; + mmr_t xnpi_uce_error : 1; + mmr_t xnpi_sbe_error : 1; + mmr_t xnpi_credit_underflow : 1; + mmr_t xnpi_data_buff_overflow : 1; + mmr_t xnpi_debit_overflow : 1; + mmr_t xnpi_credit_overflow : 1; + mmr_t xnmd_lut_error : 1; + mmr_t xnmd_uce_error : 1; + mmr_t xnmd_sbe_error : 1; + mmr_t xnmd_credit_underflow : 1; + mmr_t xnmd_data_buff_overflow : 1; + mmr_t xnmd_debit_overflow : 1; + mmr_t xnmd_credit_overflow : 1; + mmr_t ni1_pipe_error : 1; + mmr_t ni1_llp_error : 1; + mmr_t ni1_credit_underflow : 1; + mmr_t ni1_push_underflow : 1; + mmr_t ni1_pop_underflow : 1; + mmr_t ni1_debit_overflow : 1; + mmr_t ni1_credit_overflow : 1; + mmr_t ni1_push_overflow : 1; + mmr_t ni1_pop_overflow : 1; + mmr_t ni0_pipe_error : 1; + mmr_t ni0_llp_error : 1; + mmr_t ni0_credit_underflow : 1; + mmr_t ni0_push_underflow : 1; + mmr_t ni0_pop_underflow : 1; + mmr_t ni0_debit_overflow : 1; + mmr_t ni0_credit_overflow : 1; + mmr_t ni0_push_overflow : 1; + mmr_t ni0_pop_overflow : 1; + } sh_xn_first_error_s; +} sh_xn_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_SUMMARY" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_error_summary_u { + mmr_t sh_xniilb_error_summary_regval; + struct { + mmr_t overflow_ii_debit0 : 1; + mmr_t overflow_ii_debit2 : 1; + mmr_t overflow_lb_debit0 : 1; + mmr_t overflow_lb_debit2 : 1; + mmr_t overflow_ii_vc0 : 1; + mmr_t overflow_ii_vc2 : 1; + mmr_t underflow_ii_vc0 : 1; + mmr_t underflow_ii_vc2 : 1; + mmr_t overflow_lb_vc0 : 1; + mmr_t overflow_lb_vc2 : 1; + mmr_t underflow_lb_vc0 : 1; + mmr_t underflow_lb_vc2 : 1; + mmr_t overflow_pi_vc0_credit_in : 1; + mmr_t overflow_iilb_vc0_credit_in : 1; + mmr_t overflow_md_vc0_credit_in : 1; + mmr_t overflow_ni0_vc0_credit_in : 1; + mmr_t overflow_ni1_vc0_credit_in : 1; + mmr_t overflow_pi_vc2_credit_in : 1; + mmr_t overflow_iilb_vc2_credit_in : 1; + mmr_t overflow_md_vc2_credit_in : 1; + mmr_t overflow_ni0_vc2_credit_in : 1; + mmr_t overflow_ni1_vc2_credit_in : 1; + mmr_t underflow_pi_vc0_credit_in : 1; + mmr_t underflow_iilb_vc0_credit_in : 1; + mmr_t underflow_md_vc0_credit_in : 1; + mmr_t underflow_ni0_vc0_credit_in : 1; + mmr_t underflow_ni1_vc0_credit_in : 1; + mmr_t underflow_pi_vc2_credit_in : 1; + mmr_t underflow_iilb_vc2_credit_in : 1; + mmr_t underflow_md_vc2_credit_in : 1; + mmr_t underflow_ni0_vc2_credit_in : 1; + mmr_t underflow_ni1_vc2_credit_in : 1; + mmr_t overflow_pi_debit0 : 1; + mmr_t overflow_pi_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_md_debit0 : 1; + mmr_t overflow_md_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t 
overflow_ni1_debit2 : 1; + mmr_t overflow_pi_vc0_credit_out : 1; + mmr_t overflow_pi_vc2_credit_out : 1; + mmr_t overflow_md_vc0_credit_out : 1; + mmr_t overflow_md_vc2_credit_out : 1; + mmr_t overflow_iilb_vc0_credit_out : 1; + mmr_t overflow_iilb_vc2_credit_out : 1; + mmr_t overflow_ni0_vc0_credit_out : 1; + mmr_t overflow_ni0_vc2_credit_out : 1; + mmr_t overflow_ni1_vc0_credit_out : 1; + mmr_t overflow_ni1_vc2_credit_out : 1; + mmr_t underflow_pi_vc0_credit_out : 1; + mmr_t underflow_pi_vc2_credit_out : 1; + mmr_t underflow_md_vc0_credit_out : 1; + mmr_t underflow_md_vc2_credit_out : 1; + mmr_t underflow_iilb_vc0_credit_out : 1; + mmr_t underflow_iilb_vc2_credit_out : 1; + mmr_t underflow_ni0_vc0_credit_out : 1; + mmr_t underflow_ni0_vc2_credit_out : 1; + mmr_t underflow_ni1_vc0_credit_out : 1; + mmr_t underflow_ni1_vc2_credit_out : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + } sh_xniilb_error_summary_s; +} sh_xniilb_error_summary_u_t; +#else +typedef union sh_xniilb_error_summary_u { + mmr_t sh_xniilb_error_summary_regval; + struct { + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t underflow_ni1_vc2_credit_out : 1; + mmr_t underflow_ni1_vc0_credit_out : 1; + mmr_t underflow_ni0_vc2_credit_out : 1; + mmr_t underflow_ni0_vc0_credit_out : 1; + mmr_t underflow_iilb_vc2_credit_out : 1; + mmr_t underflow_iilb_vc0_credit_out : 1; + mmr_t underflow_md_vc2_credit_out : 1; + mmr_t underflow_md_vc0_credit_out : 1; + mmr_t underflow_pi_vc2_credit_out : 1; + mmr_t underflow_pi_vc0_credit_out : 1; + mmr_t overflow_ni1_vc2_credit_out : 1; + mmr_t overflow_ni1_vc0_credit_out : 1; + mmr_t overflow_ni0_vc2_credit_out : 1; + mmr_t overflow_ni0_vc0_credit_out : 1; + mmr_t overflow_iilb_vc2_credit_out : 1; + mmr_t overflow_iilb_vc0_credit_out : 1; + mmr_t overflow_md_vc2_credit_out : 1; + mmr_t overflow_md_vc0_credit_out : 1; + mmr_t overflow_pi_vc2_credit_out : 1; + mmr_t overflow_pi_vc0_credit_out : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_md_debit2 : 1; + mmr_t overflow_md_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_pi_debit2 : 1; + mmr_t overflow_pi_debit0 : 1; + mmr_t underflow_ni1_vc2_credit_in : 1; + mmr_t underflow_ni0_vc2_credit_in : 1; + mmr_t underflow_md_vc2_credit_in : 1; + mmr_t underflow_iilb_vc2_credit_in : 1; + mmr_t underflow_pi_vc2_credit_in : 1; + mmr_t underflow_ni1_vc0_credit_in : 1; + mmr_t underflow_ni0_vc0_credit_in : 1; + mmr_t underflow_md_vc0_credit_in : 1; + mmr_t underflow_iilb_vc0_credit_in : 1; + mmr_t underflow_pi_vc0_credit_in : 1; + mmr_t overflow_ni1_vc2_credit_in : 1; + mmr_t overflow_ni0_vc2_credit_in : 1; + mmr_t overflow_md_vc2_credit_in : 1; + mmr_t overflow_iilb_vc2_credit_in : 1; + mmr_t overflow_pi_vc2_credit_in : 1; + mmr_t overflow_ni1_vc0_credit_in : 1; + mmr_t overflow_ni0_vc0_credit_in : 1; + mmr_t overflow_md_vc0_credit_in : 1; + mmr_t overflow_iilb_vc0_credit_in : 1; + mmr_t overflow_pi_vc0_credit_in : 1; + mmr_t underflow_lb_vc2 : 1; + mmr_t underflow_lb_vc0 : 1; + mmr_t overflow_lb_vc2 : 1; + mmr_t overflow_lb_vc0 : 1; + mmr_t underflow_ii_vc2 : 1; + mmr_t underflow_ii_vc0 : 1; + mmr_t overflow_ii_vc2 : 1; + mmr_t overflow_ii_vc0 : 1; + mmr_t overflow_lb_debit2 : 1; + mmr_t overflow_lb_debit0 : 1; + mmr_t overflow_ii_debit2 : 1; + mmr_t overflow_ii_debit0 : 1; + } sh_xniilb_error_summary_s; +} sh_xniilb_error_summary_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_XNIILB_ERROR_OVERFLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_error_overflow_u { + mmr_t sh_xniilb_error_overflow_regval; + struct { + mmr_t overflow_ii_debit0 : 1; + mmr_t overflow_ii_debit2 : 1; + mmr_t overflow_lb_debit0 : 1; + mmr_t overflow_lb_debit2 : 1; + mmr_t overflow_ii_vc0 : 1; + mmr_t overflow_ii_vc2 : 1; + mmr_t underflow_ii_vc0 : 1; + mmr_t underflow_ii_vc2 : 1; + mmr_t overflow_lb_vc0 : 1; + mmr_t overflow_lb_vc2 : 1; + mmr_t underflow_lb_vc0 : 1; + mmr_t underflow_lb_vc2 : 1; + mmr_t overflow_pi_vc0_credit_in : 1; + mmr_t overflow_iilb_vc0_credit_in : 1; + mmr_t overflow_md_vc0_credit_in : 1; + mmr_t overflow_ni0_vc0_credit_in : 1; + mmr_t overflow_ni1_vc0_credit_in : 1; + mmr_t overflow_pi_vc2_credit_in : 1; + mmr_t overflow_iilb_vc2_credit_in : 1; + mmr_t overflow_md_vc2_credit_in : 1; + mmr_t overflow_ni0_vc2_credit_in : 1; + mmr_t overflow_ni1_vc2_credit_in : 1; + mmr_t underflow_pi_vc0_credit_in : 1; + mmr_t underflow_iilb_vc0_credit_in : 1; + mmr_t underflow_md_vc0_credit_in : 1; + mmr_t underflow_ni0_vc0_credit_in : 1; + mmr_t underflow_ni1_vc0_credit_in : 1; + mmr_t underflow_pi_vc2_credit_in : 1; + mmr_t underflow_iilb_vc2_credit_in : 1; + mmr_t underflow_md_vc2_credit_in : 1; + mmr_t underflow_ni0_vc2_credit_in : 1; + mmr_t underflow_ni1_vc2_credit_in : 1; + mmr_t overflow_pi_debit0 : 1; + mmr_t overflow_pi_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_md_debit0 : 1; + mmr_t overflow_md_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_pi_vc0_credit_out : 1; + mmr_t overflow_pi_vc2_credit_out : 1; + mmr_t overflow_md_vc0_credit_out : 1; + mmr_t overflow_md_vc2_credit_out : 1; + mmr_t overflow_iilb_vc0_credit_out : 1; + mmr_t overflow_iilb_vc2_credit_out : 1; + mmr_t overflow_ni0_vc0_credit_out : 1; + mmr_t overflow_ni0_vc2_credit_out : 1; + mmr_t overflow_ni1_vc0_credit_out : 1; + mmr_t overflow_ni1_vc2_credit_out : 1; + mmr_t underflow_pi_vc0_credit_out : 1; + mmr_t underflow_pi_vc2_credit_out : 1; + mmr_t underflow_md_vc0_credit_out : 1; + mmr_t underflow_md_vc2_credit_out : 1; + mmr_t underflow_iilb_vc0_credit_out : 1; + mmr_t underflow_iilb_vc2_credit_out : 1; + mmr_t underflow_ni0_vc0_credit_out : 1; + mmr_t underflow_ni0_vc2_credit_out : 1; + mmr_t underflow_ni1_vc0_credit_out : 1; + mmr_t underflow_ni1_vc2_credit_out : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + } sh_xniilb_error_overflow_s; +} sh_xniilb_error_overflow_u_t; +#else +typedef union sh_xniilb_error_overflow_u { + mmr_t sh_xniilb_error_overflow_regval; + struct { + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t underflow_ni1_vc2_credit_out : 1; + mmr_t underflow_ni1_vc0_credit_out : 1; + mmr_t underflow_ni0_vc2_credit_out : 1; + mmr_t underflow_ni0_vc0_credit_out : 1; + mmr_t underflow_iilb_vc2_credit_out : 1; + mmr_t underflow_iilb_vc0_credit_out : 1; + mmr_t underflow_md_vc2_credit_out : 1; + mmr_t underflow_md_vc0_credit_out : 1; + mmr_t underflow_pi_vc2_credit_out : 1; + mmr_t underflow_pi_vc0_credit_out : 1; + mmr_t overflow_ni1_vc2_credit_out : 1; + mmr_t overflow_ni1_vc0_credit_out : 1; + mmr_t overflow_ni0_vc2_credit_out : 1; + mmr_t overflow_ni0_vc0_credit_out : 1; + mmr_t overflow_iilb_vc2_credit_out : 1; + mmr_t 
overflow_iilb_vc0_credit_out : 1; + mmr_t overflow_md_vc2_credit_out : 1; + mmr_t overflow_md_vc0_credit_out : 1; + mmr_t overflow_pi_vc2_credit_out : 1; + mmr_t overflow_pi_vc0_credit_out : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_md_debit2 : 1; + mmr_t overflow_md_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_pi_debit2 : 1; + mmr_t overflow_pi_debit0 : 1; + mmr_t underflow_ni1_vc2_credit_in : 1; + mmr_t underflow_ni0_vc2_credit_in : 1; + mmr_t underflow_md_vc2_credit_in : 1; + mmr_t underflow_iilb_vc2_credit_in : 1; + mmr_t underflow_pi_vc2_credit_in : 1; + mmr_t underflow_ni1_vc0_credit_in : 1; + mmr_t underflow_ni0_vc0_credit_in : 1; + mmr_t underflow_md_vc0_credit_in : 1; + mmr_t underflow_iilb_vc0_credit_in : 1; + mmr_t underflow_pi_vc0_credit_in : 1; + mmr_t overflow_ni1_vc2_credit_in : 1; + mmr_t overflow_ni0_vc2_credit_in : 1; + mmr_t overflow_md_vc2_credit_in : 1; + mmr_t overflow_iilb_vc2_credit_in : 1; + mmr_t overflow_pi_vc2_credit_in : 1; + mmr_t overflow_ni1_vc0_credit_in : 1; + mmr_t overflow_ni0_vc0_credit_in : 1; + mmr_t overflow_md_vc0_credit_in : 1; + mmr_t overflow_iilb_vc0_credit_in : 1; + mmr_t overflow_pi_vc0_credit_in : 1; + mmr_t underflow_lb_vc2 : 1; + mmr_t underflow_lb_vc0 : 1; + mmr_t overflow_lb_vc2 : 1; + mmr_t overflow_lb_vc0 : 1; + mmr_t underflow_ii_vc2 : 1; + mmr_t underflow_ii_vc0 : 1; + mmr_t overflow_ii_vc2 : 1; + mmr_t overflow_ii_vc0 : 1; + mmr_t overflow_lb_debit2 : 1; + mmr_t overflow_lb_debit0 : 1; + mmr_t overflow_ii_debit2 : 1; + mmr_t overflow_ii_debit0 : 1; + } sh_xniilb_error_overflow_s; +} sh_xniilb_error_overflow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNIILB_ERROR_MASK" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_error_mask_u { + mmr_t sh_xniilb_error_mask_regval; + struct { + mmr_t overflow_ii_debit0 : 1; + mmr_t overflow_ii_debit2 : 1; + mmr_t overflow_lb_debit0 : 1; + mmr_t overflow_lb_debit2 : 1; + mmr_t overflow_ii_vc0 : 1; + mmr_t overflow_ii_vc2 : 1; + mmr_t underflow_ii_vc0 : 1; + mmr_t underflow_ii_vc2 : 1; + mmr_t overflow_lb_vc0 : 1; + mmr_t overflow_lb_vc2 : 1; + mmr_t underflow_lb_vc0 : 1; + mmr_t underflow_lb_vc2 : 1; + mmr_t overflow_pi_vc0_credit_in : 1; + mmr_t overflow_iilb_vc0_credit_in : 1; + mmr_t overflow_md_vc0_credit_in : 1; + mmr_t overflow_ni0_vc0_credit_in : 1; + mmr_t overflow_ni1_vc0_credit_in : 1; + mmr_t overflow_pi_vc2_credit_in : 1; + mmr_t overflow_iilb_vc2_credit_in : 1; + mmr_t overflow_md_vc2_credit_in : 1; + mmr_t overflow_ni0_vc2_credit_in : 1; + mmr_t overflow_ni1_vc2_credit_in : 1; + mmr_t underflow_pi_vc0_credit_in : 1; + mmr_t underflow_iilb_vc0_credit_in : 1; + mmr_t underflow_md_vc0_credit_in : 1; + mmr_t underflow_ni0_vc0_credit_in : 1; + mmr_t underflow_ni1_vc0_credit_in : 1; + mmr_t underflow_pi_vc2_credit_in : 1; + mmr_t underflow_iilb_vc2_credit_in : 1; + mmr_t underflow_md_vc2_credit_in : 1; + mmr_t underflow_ni0_vc2_credit_in : 1; + mmr_t underflow_ni1_vc2_credit_in : 1; + mmr_t overflow_pi_debit0 : 1; + mmr_t overflow_pi_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_md_debit0 : 1; + mmr_t overflow_md_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t 
overflow_ni1_debit2 : 1; + mmr_t overflow_pi_vc0_credit_out : 1; + mmr_t overflow_pi_vc2_credit_out : 1; + mmr_t overflow_md_vc0_credit_out : 1; + mmr_t overflow_md_vc2_credit_out : 1; + mmr_t overflow_iilb_vc0_credit_out : 1; + mmr_t overflow_iilb_vc2_credit_out : 1; + mmr_t overflow_ni0_vc0_credit_out : 1; + mmr_t overflow_ni0_vc2_credit_out : 1; + mmr_t overflow_ni1_vc0_credit_out : 1; + mmr_t overflow_ni1_vc2_credit_out : 1; + mmr_t underflow_pi_vc0_credit_out : 1; + mmr_t underflow_pi_vc2_credit_out : 1; + mmr_t underflow_md_vc0_credit_out : 1; + mmr_t underflow_md_vc2_credit_out : 1; + mmr_t underflow_iilb_vc0_credit_out : 1; + mmr_t underflow_iilb_vc2_credit_out : 1; + mmr_t underflow_ni0_vc0_credit_out : 1; + mmr_t underflow_ni0_vc2_credit_out : 1; + mmr_t underflow_ni1_vc0_credit_out : 1; + mmr_t underflow_ni1_vc2_credit_out : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + } sh_xniilb_error_mask_s; +} sh_xniilb_error_mask_u_t; +#else +typedef union sh_xniilb_error_mask_u { + mmr_t sh_xniilb_error_mask_regval; + struct { + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t underflow_ni1_vc2_credit_out : 1; + mmr_t underflow_ni1_vc0_credit_out : 1; + mmr_t underflow_ni0_vc2_credit_out : 1; + mmr_t underflow_ni0_vc0_credit_out : 1; + mmr_t underflow_iilb_vc2_credit_out : 1; + mmr_t underflow_iilb_vc0_credit_out : 1; + mmr_t underflow_md_vc2_credit_out : 1; + mmr_t underflow_md_vc0_credit_out : 1; + mmr_t underflow_pi_vc2_credit_out : 1; + mmr_t underflow_pi_vc0_credit_out : 1; + mmr_t overflow_ni1_vc2_credit_out : 1; + mmr_t overflow_ni1_vc0_credit_out : 1; + mmr_t overflow_ni0_vc2_credit_out : 1; + mmr_t overflow_ni0_vc0_credit_out : 1; + mmr_t overflow_iilb_vc2_credit_out : 1; + mmr_t overflow_iilb_vc0_credit_out : 1; + mmr_t overflow_md_vc2_credit_out : 1; + mmr_t overflow_md_vc0_credit_out : 1; + mmr_t overflow_pi_vc2_credit_out : 1; + mmr_t overflow_pi_vc0_credit_out : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_md_debit2 : 1; + mmr_t overflow_md_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_pi_debit2 : 1; + mmr_t overflow_pi_debit0 : 1; + mmr_t underflow_ni1_vc2_credit_in : 1; + mmr_t underflow_ni0_vc2_credit_in : 1; + mmr_t underflow_md_vc2_credit_in : 1; + mmr_t underflow_iilb_vc2_credit_in : 1; + mmr_t underflow_pi_vc2_credit_in : 1; + mmr_t underflow_ni1_vc0_credit_in : 1; + mmr_t underflow_ni0_vc0_credit_in : 1; + mmr_t underflow_md_vc0_credit_in : 1; + mmr_t underflow_iilb_vc0_credit_in : 1; + mmr_t underflow_pi_vc0_credit_in : 1; + mmr_t overflow_ni1_vc2_credit_in : 1; + mmr_t overflow_ni0_vc2_credit_in : 1; + mmr_t overflow_md_vc2_credit_in : 1; + mmr_t overflow_iilb_vc2_credit_in : 1; + mmr_t overflow_pi_vc2_credit_in : 1; + mmr_t overflow_ni1_vc0_credit_in : 1; + mmr_t overflow_ni0_vc0_credit_in : 1; + mmr_t overflow_md_vc0_credit_in : 1; + mmr_t overflow_iilb_vc0_credit_in : 1; + mmr_t overflow_pi_vc0_credit_in : 1; + mmr_t underflow_lb_vc2 : 1; + mmr_t underflow_lb_vc0 : 1; + mmr_t overflow_lb_vc2 : 1; + mmr_t overflow_lb_vc0 : 1; + mmr_t underflow_ii_vc2 : 1; + mmr_t underflow_ii_vc0 : 1; + mmr_t overflow_ii_vc2 : 1; + mmr_t overflow_ii_vc0 : 1; + mmr_t overflow_lb_debit2 : 1; + mmr_t overflow_lb_debit0 : 1; + mmr_t overflow_ii_debit2 : 1; + mmr_t overflow_ii_debit0 : 1; + } sh_xniilb_error_mask_s; +} sh_xniilb_error_mask_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_XNIILB_FIRST_ERROR" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xniilb_first_error_u { + mmr_t sh_xniilb_first_error_regval; + struct { + mmr_t overflow_ii_debit0 : 1; + mmr_t overflow_ii_debit2 : 1; + mmr_t overflow_lb_debit0 : 1; + mmr_t overflow_lb_debit2 : 1; + mmr_t overflow_ii_vc0 : 1; + mmr_t overflow_ii_vc2 : 1; + mmr_t underflow_ii_vc0 : 1; + mmr_t underflow_ii_vc2 : 1; + mmr_t overflow_lb_vc0 : 1; + mmr_t overflow_lb_vc2 : 1; + mmr_t underflow_lb_vc0 : 1; + mmr_t underflow_lb_vc2 : 1; + mmr_t overflow_pi_vc0_credit_in : 1; + mmr_t overflow_iilb_vc0_credit_in : 1; + mmr_t overflow_md_vc0_credit_in : 1; + mmr_t overflow_ni0_vc0_credit_in : 1; + mmr_t overflow_ni1_vc0_credit_in : 1; + mmr_t overflow_pi_vc2_credit_in : 1; + mmr_t overflow_iilb_vc2_credit_in : 1; + mmr_t overflow_md_vc2_credit_in : 1; + mmr_t overflow_ni0_vc2_credit_in : 1; + mmr_t overflow_ni1_vc2_credit_in : 1; + mmr_t underflow_pi_vc0_credit_in : 1; + mmr_t underflow_iilb_vc0_credit_in : 1; + mmr_t underflow_md_vc0_credit_in : 1; + mmr_t underflow_ni0_vc0_credit_in : 1; + mmr_t underflow_ni1_vc0_credit_in : 1; + mmr_t underflow_pi_vc2_credit_in : 1; + mmr_t underflow_iilb_vc2_credit_in : 1; + mmr_t underflow_md_vc2_credit_in : 1; + mmr_t underflow_ni0_vc2_credit_in : 1; + mmr_t underflow_ni1_vc2_credit_in : 1; + mmr_t overflow_pi_debit0 : 1; + mmr_t overflow_pi_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_md_debit0 : 1; + mmr_t overflow_md_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_pi_vc0_credit_out : 1; + mmr_t overflow_pi_vc2_credit_out : 1; + mmr_t overflow_md_vc0_credit_out : 1; + mmr_t overflow_md_vc2_credit_out : 1; + mmr_t overflow_iilb_vc0_credit_out : 1; + mmr_t overflow_iilb_vc2_credit_out : 1; + mmr_t overflow_ni0_vc0_credit_out : 1; + mmr_t overflow_ni0_vc2_credit_out : 1; + mmr_t overflow_ni1_vc0_credit_out : 1; + mmr_t overflow_ni1_vc2_credit_out : 1; + mmr_t underflow_pi_vc0_credit_out : 1; + mmr_t underflow_pi_vc2_credit_out : 1; + mmr_t underflow_md_vc0_credit_out : 1; + mmr_t underflow_md_vc2_credit_out : 1; + mmr_t underflow_iilb_vc0_credit_out : 1; + mmr_t underflow_iilb_vc2_credit_out : 1; + mmr_t underflow_ni0_vc0_credit_out : 1; + mmr_t underflow_ni0_vc2_credit_out : 1; + mmr_t underflow_ni1_vc0_credit_out : 1; + mmr_t underflow_ni1_vc2_credit_out : 1; + mmr_t chiplet_nomatch : 1; + mmr_t lut_read_error : 1; + } sh_xniilb_first_error_s; +} sh_xniilb_first_error_u_t; +#else +typedef union sh_xniilb_first_error_u { + mmr_t sh_xniilb_first_error_regval; + struct { + mmr_t lut_read_error : 1; + mmr_t chiplet_nomatch : 1; + mmr_t underflow_ni1_vc2_credit_out : 1; + mmr_t underflow_ni1_vc0_credit_out : 1; + mmr_t underflow_ni0_vc2_credit_out : 1; + mmr_t underflow_ni0_vc0_credit_out : 1; + mmr_t underflow_iilb_vc2_credit_out : 1; + mmr_t underflow_iilb_vc0_credit_out : 1; + mmr_t underflow_md_vc2_credit_out : 1; + mmr_t underflow_md_vc0_credit_out : 1; + mmr_t underflow_pi_vc2_credit_out : 1; + mmr_t underflow_pi_vc0_credit_out : 1; + mmr_t overflow_ni1_vc2_credit_out : 1; + mmr_t overflow_ni1_vc0_credit_out : 1; + mmr_t overflow_ni0_vc2_credit_out : 1; + mmr_t overflow_ni0_vc0_credit_out : 1; + mmr_t overflow_iilb_vc2_credit_out : 1; + mmr_t 
overflow_iilb_vc0_credit_out : 1; + mmr_t overflow_md_vc2_credit_out : 1; + mmr_t overflow_md_vc0_credit_out : 1; + mmr_t overflow_pi_vc2_credit_out : 1; + mmr_t overflow_pi_vc0_credit_out : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_md_debit2 : 1; + mmr_t overflow_md_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_pi_debit2 : 1; + mmr_t overflow_pi_debit0 : 1; + mmr_t underflow_ni1_vc2_credit_in : 1; + mmr_t underflow_ni0_vc2_credit_in : 1; + mmr_t underflow_md_vc2_credit_in : 1; + mmr_t underflow_iilb_vc2_credit_in : 1; + mmr_t underflow_pi_vc2_credit_in : 1; + mmr_t underflow_ni1_vc0_credit_in : 1; + mmr_t underflow_ni0_vc0_credit_in : 1; + mmr_t underflow_md_vc0_credit_in : 1; + mmr_t underflow_iilb_vc0_credit_in : 1; + mmr_t underflow_pi_vc0_credit_in : 1; + mmr_t overflow_ni1_vc2_credit_in : 1; + mmr_t overflow_ni0_vc2_credit_in : 1; + mmr_t overflow_md_vc2_credit_in : 1; + mmr_t overflow_iilb_vc2_credit_in : 1; + mmr_t overflow_pi_vc2_credit_in : 1; + mmr_t overflow_ni1_vc0_credit_in : 1; + mmr_t overflow_ni0_vc0_credit_in : 1; + mmr_t overflow_md_vc0_credit_in : 1; + mmr_t overflow_iilb_vc0_credit_in : 1; + mmr_t overflow_pi_vc0_credit_in : 1; + mmr_t underflow_lb_vc2 : 1; + mmr_t underflow_lb_vc0 : 1; + mmr_t overflow_lb_vc2 : 1; + mmr_t overflow_lb_vc0 : 1; + mmr_t underflow_ii_vc2 : 1; + mmr_t underflow_ii_vc0 : 1; + mmr_t overflow_ii_vc2 : 1; + mmr_t overflow_ii_vc0 : 1; + mmr_t overflow_lb_debit2 : 1; + mmr_t overflow_lb_debit0 : 1; + mmr_t overflow_ii_debit2 : 1; + mmr_t overflow_ii_debit0 : 1; + } sh_xniilb_first_error_s; +} sh_xniilb_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_ERROR_SUMMARY" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_error_summary_u { + mmr_t sh_xnpi_error_summary_regval; + struct { + mmr_t underflow_ni0_vc0 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t lut_read_error : 1; + mmr_t single_bit_error0 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error3 : 1; + mmr_t uncor_error0 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error3 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t 
underflow_iilb_vc0_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t reserved_0 : 14; + } sh_xnpi_error_summary_s; +} sh_xnpi_error_summary_u_t; +#else +typedef union sh_xnpi_error_summary_u { + mmr_t sh_xnpi_error_summary_regval; + struct { + mmr_t reserved_0 : 14; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t uncor_error3 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error0 : 1; + mmr_t single_bit_error3 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error0 : 1; + mmr_t lut_read_error : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc0 : 1; + } sh_xnpi_error_summary_s; +} sh_xnpi_error_summary_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_ERROR_OVERFLOW" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_error_overflow_u { + mmr_t sh_xnpi_error_overflow_regval; + struct { + mmr_t underflow_ni0_vc0 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t lut_read_error : 1; + mmr_t single_bit_error0 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error3 : 1; + mmr_t uncor_error0 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error3 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + 
mmr_t overflow_iilb_debit2 : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t reserved_0 : 14; + } sh_xnpi_error_overflow_s; +} sh_xnpi_error_overflow_u_t; +#else +typedef union sh_xnpi_error_overflow_u { + mmr_t sh_xnpi_error_overflow_regval; + struct { + mmr_t reserved_0 : 14; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t uncor_error3 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error0 : 1; + mmr_t single_bit_error3 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error0 : 1; + mmr_t lut_read_error : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc0 : 1; + } sh_xnpi_error_overflow_s; +} sh_xnpi_error_overflow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_ERROR_MASK" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_error_mask_u { + mmr_t sh_xnpi_error_mask_regval; + struct { + mmr_t underflow_ni0_vc0 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t lut_read_error : 1; + mmr_t single_bit_error0 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error3 : 1; + mmr_t uncor_error0 : 1; + mmr_t uncor_error1 : 1; + mmr_t 
uncor_error2 : 1; + mmr_t uncor_error3 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t reserved_0 : 14; + } sh_xnpi_error_mask_s; +} sh_xnpi_error_mask_u_t; +#else +typedef union sh_xnpi_error_mask_u { + mmr_t sh_xnpi_error_mask_regval; + struct { + mmr_t reserved_0 : 14; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t uncor_error3 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error0 : 1; + mmr_t single_bit_error3 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error0 : 1; + mmr_t lut_read_error : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc0 : 1; + } sh_xnpi_error_mask_s; +} sh_xnpi_error_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNPI_FIRST_ERROR" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnpi_first_error_u { + mmr_t sh_xnpi_first_error_regval; + struct { + mmr_t underflow_ni0_vc0 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc2_credit : 1; + 
mmr_t overflow_vc2_credit : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t lut_read_error : 1; + mmr_t single_bit_error0 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error3 : 1; + mmr_t uncor_error0 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error3 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t reserved_0 : 14; + } sh_xnpi_first_error_s; +} sh_xnpi_first_error_u_t; +#else +typedef union sh_xnpi_first_error_u { + mmr_t sh_xnpi_first_error_regval; + struct { + mmr_t reserved_0 : 14; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t uncor_error3 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error0 : 1; + mmr_t single_bit_error3 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error0 : 1; + mmr_t lut_read_error : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc0 : 1; + } sh_xnpi_first_error_s; +} sh_xnpi_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ERROR_SUMMARY" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_error_summary_u { + mmr_t sh_xnmd_error_summary_regval; + struct { + mmr_t underflow_ni0_vc0 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni1_vc0 
: 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t lut_read_error : 1; + mmr_t single_bit_error0 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error3 : 1; + mmr_t uncor_error0 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error3 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t reserved_0 : 14; + } sh_xnmd_error_summary_s; +} sh_xnmd_error_summary_u_t; +#else +typedef union sh_xnmd_error_summary_u { + mmr_t sh_xnmd_error_summary_regval; + struct { + mmr_t reserved_0 : 14; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t uncor_error3 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error0 : 1; + mmr_t single_bit_error3 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error0 : 1; + mmr_t lut_read_error : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc0 : 1; + } sh_xnmd_error_summary_s; +} sh_xnmd_error_summary_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ERROR_OVERFLOW" */ 
+/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_error_overflow_u { + mmr_t sh_xnmd_error_overflow_regval; + struct { + mmr_t underflow_ni0_vc0 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t lut_read_error : 1; + mmr_t single_bit_error0 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error3 : 1; + mmr_t uncor_error0 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error3 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t reserved_0 : 14; + } sh_xnmd_error_overflow_s; +} sh_xnmd_error_overflow_u_t; +#else +typedef union sh_xnmd_error_overflow_u { + mmr_t sh_xnmd_error_overflow_regval; + struct { + mmr_t reserved_0 : 14; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t uncor_error3 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error0 : 1; + mmr_t single_bit_error3 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error0 : 1; + mmr_t lut_read_error : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc0 
: 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc0 : 1; + } sh_xnmd_error_overflow_s; +} sh_xnmd_error_overflow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_ERROR_MASK" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_error_mask_u { + mmr_t sh_xnmd_error_mask_regval; + struct { + mmr_t underflow_ni0_vc0 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t lut_read_error : 1; + mmr_t single_bit_error0 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error3 : 1; + mmr_t uncor_error0 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error3 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t reserved_0 : 14; + } sh_xnmd_error_mask_s; +} sh_xnmd_error_mask_u_t; +#else +typedef union sh_xnmd_error_mask_u { + mmr_t sh_xnmd_error_mask_regval; + struct { + mmr_t reserved_0 : 14; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t uncor_error3 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error0 : 1; + mmr_t single_bit_error3 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error0 : 1; + mmr_t lut_read_error : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_vc2_credit 
: 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc0 : 1; + } sh_xnmd_error_mask_s; +} sh_xnmd_error_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XNMD_FIRST_ERROR" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xnmd_first_error_u { + mmr_t sh_xnmd_first_error_regval; + struct { + mmr_t underflow_ni0_vc0 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t lut_read_error : 1; + mmr_t single_bit_error0 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error3 : 1; + mmr_t uncor_error0 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error2 : 1; + mmr_t uncor_error3 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t reserved_0 : 14; + } sh_xnmd_first_error_s; +} sh_xnmd_first_error_u_t; +#else +typedef union sh_xnmd_first_error_u { + mmr_t sh_xnmd_first_error_regval; + struct { + mmr_t reserved_0 : 14; + mmr_t overflow_header_cancel_fifo : 1; + mmr_t overflow_iilb_vc2_credit : 1; + mmr_t underflow_iilb_vc2_credit : 1; + mmr_t overflow_iilb_vc0_credit : 1; + mmr_t underflow_iilb_vc0_credit : 1; + mmr_t overflow_ni1_vc2_credit : 1; + mmr_t underflow_ni1_vc2_credit : 1; + mmr_t overflow_ni1_vc0_credit : 1; + mmr_t underflow_ni1_vc0_credit : 1; + mmr_t overflow_ni0_vc2_credit : 1; + mmr_t underflow_ni0_vc2_credit : 1; + mmr_t overflow_ni0_vc0_credit : 1; + mmr_t underflow_ni0_vc0_credit : 1; + mmr_t overflow_iilb_debit2 : 1; + mmr_t overflow_iilb_debit0 : 1; + mmr_t overflow_ni1_debit2 : 1; + mmr_t overflow_ni1_debit0 : 1; + mmr_t overflow_ni0_debit2 : 1; + mmr_t overflow_ni0_debit0 : 1; + mmr_t overflow_sic_cntr2 : 1; + mmr_t underflow_sic_cntr2 : 1; + mmr_t overflow_sic_cntr0 : 1; + mmr_t underflow_sic_cntr0 : 1; + mmr_t uncor_error3 : 1; + mmr_t 
uncor_error2 : 1; + mmr_t uncor_error1 : 1; + mmr_t uncor_error0 : 1; + mmr_t single_bit_error3 : 1; + mmr_t single_bit_error2 : 1; + mmr_t single_bit_error1 : 1; + mmr_t single_bit_error0 : 1; + mmr_t lut_read_error : 1; + mmr_t overflow_databuff_vc2 : 1; + mmr_t overflow_databuff_vc0 : 1; + mmr_t overflow_vc2_credit : 1; + mmr_t underflow_vc2_credit : 1; + mmr_t overflow_vc0_credit : 1; + mmr_t underflow_vc0_credit : 1; + mmr_t overflow_iilb_vc2 : 1; + mmr_t underflow_iilb_vc2 : 1; + mmr_t overflow_iilb_vc0 : 1; + mmr_t underflow_iilb_vc0 : 1; + mmr_t overflow_ni1_vc2 : 1; + mmr_t underflow_ni1_vc2 : 1; + mmr_t overflow_ni1_vc0 : 1; + mmr_t underflow_ni1_vc0 : 1; + mmr_t overflow_ni0_vc2 : 1; + mmr_t underflow_ni0_vc2 : 1; + mmr_t overflow_ni0_vc0 : 1; + mmr_t underflow_ni0_vc0 : 1; + } sh_xnmd_first_error_s; +} sh_xnmd_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_AUTO_REPLY_ENABLE0" */ +/* Automatic Maintenance Reply Enable 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_auto_reply_enable0_u { + mmr_t sh_auto_reply_enable0_regval; + struct { + mmr_t enable0 : 64; + } sh_auto_reply_enable0_s; +} sh_auto_reply_enable0_u_t; +#else +typedef union sh_auto_reply_enable0_u { + mmr_t sh_auto_reply_enable0_regval; + struct { + mmr_t enable0 : 64; + } sh_auto_reply_enable0_s; +} sh_auto_reply_enable0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_AUTO_REPLY_ENABLE1" */ +/* Automatic Maintenance Reply Enable 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_auto_reply_enable1_u { + mmr_t sh_auto_reply_enable1_regval; + struct { + mmr_t enable1 : 64; + } sh_auto_reply_enable1_s; +} sh_auto_reply_enable1_u_t; +#else +typedef union sh_auto_reply_enable1_u { + mmr_t sh_auto_reply_enable1_regval; + struct { + mmr_t enable1 : 64; + } sh_auto_reply_enable1_s; +} sh_auto_reply_enable1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_AUTO_REPLY_HEADER0" */ +/* Automatic Maintenance Reply Header 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_auto_reply_header0_u { + mmr_t sh_auto_reply_header0_regval; + struct { + mmr_t header0 : 64; + } sh_auto_reply_header0_s; +} sh_auto_reply_header0_u_t; +#else +typedef union sh_auto_reply_header0_u { + mmr_t sh_auto_reply_header0_regval; + struct { + mmr_t header0 : 64; + } sh_auto_reply_header0_s; +} sh_auto_reply_header0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_AUTO_REPLY_HEADER1" */ +/* Automatic Maintenance Reply Header 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_auto_reply_header1_u { + mmr_t sh_auto_reply_header1_regval; + struct { + mmr_t header1 : 64; + } sh_auto_reply_header1_s; +} sh_auto_reply_header1_u_t; +#else +typedef union sh_auto_reply_header1_u { + mmr_t sh_auto_reply_header1_regval; + struct { + mmr_t header1 : 64; + } sh_auto_reply_header1_s; +} sh_auto_reply_header1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_ENABLE_RP_AUTO_REPLY" */ +/* Enable Automatic Maintenance Reply From Reply Queue */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_enable_rp_auto_reply_u { + mmr_t sh_enable_rp_auto_reply_regval; + struct { + mmr_t enable : 1; + mmr_t reserved_0 : 63; + } sh_enable_rp_auto_reply_s; +} sh_enable_rp_auto_reply_u_t; +#else +typedef union sh_enable_rp_auto_reply_u { + mmr_t sh_enable_rp_auto_reply_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t enable : 1; + } sh_enable_rp_auto_reply_s; +} sh_enable_rp_auto_reply_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_ENABLE_RQ_AUTO_REPLY" */ +/* Enable Automatic Maintenance Reply From Request Queue */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_enable_rq_auto_reply_u { + mmr_t sh_enable_rq_auto_reply_regval; + struct { + mmr_t enable : 1; + mmr_t reserved_0 : 63; + } sh_enable_rq_auto_reply_s; +} sh_enable_rq_auto_reply_u_t; +#else +typedef union sh_enable_rq_auto_reply_u { + mmr_t sh_enable_rq_auto_reply_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t enable : 1; + } sh_enable_rq_auto_reply_s; +} sh_enable_rq_auto_reply_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_REDIRECT_INVAL" */ +/* Redirect invalidate to LB instead of PI */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_redirect_inval_u { + mmr_t sh_redirect_inval_regval; + struct { + mmr_t redirect : 1; + mmr_t reserved_0 : 63; + } sh_redirect_inval_s; +} sh_redirect_inval_u_t; +#else +typedef union sh_redirect_inval_u { + mmr_t sh_redirect_inval_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t redirect : 1; + } sh_redirect_inval_s; +} sh_redirect_inval_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_CNTRL" */ +/* Diagnostic Message Control Register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_cntrl_u { + mmr_t sh_diag_msg_cntrl_regval; + struct { + mmr_t msg_length : 6; + mmr_t error_inject_point : 6; + mmr_t error_inject_enable : 1; + mmr_t port : 1; + mmr_t reserved_0 : 48; + mmr_t start : 1; + mmr_t busy : 1; + } sh_diag_msg_cntrl_s; +} sh_diag_msg_cntrl_u_t; +#else +typedef union sh_diag_msg_cntrl_u { + mmr_t sh_diag_msg_cntrl_regval; + struct { + mmr_t busy : 1; + mmr_t start : 1; + mmr_t reserved_0 : 48; + mmr_t port : 1; + mmr_t error_inject_enable : 1; + mmr_t error_inject_point : 6; + mmr_t msg_length : 6; + } sh_diag_msg_cntrl_s; +} sh_diag_msg_cntrl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA0L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data0l_u { + mmr_t sh_diag_msg_data0l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data0l_s; +} sh_diag_msg_data0l_u_t; +#else +typedef union sh_diag_msg_data0l_u { + mmr_t sh_diag_msg_data0l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data0l_s; +} sh_diag_msg_data0l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA0U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + 
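/*
 * Usage sketch (illustrative only): the typedefs above all follow one
 * pattern -- a 64-bit raw value (the *_regval member) overlaid with named
 * bit fields, declared twice so that the same field names select the same
 * bits whether or not LITTLE_ENDIAN is defined.  A caller typically builds
 * the value through the bit fields and then moves the whole register in a
 * single 64-bit access.  The volatile pointer below is a stand-in for a
 * real, ioremapped MMR address; the actual register offsets and accessors
 * live elsewhere in the SN2 headers.
 */
static inline void sh_enable_rp_auto_reply_example(volatile mmr_t *mmr)
{
	sh_enable_rp_auto_reply_u_t r;

	r.sh_enable_rp_auto_reply_regval = 0;	/* clear reserved bits */
	r.sh_enable_rp_auto_reply_s.enable = 1;	/* enable auto maintenance replies */
	*mmr = r.sh_enable_rp_auto_reply_regval;
}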
+#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data0u_u { + mmr_t sh_diag_msg_data0u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data0u_s; +} sh_diag_msg_data0u_u_t; +#else +typedef union sh_diag_msg_data0u_u { + mmr_t sh_diag_msg_data0u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data0u_s; +} sh_diag_msg_data0u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA1L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data1l_u { + mmr_t sh_diag_msg_data1l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data1l_s; +} sh_diag_msg_data1l_u_t; +#else +typedef union sh_diag_msg_data1l_u { + mmr_t sh_diag_msg_data1l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data1l_s; +} sh_diag_msg_data1l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA1U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data1u_u { + mmr_t sh_diag_msg_data1u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data1u_s; +} sh_diag_msg_data1u_u_t; +#else +typedef union sh_diag_msg_data1u_u { + mmr_t sh_diag_msg_data1u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data1u_s; +} sh_diag_msg_data1u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA2L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data2l_u { + mmr_t sh_diag_msg_data2l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data2l_s; +} sh_diag_msg_data2l_u_t; +#else +typedef union sh_diag_msg_data2l_u { + mmr_t sh_diag_msg_data2l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data2l_s; +} sh_diag_msg_data2l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA2U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data2u_u { + mmr_t sh_diag_msg_data2u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data2u_s; +} sh_diag_msg_data2u_u_t; +#else +typedef union sh_diag_msg_data2u_u { + mmr_t sh_diag_msg_data2u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data2u_s; +} sh_diag_msg_data2u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA3L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data3l_u { + mmr_t sh_diag_msg_data3l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data3l_s; +} sh_diag_msg_data3l_u_t; +#else +typedef union sh_diag_msg_data3l_u { + mmr_t sh_diag_msg_data3l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data3l_s; +} sh_diag_msg_data3l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA3U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + 
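/*
 * Layout note (illustrative): in every variant above the bit-field widths
 * sum to exactly 64, so each union should occupy a single mmr_t (assumed
 * here, as the 63- and 64-bit fields require, to be a 64-bit unsigned type
 * defined earlier in this header).  A compile-time check like the classic
 * negative-array-size typedefs below will refuse to build if a field width
 * is ever edited inconsistently; the typedef names are arbitrary.
 */
typedef char sh_diag_msg_cntrl_u_is_64bits
	[sizeof(sh_diag_msg_cntrl_u_t)  == sizeof(mmr_t) ? 1 : -1];
typedef char sh_diag_msg_data0l_u_is_64bits
	[sizeof(sh_diag_msg_data0l_u_t) == sizeof(mmr_t) ? 1 : -1];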
+#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data3u_u { + mmr_t sh_diag_msg_data3u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data3u_s; +} sh_diag_msg_data3u_u_t; +#else +typedef union sh_diag_msg_data3u_u { + mmr_t sh_diag_msg_data3u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data3u_s; +} sh_diag_msg_data3u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA4L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data4l_u { + mmr_t sh_diag_msg_data4l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data4l_s; +} sh_diag_msg_data4l_u_t; +#else +typedef union sh_diag_msg_data4l_u { + mmr_t sh_diag_msg_data4l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data4l_s; +} sh_diag_msg_data4l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA4U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data4u_u { + mmr_t sh_diag_msg_data4u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data4u_s; +} sh_diag_msg_data4u_u_t; +#else +typedef union sh_diag_msg_data4u_u { + mmr_t sh_diag_msg_data4u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data4u_s; +} sh_diag_msg_data4u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA5L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data5l_u { + mmr_t sh_diag_msg_data5l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data5l_s; +} sh_diag_msg_data5l_u_t; +#else +typedef union sh_diag_msg_data5l_u { + mmr_t sh_diag_msg_data5l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data5l_s; +} sh_diag_msg_data5l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA5U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data5u_u { + mmr_t sh_diag_msg_data5u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data5u_s; +} sh_diag_msg_data5u_u_t; +#else +typedef union sh_diag_msg_data5u_u { + mmr_t sh_diag_msg_data5u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data5u_s; +} sh_diag_msg_data5u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA6L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data6l_u { + mmr_t sh_diag_msg_data6l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data6l_s; +} sh_diag_msg_data6l_u_t; +#else +typedef union sh_diag_msg_data6l_u { + mmr_t sh_diag_msg_data6l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data6l_s; +} sh_diag_msg_data6l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA6U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + 
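/*
 * Sketch (illustrative): each DATA<n> pair appears to carry one 128-bit
 * slice of the diagnostic payload, the lower 64 bits in the ...L register
 * and the upper 64 bits in the ...U register.  The helper below shows one
 * possible packing (byte 0 into the low-order byte of the lower word); the
 * byte ordering the hardware actually expects is not specified by these
 * definitions, and the out-parameters stand in for the real MMR locations.
 */
static inline void sh_diag_msg_fill_data4(const unsigned char *buf,
					  sh_diag_msg_data4l_u_t *lo,
					  sh_diag_msg_data4u_u_t *hi)
{
	mmr_t lower = 0, upper = 0;
	int i;

	for (i = 0; i < 8; i++) {
		lower |= (mmr_t)buf[i]     << (8 * i);	/* bytes 0..7  */
		upper |= (mmr_t)buf[i + 8] << (8 * i);	/* bytes 8..15 */
	}
	lo->sh_diag_msg_data4l_s.data_lower = lower;
	hi->sh_diag_msg_data4u_s.data_upper = upper;
}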
+#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data6u_u { + mmr_t sh_diag_msg_data6u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data6u_s; +} sh_diag_msg_data6u_u_t; +#else +typedef union sh_diag_msg_data6u_u { + mmr_t sh_diag_msg_data6u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data6u_s; +} sh_diag_msg_data6u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA7L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data7l_u { + mmr_t sh_diag_msg_data7l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data7l_s; +} sh_diag_msg_data7l_u_t; +#else +typedef union sh_diag_msg_data7l_u { + mmr_t sh_diag_msg_data7l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data7l_s; +} sh_diag_msg_data7l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA7U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data7u_u { + mmr_t sh_diag_msg_data7u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data7u_s; +} sh_diag_msg_data7u_u_t; +#else +typedef union sh_diag_msg_data7u_u { + mmr_t sh_diag_msg_data7u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data7u_s; +} sh_diag_msg_data7u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA8L" */ +/* Diagnostic Data, lower 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data8l_u { + mmr_t sh_diag_msg_data8l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data8l_s; +} sh_diag_msg_data8l_u_t; +#else +typedef union sh_diag_msg_data8l_u { + mmr_t sh_diag_msg_data8l_regval; + struct { + mmr_t data_lower : 64; + } sh_diag_msg_data8l_s; +} sh_diag_msg_data8l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_DATA8U" */ +/* Diagnostice Data, upper 64 bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_data8u_u { + mmr_t sh_diag_msg_data8u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data8u_s; +} sh_diag_msg_data8u_u_t; +#else +typedef union sh_diag_msg_data8u_u { + mmr_t sh_diag_msg_data8u_regval; + struct { + mmr_t data_upper : 64; + } sh_diag_msg_data8u_s; +} sh_diag_msg_data8u_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_HDR0" */ +/* Diagnostice Data, lower 64 bits of header */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_hdr0_u { + mmr_t sh_diag_msg_hdr0_regval; + struct { + mmr_t header0 : 64; + } sh_diag_msg_hdr0_s; +} sh_diag_msg_hdr0_u_t; +#else +typedef union sh_diag_msg_hdr0_u { + mmr_t sh_diag_msg_hdr0_regval; + struct { + mmr_t header0 : 64; + } sh_diag_msg_hdr0_s; +} sh_diag_msg_hdr0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIAG_MSG_HDR1" */ +/* Diagnostice Data, upper 64 bits of header */ +/* ==================================================================== */ + 
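/*
 * Sketch (illustrative): once the header and data registers are loaded, a
 * diagnostic message is presumably kicked off through SH_DIAG_MSG_CNTRL --
 * set the length and port, raise `start`, and poll until `busy` clears.
 * That handshake is only a plausible reading of the field names; the real
 * protocol, length units, and register address come from the SHub
 * documentation and the rest of the SN2 headers, so the volatile pointer
 * below is again just a placeholder.
 */
static inline void sh_diag_msg_send_example(volatile mmr_t *cntrl_mmr,
					    int port, int len)
{
	sh_diag_msg_cntrl_u_t c;

	c.sh_diag_msg_cntrl_regval = 0;
	c.sh_diag_msg_cntrl_s.msg_length = len;	/* 6-bit field; units not defined here */
	c.sh_diag_msg_cntrl_s.port = port;	/* single-bit port select */
	c.sh_diag_msg_cntrl_s.start = 1;	/* request the send */
	*cntrl_mmr = c.sh_diag_msg_cntrl_regval;

	do {					/* re-read until hardware drops busy */
		c.sh_diag_msg_cntrl_regval = *cntrl_mmr;
	} while (c.sh_diag_msg_cntrl_s.busy);
}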
+#ifdef LITTLE_ENDIAN +typedef union sh_diag_msg_hdr1_u { + mmr_t sh_diag_msg_hdr1_regval; + struct { + mmr_t header1 : 64; + } sh_diag_msg_hdr1_s; +} sh_diag_msg_hdr1_u_t; +#else +typedef union sh_diag_msg_hdr1_u { + mmr_t sh_diag_msg_hdr1_regval; + struct { + mmr_t header1 : 64; + } sh_diag_msg_hdr1_s; +} sh_diag_msg_hdr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DEBUG_SELECT" */ +/* SHub Debug Port Select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_debug_select_u { + mmr_t sh_debug_select_regval; + struct { + mmr_t nibble0_nibble_sel : 3; + mmr_t nibble0_chiplet_sel : 3; + mmr_t nibble1_nibble_sel : 3; + mmr_t nibble1_chiplet_sel : 3; + mmr_t nibble2_nibble_sel : 3; + mmr_t nibble2_chiplet_sel : 3; + mmr_t nibble3_nibble_sel : 3; + mmr_t nibble3_chiplet_sel : 3; + mmr_t nibble4_nibble_sel : 3; + mmr_t nibble4_chiplet_sel : 3; + mmr_t nibble5_nibble_sel : 3; + mmr_t nibble5_chiplet_sel : 3; + mmr_t nibble6_nibble_sel : 3; + mmr_t nibble6_chiplet_sel : 3; + mmr_t nibble7_nibble_sel : 3; + mmr_t nibble7_chiplet_sel : 3; + mmr_t debug_ii_sel : 3; + mmr_t sel_ii : 9; + mmr_t reserved_0 : 3; + mmr_t trigger_enable : 1; + } sh_debug_select_s; +} sh_debug_select_u_t; +#else +typedef union sh_debug_select_u { + mmr_t sh_debug_select_regval; + struct { + mmr_t trigger_enable : 1; + mmr_t reserved_0 : 3; + mmr_t sel_ii : 9; + mmr_t debug_ii_sel : 3; + mmr_t nibble7_chiplet_sel : 3; + mmr_t nibble7_nibble_sel : 3; + mmr_t nibble6_chiplet_sel : 3; + mmr_t nibble6_nibble_sel : 3; + mmr_t nibble5_chiplet_sel : 3; + mmr_t nibble5_nibble_sel : 3; + mmr_t nibble4_chiplet_sel : 3; + mmr_t nibble4_nibble_sel : 3; + mmr_t nibble3_chiplet_sel : 3; + mmr_t nibble3_nibble_sel : 3; + mmr_t nibble2_chiplet_sel : 3; + mmr_t nibble2_nibble_sel : 3; + mmr_t nibble1_chiplet_sel : 3; + mmr_t nibble1_nibble_sel : 3; + mmr_t nibble0_chiplet_sel : 3; + mmr_t nibble0_nibble_sel : 3; + } sh_debug_select_s; +} sh_debug_select_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TRIGGER_COMPARE_MASK" */ +/* SHub Trigger Compare Mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_trigger_compare_mask_u { + mmr_t sh_trigger_compare_mask_regval; + struct { + mmr_t mask : 32; + mmr_t reserved_0 : 32; + } sh_trigger_compare_mask_s; +} sh_trigger_compare_mask_u_t; +#else +typedef union sh_trigger_compare_mask_u { + mmr_t sh_trigger_compare_mask_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t mask : 32; + } sh_trigger_compare_mask_s; +} sh_trigger_compare_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TRIGGER_COMPARE_PATTERN" */ +/* SHub Trigger Compare Pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_trigger_compare_pattern_u { + mmr_t sh_trigger_compare_pattern_regval; + struct { + mmr_t data : 32; + mmr_t reserved_0 : 32; + } sh_trigger_compare_pattern_s; +} sh_trigger_compare_pattern_u_t; +#else +typedef union sh_trigger_compare_pattern_u { + mmr_t sh_trigger_compare_pattern_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t data : 32; + } sh_trigger_compare_pattern_s; +} sh_trigger_compare_pattern_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_TRIGGER_SEL" */ +/* Trigger select for SHUB debug port */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_trigger_sel_u { + mmr_t sh_trigger_sel_regval; + struct { + mmr_t nibble0_input_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_15 : 1; + } sh_trigger_sel_s; +} sh_trigger_sel_u_t; +#else +typedef union sh_trigger_sel_u { + mmr_t sh_trigger_sel_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t nibble7_nibble_sel : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_input_sel : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble_sel : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_input_sel : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble_sel : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_input_sel : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble_sel : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_input_sel : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_input_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_input_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_input_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_input_sel : 3; + } sh_trigger_sel_s; +} sh_trigger_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_STOP_CLK_CONTROL" */ +/* Stop Clock Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_stop_clk_control_u { + mmr_t sh_stop_clk_control_regval; + struct { + mmr_t stimulus : 5; + mmr_t event : 1; + mmr_t polarity : 1; + mmr_t mode : 1; + mmr_t reserved_0 : 56; + } sh_stop_clk_control_s; +} sh_stop_clk_control_u_t; +#else +typedef union sh_stop_clk_control_u { + mmr_t sh_stop_clk_control_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t mode : 1; + mmr_t polarity : 1; + mmr_t event : 1; + mmr_t stimulus : 5; + } sh_stop_clk_control_s; +} sh_stop_clk_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_STOP_CLK_DELAY_PHASE" */ +/* Stop Clock Delay Phase */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_stop_clk_delay_phase_u { + mmr_t sh_stop_clk_delay_phase_regval; + struct { + mmr_t delay : 8; + mmr_t reserved_0 : 56; + } sh_stop_clk_delay_phase_s; +} sh_stop_clk_delay_phase_u_t; +#else +typedef union sh_stop_clk_delay_phase_u { + mmr_t sh_stop_clk_delay_phase_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t delay : 8; 
+ } sh_stop_clk_delay_phase_s; +} sh_stop_clk_delay_phase_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_ARM_MASK" */ +/* Trigger sequencing facility arm mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_arm_mask_u { + mmr_t sh_tsf_arm_mask_regval; + struct { + mmr_t mask : 64; + } sh_tsf_arm_mask_s; +} sh_tsf_arm_mask_u_t; +#else +typedef union sh_tsf_arm_mask_u { + mmr_t sh_tsf_arm_mask_regval; + struct { + mmr_t mask : 64; + } sh_tsf_arm_mask_s; +} sh_tsf_arm_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_COUNTER_PRESETS" */ +/* Trigger sequencing facility counter presets */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_counter_presets_u { + mmr_t sh_tsf_counter_presets_regval; + struct { + mmr_t count_32 : 32; + mmr_t count_16 : 16; + mmr_t count_8b : 8; + mmr_t count_8a : 8; + } sh_tsf_counter_presets_s; +} sh_tsf_counter_presets_u_t; +#else +typedef union sh_tsf_counter_presets_u { + mmr_t sh_tsf_counter_presets_regval; + struct { + mmr_t count_8a : 8; + mmr_t count_8b : 8; + mmr_t count_16 : 16; + mmr_t count_32 : 32; + } sh_tsf_counter_presets_s; +} sh_tsf_counter_presets_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_DECREMENT_CTL" */ +/* Trigger sequencing facility counter decrement control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_decrement_ctl_u { + mmr_t sh_tsf_decrement_ctl_regval; + struct { + mmr_t ctl : 16; + mmr_t reserved_0 : 48; + } sh_tsf_decrement_ctl_s; +} sh_tsf_decrement_ctl_u_t; +#else +typedef union sh_tsf_decrement_ctl_u { + mmr_t sh_tsf_decrement_ctl_regval; + struct { + mmr_t reserved_0 : 48; + mmr_t ctl : 16; + } sh_tsf_decrement_ctl_s; +} sh_tsf_decrement_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_DIAG_MSG_CTL" */ +/* Trigger sequencing facility diagnostic message control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_diag_msg_ctl_u { + mmr_t sh_tsf_diag_msg_ctl_regval; + struct { + mmr_t enable : 8; + mmr_t reserved_0 : 56; + } sh_tsf_diag_msg_ctl_s; +} sh_tsf_diag_msg_ctl_u_t; +#else +typedef union sh_tsf_diag_msg_ctl_u { + mmr_t sh_tsf_diag_msg_ctl_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t enable : 8; + } sh_tsf_diag_msg_ctl_s; +} sh_tsf_diag_msg_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_DISARM_MASK" */ +/* Trigger sequencing facility disarm mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_disarm_mask_u { + mmr_t sh_tsf_disarm_mask_regval; + struct { + mmr_t mask : 64; + } sh_tsf_disarm_mask_s; +} sh_tsf_disarm_mask_u_t; +#else +typedef union sh_tsf_disarm_mask_u { + mmr_t sh_tsf_disarm_mask_regval; + struct { + mmr_t mask : 64; + } sh_tsf_disarm_mask_s; +} sh_tsf_disarm_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_ENABLE_CTL" */ +/* Trigger sequencing facility counter enable control */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_enable_ctl_u { + mmr_t sh_tsf_enable_ctl_regval; + struct { + mmr_t ctl : 16; + mmr_t reserved_0 : 48; + } sh_tsf_enable_ctl_s; +} sh_tsf_enable_ctl_u_t; +#else +typedef union sh_tsf_enable_ctl_u { + mmr_t sh_tsf_enable_ctl_regval; + struct { + mmr_t reserved_0 : 48; + mmr_t ctl : 16; + } sh_tsf_enable_ctl_s; +} sh_tsf_enable_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_SOFTWARE_ARM" */ +/* Trigger sequencing facility software arm */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_software_arm_u { + mmr_t sh_tsf_software_arm_regval; + struct { + mmr_t bit0 : 1; + mmr_t bit1 : 1; + mmr_t bit2 : 1; + mmr_t bit3 : 1; + mmr_t bit4 : 1; + mmr_t bit5 : 1; + mmr_t bit6 : 1; + mmr_t bit7 : 1; + mmr_t reserved_0 : 56; + } sh_tsf_software_arm_s; +} sh_tsf_software_arm_u_t; +#else +typedef union sh_tsf_software_arm_u { + mmr_t sh_tsf_software_arm_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t bit7 : 1; + mmr_t bit6 : 1; + mmr_t bit5 : 1; + mmr_t bit4 : 1; + mmr_t bit3 : 1; + mmr_t bit2 : 1; + mmr_t bit1 : 1; + mmr_t bit0 : 1; + } sh_tsf_software_arm_s; +} sh_tsf_software_arm_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_SOFTWARE_DISARM" */ +/* Trigger sequencing facility software disarm */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_software_disarm_u { + mmr_t sh_tsf_software_disarm_regval; + struct { + mmr_t bit0 : 1; + mmr_t bit1 : 1; + mmr_t bit2 : 1; + mmr_t bit3 : 1; + mmr_t bit4 : 1; + mmr_t bit5 : 1; + mmr_t bit6 : 1; + mmr_t bit7 : 1; + mmr_t reserved_0 : 56; + } sh_tsf_software_disarm_s; +} sh_tsf_software_disarm_u_t; +#else +typedef union sh_tsf_software_disarm_u { + mmr_t sh_tsf_software_disarm_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t bit7 : 1; + mmr_t bit6 : 1; + mmr_t bit5 : 1; + mmr_t bit4 : 1; + mmr_t bit3 : 1; + mmr_t bit2 : 1; + mmr_t bit1 : 1; + mmr_t bit0 : 1; + } sh_tsf_software_disarm_s; +} sh_tsf_software_disarm_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_SOFTWARE_TRIGGERED" */ +/* Trigger sequencing facility software triggered */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_software_triggered_u { + mmr_t sh_tsf_software_triggered_regval; + struct { + mmr_t bit0 : 1; + mmr_t bit1 : 1; + mmr_t bit2 : 1; + mmr_t bit3 : 1; + mmr_t bit4 : 1; + mmr_t bit5 : 1; + mmr_t bit6 : 1; + mmr_t bit7 : 1; + mmr_t reserved_0 : 56; + } sh_tsf_software_triggered_s; +} sh_tsf_software_triggered_u_t; +#else +typedef union sh_tsf_software_triggered_u { + mmr_t sh_tsf_software_triggered_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t bit7 : 1; + mmr_t bit6 : 1; + mmr_t bit5 : 1; + mmr_t bit4 : 1; + mmr_t bit3 : 1; + mmr_t bit2 : 1; + mmr_t bit1 : 1; + mmr_t bit0 : 1; + } sh_tsf_software_triggered_s; +} sh_tsf_software_triggered_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_TRIGGER_MASK" */ +/* Trigger sequencing facility trigger mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_trigger_mask_u { + 
mmr_t sh_tsf_trigger_mask_regval; + struct { + mmr_t mask : 64; + } sh_tsf_trigger_mask_s; +} sh_tsf_trigger_mask_u_t; +#else +typedef union sh_tsf_trigger_mask_u { + mmr_t sh_tsf_trigger_mask_regval; + struct { + mmr_t mask : 64; + } sh_tsf_trigger_mask_s; +} sh_tsf_trigger_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_VEC_DATA" */ +/* Vector Write Request Message Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_vec_data_u { + mmr_t sh_vec_data_regval; + struct { + mmr_t data : 64; + } sh_vec_data_s; +} sh_vec_data_u_t; +#else +typedef union sh_vec_data_u { + mmr_t sh_vec_data_regval; + struct { + mmr_t data : 64; + } sh_vec_data_s; +} sh_vec_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_VEC_PARMS" */ +/* Vector Message Parameters Register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_vec_parms_u { + mmr_t sh_vec_parms_regval; + struct { + mmr_t type : 1; + mmr_t ni_port : 1; + mmr_t reserved_0 : 1; + mmr_t address : 32; + mmr_t pio_id : 11; + mmr_t reserved_1 : 16; + mmr_t start : 1; + mmr_t busy : 1; + } sh_vec_parms_s; +} sh_vec_parms_u_t; +#else +typedef union sh_vec_parms_u { + mmr_t sh_vec_parms_regval; + struct { + mmr_t busy : 1; + mmr_t start : 1; + mmr_t reserved_1 : 16; + mmr_t pio_id : 11; + mmr_t address : 32; + mmr_t reserved_0 : 1; + mmr_t ni_port : 1; + mmr_t type : 1; + } sh_vec_parms_s; +} sh_vec_parms_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_VEC_ROUTE" */ +/* Vector Request Message Route */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_vec_route_u { + mmr_t sh_vec_route_regval; + struct { + mmr_t route : 64; + } sh_vec_route_s; +} sh_vec_route_u_t; +#else +typedef union sh_vec_route_u { + mmr_t sh_vec_route_regval; + struct { + mmr_t route : 64; + } sh_vec_route_s; +} sh_vec_route_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_CPU_PERM" */ +/* CPU MMR Access Permission Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_cpu_perm_u { + mmr_t sh_cpu_perm_regval; + struct { + mmr_t access_bits : 64; + } sh_cpu_perm_s; +} sh_cpu_perm_u_t; +#else +typedef union sh_cpu_perm_u { + mmr_t sh_cpu_perm_regval; + struct { + mmr_t access_bits : 64; + } sh_cpu_perm_s; +} sh_cpu_perm_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_CPU_PERM_OVR" */ +/* CPU MMR Access Permission Override */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_cpu_perm_ovr_u { + mmr_t sh_cpu_perm_ovr_regval; + struct { + mmr_t override : 64; + } sh_cpu_perm_ovr_s; +} sh_cpu_perm_ovr_u_t; +#else +typedef union sh_cpu_perm_ovr_u { + mmr_t sh_cpu_perm_ovr_regval; + struct { + mmr_t override : 64; + } sh_cpu_perm_ovr_s; +} sh_cpu_perm_ovr_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_EXT_IO_PERM" */ +/* External IO MMR Access Permission Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ext_io_perm_u { 
+ mmr_t sh_ext_io_perm_regval; + struct { + mmr_t access_bits : 64; + } sh_ext_io_perm_s; +} sh_ext_io_perm_u_t; +#else +typedef union sh_ext_io_perm_u { + mmr_t sh_ext_io_perm_regval; + struct { + mmr_t access_bits : 64; + } sh_ext_io_perm_s; +} sh_ext_io_perm_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_EXT_IOI_ACCESS" */ +/* External IO Interrupt Access Permission Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ext_ioi_access_u { + mmr_t sh_ext_ioi_access_regval; + struct { + mmr_t access_bits : 64; + } sh_ext_ioi_access_s; +} sh_ext_ioi_access_u_t; +#else +typedef union sh_ext_ioi_access_u { + mmr_t sh_ext_ioi_access_regval; + struct { + mmr_t access_bits : 64; + } sh_ext_ioi_access_s; +} sh_ext_ioi_access_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GC_FIL_CTRL" */ +/* SHub Global Clock Filter Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gc_fil_ctrl_u { + mmr_t sh_gc_fil_ctrl_regval; + struct { + mmr_t offset : 5; + mmr_t reserved_0 : 3; + mmr_t mask_counter : 12; + mmr_t mask_enable : 1; + mmr_t reserved_1 : 3; + mmr_t dropout_counter : 10; + mmr_t reserved_2 : 2; + mmr_t dropout_thresh : 10; + mmr_t reserved_3 : 2; + mmr_t error_counter : 10; + mmr_t reserved_4 : 6; + } sh_gc_fil_ctrl_s; +} sh_gc_fil_ctrl_u_t; +#else +typedef union sh_gc_fil_ctrl_u { + mmr_t sh_gc_fil_ctrl_regval; + struct { + mmr_t reserved_4 : 6; + mmr_t error_counter : 10; + mmr_t reserved_3 : 2; + mmr_t dropout_thresh : 10; + mmr_t reserved_2 : 2; + mmr_t dropout_counter : 10; + mmr_t reserved_1 : 3; + mmr_t mask_enable : 1; + mmr_t mask_counter : 12; + mmr_t reserved_0 : 3; + mmr_t offset : 5; + } sh_gc_fil_ctrl_s; +} sh_gc_fil_ctrl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_GC_SRC_CTRL" */ +/* SHub Global Clock Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_gc_src_ctrl_u { + mmr_t sh_gc_src_ctrl_regval; + struct { + mmr_t enable_counter : 1; + mmr_t reserved_0 : 3; + mmr_t max_count : 10; + mmr_t reserved_1 : 2; + mmr_t counter : 10; + mmr_t reserved_2 : 2; + mmr_t toggle_bit : 1; + mmr_t reserved_3 : 3; + mmr_t source_sel : 2; + mmr_t reserved_4 : 30; + } sh_gc_src_ctrl_s; +} sh_gc_src_ctrl_u_t; +#else +typedef union sh_gc_src_ctrl_u { + mmr_t sh_gc_src_ctrl_regval; + struct { + mmr_t reserved_4 : 30; + mmr_t source_sel : 2; + mmr_t reserved_3 : 3; + mmr_t toggle_bit : 1; + mmr_t reserved_2 : 2; + mmr_t counter : 10; + mmr_t reserved_1 : 2; + mmr_t max_count : 10; + mmr_t reserved_0 : 3; + mmr_t enable_counter : 1; + } sh_gc_src_ctrl_s; +} sh_gc_src_ctrl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_HARD_RESET" */ +/* SHub Hard Reset */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_hard_reset_u { + mmr_t sh_hard_reset_regval; + struct { + mmr_t hard_reset : 1; + mmr_t reserved_0 : 63; + } sh_hard_reset_s; +} sh_hard_reset_u_t; +#else +typedef union sh_hard_reset_u { + mmr_t sh_hard_reset_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t hard_reset : 1; + } sh_hard_reset_s; +} sh_hard_reset_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_IO_PERM" */ +/* II MMR Access Permission Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_io_perm_u { + mmr_t sh_io_perm_regval; + struct { + mmr_t access_bits : 64; + } sh_io_perm_s; +} sh_io_perm_u_t; +#else +typedef union sh_io_perm_u { + mmr_t sh_io_perm_regval; + struct { + mmr_t access_bits : 64; + } sh_io_perm_s; +} sh_io_perm_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_IOI_ACCESS" */ +/* II Interrupt Access Permission Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ioi_access_u { + mmr_t sh_ioi_access_regval; + struct { + mmr_t access_bits : 64; + } sh_ioi_access_s; +} sh_ioi_access_u_t; +#else +typedef union sh_ioi_access_u { + mmr_t sh_ioi_access_regval; + struct { + mmr_t access_bits : 64; + } sh_ioi_access_s; +} sh_ioi_access_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_IPI_ACCESS" */ +/* CPU interrupt Access Permission Bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ipi_access_u { + mmr_t sh_ipi_access_regval; + struct { + mmr_t access_bits : 64; + } sh_ipi_access_s; +} sh_ipi_access_u_t; +#else +typedef union sh_ipi_access_u { + mmr_t sh_ipi_access_regval; + struct { + mmr_t access_bits : 64; + } sh_ipi_access_s; +} sh_ipi_access_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JTAG_CONFIG" */ +/* SHub JTAG configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_jtag_config_u { + mmr_t sh_jtag_config_regval; + struct { + mmr_t md_clk_sel : 2; + mmr_t ni_clk_sel : 1; + mmr_t ii_clk_sel : 2; + mmr_t wrt90_target : 14; + mmr_t wrt90_overrider : 1; + mmr_t wrt90_override : 1; + mmr_t jtag_mci_reset_delay : 4; + mmr_t jtag_mci_target : 14; + mmr_t jtag_mci_override : 1; + mmr_t fsb_config_ioq_depth : 1; + mmr_t fsb_config_sample_binit : 1; + mmr_t fsb_config_enable_bus_parking : 1; + mmr_t fsb_config_clock_ratio : 5; + mmr_t fsb_config_output_tristate : 4; + mmr_t fsb_config_enable_bist : 1; + mmr_t fsb_config_aux : 2; + mmr_t gtl_config_re : 1; + mmr_t reserved_0 : 8; + } sh_jtag_config_s; +} sh_jtag_config_u_t; +#else +typedef union sh_jtag_config_u { + mmr_t sh_jtag_config_regval; + struct { + mmr_t reserved_0 : 8; + mmr_t gtl_config_re : 1; + mmr_t fsb_config_aux : 2; + mmr_t fsb_config_enable_bist : 1; + mmr_t fsb_config_output_tristate : 4; + mmr_t fsb_config_clock_ratio : 5; + mmr_t fsb_config_enable_bus_parking : 1; + mmr_t fsb_config_sample_binit : 1; + mmr_t fsb_config_ioq_depth : 1; + mmr_t jtag_mci_override : 1; + mmr_t jtag_mci_target : 14; + mmr_t jtag_mci_reset_delay : 4; + mmr_t wrt90_override : 1; + mmr_t wrt90_overrider : 1; + mmr_t wrt90_target : 14; + mmr_t ii_clk_sel : 2; + mmr_t ni_clk_sel : 1; + mmr_t md_clk_sel : 2; + } sh_jtag_config_s; +} sh_jtag_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SHUB_ID" */ +/* SHub ID Number */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_shub_id_u { + mmr_t sh_shub_id_regval; + struct { + mmr_t force1 : 1; + 
mmr_t manufacturer : 11; + mmr_t part_number : 16; + mmr_t revision : 4; + mmr_t node_id : 11; + mmr_t reserved_0 : 1; + mmr_t sharing_mode : 2; + mmr_t reserved_1 : 2; + mmr_t nodes_per_bit : 5; + mmr_t reserved_2 : 3; + mmr_t ni_port : 1; + mmr_t reserved_3 : 7; + } sh_shub_id_s; +} sh_shub_id_u_t; +#else +typedef union sh_shub_id_u { + mmr_t sh_shub_id_regval; + struct { + mmr_t reserved_3 : 7; + mmr_t ni_port : 1; + mmr_t reserved_2 : 3; + mmr_t nodes_per_bit : 5; + mmr_t reserved_1 : 2; + mmr_t sharing_mode : 2; + mmr_t reserved_0 : 1; + mmr_t node_id : 11; + mmr_t revision : 4; + mmr_t part_number : 16; + mmr_t manufacturer : 11; + mmr_t force1 : 1; + } sh_shub_id_s; +} sh_shub_id_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SHUBS_PRESENT0" */ +/* Shubs 0 - 63 Present. Used for invalidate generation */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_shubs_present0_u { + mmr_t sh_shubs_present0_regval; + struct { + mmr_t shubs_present0 : 64; + } sh_shubs_present0_s; +} sh_shubs_present0_u_t; +#else +typedef union sh_shubs_present0_u { + mmr_t sh_shubs_present0_regval; + struct { + mmr_t shubs_present0 : 64; + } sh_shubs_present0_s; +} sh_shubs_present0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SHUBS_PRESENT1" */ +/* Shubs 64 - 127 Present. Used for invalidate generation */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_shubs_present1_u { + mmr_t sh_shubs_present1_regval; + struct { + mmr_t shubs_present1 : 64; + } sh_shubs_present1_s; +} sh_shubs_present1_u_t; +#else +typedef union sh_shubs_present1_u { + mmr_t sh_shubs_present1_regval; + struct { + mmr_t shubs_present1 : 64; + } sh_shubs_present1_s; +} sh_shubs_present1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SHUBS_PRESENT2" */ +/* Shubs 128 - 191 Present. Used for invalidate generation */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_shubs_present2_u { + mmr_t sh_shubs_present2_regval; + struct { + mmr_t shubs_present2 : 64; + } sh_shubs_present2_s; +} sh_shubs_present2_u_t; +#else +typedef union sh_shubs_present2_u { + mmr_t sh_shubs_present2_regval; + struct { + mmr_t shubs_present2 : 64; + } sh_shubs_present2_s; +} sh_shubs_present2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SHUBS_PRESENT3" */ +/* Shubs 192 - 255 Present. 
Used for invalidate generation */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_shubs_present3_u { + mmr_t sh_shubs_present3_regval; + struct { + mmr_t shubs_present3 : 64; + } sh_shubs_present3_s; +} sh_shubs_present3_u_t; +#else +typedef union sh_shubs_present3_u { + mmr_t sh_shubs_present3_regval; + struct { + mmr_t shubs_present3 : 64; + } sh_shubs_present3_s; +} sh_shubs_present3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SOFT_RESET" */ +/* SHub Soft Reset */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_soft_reset_u { + mmr_t sh_soft_reset_regval; + struct { + mmr_t soft_reset : 1; + mmr_t reserved_0 : 63; + } sh_soft_reset_s; +} sh_soft_reset_u_t; +#else +typedef union sh_soft_reset_u { + mmr_t sh_soft_reset_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t soft_reset : 1; + } sh_soft_reset_s; +} sh_soft_reset_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_FIRST_ERROR" */ +/* Shub Global First Error Flags */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_first_error_u { + mmr_t sh_first_error_regval; + struct { + mmr_t first_error : 19; + mmr_t reserved_0 : 45; + } sh_first_error_s; +} sh_first_error_u_t; +#else +typedef union sh_first_error_u { + mmr_t sh_first_error_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t first_error : 19; + } sh_first_error_s; +} sh_first_error_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_HW_TIME_STAMP" */ +/* II hardware error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_hw_time_stamp_u { + mmr_t sh_ii_hw_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_ii_hw_time_stamp_s; +} sh_ii_hw_time_stamp_u_t; +#else +typedef union sh_ii_hw_time_stamp_u { + mmr_t sh_ii_hw_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_ii_hw_time_stamp_s; +} sh_ii_hw_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_HW_TIME_STAMP" */ +/* LB hardware error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_hw_time_stamp_u { + mmr_t sh_lb_hw_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_lb_hw_time_stamp_s; +} sh_lb_hw_time_stamp_u_t; +#else +typedef union sh_lb_hw_time_stamp_u { + mmr_t sh_lb_hw_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_lb_hw_time_stamp_s; +} sh_lb_hw_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_COR_TIME_STAMP" */ +/* MD correctable error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_cor_time_stamp_u { + mmr_t sh_md_cor_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_md_cor_time_stamp_s; +} sh_md_cor_time_stamp_u_t; +#else +typedef union sh_md_cor_time_stamp_u { + mmr_t sh_md_cor_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_md_cor_time_stamp_s; +} sh_md_cor_time_stamp_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_MD_HW_TIME_STAMP" */ +/* MD hardware error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_hw_time_stamp_u { + mmr_t sh_md_hw_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_md_hw_time_stamp_s; +} sh_md_hw_time_stamp_u_t; +#else +typedef union sh_md_hw_time_stamp_u { + mmr_t sh_md_hw_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_md_hw_time_stamp_s; +} sh_md_hw_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_UNCOR_TIME_STAMP" */ +/* MD uncorrectable error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_uncor_time_stamp_u { + mmr_t sh_md_uncor_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_md_uncor_time_stamp_s; +} sh_md_uncor_time_stamp_u_t; +#else +typedef union sh_md_uncor_time_stamp_u { + mmr_t sh_md_uncor_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_md_uncor_time_stamp_s; +} sh_md_uncor_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_COR_TIME_STAMP" */ +/* PI correctable error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_cor_time_stamp_u { + mmr_t sh_pi_cor_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_pi_cor_time_stamp_s; +} sh_pi_cor_time_stamp_u_t; +#else +typedef union sh_pi_cor_time_stamp_u { + mmr_t sh_pi_cor_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_pi_cor_time_stamp_s; +} sh_pi_cor_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_HW_TIME_STAMP" */ +/* PI hardware error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_hw_time_stamp_u { + mmr_t sh_pi_hw_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_pi_hw_time_stamp_s; +} sh_pi_hw_time_stamp_u_t; +#else +typedef union sh_pi_hw_time_stamp_u { + mmr_t sh_pi_hw_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_pi_hw_time_stamp_s; +} sh_pi_hw_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_UNCOR_TIME_STAMP" */ +/* PI uncorrectable error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_uncor_time_stamp_u { + mmr_t sh_pi_uncor_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_pi_uncor_time_stamp_s; +} sh_pi_uncor_time_stamp_u_t; +#else +typedef union sh_pi_uncor_time_stamp_u { + mmr_t sh_pi_uncor_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_pi_uncor_time_stamp_s; +} sh_pi_uncor_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC0_ADV_TIME_STAMP" */ +/* Proc 0 advisory time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc0_adv_time_stamp_u { + mmr_t sh_proc0_adv_time_stamp_regval; + struct { + mmr_t 
time : 63; + mmr_t valid : 1; + } sh_proc0_adv_time_stamp_s; +} sh_proc0_adv_time_stamp_u_t; +#else +typedef union sh_proc0_adv_time_stamp_u { + mmr_t sh_proc0_adv_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_proc0_adv_time_stamp_s; +} sh_proc0_adv_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC0_ERR_TIME_STAMP" */ +/* Proc 0 error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc0_err_time_stamp_u { + mmr_t sh_proc0_err_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_proc0_err_time_stamp_s; +} sh_proc0_err_time_stamp_u_t; +#else +typedef union sh_proc0_err_time_stamp_u { + mmr_t sh_proc0_err_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_proc0_err_time_stamp_s; +} sh_proc0_err_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC1_ADV_TIME_STAMP" */ +/* Proc 1 advisory time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc1_adv_time_stamp_u { + mmr_t sh_proc1_adv_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_proc1_adv_time_stamp_s; +} sh_proc1_adv_time_stamp_u_t; +#else +typedef union sh_proc1_adv_time_stamp_u { + mmr_t sh_proc1_adv_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_proc1_adv_time_stamp_s; +} sh_proc1_adv_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC1_ERR_TIME_STAMP" */ +/* Proc 1 error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc1_err_time_stamp_u { + mmr_t sh_proc1_err_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_proc1_err_time_stamp_s; +} sh_proc1_err_time_stamp_u_t; +#else +typedef union sh_proc1_err_time_stamp_u { + mmr_t sh_proc1_err_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_proc1_err_time_stamp_s; +} sh_proc1_err_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC2_ADV_TIME_STAMP" */ +/* Proc 2 advisory time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc2_adv_time_stamp_u { + mmr_t sh_proc2_adv_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_proc2_adv_time_stamp_s; +} sh_proc2_adv_time_stamp_u_t; +#else +typedef union sh_proc2_adv_time_stamp_u { + mmr_t sh_proc2_adv_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_proc2_adv_time_stamp_s; +} sh_proc2_adv_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC2_ERR_TIME_STAMP" */ +/* Proc 2 error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc2_err_time_stamp_u { + mmr_t sh_proc2_err_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_proc2_err_time_stamp_s; +} sh_proc2_err_time_stamp_u_t; +#else +typedef union sh_proc2_err_time_stamp_u { + mmr_t sh_proc2_err_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_proc2_err_time_stamp_s; +} 
sh_proc2_err_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC3_ADV_TIME_STAMP" */ +/* Proc 3 advisory time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc3_adv_time_stamp_u { + mmr_t sh_proc3_adv_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_proc3_adv_time_stamp_s; +} sh_proc3_adv_time_stamp_u_t; +#else +typedef union sh_proc3_adv_time_stamp_u { + mmr_t sh_proc3_adv_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_proc3_adv_time_stamp_s; +} sh_proc3_adv_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROC3_ERR_TIME_STAMP" */ +/* Proc 3 error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_proc3_err_time_stamp_u { + mmr_t sh_proc3_err_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_proc3_err_time_stamp_s; +} sh_proc3_err_time_stamp_u_t; +#else +typedef union sh_proc3_err_time_stamp_u { + mmr_t sh_proc3_err_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_proc3_err_time_stamp_s; +} sh_proc3_err_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_COR_TIME_STAMP" */ +/* XN correctable error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_cor_time_stamp_u { + mmr_t sh_xn_cor_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_xn_cor_time_stamp_s; +} sh_xn_cor_time_stamp_u_t; +#else +typedef union sh_xn_cor_time_stamp_u { + mmr_t sh_xn_cor_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_xn_cor_time_stamp_s; +} sh_xn_cor_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_HW_TIME_STAMP" */ +/* XN hardware error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_hw_time_stamp_u { + mmr_t sh_xn_hw_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_xn_hw_time_stamp_s; +} sh_xn_hw_time_stamp_u_t; +#else +typedef union sh_xn_hw_time_stamp_u { + mmr_t sh_xn_hw_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_xn_hw_time_stamp_s; +} sh_xn_hw_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_UNCOR_TIME_STAMP" */ +/* XN uncorrectable error time stamp */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_uncor_time_stamp_u { + mmr_t sh_xn_uncor_time_stamp_regval; + struct { + mmr_t time : 63; + mmr_t valid : 1; + } sh_xn_uncor_time_stamp_s; +} sh_xn_uncor_time_stamp_u_t; +#else +typedef union sh_xn_uncor_time_stamp_u { + mmr_t sh_xn_uncor_time_stamp_regval; + struct { + mmr_t valid : 1; + mmr_t time : 63; + } sh_xn_uncor_time_stamp_s; +} sh_xn_uncor_time_stamp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DEBUG_PORT" */ +/* SHub Debug Port */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_debug_port_u { + mmr_t 
sh_debug_port_regval; + struct { + mmr_t debug_nibble0 : 4; + mmr_t debug_nibble1 : 4; + mmr_t debug_nibble2 : 4; + mmr_t debug_nibble3 : 4; + mmr_t debug_nibble4 : 4; + mmr_t debug_nibble5 : 4; + mmr_t debug_nibble6 : 4; + mmr_t debug_nibble7 : 4; + mmr_t reserved_0 : 32; + } sh_debug_port_s; +} sh_debug_port_u_t; +#else +typedef union sh_debug_port_u { + mmr_t sh_debug_port_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t debug_nibble7 : 4; + mmr_t debug_nibble6 : 4; + mmr_t debug_nibble5 : 4; + mmr_t debug_nibble4 : 4; + mmr_t debug_nibble3 : 4; + mmr_t debug_nibble2 : 4; + mmr_t debug_nibble1 : 4; + mmr_t debug_nibble0 : 4; + } sh_debug_port_s; +} sh_debug_port_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_DEBUG_DATA" */ +/* II Debug Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_debug_data_u { + mmr_t sh_ii_debug_data_regval; + struct { + mmr_t ii_data : 32; + mmr_t reserved_0 : 32; + } sh_ii_debug_data_s; +} sh_ii_debug_data_u_t; +#else +typedef union sh_ii_debug_data_u { + mmr_t sh_ii_debug_data_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t ii_data : 32; + } sh_ii_debug_data_s; +} sh_ii_debug_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_II_WRAP_DEBUG_DATA" */ +/* SHub II Wrapper Debug Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ii_wrap_debug_data_u { + mmr_t sh_ii_wrap_debug_data_regval; + struct { + mmr_t ii_wrap_data : 32; + mmr_t reserved_0 : 32; + } sh_ii_wrap_debug_data_s; +} sh_ii_wrap_debug_data_u_t; +#else +typedef union sh_ii_wrap_debug_data_u { + mmr_t sh_ii_wrap_debug_data_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t ii_wrap_data : 32; + } sh_ii_wrap_debug_data_s; +} sh_ii_wrap_debug_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_LB_DEBUG_DATA" */ +/* SHub LB Debug Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_lb_debug_data_u { + mmr_t sh_lb_debug_data_regval; + struct { + mmr_t lb_data : 32; + mmr_t reserved_0 : 32; + } sh_lb_debug_data_s; +} sh_lb_debug_data_u_t; +#else +typedef union sh_lb_debug_data_u { + mmr_t sh_lb_debug_data_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t lb_data : 32; + } sh_lb_debug_data_s; +} sh_lb_debug_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DEBUG_DATA" */ +/* SHub MD Debug Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_debug_data_u { + mmr_t sh_md_debug_data_regval; + struct { + mmr_t md_data : 32; + mmr_t reserved_0 : 32; + } sh_md_debug_data_s; +} sh_md_debug_data_u_t; +#else +typedef union sh_md_debug_data_u { + mmr_t sh_md_debug_data_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t md_data : 32; + } sh_md_debug_data_s; +} sh_md_debug_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_DEBUG_DATA" */ +/* SHub PI Debug Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_debug_data_u { + mmr_t sh_pi_debug_data_regval; + struct { + mmr_t pi_data : 32; + mmr_t reserved_0 : 32; + } 
sh_pi_debug_data_s; +} sh_pi_debug_data_u_t; +#else +typedef union sh_pi_debug_data_u { + mmr_t sh_pi_debug_data_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t pi_data : 32; + } sh_pi_debug_data_s; +} sh_pi_debug_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_DEBUG_DATA" */ +/* SHub XN Debug Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_debug_data_u { + mmr_t sh_xn_debug_data_regval; + struct { + mmr_t xn_data : 32; + mmr_t reserved_0 : 32; + } sh_xn_debug_data_s; +} sh_xn_debug_data_u_t; +#else +typedef union sh_xn_debug_data_u { + mmr_t sh_xn_debug_data_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t xn_data : 32; + } sh_xn_debug_data_s; +} sh_xn_debug_data_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_ARMED_STATE" */ +/* Trigger sequencing facility arm state */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_armed_state_u { + mmr_t sh_tsf_armed_state_regval; + struct { + mmr_t state : 8; + mmr_t reserved_0 : 56; + } sh_tsf_armed_state_s; +} sh_tsf_armed_state_u_t; +#else +typedef union sh_tsf_armed_state_u { + mmr_t sh_tsf_armed_state_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t state : 8; + } sh_tsf_armed_state_s; +} sh_tsf_armed_state_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_COUNTER_VALUE" */ +/* Trigger sequencing facility counter value */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_counter_value_u { + mmr_t sh_tsf_counter_value_regval; + struct { + mmr_t count_32 : 32; + mmr_t count_16 : 16; + mmr_t count_8b : 8; + mmr_t count_8a : 8; + } sh_tsf_counter_value_s; +} sh_tsf_counter_value_u_t; +#else +typedef union sh_tsf_counter_value_u { + mmr_t sh_tsf_counter_value_regval; + struct { + mmr_t count_8a : 8; + mmr_t count_8b : 8; + mmr_t count_16 : 16; + mmr_t count_32 : 32; + } sh_tsf_counter_value_s; +} sh_tsf_counter_value_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_TSF_TRIGGERED_STATE" */ +/* Trigger sequencing facility triggered state */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_tsf_triggered_state_u { + mmr_t sh_tsf_triggered_state_regval; + struct { + mmr_t state : 8; + mmr_t reserved_0 : 56; + } sh_tsf_triggered_state_s; +} sh_tsf_triggered_state_u_t; +#else +typedef union sh_tsf_triggered_state_u { + mmr_t sh_tsf_triggered_state_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t state : 8; + } sh_tsf_triggered_state_s; +} sh_tsf_triggered_state_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_VEC_RDDATA" */ +/* Vector Reply Message Data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_vec_rddata_u { + mmr_t sh_vec_rddata_regval; + struct { + mmr_t data : 64; + } sh_vec_rddata_s; +} sh_vec_rddata_u_t; +#else +typedef union sh_vec_rddata_u { + mmr_t sh_vec_rddata_regval; + struct { + mmr_t data : 64; + } sh_vec_rddata_s; +} sh_vec_rddata_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_VEC_RETURN" */ +/* 
Vector Reply Message Return Route */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_vec_return_u { + mmr_t sh_vec_return_regval; + struct { + mmr_t route : 64; + } sh_vec_return_s; +} sh_vec_return_u_t; +#else +typedef union sh_vec_return_u { + mmr_t sh_vec_return_regval; + struct { + mmr_t route : 64; + } sh_vec_return_s; +} sh_vec_return_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_VEC_STATUS" */ +/* Vector Reply Message Status */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_vec_status_u { + mmr_t sh_vec_status_regval; + struct { + mmr_t type : 3; + mmr_t address : 32; + mmr_t pio_id : 11; + mmr_t source : 14; + mmr_t reserved_0 : 2; + mmr_t overrun : 1; + mmr_t status_valid : 1; + } sh_vec_status_s; +} sh_vec_status_u_t; +#else +typedef union sh_vec_status_u { + mmr_t sh_vec_status_regval; + struct { + mmr_t status_valid : 1; + mmr_t overrun : 1; + mmr_t reserved_0 : 2; + mmr_t source : 14; + mmr_t pio_id : 11; + mmr_t address : 32; + mmr_t type : 3; + } sh_vec_status_s; +} sh_vec_status_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT0_CONTROL" */ +/* Performance Counter 0 Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_count0_control_u { + mmr_t sh_performance_count0_control_regval; + struct { + mmr_t up_stimulus : 5; + mmr_t up_event : 1; + mmr_t up_polarity : 1; + mmr_t up_mode : 1; + mmr_t dn_stimulus : 5; + mmr_t dn_event : 1; + mmr_t dn_polarity : 1; + mmr_t dn_mode : 1; + mmr_t inc_enable : 1; + mmr_t dec_enable : 1; + mmr_t peak_det_enable : 1; + mmr_t reserved_0 : 45; + } sh_performance_count0_control_s; +} sh_performance_count0_control_u_t; +#else +typedef union sh_performance_count0_control_u { + mmr_t sh_performance_count0_control_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t peak_det_enable : 1; + mmr_t dec_enable : 1; + mmr_t inc_enable : 1; + mmr_t dn_mode : 1; + mmr_t dn_polarity : 1; + mmr_t dn_event : 1; + mmr_t dn_stimulus : 5; + mmr_t up_mode : 1; + mmr_t up_polarity : 1; + mmr_t up_event : 1; + mmr_t up_stimulus : 5; + } sh_performance_count0_control_s; +} sh_performance_count0_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT1_CONTROL" */ +/* Performance Counter 1 Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_count1_control_u { + mmr_t sh_performance_count1_control_regval; + struct { + mmr_t up_stimulus : 5; + mmr_t up_event : 1; + mmr_t up_polarity : 1; + mmr_t up_mode : 1; + mmr_t dn_stimulus : 5; + mmr_t dn_event : 1; + mmr_t dn_polarity : 1; + mmr_t dn_mode : 1; + mmr_t inc_enable : 1; + mmr_t dec_enable : 1; + mmr_t peak_det_enable : 1; + mmr_t reserved_0 : 45; + } sh_performance_count1_control_s; +} sh_performance_count1_control_u_t; +#else +typedef union sh_performance_count1_control_u { + mmr_t sh_performance_count1_control_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t peak_det_enable : 1; + mmr_t dec_enable : 1; + mmr_t inc_enable : 1; + mmr_t dn_mode : 1; + mmr_t dn_polarity : 1; + mmr_t dn_event : 1; + mmr_t dn_stimulus : 5; + mmr_t up_mode : 1; + mmr_t up_polarity : 1; + mmr_t up_event : 1; + 
mmr_t up_stimulus : 5; + } sh_performance_count1_control_s; +} sh_performance_count1_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT2_CONTROL" */ +/* Performance Counter 2 Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_count2_control_u { + mmr_t sh_performance_count2_control_regval; + struct { + mmr_t up_stimulus : 5; + mmr_t up_event : 1; + mmr_t up_polarity : 1; + mmr_t up_mode : 1; + mmr_t dn_stimulus : 5; + mmr_t dn_event : 1; + mmr_t dn_polarity : 1; + mmr_t dn_mode : 1; + mmr_t inc_enable : 1; + mmr_t dec_enable : 1; + mmr_t peak_det_enable : 1; + mmr_t reserved_0 : 45; + } sh_performance_count2_control_s; +} sh_performance_count2_control_u_t; +#else +typedef union sh_performance_count2_control_u { + mmr_t sh_performance_count2_control_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t peak_det_enable : 1; + mmr_t dec_enable : 1; + mmr_t inc_enable : 1; + mmr_t dn_mode : 1; + mmr_t dn_polarity : 1; + mmr_t dn_event : 1; + mmr_t dn_stimulus : 5; + mmr_t up_mode : 1; + mmr_t up_polarity : 1; + mmr_t up_event : 1; + mmr_t up_stimulus : 5; + } sh_performance_count2_control_s; +} sh_performance_count2_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT3_CONTROL" */ +/* Performance Counter 3 Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_count3_control_u { + mmr_t sh_performance_count3_control_regval; + struct { + mmr_t up_stimulus : 5; + mmr_t up_event : 1; + mmr_t up_polarity : 1; + mmr_t up_mode : 1; + mmr_t dn_stimulus : 5; + mmr_t dn_event : 1; + mmr_t dn_polarity : 1; + mmr_t dn_mode : 1; + mmr_t inc_enable : 1; + mmr_t dec_enable : 1; + mmr_t peak_det_enable : 1; + mmr_t reserved_0 : 45; + } sh_performance_count3_control_s; +} sh_performance_count3_control_u_t; +#else +typedef union sh_performance_count3_control_u { + mmr_t sh_performance_count3_control_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t peak_det_enable : 1; + mmr_t dec_enable : 1; + mmr_t inc_enable : 1; + mmr_t dn_mode : 1; + mmr_t dn_polarity : 1; + mmr_t dn_event : 1; + mmr_t dn_stimulus : 5; + mmr_t up_mode : 1; + mmr_t up_polarity : 1; + mmr_t up_event : 1; + mmr_t up_stimulus : 5; + } sh_performance_count3_control_s; +} sh_performance_count3_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT4_CONTROL" */ +/* Performance Counter 4 Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_count4_control_u { + mmr_t sh_performance_count4_control_regval; + struct { + mmr_t up_stimulus : 5; + mmr_t up_event : 1; + mmr_t up_polarity : 1; + mmr_t up_mode : 1; + mmr_t dn_stimulus : 5; + mmr_t dn_event : 1; + mmr_t dn_polarity : 1; + mmr_t dn_mode : 1; + mmr_t inc_enable : 1; + mmr_t dec_enable : 1; + mmr_t peak_det_enable : 1; + mmr_t reserved_0 : 45; + } sh_performance_count4_control_s; +} sh_performance_count4_control_u_t; +#else +typedef union sh_performance_count4_control_u { + mmr_t sh_performance_count4_control_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t peak_det_enable : 1; + mmr_t dec_enable : 1; + mmr_t inc_enable : 1; + mmr_t dn_mode : 1; + mmr_t dn_polarity : 1; + mmr_t dn_event : 1; + 
mmr_t dn_stimulus : 5; + mmr_t up_mode : 1; + mmr_t up_polarity : 1; + mmr_t up_event : 1; + mmr_t up_stimulus : 5; + } sh_performance_count4_control_s; +} sh_performance_count4_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT5_CONTROL" */ +/* Performance Counter 5 Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_count5_control_u { + mmr_t sh_performance_count5_control_regval; + struct { + mmr_t up_stimulus : 5; + mmr_t up_event : 1; + mmr_t up_polarity : 1; + mmr_t up_mode : 1; + mmr_t dn_stimulus : 5; + mmr_t dn_event : 1; + mmr_t dn_polarity : 1; + mmr_t dn_mode : 1; + mmr_t inc_enable : 1; + mmr_t dec_enable : 1; + mmr_t peak_det_enable : 1; + mmr_t reserved_0 : 45; + } sh_performance_count5_control_s; +} sh_performance_count5_control_u_t; +#else +typedef union sh_performance_count5_control_u { + mmr_t sh_performance_count5_control_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t peak_det_enable : 1; + mmr_t dec_enable : 1; + mmr_t inc_enable : 1; + mmr_t dn_mode : 1; + mmr_t dn_polarity : 1; + mmr_t dn_event : 1; + mmr_t dn_stimulus : 5; + mmr_t up_mode : 1; + mmr_t up_polarity : 1; + mmr_t up_event : 1; + mmr_t up_stimulus : 5; + } sh_performance_count5_control_s; +} sh_performance_count5_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT6_CONTROL" */ +/* Performance Counter 6 Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_count6_control_u { + mmr_t sh_performance_count6_control_regval; + struct { + mmr_t up_stimulus : 5; + mmr_t up_event : 1; + mmr_t up_polarity : 1; + mmr_t up_mode : 1; + mmr_t dn_stimulus : 5; + mmr_t dn_event : 1; + mmr_t dn_polarity : 1; + mmr_t dn_mode : 1; + mmr_t inc_enable : 1; + mmr_t dec_enable : 1; + mmr_t peak_det_enable : 1; + mmr_t reserved_0 : 45; + } sh_performance_count6_control_s; +} sh_performance_count6_control_u_t; +#else +typedef union sh_performance_count6_control_u { + mmr_t sh_performance_count6_control_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t peak_det_enable : 1; + mmr_t dec_enable : 1; + mmr_t inc_enable : 1; + mmr_t dn_mode : 1; + mmr_t dn_polarity : 1; + mmr_t dn_event : 1; + mmr_t dn_stimulus : 5; + mmr_t up_mode : 1; + mmr_t up_polarity : 1; + mmr_t up_event : 1; + mmr_t up_stimulus : 5; + } sh_performance_count6_control_s; +} sh_performance_count6_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNT7_CONTROL" */ +/* Performance Counter 7 Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_count7_control_u { + mmr_t sh_performance_count7_control_regval; + struct { + mmr_t up_stimulus : 5; + mmr_t up_event : 1; + mmr_t up_polarity : 1; + mmr_t up_mode : 1; + mmr_t dn_stimulus : 5; + mmr_t dn_event : 1; + mmr_t dn_polarity : 1; + mmr_t dn_mode : 1; + mmr_t inc_enable : 1; + mmr_t dec_enable : 1; + mmr_t peak_det_enable : 1; + mmr_t reserved_0 : 45; + } sh_performance_count7_control_s; +} sh_performance_count7_control_u_t; +#else +typedef union sh_performance_count7_control_u { + mmr_t sh_performance_count7_control_regval; + struct { + mmr_t reserved_0 : 45; + mmr_t peak_det_enable : 1; + mmr_t dec_enable : 1; 
+ mmr_t inc_enable : 1; + mmr_t dn_mode : 1; + mmr_t dn_polarity : 1; + mmr_t dn_event : 1; + mmr_t dn_stimulus : 5; + mmr_t up_mode : 1; + mmr_t up_polarity : 1; + mmr_t up_event : 1; + mmr_t up_stimulus : 5; + } sh_performance_count7_control_s; +} sh_performance_count7_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROFILE_DN_CONTROL" */ +/* Profile Counter Down Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_profile_dn_control_u { + mmr_t sh_profile_dn_control_regval; + struct { + mmr_t stimulus : 5; + mmr_t event : 1; + mmr_t polarity : 1; + mmr_t mode : 1; + mmr_t reserved_0 : 56; + } sh_profile_dn_control_s; +} sh_profile_dn_control_u_t; +#else +typedef union sh_profile_dn_control_u { + mmr_t sh_profile_dn_control_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t mode : 1; + mmr_t polarity : 1; + mmr_t event : 1; + mmr_t stimulus : 5; + } sh_profile_dn_control_s; +} sh_profile_dn_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROFILE_PEAK_CONTROL" */ +/* Profile Counter Peak Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_profile_peak_control_u { + mmr_t sh_profile_peak_control_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t stimulus : 1; + mmr_t reserved_1 : 1; + mmr_t event : 1; + mmr_t polarity : 1; + mmr_t reserved_2 : 57; + } sh_profile_peak_control_s; +} sh_profile_peak_control_u_t; +#else +typedef union sh_profile_peak_control_u { + mmr_t sh_profile_peak_control_regval; + struct { + mmr_t reserved_2 : 57; + mmr_t polarity : 1; + mmr_t event : 1; + mmr_t reserved_1 : 1; + mmr_t stimulus : 1; + mmr_t reserved_0 : 3; + } sh_profile_peak_control_s; +} sh_profile_peak_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROFILE_RANGE" */ +/* Profile Counter Range */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_profile_range_u { + mmr_t sh_profile_range_regval; + struct { + mmr_t range0 : 8; + mmr_t range1 : 8; + mmr_t range2 : 8; + mmr_t range3 : 8; + mmr_t range4 : 8; + mmr_t range5 : 8; + mmr_t range6 : 8; + mmr_t range7 : 8; + } sh_profile_range_s; +} sh_profile_range_u_t; +#else +typedef union sh_profile_range_u { + mmr_t sh_profile_range_regval; + struct { + mmr_t range7 : 8; + mmr_t range6 : 8; + mmr_t range5 : 8; + mmr_t range4 : 8; + mmr_t range3 : 8; + mmr_t range2 : 8; + mmr_t range1 : 8; + mmr_t range0 : 8; + } sh_profile_range_s; +} sh_profile_range_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROFILE_UP_CONTROL" */ +/* Profile Counter Up Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_profile_up_control_u { + mmr_t sh_profile_up_control_regval; + struct { + mmr_t stimulus : 5; + mmr_t event : 1; + mmr_t polarity : 1; + mmr_t mode : 1; + mmr_t reserved_0 : 56; + } sh_profile_up_control_s; +} sh_profile_up_control_u_t; +#else +typedef union sh_profile_up_control_u { + mmr_t sh_profile_up_control_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t mode : 1; + mmr_t polarity : 1; + mmr_t event : 1; + mmr_t stimulus : 5; + } sh_profile_up_control_s; +} sh_profile_up_control_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER0" */ +/* Performance Counter 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_counter0_u { + mmr_t sh_performance_counter0_regval; + struct { + mmr_t count : 32; + mmr_t reserved_0 : 32; + } sh_performance_counter0_s; +} sh_performance_counter0_u_t; +#else +typedef union sh_performance_counter0_u { + mmr_t sh_performance_counter0_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t count : 32; + } sh_performance_counter0_s; +} sh_performance_counter0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER1" */ +/* Performance Counter 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_counter1_u { + mmr_t sh_performance_counter1_regval; + struct { + mmr_t count : 32; + mmr_t reserved_0 : 32; + } sh_performance_counter1_s; +} sh_performance_counter1_u_t; +#else +typedef union sh_performance_counter1_u { + mmr_t sh_performance_counter1_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t count : 32; + } sh_performance_counter1_s; +} sh_performance_counter1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER2" */ +/* Performance Counter 2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_counter2_u { + mmr_t sh_performance_counter2_regval; + struct { + mmr_t count : 32; + mmr_t reserved_0 : 32; + } sh_performance_counter2_s; +} sh_performance_counter2_u_t; +#else +typedef union sh_performance_counter2_u { + mmr_t sh_performance_counter2_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t count : 32; + } sh_performance_counter2_s; +} sh_performance_counter2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER3" */ +/* Performance Counter 3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_counter3_u { + mmr_t sh_performance_counter3_regval; + struct { + mmr_t count : 32; + mmr_t reserved_0 : 32; + } sh_performance_counter3_s; +} sh_performance_counter3_u_t; +#else +typedef union sh_performance_counter3_u { + mmr_t sh_performance_counter3_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t count : 32; + } sh_performance_counter3_s; +} sh_performance_counter3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER4" */ +/* Performance Counter 4 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_counter4_u { + mmr_t sh_performance_counter4_regval; + struct { + mmr_t count : 32; + mmr_t reserved_0 : 32; + } sh_performance_counter4_s; +} sh_performance_counter4_u_t; +#else +typedef union sh_performance_counter4_u { + mmr_t sh_performance_counter4_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t count : 32; + } sh_performance_counter4_s; +} sh_performance_counter4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER5" */ +/* Performance Counter 5 */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_counter5_u { + mmr_t sh_performance_counter5_regval; + struct { + mmr_t count : 32; + mmr_t reserved_0 : 32; + } sh_performance_counter5_s; +} sh_performance_counter5_u_t; +#else +typedef union sh_performance_counter5_u { + mmr_t sh_performance_counter5_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t count : 32; + } sh_performance_counter5_s; +} sh_performance_counter5_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER6" */ +/* Performance Counter 6 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_counter6_u { + mmr_t sh_performance_counter6_regval; + struct { + mmr_t count : 32; + mmr_t reserved_0 : 32; + } sh_performance_counter6_s; +} sh_performance_counter6_u_t; +#else +typedef union sh_performance_counter6_u { + mmr_t sh_performance_counter6_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t count : 32; + } sh_performance_counter6_s; +} sh_performance_counter6_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PERFORMANCE_COUNTER7" */ +/* Performance Counter 7 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_performance_counter7_u { + mmr_t sh_performance_counter7_regval; + struct { + mmr_t count : 32; + mmr_t reserved_0 : 32; + } sh_performance_counter7_s; +} sh_performance_counter7_u_t; +#else +typedef union sh_performance_counter7_u { + mmr_t sh_performance_counter7_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t count : 32; + } sh_performance_counter7_s; +} sh_performance_counter7_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROFILE_COUNTER" */ +/* Profile Counter */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_profile_counter_u { + mmr_t sh_profile_counter_regval; + struct { + mmr_t counter : 8; + mmr_t reserved_0 : 56; + } sh_profile_counter_s; +} sh_profile_counter_u_t; +#else +typedef union sh_profile_counter_u { + mmr_t sh_profile_counter_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t counter : 8; + } sh_profile_counter_s; +} sh_profile_counter_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PROFILE_PEAK" */ +/* Profile Peak Counter */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_profile_peak_u { + mmr_t sh_profile_peak_regval; + struct { + mmr_t counter : 8; + mmr_t reserved_0 : 56; + } sh_profile_peak_s; +} sh_profile_peak_u_t; +#else +typedef union sh_profile_peak_u { + mmr_t sh_profile_peak_regval; + struct { + mmr_t reserved_0 : 56; + mmr_t counter : 8; + } sh_profile_peak_s; +} sh_profile_peak_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PTC_0" */ +/* Purge Translation Cache Message Configuration Information */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ptc_0_u { + mmr_t sh_ptc_0_regval; + struct { + mmr_t a : 1; + mmr_t reserved_0 : 1; + mmr_t ps : 6; + mmr_t rid : 24; + mmr_t reserved_1 : 31; + mmr_t start : 1; + } sh_ptc_0_s; +} sh_ptc_0_u_t;
+#else +typedef union sh_ptc_0_u { + mmr_t sh_ptc_0_regval; + struct { + mmr_t start : 1; + mmr_t reserved_1 : 31; + mmr_t rid : 24; + mmr_t ps : 6; + mmr_t reserved_0 : 1; + mmr_t a : 1; + } sh_ptc_0_s; +} sh_ptc_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PTC_1" */ +/* Purge Translation Cache Message Configuration Information */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ptc_1_u { + mmr_t sh_ptc_1_regval; + struct { + mmr_t reserved_0 : 12; + mmr_t vpn : 49; + mmr_t reserved_1 : 2; + mmr_t start : 1; + } sh_ptc_1_s; +} sh_ptc_1_u_t; +#else +typedef union sh_ptc_1_u { + mmr_t sh_ptc_1_regval; + struct { + mmr_t start : 1; + mmr_t reserved_1 : 2; + mmr_t vpn : 49; + mmr_t reserved_0 : 12; + } sh_ptc_1_s; +} sh_ptc_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PTC_PARMS" */ +/* PTC Time-out parameters */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_ptc_parms_u { + mmr_t sh_ptc_parms_regval; + struct { + mmr_t ptc_to_wrap : 24; + mmr_t ptc_to_val : 12; + mmr_t reserved_0 : 28; + } sh_ptc_parms_s; +} sh_ptc_parms_u_t; +#else +typedef union sh_ptc_parms_u { + mmr_t sh_ptc_parms_regval; + struct { + mmr_t reserved_0 : 28; + mmr_t ptc_to_val : 12; + mmr_t ptc_to_wrap : 24; + } sh_ptc_parms_s; +} sh_ptc_parms_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_INT_CMPA" */ +/* RTC Compare Value for Processor A */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_int_cmpa_u { + mmr_t sh_int_cmpa_regval; + struct { + mmr_t real_time_cmpa : 55; + mmr_t reserved_0 : 9; + } sh_int_cmpa_s; +} sh_int_cmpa_u_t; +#else +typedef union sh_int_cmpa_u { + mmr_t sh_int_cmpa_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t real_time_cmpa : 55; + } sh_int_cmpa_s; +} sh_int_cmpa_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_INT_CMPB" */ +/* RTC Compare Value for Processor B */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_int_cmpb_u { + mmr_t sh_int_cmpb_regval; + struct { + mmr_t real_time_cmpb : 55; + mmr_t reserved_0 : 9; + } sh_int_cmpb_s; +} sh_int_cmpb_u_t; +#else +typedef union sh_int_cmpb_u { + mmr_t sh_int_cmpb_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t real_time_cmpb : 55; + } sh_int_cmpb_s; +} sh_int_cmpb_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_INT_CMPC" */ +/* RTC Compare Value for Processor C */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_int_cmpc_u { + mmr_t sh_int_cmpc_regval; + struct { + mmr_t real_time_cmpc : 55; + mmr_t reserved_0 : 9; + } sh_int_cmpc_s; +} sh_int_cmpc_u_t; +#else +typedef union sh_int_cmpc_u { + mmr_t sh_int_cmpc_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t real_time_cmpc : 55; + } sh_int_cmpc_s; +} sh_int_cmpc_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_INT_CMPD" */ +/* RTC Compare Value for Processor D */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_int_cmpd_u
{ + mmr_t sh_int_cmpd_regval; + struct { + mmr_t real_time_cmpd : 55; + mmr_t reserved_0 : 9; + } sh_int_cmpd_s; +} sh_int_cmpd_u_t; +#else +typedef union sh_int_cmpd_u { + mmr_t sh_int_cmpd_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t real_time_cmpd : 55; + } sh_int_cmpd_s; +} sh_int_cmpd_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_INT_PROF" */ +/* Profile Compare Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_int_prof_u { + mmr_t sh_int_prof_regval; + struct { + mmr_t profile_compare : 32; + mmr_t reserved_0 : 32; + } sh_int_prof_s; +} sh_int_prof_u_t; +#else +typedef union sh_int_prof_u { + mmr_t sh_int_prof_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t profile_compare : 32; + } sh_int_prof_s; +} sh_int_prof_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_RTC" */ +/* Real-time Clock */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_rtc_u { + mmr_t sh_rtc_regval; + struct { + mmr_t real_time_clock : 55; + mmr_t reserved_0 : 9; + } sh_rtc_s; +} sh_rtc_u_t; +#else +typedef union sh_rtc_u { + mmr_t sh_rtc_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t real_time_clock : 55; + } sh_rtc_s; +} sh_rtc_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SCRATCH0" */ +/* Scratch Register 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_scratch0_u { + mmr_t sh_scratch0_regval; + struct { + mmr_t scratch0 : 64; + } sh_scratch0_s; +} sh_scratch0_u_t; +#else +typedef union sh_scratch0_u { + mmr_t sh_scratch0_regval; + struct { + mmr_t scratch0 : 64; + } sh_scratch0_s; +} sh_scratch0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SCRATCH1" */ +/* Scratch Register 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_scratch1_u { + mmr_t sh_scratch1_regval; + struct { + mmr_t scratch1 : 64; + } sh_scratch1_s; +} sh_scratch1_u_t; +#else +typedef union sh_scratch1_u { + mmr_t sh_scratch1_regval; + struct { + mmr_t scratch1 : 64; + } sh_scratch1_s; +} sh_scratch1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SCRATCH2" */ +/* Scratch Register 2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_scratch2_u { + mmr_t sh_scratch2_regval; + struct { + mmr_t scratch2 : 64; + } sh_scratch2_s; +} sh_scratch2_u_t; +#else +typedef union sh_scratch2_u { + mmr_t sh_scratch2_regval; + struct { + mmr_t scratch2 : 64; + } sh_scratch2_s; +} sh_scratch2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_SCRATCH3" */ +/* Scratch Register 3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_scratch3_u { + mmr_t sh_scratch3_regval; + struct { + mmr_t scratch3 : 1; + mmr_t reserved_0 : 63; + } sh_scratch3_s; +} sh_scratch3_u_t; +#else +typedef union sh_scratch3_u { + mmr_t sh_scratch3_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t scratch3 : 1; + } sh_scratch3_s; +} sh_scratch3_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_SCRATCH4" */ +/* Scratch Register 4 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_scratch4_u { + mmr_t sh_scratch4_regval; + struct { + mmr_t scratch4 : 1; + mmr_t reserved_0 : 63; + } sh_scratch4_s; +} sh_scratch4_u_t; +#else +typedef union sh_scratch4_u { + mmr_t sh_scratch4_regval; + struct { + mmr_t reserved_0 : 63; + mmr_t scratch4 : 1; + } sh_scratch4_s; +} sh_scratch4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_CRB_MESSAGE_CONTROL" */ +/* Coherent Request Buffer Message Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_crb_message_control_u { + mmr_t sh_crb_message_control_regval; + struct { + mmr_t system_coherence_enable : 1; + mmr_t local_speculative_message_enable : 1; + mmr_t remote_speculative_message_enable : 1; + mmr_t message_color : 1; + mmr_t message_color_enable : 1; + mmr_t rrb_attribute_mismatch_fsb_enable : 1; + mmr_t wrb_attribute_mismatch_fsb_enable : 1; + mmr_t irb_attribute_mismatch_fsb_enable : 1; + mmr_t rrb_attribute_mismatch_xb_enable : 1; + mmr_t wrb_attribute_mismatch_xb_enable : 1; + mmr_t suppress_bogus_writes : 1; + mmr_t enable_ivack_consolidation : 1; + mmr_t reserved_0 : 20; + mmr_t ivack_stall_count : 16; + mmr_t ivack_throttle_control : 16; + } sh_crb_message_control_s; +} sh_crb_message_control_u_t; +#else +typedef union sh_crb_message_control_u { + mmr_t sh_crb_message_control_regval; + struct { + mmr_t ivack_throttle_control : 16; + mmr_t ivack_stall_count : 16; + mmr_t reserved_0 : 20; + mmr_t enable_ivack_consolidation : 1; + mmr_t suppress_bogus_writes : 1; + mmr_t wrb_attribute_mismatch_xb_enable : 1; + mmr_t rrb_attribute_mismatch_xb_enable : 1; + mmr_t irb_attribute_mismatch_fsb_enable : 1; + mmr_t wrb_attribute_mismatch_fsb_enable : 1; + mmr_t rrb_attribute_mismatch_fsb_enable : 1; + mmr_t message_color_enable : 1; + mmr_t message_color : 1; + mmr_t remote_speculative_message_enable : 1; + mmr_t local_speculative_message_enable : 1; + mmr_t system_coherence_enable : 1; + } sh_crb_message_control_s; +} sh_crb_message_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_CRB_NACK_LIMIT" */ +/* CRB Nack Limit */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_crb_nack_limit_u { + mmr_t sh_crb_nack_limit_regval; + struct { + mmr_t limit : 12; + mmr_t pri_freq : 4; + mmr_t reserved_0 : 47; + mmr_t enable : 1; + } sh_crb_nack_limit_s; +} sh_crb_nack_limit_u_t; +#else +typedef union sh_crb_nack_limit_u { + mmr_t sh_crb_nack_limit_regval; + struct { + mmr_t enable : 1; + mmr_t reserved_0 : 47; + mmr_t pri_freq : 4; + mmr_t limit : 12; + } sh_crb_nack_limit_s; +} sh_crb_nack_limit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_CRB_TIMEOUT_PRESCALE" */ +/* Coherent Request Buffer Timeout Prescale */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_crb_timeout_prescale_u { + mmr_t sh_crb_timeout_prescale_regval; + struct { + mmr_t scaling_factor : 32; + mmr_t reserved_0 : 32; + } sh_crb_timeout_prescale_s; +} sh_crb_timeout_prescale_u_t; +#else +typedef union sh_crb_timeout_prescale_u { + 
mmr_t sh_crb_timeout_prescale_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t scaling_factor : 32; + } sh_crb_timeout_prescale_s; +} sh_crb_timeout_prescale_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_CRB_TIMEOUT_SKID" */ +/* Coherent Request Buffer Timeout Skid Limit */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_crb_timeout_skid_u { + mmr_t sh_crb_timeout_skid_regval; + struct { + mmr_t skid : 6; + mmr_t reserved_0 : 57; + mmr_t reset_skid_count : 1; + } sh_crb_timeout_skid_s; +} sh_crb_timeout_skid_u_t; +#else +typedef union sh_crb_timeout_skid_u { + mmr_t sh_crb_timeout_skid_regval; + struct { + mmr_t reset_skid_count : 1; + mmr_t reserved_0 : 57; + mmr_t skid : 6; + } sh_crb_timeout_skid_s; +} sh_crb_timeout_skid_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEMORY_WRITE_STATUS_0" */ +/* Memory Write Status for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_memory_write_status_0_u { + mmr_t sh_memory_write_status_0_regval; + struct { + mmr_t pending_write_count : 6; + mmr_t reserved_0 : 58; + } sh_memory_write_status_0_s; +} sh_memory_write_status_0_u_t; +#else +typedef union sh_memory_write_status_0_u { + mmr_t sh_memory_write_status_0_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t pending_write_count : 6; + } sh_memory_write_status_0_s; +} sh_memory_write_status_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEMORY_WRITE_STATUS_1" */ +/* Memory Write Status for CPU 1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_memory_write_status_1_u { + mmr_t sh_memory_write_status_1_regval; + struct { + mmr_t pending_write_count : 6; + mmr_t reserved_0 : 58; + } sh_memory_write_status_1_s; +} sh_memory_write_status_1_u_t; +#else +typedef union sh_memory_write_status_1_u { + mmr_t sh_memory_write_status_1_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t pending_write_count : 6; + } sh_memory_write_status_1_s; +} sh_memory_write_status_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PIO_WRITE_STATUS_0" */ +/* PIO Write Status for CPU 0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pio_write_status_0_u { + mmr_t sh_pio_write_status_0_regval; + struct { + mmr_t multi_write_error : 1; + mmr_t write_deadlock : 1; + mmr_t write_error : 1; + mmr_t write_error_address : 47; + mmr_t reserved_0 : 6; + mmr_t pending_write_count : 6; + mmr_t reserved_1 : 1; + mmr_t writes_ok : 1; + } sh_pio_write_status_0_s; +} sh_pio_write_status_0_u_t; +#else +typedef union sh_pio_write_status_0_u { + mmr_t sh_pio_write_status_0_regval; + struct { + mmr_t writes_ok : 1; + mmr_t reserved_1 : 1; + mmr_t pending_write_count : 6; + mmr_t reserved_0 : 6; + mmr_t write_error_address : 47; + mmr_t write_error : 1; + mmr_t write_deadlock : 1; + mmr_t multi_write_error : 1; + } sh_pio_write_status_0_s; +} sh_pio_write_status_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PIO_WRITE_STATUS_1" */ +/* PIO Write Status for CPU 1 */ +/* ==================================================================== */ + +#ifdef 
LITTLE_ENDIAN +typedef union sh_pio_write_status_1_u { + mmr_t sh_pio_write_status_1_regval; + struct { + mmr_t multi_write_error : 1; + mmr_t write_deadlock : 1; + mmr_t write_error : 1; + mmr_t write_error_address : 47; + mmr_t reserved_0 : 6; + mmr_t pending_write_count : 6; + mmr_t reserved_1 : 1; + mmr_t writes_ok : 1; + } sh_pio_write_status_1_s; +} sh_pio_write_status_1_u_t; +#else +typedef union sh_pio_write_status_1_u { + mmr_t sh_pio_write_status_1_regval; + struct { + mmr_t writes_ok : 1; + mmr_t reserved_1 : 1; + mmr_t pending_write_count : 6; + mmr_t reserved_0 : 6; + mmr_t write_error_address : 47; + mmr_t write_error : 1; + mmr_t write_deadlock : 1; + mmr_t multi_write_error : 1; + } sh_pio_write_status_1_s; +} sh_pio_write_status_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEMORY_WRITE_STATUS_NON_USER_0" */ +/* Memory Write Status for CPU 0. OS access only */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_memory_write_status_non_user_0_u { + mmr_t sh_memory_write_status_non_user_0_regval; + struct { + mmr_t pending_write_count : 6; + mmr_t reserved_0 : 57; + mmr_t clear : 1; + } sh_memory_write_status_non_user_0_s; +} sh_memory_write_status_non_user_0_u_t; +#else +typedef union sh_memory_write_status_non_user_0_u { + mmr_t sh_memory_write_status_non_user_0_regval; + struct { + mmr_t clear : 1; + mmr_t reserved_0 : 57; + mmr_t pending_write_count : 6; + } sh_memory_write_status_non_user_0_s; +} sh_memory_write_status_non_user_0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEMORY_WRITE_STATUS_NON_USER_1" */ +/* Memory Write Status for CPU 1. 
OS access only */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_memory_write_status_non_user_1_u { + mmr_t sh_memory_write_status_non_user_1_regval; + struct { + mmr_t pending_write_count : 6; + mmr_t reserved_0 : 57; + mmr_t clear : 1; + } sh_memory_write_status_non_user_1_s; +} sh_memory_write_status_non_user_1_u_t; +#else +typedef union sh_memory_write_status_non_user_1_u { + mmr_t sh_memory_write_status_non_user_1_regval; + struct { + mmr_t clear : 1; + mmr_t reserved_0 : 57; + mmr_t pending_write_count : 6; + } sh_memory_write_status_non_user_1_s; +} sh_memory_write_status_non_user_1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MMRBIST_ERR" */ +/* Error capture for bist read errors */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mmrbist_err_u { + mmr_t sh_mmrbist_err_regval; + struct { + mmr_t addr : 33; + mmr_t reserved_0 : 3; + mmr_t detected : 1; + mmr_t multiple_detected : 1; + mmr_t cancelled : 1; + mmr_t reserved_1 : 25; + } sh_mmrbist_err_s; +} sh_mmrbist_err_u_t; +#else +typedef union sh_mmrbist_err_u { + mmr_t sh_mmrbist_err_regval; + struct { + mmr_t reserved_1 : 25; + mmr_t cancelled : 1; + mmr_t multiple_detected : 1; + mmr_t detected : 1; + mmr_t reserved_0 : 3; + mmr_t addr : 33; + } sh_mmrbist_err_s; +} sh_mmrbist_err_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MISC_ERR_HDR_LOWER" */ +/* Header capture register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_misc_err_hdr_lower_u { + mmr_t sh_misc_err_hdr_lower_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t addr : 33; + mmr_t cmd : 8; + mmr_t src : 14; + mmr_t reserved_1 : 2; + mmr_t write : 1; + mmr_t reserved_2 : 2; + mmr_t valid : 1; + } sh_misc_err_hdr_lower_s; +} sh_misc_err_hdr_lower_u_t; +#else +typedef union sh_misc_err_hdr_lower_u { + mmr_t sh_misc_err_hdr_lower_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_2 : 2; + mmr_t write : 1; + mmr_t reserved_1 : 2; + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t reserved_0 : 3; + } sh_misc_err_hdr_lower_s; +} sh_misc_err_hdr_lower_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MISC_ERR_HDR_UPPER" */ +/* Error header capture packet and protocol errors */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_misc_err_hdr_upper_u { + mmr_t sh_misc_err_hdr_upper_regval; + struct { + mmr_t dir_protocol : 1; + mmr_t illegal_cmd : 1; + mmr_t nonexist_addr : 1; + mmr_t rmw_uc : 1; + mmr_t rmw_cor : 1; + mmr_t dir_acc : 1; + mmr_t pi_pkt_size : 1; + mmr_t xn_pkt_size : 1; + mmr_t reserved_0 : 12; + mmr_t echo : 9; + mmr_t reserved_1 : 35; + } sh_misc_err_hdr_upper_s; +} sh_misc_err_hdr_upper_u_t; +#else +typedef union sh_misc_err_hdr_upper_u { + mmr_t sh_misc_err_hdr_upper_regval; + struct { + mmr_t reserved_1 : 35; + mmr_t echo : 9; + mmr_t reserved_0 : 12; + mmr_t xn_pkt_size : 1; + mmr_t pi_pkt_size : 1; + mmr_t dir_acc : 1; + mmr_t rmw_cor : 1; + mmr_t rmw_uc : 1; + mmr_t nonexist_addr : 1; + mmr_t illegal_cmd : 1; + mmr_t dir_protocol : 1; + } sh_misc_err_hdr_upper_s; +} sh_misc_err_hdr_upper_u_t; +#endif + +/* ==================================================================== */ +/* 
Register "SH_DIR_UC_ERR_HDR_LOWER" */ +/* Header capture register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_dir_uc_err_hdr_lower_u { + mmr_t sh_dir_uc_err_hdr_lower_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t addr : 33; + mmr_t cmd : 8; + mmr_t src : 14; + mmr_t reserved_1 : 2; + mmr_t write : 1; + mmr_t reserved_2 : 2; + mmr_t valid : 1; + } sh_dir_uc_err_hdr_lower_s; +} sh_dir_uc_err_hdr_lower_u_t; +#else +typedef union sh_dir_uc_err_hdr_lower_u { + mmr_t sh_dir_uc_err_hdr_lower_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_2 : 2; + mmr_t write : 1; + mmr_t reserved_1 : 2; + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t reserved_0 : 3; + } sh_dir_uc_err_hdr_lower_s; +} sh_dir_uc_err_hdr_lower_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIR_UC_ERR_HDR_UPPER" */ +/* Error header capture packet and protocol errors */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_dir_uc_err_hdr_upper_u { + mmr_t sh_dir_uc_err_hdr_upper_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t dir_uc : 1; + mmr_t reserved_1 : 16; + mmr_t echo : 9; + mmr_t reserved_2 : 35; + } sh_dir_uc_err_hdr_upper_s; +} sh_dir_uc_err_hdr_upper_u_t; +#else +typedef union sh_dir_uc_err_hdr_upper_u { + mmr_t sh_dir_uc_err_hdr_upper_regval; + struct { + mmr_t reserved_2 : 35; + mmr_t echo : 9; + mmr_t reserved_1 : 16; + mmr_t dir_uc : 1; + mmr_t reserved_0 : 3; + } sh_dir_uc_err_hdr_upper_s; +} sh_dir_uc_err_hdr_upper_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIR_COR_ERR_HDR_LOWER" */ +/* Header capture register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_dir_cor_err_hdr_lower_u { + mmr_t sh_dir_cor_err_hdr_lower_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t addr : 33; + mmr_t cmd : 8; + mmr_t src : 14; + mmr_t reserved_1 : 2; + mmr_t write : 1; + mmr_t reserved_2 : 2; + mmr_t valid : 1; + } sh_dir_cor_err_hdr_lower_s; +} sh_dir_cor_err_hdr_lower_u_t; +#else +typedef union sh_dir_cor_err_hdr_lower_u { + mmr_t sh_dir_cor_err_hdr_lower_regval; + struct { + mmr_t valid : 1; + mmr_t reserved_2 : 2; + mmr_t write : 1; + mmr_t reserved_1 : 2; + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t reserved_0 : 3; + } sh_dir_cor_err_hdr_lower_s; +} sh_dir_cor_err_hdr_lower_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_DIR_COR_ERR_HDR_UPPER" */ +/* Error header capture packet and protocol errors */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_dir_cor_err_hdr_upper_u { + mmr_t sh_dir_cor_err_hdr_upper_regval; + struct { + mmr_t reserved_0 : 8; + mmr_t dir_cor : 1; + mmr_t reserved_1 : 11; + mmr_t echo : 9; + mmr_t reserved_2 : 35; + } sh_dir_cor_err_hdr_upper_s; +} sh_dir_cor_err_hdr_upper_u_t; +#else +typedef union sh_dir_cor_err_hdr_upper_u { + mmr_t sh_dir_cor_err_hdr_upper_regval; + struct { + mmr_t reserved_2 : 35; + mmr_t echo : 9; + mmr_t reserved_1 : 11; + mmr_t dir_cor : 1; + mmr_t reserved_0 : 8; + } sh_dir_cor_err_hdr_upper_s; +} sh_dir_cor_err_hdr_upper_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEM_ERROR_SUMMARY" */ +/* Memory error flags */ 
+/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mem_error_summary_u { + mmr_t sh_mem_error_summary_regval; + struct { + mmr_t illegal_cmd : 1; + mmr_t nonexist_addr : 1; + mmr_t dqlp_dir_perr : 1; + mmr_t dqrp_dir_perr : 1; + mmr_t dqlp_dir_uc : 1; + mmr_t dqlp_dir_cor : 1; + mmr_t dqrp_dir_uc : 1; + mmr_t dqrp_dir_cor : 1; + mmr_t acx_int_hw : 1; + mmr_t acy_int_hw : 1; + mmr_t dir_acc : 1; + mmr_t reserved_0 : 1; + mmr_t dqlp_int_uc : 1; + mmr_t dqlp_int_cor : 1; + mmr_t dqlp_int_hw : 1; + mmr_t reserved_1 : 1; + mmr_t dqls_int_uc : 1; + mmr_t dqls_int_cor : 1; + mmr_t dqls_int_hw : 1; + mmr_t reserved_2 : 1; + mmr_t dqrp_int_uc : 1; + mmr_t dqrp_int_cor : 1; + mmr_t dqrp_int_hw : 1; + mmr_t reserved_3 : 1; + mmr_t dqrs_int_uc : 1; + mmr_t dqrs_int_cor : 1; + mmr_t dqrs_int_hw : 1; + mmr_t reserved_4 : 1; + mmr_t pi_reply_overflow : 1; + mmr_t xn_reply_overflow : 1; + mmr_t pi_request_overflow : 1; + mmr_t xn_request_overflow : 1; + mmr_t red_black_err_timeout : 1; + mmr_t pi_pkt_size : 1; + mmr_t xn_pkt_size : 1; + mmr_t reserved_5 : 29; + } sh_mem_error_summary_s; +} sh_mem_error_summary_u_t; +#else +typedef union sh_mem_error_summary_u { + mmr_t sh_mem_error_summary_regval; + struct { + mmr_t reserved_5 : 29; + mmr_t xn_pkt_size : 1; + mmr_t pi_pkt_size : 1; + mmr_t red_black_err_timeout : 1; + mmr_t xn_request_overflow : 1; + mmr_t pi_request_overflow : 1; + mmr_t xn_reply_overflow : 1; + mmr_t pi_reply_overflow : 1; + mmr_t reserved_4 : 1; + mmr_t dqrs_int_hw : 1; + mmr_t dqrs_int_cor : 1; + mmr_t dqrs_int_uc : 1; + mmr_t reserved_3 : 1; + mmr_t dqrp_int_hw : 1; + mmr_t dqrp_int_cor : 1; + mmr_t dqrp_int_uc : 1; + mmr_t reserved_2 : 1; + mmr_t dqls_int_hw : 1; + mmr_t dqls_int_cor : 1; + mmr_t dqls_int_uc : 1; + mmr_t reserved_1 : 1; + mmr_t dqlp_int_hw : 1; + mmr_t dqlp_int_cor : 1; + mmr_t dqlp_int_uc : 1; + mmr_t reserved_0 : 1; + mmr_t dir_acc : 1; + mmr_t acy_int_hw : 1; + mmr_t acx_int_hw : 1; + mmr_t dqrp_dir_cor : 1; + mmr_t dqrp_dir_uc : 1; + mmr_t dqlp_dir_cor : 1; + mmr_t dqlp_dir_uc : 1; + mmr_t dqrp_dir_perr : 1; + mmr_t dqlp_dir_perr : 1; + mmr_t nonexist_addr : 1; + mmr_t illegal_cmd : 1; + } sh_mem_error_summary_s; +} sh_mem_error_summary_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEM_ERROR_OVERFLOW" */ +/* Memory error flags */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mem_error_overflow_u { + mmr_t sh_mem_error_overflow_regval; + struct { + mmr_t illegal_cmd : 1; + mmr_t nonexist_addr : 1; + mmr_t dqlp_dir_perr : 1; + mmr_t dqrp_dir_perr : 1; + mmr_t dqlp_dir_uc : 1; + mmr_t dqlp_dir_cor : 1; + mmr_t dqrp_dir_uc : 1; + mmr_t dqrp_dir_cor : 1; + mmr_t acx_int_hw : 1; + mmr_t acy_int_hw : 1; + mmr_t dir_acc : 1; + mmr_t reserved_0 : 1; + mmr_t dqlp_int_uc : 1; + mmr_t dqlp_int_cor : 1; + mmr_t dqlp_int_hw : 1; + mmr_t reserved_1 : 1; + mmr_t dqls_int_uc : 1; + mmr_t dqls_int_cor : 1; + mmr_t dqls_int_hw : 1; + mmr_t reserved_2 : 1; + mmr_t dqrp_int_uc : 1; + mmr_t dqrp_int_cor : 1; + mmr_t dqrp_int_hw : 1; + mmr_t reserved_3 : 1; + mmr_t dqrs_int_uc : 1; + mmr_t dqrs_int_cor : 1; + mmr_t dqrs_int_hw : 1; + mmr_t reserved_4 : 1; + mmr_t pi_reply_overflow : 1; + mmr_t xn_reply_overflow : 1; + mmr_t pi_request_overflow : 1; + mmr_t xn_request_overflow : 1; + mmr_t red_black_err_timeout : 1; + mmr_t pi_pkt_size : 1; + mmr_t xn_pkt_size : 1; + mmr_t reserved_5 : 
29; + } sh_mem_error_overflow_s; +} sh_mem_error_overflow_u_t; +#else +typedef union sh_mem_error_overflow_u { + mmr_t sh_mem_error_overflow_regval; + struct { + mmr_t reserved_5 : 29; + mmr_t xn_pkt_size : 1; + mmr_t pi_pkt_size : 1; + mmr_t red_black_err_timeout : 1; + mmr_t xn_request_overflow : 1; + mmr_t pi_request_overflow : 1; + mmr_t xn_reply_overflow : 1; + mmr_t pi_reply_overflow : 1; + mmr_t reserved_4 : 1; + mmr_t dqrs_int_hw : 1; + mmr_t dqrs_int_cor : 1; + mmr_t dqrs_int_uc : 1; + mmr_t reserved_3 : 1; + mmr_t dqrp_int_hw : 1; + mmr_t dqrp_int_cor : 1; + mmr_t dqrp_int_uc : 1; + mmr_t reserved_2 : 1; + mmr_t dqls_int_hw : 1; + mmr_t dqls_int_cor : 1; + mmr_t dqls_int_uc : 1; + mmr_t reserved_1 : 1; + mmr_t dqlp_int_hw : 1; + mmr_t dqlp_int_cor : 1; + mmr_t dqlp_int_uc : 1; + mmr_t reserved_0 : 1; + mmr_t dir_acc : 1; + mmr_t acy_int_hw : 1; + mmr_t acx_int_hw : 1; + mmr_t dqrp_dir_cor : 1; + mmr_t dqrp_dir_uc : 1; + mmr_t dqlp_dir_cor : 1; + mmr_t dqlp_dir_uc : 1; + mmr_t dqrp_dir_perr : 1; + mmr_t dqlp_dir_perr : 1; + mmr_t nonexist_addr : 1; + mmr_t illegal_cmd : 1; + } sh_mem_error_overflow_s; +} sh_mem_error_overflow_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEM_ERROR_MASK" */ +/* Memory error flags */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mem_error_mask_u { + mmr_t sh_mem_error_mask_regval; + struct { + mmr_t illegal_cmd : 1; + mmr_t nonexist_addr : 1; + mmr_t dqlp_dir_perr : 1; + mmr_t dqrp_dir_perr : 1; + mmr_t dqlp_dir_uc : 1; + mmr_t dqlp_dir_cor : 1; + mmr_t dqrp_dir_uc : 1; + mmr_t dqrp_dir_cor : 1; + mmr_t acx_int_hw : 1; + mmr_t acy_int_hw : 1; + mmr_t dir_acc : 1; + mmr_t reserved_0 : 1; + mmr_t dqlp_int_uc : 1; + mmr_t dqlp_int_cor : 1; + mmr_t dqlp_int_hw : 1; + mmr_t reserved_1 : 1; + mmr_t dqls_int_uc : 1; + mmr_t dqls_int_cor : 1; + mmr_t dqls_int_hw : 1; + mmr_t reserved_2 : 1; + mmr_t dqrp_int_uc : 1; + mmr_t dqrp_int_cor : 1; + mmr_t dqrp_int_hw : 1; + mmr_t reserved_3 : 1; + mmr_t dqrs_int_uc : 1; + mmr_t dqrs_int_cor : 1; + mmr_t dqrs_int_hw : 1; + mmr_t reserved_4 : 1; + mmr_t pi_reply_overflow : 1; + mmr_t xn_reply_overflow : 1; + mmr_t pi_request_overflow : 1; + mmr_t xn_request_overflow : 1; + mmr_t red_black_err_timeout : 1; + mmr_t pi_pkt_size : 1; + mmr_t xn_pkt_size : 1; + mmr_t reserved_5 : 29; + } sh_mem_error_mask_s; +} sh_mem_error_mask_u_t; +#else +typedef union sh_mem_error_mask_u { + mmr_t sh_mem_error_mask_regval; + struct { + mmr_t reserved_5 : 29; + mmr_t xn_pkt_size : 1; + mmr_t pi_pkt_size : 1; + mmr_t red_black_err_timeout : 1; + mmr_t xn_request_overflow : 1; + mmr_t pi_request_overflow : 1; + mmr_t xn_reply_overflow : 1; + mmr_t pi_reply_overflow : 1; + mmr_t reserved_4 : 1; + mmr_t dqrs_int_hw : 1; + mmr_t dqrs_int_cor : 1; + mmr_t dqrs_int_uc : 1; + mmr_t reserved_3 : 1; + mmr_t dqrp_int_hw : 1; + mmr_t dqrp_int_cor : 1; + mmr_t dqrp_int_uc : 1; + mmr_t reserved_2 : 1; + mmr_t dqls_int_hw : 1; + mmr_t dqls_int_cor : 1; + mmr_t dqls_int_uc : 1; + mmr_t reserved_1 : 1; + mmr_t dqlp_int_hw : 1; + mmr_t dqlp_int_cor : 1; + mmr_t dqlp_int_uc : 1; + mmr_t reserved_0 : 1; + mmr_t dir_acc : 1; + mmr_t acy_int_hw : 1; + mmr_t acx_int_hw : 1; + mmr_t dqrp_dir_cor : 1; + mmr_t dqrp_dir_uc : 1; + mmr_t dqlp_dir_cor : 1; + mmr_t dqlp_dir_uc : 1; + mmr_t dqrp_dir_perr : 1; + mmr_t dqlp_dir_perr : 1; + mmr_t nonexist_addr : 1; + mmr_t illegal_cmd : 1; + } sh_mem_error_mask_s; +} 
sh_mem_error_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_DIMM_CFG" */ +/* AC Mem Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_dimm_cfg_u { + mmr_t sh_x_dimm_cfg_regval; + struct { + mmr_t dimm0_size : 3; + mmr_t dimm0_2bk : 1; + mmr_t dimm0_rev : 1; + mmr_t dimm0_cs : 2; + mmr_t reserved_0 : 1; + mmr_t dimm1_size : 3; + mmr_t dimm1_2bk : 1; + mmr_t dimm1_rev : 1; + mmr_t dimm1_cs : 2; + mmr_t reserved_1 : 1; + mmr_t dimm2_size : 3; + mmr_t dimm2_2bk : 1; + mmr_t dimm2_rev : 1; + mmr_t dimm2_cs : 2; + mmr_t reserved_2 : 1; + mmr_t dimm3_size : 3; + mmr_t dimm3_2bk : 1; + mmr_t dimm3_rev : 1; + mmr_t dimm3_cs : 2; + mmr_t reserved_3 : 1; + mmr_t freq : 4; + mmr_t reserved_4 : 28; + } sh_x_dimm_cfg_s; +} sh_x_dimm_cfg_u_t; +#else +typedef union sh_x_dimm_cfg_u { + mmr_t sh_x_dimm_cfg_regval; + struct { + mmr_t reserved_4 : 28; + mmr_t freq : 4; + mmr_t reserved_3 : 1; + mmr_t dimm3_cs : 2; + mmr_t dimm3_rev : 1; + mmr_t dimm3_2bk : 1; + mmr_t dimm3_size : 3; + mmr_t reserved_2 : 1; + mmr_t dimm2_cs : 2; + mmr_t dimm2_rev : 1; + mmr_t dimm2_2bk : 1; + mmr_t dimm2_size : 3; + mmr_t reserved_1 : 1; + mmr_t dimm1_cs : 2; + mmr_t dimm1_rev : 1; + mmr_t dimm1_2bk : 1; + mmr_t dimm1_size : 3; + mmr_t reserved_0 : 1; + mmr_t dimm0_cs : 2; + mmr_t dimm0_rev : 1; + mmr_t dimm0_2bk : 1; + mmr_t dimm0_size : 3; + } sh_x_dimm_cfg_s; +} sh_x_dimm_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_DIMM_CFG" */ +/* AC Mem Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_dimm_cfg_u { + mmr_t sh_y_dimm_cfg_regval; + struct { + mmr_t dimm0_size : 3; + mmr_t dimm0_2bk : 1; + mmr_t dimm0_rev : 1; + mmr_t dimm0_cs : 2; + mmr_t reserved_0 : 1; + mmr_t dimm1_size : 3; + mmr_t dimm1_2bk : 1; + mmr_t dimm1_rev : 1; + mmr_t dimm1_cs : 2; + mmr_t reserved_1 : 1; + mmr_t dimm2_size : 3; + mmr_t dimm2_2bk : 1; + mmr_t dimm2_rev : 1; + mmr_t dimm2_cs : 2; + mmr_t reserved_2 : 1; + mmr_t dimm3_size : 3; + mmr_t dimm3_2bk : 1; + mmr_t dimm3_rev : 1; + mmr_t dimm3_cs : 2; + mmr_t reserved_3 : 1; + mmr_t freq : 4; + mmr_t reserved_4 : 28; + } sh_y_dimm_cfg_s; +} sh_y_dimm_cfg_u_t; +#else +typedef union sh_y_dimm_cfg_u { + mmr_t sh_y_dimm_cfg_regval; + struct { + mmr_t reserved_4 : 28; + mmr_t freq : 4; + mmr_t reserved_3 : 1; + mmr_t dimm3_cs : 2; + mmr_t dimm3_rev : 1; + mmr_t dimm3_2bk : 1; + mmr_t dimm3_size : 3; + mmr_t reserved_2 : 1; + mmr_t dimm2_cs : 2; + mmr_t dimm2_rev : 1; + mmr_t dimm2_2bk : 1; + mmr_t dimm2_size : 3; + mmr_t reserved_1 : 1; + mmr_t dimm1_cs : 2; + mmr_t dimm1_rev : 1; + mmr_t dimm1_2bk : 1; + mmr_t dimm1_size : 3; + mmr_t reserved_0 : 1; + mmr_t dimm0_cs : 2; + mmr_t dimm0_rev : 1; + mmr_t dimm0_2bk : 1; + mmr_t dimm0_size : 3; + } sh_y_dimm_cfg_s; +} sh_y_dimm_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_JNR_DIMM_CFG" */ +/* AC Mem Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_jnr_dimm_cfg_u { + mmr_t sh_jnr_dimm_cfg_regval; + struct { + mmr_t dimm0_size : 3; + mmr_t dimm0_2bk : 1; + mmr_t dimm0_rev : 1; + mmr_t dimm0_cs : 2; + mmr_t reserved_0 : 1; + mmr_t dimm1_size : 3; + mmr_t dimm1_2bk : 1; + mmr_t dimm1_rev : 1; + mmr_t 
dimm1_cs : 2; + mmr_t reserved_1 : 1; + mmr_t dimm2_size : 3; + mmr_t dimm2_2bk : 1; + mmr_t dimm2_rev : 1; + mmr_t dimm2_cs : 2; + mmr_t reserved_2 : 1; + mmr_t dimm3_size : 3; + mmr_t dimm3_2bk : 1; + mmr_t dimm3_rev : 1; + mmr_t dimm3_cs : 2; + mmr_t reserved_3 : 1; + mmr_t freq : 4; + mmr_t reserved_4 : 28; + } sh_jnr_dimm_cfg_s; +} sh_jnr_dimm_cfg_u_t; +#else +typedef union sh_jnr_dimm_cfg_u { + mmr_t sh_jnr_dimm_cfg_regval; + struct { + mmr_t reserved_4 : 28; + mmr_t freq : 4; + mmr_t reserved_3 : 1; + mmr_t dimm3_cs : 2; + mmr_t dimm3_rev : 1; + mmr_t dimm3_2bk : 1; + mmr_t dimm3_size : 3; + mmr_t reserved_2 : 1; + mmr_t dimm2_cs : 2; + mmr_t dimm2_rev : 1; + mmr_t dimm2_2bk : 1; + mmr_t dimm2_size : 3; + mmr_t reserved_1 : 1; + mmr_t dimm1_cs : 2; + mmr_t dimm1_rev : 1; + mmr_t dimm1_2bk : 1; + mmr_t dimm1_size : 3; + mmr_t reserved_0 : 1; + mmr_t dimm0_cs : 2; + mmr_t dimm0_rev : 1; + mmr_t dimm0_2bk : 1; + mmr_t dimm0_size : 3; + } sh_jnr_dimm_cfg_s; +} sh_jnr_dimm_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_PHASE_CFG" */ +/* AC Phase Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_phase_cfg_u { + mmr_t sh_x_phase_cfg_regval; + struct { + mmr_t ld_a : 5; + mmr_t ld_b : 5; + mmr_t dq_ld_a : 5; + mmr_t dq_ld_b : 5; + mmr_t hold : 5; + mmr_t hold_req : 5; + mmr_t add_cp : 5; + mmr_t bubble_en : 5; + mmr_t pha_bubble : 3; + mmr_t phb_bubble : 3; + mmr_t phc_bubble : 3; + mmr_t phd_bubble : 3; + mmr_t phe_bubble : 3; + mmr_t sel_a : 4; + mmr_t dq_sel_a : 4; + mmr_t reserved_0 : 1; + } sh_x_phase_cfg_s; +} sh_x_phase_cfg_u_t; +#else +typedef union sh_x_phase_cfg_u { + mmr_t sh_x_phase_cfg_regval; + struct { + mmr_t reserved_0 : 1; + mmr_t dq_sel_a : 4; + mmr_t sel_a : 4; + mmr_t phe_bubble : 3; + mmr_t phd_bubble : 3; + mmr_t phc_bubble : 3; + mmr_t phb_bubble : 3; + mmr_t pha_bubble : 3; + mmr_t bubble_en : 5; + mmr_t add_cp : 5; + mmr_t hold_req : 5; + mmr_t hold : 5; + mmr_t dq_ld_b : 5; + mmr_t dq_ld_a : 5; + mmr_t ld_b : 5; + mmr_t ld_a : 5; + } sh_x_phase_cfg_s; +} sh_x_phase_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_CFG" */ +/* AC Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_cfg_u { + mmr_t sh_x_cfg_regval; + struct { + mmr_t mode_serial : 1; + mmr_t dirc_random_replacement : 1; + mmr_t dir_counter_init : 6; + mmr_t ta_dlys : 32; + mmr_t da_bb_clr : 4; + mmr_t dc_bb_clr : 4; + mmr_t wt_bb_clr : 4; + mmr_t sso_wt_en : 1; + mmr_t trcd2_en : 1; + mmr_t trcd4_en : 1; + mmr_t req_cntr_dis : 1; + mmr_t req_cntr_val : 6; + mmr_t inv_cas_addr : 1; + mmr_t clr_dir_cache : 1; + } sh_x_cfg_s; +} sh_x_cfg_u_t; +#else +typedef union sh_x_cfg_u { + mmr_t sh_x_cfg_regval; + struct { + mmr_t clr_dir_cache : 1; + mmr_t inv_cas_addr : 1; + mmr_t req_cntr_val : 6; + mmr_t req_cntr_dis : 1; + mmr_t trcd4_en : 1; + mmr_t trcd2_en : 1; + mmr_t sso_wt_en : 1; + mmr_t wt_bb_clr : 4; + mmr_t dc_bb_clr : 4; + mmr_t da_bb_clr : 4; + mmr_t ta_dlys : 32; + mmr_t dir_counter_init : 6; + mmr_t dirc_random_replacement : 1; + mmr_t mode_serial : 1; + } sh_x_cfg_s; +} sh_x_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_DQCT_CFG" */ +/* AC Config Registers */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_dqct_cfg_u { + mmr_t sh_x_dqct_cfg_regval; + struct { + mmr_t rd_sel : 4; + mmr_t wt_sel : 4; + mmr_t dta_rd_sel : 4; + mmr_t dta_wt_sel : 4; + mmr_t dir_rd_sel : 4; + mmr_t mdir_rd_sel : 4; + mmr_t reserved_0 : 40; + } sh_x_dqct_cfg_s; +} sh_x_dqct_cfg_u_t; +#else +typedef union sh_x_dqct_cfg_u { + mmr_t sh_x_dqct_cfg_regval; + struct { + mmr_t reserved_0 : 40; + mmr_t mdir_rd_sel : 4; + mmr_t dir_rd_sel : 4; + mmr_t dta_wt_sel : 4; + mmr_t dta_rd_sel : 4; + mmr_t wt_sel : 4; + mmr_t rd_sel : 4; + } sh_x_dqct_cfg_s; +} sh_x_dqct_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_REFRESH_CONTROL" */ +/* Refresh Control Register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_refresh_control_u { + mmr_t sh_x_refresh_control_regval; + struct { + mmr_t enable : 8; + mmr_t interval : 9; + mmr_t hold : 6; + mmr_t interleave : 1; + mmr_t half_rate : 4; + mmr_t reserved_0 : 36; + } sh_x_refresh_control_s; +} sh_x_refresh_control_u_t; +#else +typedef union sh_x_refresh_control_u { + mmr_t sh_x_refresh_control_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t half_rate : 4; + mmr_t interleave : 1; + mmr_t hold : 6; + mmr_t interval : 9; + mmr_t enable : 8; + } sh_x_refresh_control_s; +} sh_x_refresh_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_PHASE_CFG" */ +/* AC Phase Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_phase_cfg_u { + mmr_t sh_y_phase_cfg_regval; + struct { + mmr_t ld_a : 5; + mmr_t ld_b : 5; + mmr_t dq_ld_a : 5; + mmr_t dq_ld_b : 5; + mmr_t hold : 5; + mmr_t hold_req : 5; + mmr_t add_cp : 5; + mmr_t bubble_en : 5; + mmr_t pha_bubble : 3; + mmr_t phb_bubble : 3; + mmr_t phc_bubble : 3; + mmr_t phd_bubble : 3; + mmr_t phe_bubble : 3; + mmr_t sel_a : 4; + mmr_t dq_sel_a : 4; + mmr_t reserved_0 : 1; + } sh_y_phase_cfg_s; +} sh_y_phase_cfg_u_t; +#else +typedef union sh_y_phase_cfg_u { + mmr_t sh_y_phase_cfg_regval; + struct { + mmr_t reserved_0 : 1; + mmr_t dq_sel_a : 4; + mmr_t sel_a : 4; + mmr_t phe_bubble : 3; + mmr_t phd_bubble : 3; + mmr_t phc_bubble : 3; + mmr_t phb_bubble : 3; + mmr_t pha_bubble : 3; + mmr_t bubble_en : 5; + mmr_t add_cp : 5; + mmr_t hold_req : 5; + mmr_t hold : 5; + mmr_t dq_ld_b : 5; + mmr_t dq_ld_a : 5; + mmr_t ld_b : 5; + mmr_t ld_a : 5; + } sh_y_phase_cfg_s; +} sh_y_phase_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_CFG" */ +/* AC Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_cfg_u { + mmr_t sh_y_cfg_regval; + struct { + mmr_t mode_serial : 1; + mmr_t dirc_random_replacement : 1; + mmr_t dir_counter_init : 6; + mmr_t ta_dlys : 32; + mmr_t da_bb_clr : 4; + mmr_t dc_bb_clr : 4; + mmr_t wt_bb_clr : 4; + mmr_t sso_wt_en : 1; + mmr_t trcd2_en : 1; + mmr_t trcd4_en : 1; + mmr_t req_cntr_dis : 1; + mmr_t req_cntr_val : 6; + mmr_t inv_cas_addr : 1; + mmr_t clr_dir_cache : 1; + } sh_y_cfg_s; +} sh_y_cfg_u_t; +#else +typedef union sh_y_cfg_u { + mmr_t sh_y_cfg_regval; + struct { + mmr_t clr_dir_cache : 1; + mmr_t inv_cas_addr : 1; + mmr_t req_cntr_val : 6; + mmr_t req_cntr_dis : 1; + mmr_t 
trcd4_en : 1; + mmr_t trcd2_en : 1; + mmr_t sso_wt_en : 1; + mmr_t wt_bb_clr : 4; + mmr_t dc_bb_clr : 4; + mmr_t da_bb_clr : 4; + mmr_t ta_dlys : 32; + mmr_t dir_counter_init : 6; + mmr_t dirc_random_replacement : 1; + mmr_t mode_serial : 1; + } sh_y_cfg_s; +} sh_y_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_DQCT_CFG" */ +/* AC Config Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_dqct_cfg_u { + mmr_t sh_y_dqct_cfg_regval; + struct { + mmr_t rd_sel : 4; + mmr_t wt_sel : 4; + mmr_t dta_rd_sel : 4; + mmr_t dta_wt_sel : 4; + mmr_t dir_rd_sel : 4; + mmr_t mdir_rd_sel : 4; + mmr_t reserved_0 : 40; + } sh_y_dqct_cfg_s; +} sh_y_dqct_cfg_u_t; +#else +typedef union sh_y_dqct_cfg_u { + mmr_t sh_y_dqct_cfg_regval; + struct { + mmr_t reserved_0 : 40; + mmr_t mdir_rd_sel : 4; + mmr_t dir_rd_sel : 4; + mmr_t dta_wt_sel : 4; + mmr_t dta_rd_sel : 4; + mmr_t wt_sel : 4; + mmr_t rd_sel : 4; + } sh_y_dqct_cfg_s; +} sh_y_dqct_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_REFRESH_CONTROL" */ +/* Refresh Control Register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_refresh_control_u { + mmr_t sh_y_refresh_control_regval; + struct { + mmr_t enable : 8; + mmr_t interval : 9; + mmr_t hold : 6; + mmr_t interleave : 1; + mmr_t half_rate : 4; + mmr_t reserved_0 : 36; + } sh_y_refresh_control_s; +} sh_y_refresh_control_u_t; +#else +typedef union sh_y_refresh_control_u { + mmr_t sh_y_refresh_control_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t half_rate : 4; + mmr_t interleave : 1; + mmr_t hold : 6; + mmr_t interval : 9; + mmr_t enable : 8; + } sh_y_refresh_control_s; +} sh_y_refresh_control_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEM_RED_BLACK" */ +/* MD fairness watchdog timers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mem_red_black_u { + mmr_t sh_mem_red_black_regval; + struct { + mmr_t time : 16; + mmr_t err_time : 36; + mmr_t reserved_0 : 12; + } sh_mem_red_black_s; +} sh_mem_red_black_u_t; +#else +typedef union sh_mem_red_black_u { + mmr_t sh_mem_red_black_regval; + struct { + mmr_t reserved_0 : 12; + mmr_t err_time : 36; + mmr_t time : 16; + } sh_mem_red_black_s; +} sh_mem_red_black_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MISC_MEM_CFG" */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_misc_mem_cfg_u { + mmr_t sh_misc_mem_cfg_regval; + struct { + mmr_t express_header_enable : 1; + mmr_t spec_header_enable : 1; + mmr_t jnr_bypass_enable : 1; + mmr_t xn_rd_same_as_pi : 1; + mmr_t low_write_buffer_threshold : 6; + mmr_t reserved_0 : 2; + mmr_t low_victim_buffer_threshold : 6; + mmr_t reserved_1 : 2; + mmr_t throttle_cnt : 8; + mmr_t disabled_read_tnums : 5; + mmr_t reserved_2 : 3; + mmr_t disabled_write_tnums : 5; + mmr_t reserved_3 : 3; + mmr_t disabled_victims : 6; + mmr_t reserved_4 : 2; + mmr_t alternate_xn_rp_plane : 1; + mmr_t reserved_5 : 11; + } sh_misc_mem_cfg_s; +} sh_misc_mem_cfg_u_t; +#else +typedef union sh_misc_mem_cfg_u { + mmr_t sh_misc_mem_cfg_regval; + struct { + mmr_t reserved_5 : 11; + mmr_t alternate_xn_rp_plane 
: 1; + mmr_t reserved_4 : 2; + mmr_t disabled_victims : 6; + mmr_t reserved_3 : 3; + mmr_t disabled_write_tnums : 5; + mmr_t reserved_2 : 3; + mmr_t disabled_read_tnums : 5; + mmr_t throttle_cnt : 8; + mmr_t reserved_1 : 2; + mmr_t low_victim_buffer_threshold : 6; + mmr_t reserved_0 : 2; + mmr_t low_write_buffer_threshold : 6; + mmr_t xn_rd_same_as_pi : 1; + mmr_t jnr_bypass_enable : 1; + mmr_t spec_header_enable : 1; + mmr_t express_header_enable : 1; + } sh_misc_mem_cfg_s; +} sh_misc_mem_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PIO_RQ_CRD_CTL" */ +/* pio_rq Credit Circulation Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pio_rq_crd_ctl_u { + mmr_t sh_pio_rq_crd_ctl_regval; + struct { + mmr_t depth : 6; + mmr_t reserved_0 : 58; + } sh_pio_rq_crd_ctl_s; +} sh_pio_rq_crd_ctl_u_t; +#else +typedef union sh_pio_rq_crd_ctl_u { + mmr_t sh_pio_rq_crd_ctl_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t depth : 6; + } sh_pio_rq_crd_ctl_s; +} sh_pio_rq_crd_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_MD_RQ_CRD_CTL" */ +/* pi_md_rq Credit Circulation Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_md_rq_crd_ctl_u { + mmr_t sh_pi_md_rq_crd_ctl_regval; + struct { + mmr_t depth : 6; + mmr_t reserved_0 : 58; + } sh_pi_md_rq_crd_ctl_s; +} sh_pi_md_rq_crd_ctl_u_t; +#else +typedef union sh_pi_md_rq_crd_ctl_u { + mmr_t sh_pi_md_rq_crd_ctl_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t depth : 6; + } sh_pi_md_rq_crd_ctl_s; +} sh_pi_md_rq_crd_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_PI_MD_RP_CRD_CTL" */ +/* pi_md_rp Credit Circulation Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_pi_md_rp_crd_ctl_u { + mmr_t sh_pi_md_rp_crd_ctl_regval; + struct { + mmr_t depth : 6; + mmr_t reserved_0 : 58; + } sh_pi_md_rp_crd_ctl_s; +} sh_pi_md_rp_crd_ctl_u_t; +#else +typedef union sh_pi_md_rp_crd_ctl_u { + mmr_t sh_pi_md_rp_crd_ctl_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t depth : 6; + } sh_pi_md_rp_crd_ctl_s; +} sh_pi_md_rp_crd_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_RQ_CRD_CTL" */ +/* xn_md_rq Credit Circulation Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_rq_crd_ctl_u { + mmr_t sh_xn_md_rq_crd_ctl_regval; + struct { + mmr_t depth : 6; + mmr_t reserved_0 : 58; + } sh_xn_md_rq_crd_ctl_s; +} sh_xn_md_rq_crd_ctl_u_t; +#else +typedef union sh_xn_md_rq_crd_ctl_u { + mmr_t sh_xn_md_rq_crd_ctl_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t depth : 6; + } sh_xn_md_rq_crd_ctl_s; +} sh_xn_md_rq_crd_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_XN_MD_RP_CRD_CTL" */ +/* xn_md_rp Credit Circulation Control */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_xn_md_rp_crd_ctl_u { + mmr_t sh_xn_md_rp_crd_ctl_regval; + struct { + mmr_t depth : 6; + mmr_t reserved_0 : 58; + } sh_xn_md_rp_crd_ctl_s; +} sh_xn_md_rp_crd_ctl_u_t; +#else +typedef union 
sh_xn_md_rp_crd_ctl_u { + mmr_t sh_xn_md_rp_crd_ctl_regval; + struct { + mmr_t reserved_0 : 58; + mmr_t depth : 6; + } sh_xn_md_rp_crd_ctl_s; +} sh_xn_md_rp_crd_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG0" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag0_u { + mmr_t sh_x_tag0_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_x_tag0_s; +} sh_x_tag0_u_t; +#else +typedef union sh_x_tag0_u { + mmr_t sh_x_tag0_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_x_tag0_s; +} sh_x_tag0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG1" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag1_u { + mmr_t sh_x_tag1_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_x_tag1_s; +} sh_x_tag1_u_t; +#else +typedef union sh_x_tag1_u { + mmr_t sh_x_tag1_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_x_tag1_s; +} sh_x_tag1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG2" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag2_u { + mmr_t sh_x_tag2_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_x_tag2_s; +} sh_x_tag2_u_t; +#else +typedef union sh_x_tag2_u { + mmr_t sh_x_tag2_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_x_tag2_s; +} sh_x_tag2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG3" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag3_u { + mmr_t sh_x_tag3_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_x_tag3_s; +} sh_x_tag3_u_t; +#else +typedef union sh_x_tag3_u { + mmr_t sh_x_tag3_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_x_tag3_s; +} sh_x_tag3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG4" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag4_u { + mmr_t sh_x_tag4_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_x_tag4_s; +} sh_x_tag4_u_t; +#else +typedef union sh_x_tag4_u { + mmr_t sh_x_tag4_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_x_tag4_s; +} sh_x_tag4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG5" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag5_u { + mmr_t sh_x_tag5_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_x_tag5_s; +} sh_x_tag5_u_t; +#else +typedef union sh_x_tag5_u { + mmr_t sh_x_tag5_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_x_tag5_s; +} sh_x_tag5_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG6" */ +/* AC tag Registers */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag6_u { + mmr_t sh_x_tag6_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_x_tag6_s; +} sh_x_tag6_u_t; +#else +typedef union sh_x_tag6_u { + mmr_t sh_x_tag6_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_x_tag6_s; +} sh_x_tag6_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG7" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag7_u { + mmr_t sh_x_tag7_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_x_tag7_s; +} sh_x_tag7_u_t; +#else +typedef union sh_x_tag7_u { + mmr_t sh_x_tag7_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_x_tag7_s; +} sh_x_tag7_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG0" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag0_u { + mmr_t sh_y_tag0_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_y_tag0_s; +} sh_y_tag0_u_t; +#else +typedef union sh_y_tag0_u { + mmr_t sh_y_tag0_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_y_tag0_s; +} sh_y_tag0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG1" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag1_u { + mmr_t sh_y_tag1_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_y_tag1_s; +} sh_y_tag1_u_t; +#else +typedef union sh_y_tag1_u { + mmr_t sh_y_tag1_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_y_tag1_s; +} sh_y_tag1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG2" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag2_u { + mmr_t sh_y_tag2_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_y_tag2_s; +} sh_y_tag2_u_t; +#else +typedef union sh_y_tag2_u { + mmr_t sh_y_tag2_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_y_tag2_s; +} sh_y_tag2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG3" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag3_u { + mmr_t sh_y_tag3_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_y_tag3_s; +} sh_y_tag3_u_t; +#else +typedef union sh_y_tag3_u { + mmr_t sh_y_tag3_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_y_tag3_s; +} sh_y_tag3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG4" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag4_u { + mmr_t sh_y_tag4_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_y_tag4_s; +} sh_y_tag4_u_t; +#else +typedef union sh_y_tag4_u { + mmr_t sh_y_tag4_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; 
+ } sh_y_tag4_s; +} sh_y_tag4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG5" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag5_u { + mmr_t sh_y_tag5_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_y_tag5_s; +} sh_y_tag5_u_t; +#else +typedef union sh_y_tag5_u { + mmr_t sh_y_tag5_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_y_tag5_s; +} sh_y_tag5_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG6" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag6_u { + mmr_t sh_y_tag6_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_y_tag6_s; +} sh_y_tag6_u_t; +#else +typedef union sh_y_tag6_u { + mmr_t sh_y_tag6_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_y_tag6_s; +} sh_y_tag6_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG7" */ +/* AC tag Registers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag7_u { + mmr_t sh_y_tag7_regval; + struct { + mmr_t tag : 20; + mmr_t reserved_0 : 44; + } sh_y_tag7_s; +} sh_y_tag7_u_t; +#else +typedef union sh_y_tag7_u { + mmr_t sh_y_tag7_regval; + struct { + mmr_t reserved_0 : 44; + mmr_t tag : 20; + } sh_y_tag7_s; +} sh_y_tag7_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MMRBIST_BASE" */ +/* mmr/bist base address */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mmrbist_base_u { + mmr_t sh_mmrbist_base_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t dword_addr : 47; + mmr_t reserved_1 : 14; + } sh_mmrbist_base_s; +} sh_mmrbist_base_u_t; +#else +typedef union sh_mmrbist_base_u { + mmr_t sh_mmrbist_base_regval; + struct { + mmr_t reserved_1 : 14; + mmr_t dword_addr : 47; + mmr_t reserved_0 : 3; + } sh_mmrbist_base_s; +} sh_mmrbist_base_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MMRBIST_CTL" */ +/* Bist base address */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mmrbist_ctl_u { + mmr_t sh_mmrbist_ctl_regval; + struct { + mmr_t block_length : 31; + mmr_t reserved_0 : 1; + mmr_t cmd : 7; + mmr_t reserved_1 : 1; + mmr_t in_progress : 1; + mmr_t fail : 1; + mmr_t mem_idle : 1; + mmr_t reserved_2 : 1; + mmr_t reset_state : 1; + mmr_t reserved_3 : 19; + } sh_mmrbist_ctl_s; +} sh_mmrbist_ctl_u_t; +#else +typedef union sh_mmrbist_ctl_u { + mmr_t sh_mmrbist_ctl_regval; + struct { + mmr_t reserved_3 : 19; + mmr_t reset_state : 1; + mmr_t reserved_2 : 1; + mmr_t mem_idle : 1; + mmr_t fail : 1; + mmr_t in_progress : 1; + mmr_t reserved_1 : 1; + mmr_t cmd : 7; + mmr_t reserved_0 : 1; + mmr_t block_length : 31; + } sh_mmrbist_ctl_s; +} sh_mmrbist_ctl_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DBUG_DATA_CFG" */ +/* configuration for md debug data muxes */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dbug_data_cfg_u { + 
mmr_t sh_md_dbug_data_cfg_regval; + struct { + mmr_t nibble0_chiplet : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_chiplet : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_chiplet : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_chiplet : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_chiplet : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_chiplet : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_chiplet : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_chiplet : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble : 3; + mmr_t reserved_15 : 1; + } sh_md_dbug_data_cfg_s; +} sh_md_dbug_data_cfg_u_t; +#else +typedef union sh_md_dbug_data_cfg_u { + mmr_t sh_md_dbug_data_cfg_regval; + struct { + mmr_t reserved_15 : 1; + mmr_t nibble7_nibble : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_chiplet : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_chiplet : 3; + mmr_t reserved_11 : 1; + mmr_t nibble5_nibble : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_chiplet : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_chiplet : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_chiplet : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_chiplet : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_chiplet : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_chiplet : 3; + } sh_md_dbug_data_cfg_s; +} sh_md_dbug_data_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DBUG_TRIGGER_CFG" */ +/* configuration for md debug triggers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dbug_trigger_cfg_u { + mmr_t sh_md_dbug_trigger_cfg_regval; + struct { + mmr_t nibble0_chiplet : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_nibble : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_chiplet : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_nibble : 3; + mmr_t reserved_3 : 1; + mmr_t nibble2_chiplet : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_nibble : 3; + mmr_t reserved_5 : 1; + mmr_t nibble3_chiplet : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_nibble : 3; + mmr_t reserved_7 : 1; + mmr_t nibble4_chiplet : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_nibble : 3; + mmr_t reserved_9 : 1; + mmr_t nibble5_chiplet : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_nibble : 3; + mmr_t reserved_11 : 1; + mmr_t nibble6_chiplet : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_nibble : 3; + mmr_t reserved_13 : 1; + mmr_t nibble7_chiplet : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_nibble : 3; + mmr_t enable : 1; + } sh_md_dbug_trigger_cfg_s; +} sh_md_dbug_trigger_cfg_u_t; +#else +typedef union sh_md_dbug_trigger_cfg_u { + mmr_t sh_md_dbug_trigger_cfg_regval; + struct { + mmr_t enable : 1; + mmr_t nibble7_nibble : 3; + mmr_t reserved_14 : 1; + mmr_t nibble7_chiplet : 3; + mmr_t reserved_13 : 1; + mmr_t nibble6_nibble : 3; + mmr_t reserved_12 : 1; + mmr_t nibble6_chiplet : 3; + mmr_t reserved_11 : 1; + mmr_t 
nibble5_nibble : 3; + mmr_t reserved_10 : 1; + mmr_t nibble5_chiplet : 3; + mmr_t reserved_9 : 1; + mmr_t nibble4_nibble : 3; + mmr_t reserved_8 : 1; + mmr_t nibble4_chiplet : 3; + mmr_t reserved_7 : 1; + mmr_t nibble3_nibble : 3; + mmr_t reserved_6 : 1; + mmr_t nibble3_chiplet : 3; + mmr_t reserved_5 : 1; + mmr_t nibble2_nibble : 3; + mmr_t reserved_4 : 1; + mmr_t nibble2_chiplet : 3; + mmr_t reserved_3 : 1; + mmr_t nibble1_nibble : 3; + mmr_t reserved_2 : 1; + mmr_t nibble1_chiplet : 3; + mmr_t reserved_1 : 1; + mmr_t nibble0_nibble : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_chiplet : 3; + } sh_md_dbug_trigger_cfg_s; +} sh_md_dbug_trigger_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DBUG_COMPARE" */ +/* md debug compare pattern and mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dbug_compare_u { + mmr_t sh_md_dbug_compare_regval; + struct { + mmr_t pattern : 32; + mmr_t mask : 32; + } sh_md_dbug_compare_s; +} sh_md_dbug_compare_u_t; +#else +typedef union sh_md_dbug_compare_u { + mmr_t sh_md_dbug_compare_regval; + struct { + mmr_t mask : 32; + mmr_t pattern : 32; + } sh_md_dbug_compare_s; +} sh_md_dbug_compare_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_MOD_DBUG_SEL" */ +/* MD acx debug select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_mod_dbug_sel_u { + mmr_t sh_x_mod_dbug_sel_regval; + struct { + mmr_t tag_sel : 8; + mmr_t wbq_sel : 8; + mmr_t arb_sel : 8; + mmr_t atl_sel : 11; + mmr_t atr_sel : 11; + mmr_t dql_sel : 6; + mmr_t dqr_sel : 6; + mmr_t reserved_0 : 6; + } sh_x_mod_dbug_sel_s; +} sh_x_mod_dbug_sel_u_t; +#else +typedef union sh_x_mod_dbug_sel_u { + mmr_t sh_x_mod_dbug_sel_regval; + struct { + mmr_t reserved_0 : 6; + mmr_t dqr_sel : 6; + mmr_t dql_sel : 6; + mmr_t atr_sel : 11; + mmr_t atl_sel : 11; + mmr_t arb_sel : 8; + mmr_t wbq_sel : 8; + mmr_t tag_sel : 8; + } sh_x_mod_dbug_sel_s; +} sh_x_mod_dbug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_DBUG_SEL" */ +/* MD acx debug select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_dbug_sel_u { + mmr_t sh_x_dbug_sel_regval; + struct { + mmr_t dbg_sel : 24; + mmr_t reserved_0 : 40; + } sh_x_dbug_sel_s; +} sh_x_dbug_sel_u_t; +#else +typedef union sh_x_dbug_sel_u { + mmr_t sh_x_dbug_sel_regval; + struct { + mmr_t reserved_0 : 40; + mmr_t dbg_sel : 24; + } sh_x_dbug_sel_s; +} sh_x_dbug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_LADDR_CMP" */ +/* MD acx address compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_laddr_cmp_u { + mmr_t sh_x_laddr_cmp_regval; + struct { + mmr_t cmp_val : 28; + mmr_t reserved_0 : 4; + mmr_t mask_val : 28; + mmr_t reserved_1 : 4; + } sh_x_laddr_cmp_s; +} sh_x_laddr_cmp_u_t; +#else +typedef union sh_x_laddr_cmp_u { + mmr_t sh_x_laddr_cmp_regval; + struct { + mmr_t reserved_1 : 4; + mmr_t mask_val : 28; + mmr_t reserved_0 : 4; + mmr_t cmp_val : 28; + } sh_x_laddr_cmp_s; +} sh_x_laddr_cmp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_RADDR_CMP" */ +/* MD acx 
address compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_raddr_cmp_u { + mmr_t sh_x_raddr_cmp_regval; + struct { + mmr_t cmp_val : 28; + mmr_t reserved_0 : 4; + mmr_t mask_val : 28; + mmr_t reserved_1 : 4; + } sh_x_raddr_cmp_s; +} sh_x_raddr_cmp_u_t; +#else +typedef union sh_x_raddr_cmp_u { + mmr_t sh_x_raddr_cmp_regval; + struct { + mmr_t reserved_1 : 4; + mmr_t mask_val : 28; + mmr_t reserved_0 : 4; + mmr_t cmp_val : 28; + } sh_x_raddr_cmp_s; +} sh_x_raddr_cmp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG_CMP" */ +/* MD acx tagmgr compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag_cmp_u { + mmr_t sh_x_tag_cmp_regval; + struct { + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t src : 14; + mmr_t reserved_0 : 9; + } sh_x_tag_cmp_s; +} sh_x_tag_cmp_u_t; +#else +typedef union sh_x_tag_cmp_u { + mmr_t sh_x_tag_cmp_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t src : 14; + mmr_t addr : 33; + mmr_t cmd : 8; + } sh_x_tag_cmp_s; +} sh_x_tag_cmp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_X_TAG_MASK" */ +/* MD acx tagmgr mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_x_tag_mask_u { + mmr_t sh_x_tag_mask_regval; + struct { + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t src : 14; + mmr_t reserved_0 : 9; + } sh_x_tag_mask_s; +} sh_x_tag_mask_u_t; +#else +typedef union sh_x_tag_mask_u { + mmr_t sh_x_tag_mask_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t src : 14; + mmr_t addr : 33; + mmr_t cmd : 8; + } sh_x_tag_mask_s; +} sh_x_tag_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_MOD_DBUG_SEL" */ +/* MD acy debug select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_mod_dbug_sel_u { + mmr_t sh_y_mod_dbug_sel_regval; + struct { + mmr_t tag_sel : 8; + mmr_t wbq_sel : 8; + mmr_t arb_sel : 8; + mmr_t atl_sel : 11; + mmr_t atr_sel : 11; + mmr_t dql_sel : 6; + mmr_t dqr_sel : 6; + mmr_t reserved_0 : 6; + } sh_y_mod_dbug_sel_s; +} sh_y_mod_dbug_sel_u_t; +#else +typedef union sh_y_mod_dbug_sel_u { + mmr_t sh_y_mod_dbug_sel_regval; + struct { + mmr_t reserved_0 : 6; + mmr_t dqr_sel : 6; + mmr_t dql_sel : 6; + mmr_t atr_sel : 11; + mmr_t atl_sel : 11; + mmr_t arb_sel : 8; + mmr_t wbq_sel : 8; + mmr_t tag_sel : 8; + } sh_y_mod_dbug_sel_s; +} sh_y_mod_dbug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_DBUG_SEL" */ +/* MD acy debug select */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_dbug_sel_u { + mmr_t sh_y_dbug_sel_regval; + struct { + mmr_t dbg_sel : 24; + mmr_t reserved_0 : 40; + } sh_y_dbug_sel_s; +} sh_y_dbug_sel_u_t; +#else +typedef union sh_y_dbug_sel_u { + mmr_t sh_y_dbug_sel_regval; + struct { + mmr_t reserved_0 : 40; + mmr_t dbg_sel : 24; + } sh_y_dbug_sel_s; +} sh_y_dbug_sel_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_LADDR_CMP" */ +/* MD acy address compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union 
sh_y_laddr_cmp_u { + mmr_t sh_y_laddr_cmp_regval; + struct { + mmr_t cmp_val : 28; + mmr_t reserved_0 : 4; + mmr_t mask_val : 28; + mmr_t reserved_1 : 4; + } sh_y_laddr_cmp_s; +} sh_y_laddr_cmp_u_t; +#else +typedef union sh_y_laddr_cmp_u { + mmr_t sh_y_laddr_cmp_regval; + struct { + mmr_t reserved_1 : 4; + mmr_t mask_val : 28; + mmr_t reserved_0 : 4; + mmr_t cmp_val : 28; + } sh_y_laddr_cmp_s; +} sh_y_laddr_cmp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_RADDR_CMP" */ +/* MD acy address compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_raddr_cmp_u { + mmr_t sh_y_raddr_cmp_regval; + struct { + mmr_t cmp_val : 28; + mmr_t reserved_0 : 4; + mmr_t mask_val : 28; + mmr_t reserved_1 : 4; + } sh_y_raddr_cmp_s; +} sh_y_raddr_cmp_u_t; +#else +typedef union sh_y_raddr_cmp_u { + mmr_t sh_y_raddr_cmp_regval; + struct { + mmr_t reserved_1 : 4; + mmr_t mask_val : 28; + mmr_t reserved_0 : 4; + mmr_t cmp_val : 28; + } sh_y_raddr_cmp_s; +} sh_y_raddr_cmp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG_CMP" */ +/* MD acy tagmgr compare */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag_cmp_u { + mmr_t sh_y_tag_cmp_regval; + struct { + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t src : 14; + mmr_t reserved_0 : 9; + } sh_y_tag_cmp_s; +} sh_y_tag_cmp_u_t; +#else +typedef union sh_y_tag_cmp_u { + mmr_t sh_y_tag_cmp_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t src : 14; + mmr_t addr : 33; + mmr_t cmd : 8; + } sh_y_tag_cmp_s; +} sh_y_tag_cmp_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_Y_TAG_MASK" */ +/* MD acy tagmgr mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_y_tag_mask_u { + mmr_t sh_y_tag_mask_regval; + struct { + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t src : 14; + mmr_t reserved_0 : 9; + } sh_y_tag_mask_s; +} sh_y_tag_mask_u_t; +#else +typedef union sh_y_tag_mask_u { + mmr_t sh_y_tag_mask_regval; + struct { + mmr_t reserved_0 : 9; + mmr_t src : 14; + mmr_t addr : 33; + mmr_t cmd : 8; + } sh_y_tag_mask_s; +} sh_y_tag_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_JNR_DBUG_DATA_CFG" */ +/* configuration for md jnr debug data muxes */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_jnr_dbug_data_cfg_u { + mmr_t sh_md_jnr_dbug_data_cfg_regval; + struct { + mmr_t nibble0_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble1_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble2_sel : 3; + mmr_t reserved_2 : 1; + mmr_t nibble3_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble4_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble5_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble6_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble7_sel : 3; + mmr_t reserved_7 : 33; + } sh_md_jnr_dbug_data_cfg_s; +} sh_md_jnr_dbug_data_cfg_u_t; +#else +typedef union sh_md_jnr_dbug_data_cfg_u { + mmr_t sh_md_jnr_dbug_data_cfg_regval; + struct { + mmr_t reserved_7 : 33; + mmr_t nibble7_sel : 3; + mmr_t reserved_6 : 1; + mmr_t nibble6_sel : 3; + mmr_t reserved_5 : 1; + mmr_t nibble5_sel : 3; + mmr_t reserved_4 : 1; + mmr_t nibble4_sel : 3; + mmr_t reserved_3 : 1; + mmr_t nibble3_sel : 3; 
+ mmr_t reserved_2 : 1; + mmr_t nibble2_sel : 3; + mmr_t reserved_1 : 1; + mmr_t nibble1_sel : 3; + mmr_t reserved_0 : 1; + mmr_t nibble0_sel : 3; + } sh_md_jnr_dbug_data_cfg_s; +} sh_md_jnr_dbug_data_cfg_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_LAST_CREDIT" */ +/* captures last credit values on reset */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_last_credit_u { + mmr_t sh_md_last_credit_regval; + struct { + mmr_t rq_to_pi : 6; + mmr_t reserved_0 : 2; + mmr_t rp_to_pi : 6; + mmr_t reserved_1 : 2; + mmr_t rq_to_xn : 6; + mmr_t reserved_2 : 2; + mmr_t rp_to_xn : 6; + mmr_t reserved_3 : 2; + mmr_t to_lb : 6; + mmr_t reserved_4 : 26; + } sh_md_last_credit_s; +} sh_md_last_credit_u_t; +#else +typedef union sh_md_last_credit_u { + mmr_t sh_md_last_credit_regval; + struct { + mmr_t reserved_4 : 26; + mmr_t to_lb : 6; + mmr_t reserved_3 : 2; + mmr_t rp_to_xn : 6; + mmr_t reserved_2 : 2; + mmr_t rq_to_xn : 6; + mmr_t reserved_1 : 2; + mmr_t rp_to_pi : 6; + mmr_t reserved_0 : 2; + mmr_t rq_to_pi : 6; + } sh_md_last_credit_s; +} sh_md_last_credit_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEM_CAPTURE_ADDR" */ +/* Address capture address register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mem_capture_addr_u { + mmr_t sh_mem_capture_addr_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t addr : 33; + mmr_t cmd : 8; + mmr_t reserved_1 : 20; + } sh_mem_capture_addr_s; +} sh_mem_capture_addr_u_t; +#else +typedef union sh_mem_capture_addr_u { + mmr_t sh_mem_capture_addr_regval; + struct { + mmr_t reserved_1 : 20; + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t reserved_0 : 3; + } sh_mem_capture_addr_s; +} sh_mem_capture_addr_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEM_CAPTURE_MASK" */ +/* Address capture mask register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mem_capture_mask_u { + mmr_t sh_mem_capture_mask_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t addr : 33; + mmr_t cmd : 8; + mmr_t enable_local : 1; + mmr_t enable_remote : 1; + mmr_t reserved_1 : 18; + } sh_mem_capture_mask_s; +} sh_mem_capture_mask_u_t; +#else +typedef union sh_mem_capture_mask_u { + mmr_t sh_mem_capture_mask_regval; + struct { + mmr_t reserved_1 : 18; + mmr_t enable_remote : 1; + mmr_t enable_local : 1; + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t reserved_0 : 3; + } sh_mem_capture_mask_s; +} sh_mem_capture_mask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MEM_CAPTURE_HDR" */ +/* Address capture header register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_mem_capture_hdr_u { + mmr_t sh_mem_capture_hdr_regval; + struct { + mmr_t reserved_0 : 3; + mmr_t addr : 33; + mmr_t cmd : 8; + mmr_t src : 14; + mmr_t cntr : 6; + } sh_mem_capture_hdr_s; +} sh_mem_capture_hdr_u_t; +#else +typedef union sh_mem_capture_hdr_u { + mmr_t sh_mem_capture_hdr_regval; + struct { + mmr_t cntr : 6; + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t addr : 33; + mmr_t reserved_0 : 3; + } sh_mem_capture_hdr_s; +} sh_mem_capture_hdr_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_CONFIG" */ +/* DQ directory config register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_config_u { + mmr_t sh_md_dqlp_mmr_dir_config_regval; + struct { + mmr_t sys_size : 3; + mmr_t en_direcc : 1; + mmr_t en_dirpois : 1; + mmr_t reserved_0 : 59; + } sh_md_dqlp_mmr_dir_config_s; +} sh_md_dqlp_mmr_dir_config_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_config_u { + mmr_t sh_md_dqlp_mmr_dir_config_regval; + struct { + mmr_t reserved_0 : 59; + mmr_t en_dirpois : 1; + mmr_t en_direcc : 1; + mmr_t sys_size : 3; + } sh_md_dqlp_mmr_dir_config_s; +} sh_md_dqlp_mmr_dir_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC0" */ +/* node [63:0] presence bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_presvec0_u { + mmr_t sh_md_dqlp_mmr_dir_presvec0_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_presvec0_s; +} sh_md_dqlp_mmr_dir_presvec0_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_presvec0_u { + mmr_t sh_md_dqlp_mmr_dir_presvec0_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_presvec0_s; +} sh_md_dqlp_mmr_dir_presvec0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC1" */ +/* node [127:64] presence bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_presvec1_u { + mmr_t sh_md_dqlp_mmr_dir_presvec1_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_presvec1_s; +} sh_md_dqlp_mmr_dir_presvec1_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_presvec1_u { + mmr_t sh_md_dqlp_mmr_dir_presvec1_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_presvec1_s; +} sh_md_dqlp_mmr_dir_presvec1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC2" */ +/* node [191:128] presence bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_presvec2_u { + mmr_t sh_md_dqlp_mmr_dir_presvec2_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_presvec2_s; +} sh_md_dqlp_mmr_dir_presvec2_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_presvec2_u { + mmr_t sh_md_dqlp_mmr_dir_presvec2_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_presvec2_s; +} sh_md_dqlp_mmr_dir_presvec2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC3" */ +/* node [255:192] presence bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_presvec3_u { + mmr_t sh_md_dqlp_mmr_dir_presvec3_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_presvec3_s; +} sh_md_dqlp_mmr_dir_presvec3_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_presvec3_u { + mmr_t sh_md_dqlp_mmr_dir_presvec3_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_presvec3_s; +} sh_md_dqlp_mmr_dir_presvec3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC0" */ +/* local vector for 
acc=0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_locvec0_u { + mmr_t sh_md_dqlp_mmr_dir_locvec0_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec0_s; +} sh_md_dqlp_mmr_dir_locvec0_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_locvec0_u { + mmr_t sh_md_dqlp_mmr_dir_locvec0_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec0_s; +} sh_md_dqlp_mmr_dir_locvec0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC1" */ +/* local vector for acc=1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_locvec1_u { + mmr_t sh_md_dqlp_mmr_dir_locvec1_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec1_s; +} sh_md_dqlp_mmr_dir_locvec1_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_locvec1_u { + mmr_t sh_md_dqlp_mmr_dir_locvec1_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec1_s; +} sh_md_dqlp_mmr_dir_locvec1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC2" */ +/* local vector for acc=2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_locvec2_u { + mmr_t sh_md_dqlp_mmr_dir_locvec2_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec2_s; +} sh_md_dqlp_mmr_dir_locvec2_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_locvec2_u { + mmr_t sh_md_dqlp_mmr_dir_locvec2_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec2_s; +} sh_md_dqlp_mmr_dir_locvec2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC3" */ +/* local vector for acc=3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_locvec3_u { + mmr_t sh_md_dqlp_mmr_dir_locvec3_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec3_s; +} sh_md_dqlp_mmr_dir_locvec3_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_locvec3_u { + mmr_t sh_md_dqlp_mmr_dir_locvec3_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec3_s; +} sh_md_dqlp_mmr_dir_locvec3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC4" */ +/* local vector for acc=4 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_locvec4_u { + mmr_t sh_md_dqlp_mmr_dir_locvec4_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec4_s; +} sh_md_dqlp_mmr_dir_locvec4_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_locvec4_u { + mmr_t sh_md_dqlp_mmr_dir_locvec4_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec4_s; +} sh_md_dqlp_mmr_dir_locvec4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC5" */ +/* local vector for acc=5 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_locvec5_u { + mmr_t sh_md_dqlp_mmr_dir_locvec5_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec5_s; +} sh_md_dqlp_mmr_dir_locvec5_u_t; +#else +typedef union 
sh_md_dqlp_mmr_dir_locvec5_u { + mmr_t sh_md_dqlp_mmr_dir_locvec5_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec5_s; +} sh_md_dqlp_mmr_dir_locvec5_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC6" */ +/* local vector for acc=6 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_locvec6_u { + mmr_t sh_md_dqlp_mmr_dir_locvec6_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec6_s; +} sh_md_dqlp_mmr_dir_locvec6_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_locvec6_u { + mmr_t sh_md_dqlp_mmr_dir_locvec6_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec6_s; +} sh_md_dqlp_mmr_dir_locvec6_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC7" */ +/* local vector for acc=7 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_locvec7_u { + mmr_t sh_md_dqlp_mmr_dir_locvec7_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec7_s; +} sh_md_dqlp_mmr_dir_locvec7_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_locvec7_u { + mmr_t sh_md_dqlp_mmr_dir_locvec7_regval; + struct { + mmr_t vec : 64; + } sh_md_dqlp_mmr_dir_locvec7_s; +} sh_md_dqlp_mmr_dir_locvec7_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC0" */ +/* privilege vector for acc=0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_privec0_u { + mmr_t sh_md_dqlp_mmr_dir_privec0_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqlp_mmr_dir_privec0_s; +} sh_md_dqlp_mmr_dir_privec0_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_privec0_u { + mmr_t sh_md_dqlp_mmr_dir_privec0_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqlp_mmr_dir_privec0_s; +} sh_md_dqlp_mmr_dir_privec0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC1" */ +/* privilege vector for acc=1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_privec1_u { + mmr_t sh_md_dqlp_mmr_dir_privec1_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqlp_mmr_dir_privec1_s; +} sh_md_dqlp_mmr_dir_privec1_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_privec1_u { + mmr_t sh_md_dqlp_mmr_dir_privec1_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqlp_mmr_dir_privec1_s; +} sh_md_dqlp_mmr_dir_privec1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC2" */ +/* privilege vector for acc=2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_privec2_u { + mmr_t sh_md_dqlp_mmr_dir_privec2_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqlp_mmr_dir_privec2_s; +} sh_md_dqlp_mmr_dir_privec2_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_privec2_u { + mmr_t sh_md_dqlp_mmr_dir_privec2_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out 
: 14; + mmr_t in : 14; + } sh_md_dqlp_mmr_dir_privec2_s; +} sh_md_dqlp_mmr_dir_privec2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC3" */ +/* privilege vector for acc=3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_privec3_u { + mmr_t sh_md_dqlp_mmr_dir_privec3_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqlp_mmr_dir_privec3_s; +} sh_md_dqlp_mmr_dir_privec3_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_privec3_u { + mmr_t sh_md_dqlp_mmr_dir_privec3_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqlp_mmr_dir_privec3_s; +} sh_md_dqlp_mmr_dir_privec3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC4" */ +/* privilege vector for acc=4 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_privec4_u { + mmr_t sh_md_dqlp_mmr_dir_privec4_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqlp_mmr_dir_privec4_s; +} sh_md_dqlp_mmr_dir_privec4_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_privec4_u { + mmr_t sh_md_dqlp_mmr_dir_privec4_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqlp_mmr_dir_privec4_s; +} sh_md_dqlp_mmr_dir_privec4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC5" */ +/* privilege vector for acc=5 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_privec5_u { + mmr_t sh_md_dqlp_mmr_dir_privec5_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqlp_mmr_dir_privec5_s; +} sh_md_dqlp_mmr_dir_privec5_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_privec5_u { + mmr_t sh_md_dqlp_mmr_dir_privec5_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqlp_mmr_dir_privec5_s; +} sh_md_dqlp_mmr_dir_privec5_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC6" */ +/* privilege vector for acc=6 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_privec6_u { + mmr_t sh_md_dqlp_mmr_dir_privec6_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqlp_mmr_dir_privec6_s; +} sh_md_dqlp_mmr_dir_privec6_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_privec6_u { + mmr_t sh_md_dqlp_mmr_dir_privec6_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqlp_mmr_dir_privec6_s; +} sh_md_dqlp_mmr_dir_privec6_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC7" */ +/* privilege vector for acc=7 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_privec7_u { + mmr_t sh_md_dqlp_mmr_dir_privec7_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqlp_mmr_dir_privec7_s; +} sh_md_dqlp_mmr_dir_privec7_u_t; +#else +typedef union 
sh_md_dqlp_mmr_dir_privec7_u { + mmr_t sh_md_dqlp_mmr_dir_privec7_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqlp_mmr_dir_privec7_s; +} sh_md_dqlp_mmr_dir_privec7_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_TIMER" */ +/* MD SXRO timer */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_timer_u { + mmr_t sh_md_dqlp_mmr_dir_timer_regval; + struct { + mmr_t timer_div : 12; + mmr_t timer_en : 1; + mmr_t timer_cur : 9; + mmr_t reserved_0 : 42; + } sh_md_dqlp_mmr_dir_timer_s; +} sh_md_dqlp_mmr_dir_timer_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_timer_u { + mmr_t sh_md_dqlp_mmr_dir_timer_regval; + struct { + mmr_t reserved_0 : 42; + mmr_t timer_cur : 9; + mmr_t timer_en : 1; + mmr_t timer_div : 12; + } sh_md_dqlp_mmr_dir_timer_s; +} sh_md_dqlp_mmr_dir_timer_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY" */ +/* directory pio write data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_piowd_dir_entry_u { + mmr_t sh_md_dqlp_mmr_piowd_dir_entry_regval; + struct { + mmr_t dira : 26; + mmr_t dirb : 26; + mmr_t pri : 3; + mmr_t acc : 3; + mmr_t reserved_0 : 6; + } sh_md_dqlp_mmr_piowd_dir_entry_s; +} sh_md_dqlp_mmr_piowd_dir_entry_u_t; +#else +typedef union sh_md_dqlp_mmr_piowd_dir_entry_u { + mmr_t sh_md_dqlp_mmr_piowd_dir_entry_regval; + struct { + mmr_t reserved_0 : 6; + mmr_t acc : 3; + mmr_t pri : 3; + mmr_t dirb : 26; + mmr_t dira : 26; + } sh_md_dqlp_mmr_piowd_dir_entry_s; +} sh_md_dqlp_mmr_piowd_dir_entry_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_PIOWD_DIR_ECC" */ +/* directory ecc register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_piowd_dir_ecc_u { + mmr_t sh_md_dqlp_mmr_piowd_dir_ecc_regval; + struct { + mmr_t ecca : 7; + mmr_t eccb : 7; + mmr_t reserved_0 : 50; + } sh_md_dqlp_mmr_piowd_dir_ecc_s; +} sh_md_dqlp_mmr_piowd_dir_ecc_u_t; +#else +typedef union sh_md_dqlp_mmr_piowd_dir_ecc_u { + mmr_t sh_md_dqlp_mmr_piowd_dir_ecc_regval; + struct { + mmr_t reserved_0 : 50; + mmr_t eccb : 7; + mmr_t ecca : 7; + } sh_md_dqlp_mmr_piowd_dir_ecc_s; +} sh_md_dqlp_mmr_piowd_dir_ecc_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY" */ +/* x directory pio read data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xpiord_xdir_entry_u { + mmr_t sh_md_dqlp_mmr_xpiord_xdir_entry_regval; + struct { + mmr_t dira : 26; + mmr_t dirb : 26; + mmr_t pri : 3; + mmr_t acc : 3; + mmr_t cor : 1; + mmr_t unc : 1; + mmr_t reserved_0 : 4; + } sh_md_dqlp_mmr_xpiord_xdir_entry_s; +} sh_md_dqlp_mmr_xpiord_xdir_entry_u_t; +#else +typedef union sh_md_dqlp_mmr_xpiord_xdir_entry_u { + mmr_t sh_md_dqlp_mmr_xpiord_xdir_entry_regval; + struct { + mmr_t reserved_0 : 4; + mmr_t unc : 1; + mmr_t cor : 1; + mmr_t acc : 3; + mmr_t pri : 3; + mmr_t dirb : 26; + mmr_t dira : 26; + } sh_md_dqlp_mmr_xpiord_xdir_entry_s; +} sh_md_dqlp_mmr_xpiord_xdir_entry_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XPIORD_XDIR_ECC" */ +/* x directory ecc */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xpiord_xdir_ecc_u { + mmr_t sh_md_dqlp_mmr_xpiord_xdir_ecc_regval; + struct { + mmr_t ecca : 7; + mmr_t eccb : 7; + mmr_t reserved_0 : 50; + } sh_md_dqlp_mmr_xpiord_xdir_ecc_s; +} sh_md_dqlp_mmr_xpiord_xdir_ecc_u_t; +#else +typedef union sh_md_dqlp_mmr_xpiord_xdir_ecc_u { + mmr_t sh_md_dqlp_mmr_xpiord_xdir_ecc_regval; + struct { + mmr_t reserved_0 : 50; + mmr_t eccb : 7; + mmr_t ecca : 7; + } sh_md_dqlp_mmr_xpiord_xdir_ecc_s; +} sh_md_dqlp_mmr_xpiord_xdir_ecc_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY" */ +/* y directory pio read data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_ypiord_ydir_entry_u { + mmr_t sh_md_dqlp_mmr_ypiord_ydir_entry_regval; + struct { + mmr_t dira : 26; + mmr_t dirb : 26; + mmr_t pri : 3; + mmr_t acc : 3; + mmr_t cor : 1; + mmr_t unc : 1; + mmr_t reserved_0 : 4; + } sh_md_dqlp_mmr_ypiord_ydir_entry_s; +} sh_md_dqlp_mmr_ypiord_ydir_entry_u_t; +#else +typedef union sh_md_dqlp_mmr_ypiord_ydir_entry_u { + mmr_t sh_md_dqlp_mmr_ypiord_ydir_entry_regval; + struct { + mmr_t reserved_0 : 4; + mmr_t unc : 1; + mmr_t cor : 1; + mmr_t acc : 3; + mmr_t pri : 3; + mmr_t dirb : 26; + mmr_t dira : 26; + } sh_md_dqlp_mmr_ypiord_ydir_entry_s; +} sh_md_dqlp_mmr_ypiord_ydir_entry_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YPIORD_YDIR_ECC" */ +/* y directory ecc */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_ypiord_ydir_ecc_u { + mmr_t sh_md_dqlp_mmr_ypiord_ydir_ecc_regval; + struct { + mmr_t ecca : 7; + mmr_t eccb : 7; + mmr_t reserved_0 : 50; + } sh_md_dqlp_mmr_ypiord_ydir_ecc_s; +} sh_md_dqlp_mmr_ypiord_ydir_ecc_u_t; +#else +typedef union sh_md_dqlp_mmr_ypiord_ydir_ecc_u { + mmr_t sh_md_dqlp_mmr_ypiord_ydir_ecc_regval; + struct { + mmr_t reserved_0 : 50; + mmr_t eccb : 7; + mmr_t ecca : 7; + } sh_md_dqlp_mmr_ypiord_ydir_ecc_s; +} sh_md_dqlp_mmr_ypiord_ydir_ecc_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XCERR1" */ +/* correctable dir ecc group 1 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xcerr1_u { + mmr_t sh_md_dqlp_mmr_xcerr1_regval; + struct { + mmr_t grp1 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 25; + } sh_md_dqlp_mmr_xcerr1_s; +} sh_md_dqlp_mmr_xcerr1_u_t; +#else +typedef union sh_md_dqlp_mmr_xcerr1_u { + mmr_t sh_md_dqlp_mmr_xcerr1_regval; + struct { + mmr_t reserved_0 : 25; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp1 : 36; + } sh_md_dqlp_mmr_xcerr1_s; +} sh_md_dqlp_mmr_xcerr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XCERR2" */ +/* correctable dir ecc group 2 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xcerr2_u { + mmr_t sh_md_dqlp_mmr_xcerr2_regval; + 
struct { + mmr_t grp2 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 26; + } sh_md_dqlp_mmr_xcerr2_s; +} sh_md_dqlp_mmr_xcerr2_u_t; +#else +typedef union sh_md_dqlp_mmr_xcerr2_u { + mmr_t sh_md_dqlp_mmr_xcerr2_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp2 : 36; + } sh_md_dqlp_mmr_xcerr2_s; +} sh_md_dqlp_mmr_xcerr2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XUERR1" */ +/* uncorrectable dir ecc group 1 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xuerr1_u { + mmr_t sh_md_dqlp_mmr_xuerr1_regval; + struct { + mmr_t grp1 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 25; + } sh_md_dqlp_mmr_xuerr1_s; +} sh_md_dqlp_mmr_xuerr1_u_t; +#else +typedef union sh_md_dqlp_mmr_xuerr1_u { + mmr_t sh_md_dqlp_mmr_xuerr1_regval; + struct { + mmr_t reserved_0 : 25; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp1 : 36; + } sh_md_dqlp_mmr_xuerr1_s; +} sh_md_dqlp_mmr_xuerr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XUERR2" */ +/* uncorrectable dir ecc group 2 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xuerr2_u { + mmr_t sh_md_dqlp_mmr_xuerr2_regval; + struct { + mmr_t grp2 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 26; + } sh_md_dqlp_mmr_xuerr2_s; +} sh_md_dqlp_mmr_xuerr2_u_t; +#else +typedef union sh_md_dqlp_mmr_xuerr2_u { + mmr_t sh_md_dqlp_mmr_xuerr2_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp2 : 36; + } sh_md_dqlp_mmr_xuerr2_s; +} sh_md_dqlp_mmr_xuerr2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XPERR" */ +/* protocol error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xperr_u { + mmr_t sh_md_dqlp_mmr_xperr_regval; + struct { + mmr_t dir : 26; + mmr_t cmd : 8; + mmr_t src : 14; + mmr_t prige : 1; + mmr_t priv : 1; + mmr_t cor : 1; + mmr_t unc : 1; + mmr_t mybit : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 1; + } sh_md_dqlp_mmr_xperr_s; +} sh_md_dqlp_mmr_xperr_u_t; +#else +typedef union sh_md_dqlp_mmr_xperr_u { + mmr_t sh_md_dqlp_mmr_xperr_regval; + struct { + mmr_t reserved_0 : 1; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t mybit : 8; + mmr_t unc : 1; + mmr_t cor : 1; + mmr_t priv : 1; + mmr_t prige : 1; + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t dir : 26; + } sh_md_dqlp_mmr_xperr_s; +} sh_md_dqlp_mmr_xperr_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YCERR1" */ +/* correctable dir ecc group 1 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_ycerr1_u { + mmr_t sh_md_dqlp_mmr_ycerr1_regval; + struct { + mmr_t grp1 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 25; + } sh_md_dqlp_mmr_ycerr1_s; +} sh_md_dqlp_mmr_ycerr1_u_t; +#else +typedef union sh_md_dqlp_mmr_ycerr1_u { + mmr_t sh_md_dqlp_mmr_ycerr1_regval; + struct { + mmr_t reserved_0 : 25; + mmr_t arm : 1; + 
mmr_t more : 1; + mmr_t val : 1; + mmr_t grp1 : 36; + } sh_md_dqlp_mmr_ycerr1_s; +} sh_md_dqlp_mmr_ycerr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YCERR2" */ +/* correctable dir ecc group 2 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_ycerr2_u { + mmr_t sh_md_dqlp_mmr_ycerr2_regval; + struct { + mmr_t grp2 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 26; + } sh_md_dqlp_mmr_ycerr2_s; +} sh_md_dqlp_mmr_ycerr2_u_t; +#else +typedef union sh_md_dqlp_mmr_ycerr2_u { + mmr_t sh_md_dqlp_mmr_ycerr2_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp2 : 36; + } sh_md_dqlp_mmr_ycerr2_s; +} sh_md_dqlp_mmr_ycerr2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YUERR1" */ +/* uncorrectable dir ecc group 1 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_yuerr1_u { + mmr_t sh_md_dqlp_mmr_yuerr1_regval; + struct { + mmr_t grp1 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 25; + } sh_md_dqlp_mmr_yuerr1_s; +} sh_md_dqlp_mmr_yuerr1_u_t; +#else +typedef union sh_md_dqlp_mmr_yuerr1_u { + mmr_t sh_md_dqlp_mmr_yuerr1_regval; + struct { + mmr_t reserved_0 : 25; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp1 : 36; + } sh_md_dqlp_mmr_yuerr1_s; +} sh_md_dqlp_mmr_yuerr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YUERR2" */ +/* uncorrectable dir ecc group 2 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_yuerr2_u { + mmr_t sh_md_dqlp_mmr_yuerr2_regval; + struct { + mmr_t grp2 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 26; + } sh_md_dqlp_mmr_yuerr2_s; +} sh_md_dqlp_mmr_yuerr2_u_t; +#else +typedef union sh_md_dqlp_mmr_yuerr2_u { + mmr_t sh_md_dqlp_mmr_yuerr2_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp2 : 36; + } sh_md_dqlp_mmr_yuerr2_s; +} sh_md_dqlp_mmr_yuerr2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YPERR" */ +/* protocol error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_yperr_u { + mmr_t sh_md_dqlp_mmr_yperr_regval; + struct { + mmr_t dir : 26; + mmr_t cmd : 8; + mmr_t src : 14; + mmr_t prige : 1; + mmr_t priv : 1; + mmr_t cor : 1; + mmr_t unc : 1; + mmr_t mybit : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 1; + } sh_md_dqlp_mmr_yperr_s; +} sh_md_dqlp_mmr_yperr_u_t; +#else +typedef union sh_md_dqlp_mmr_yperr_u { + mmr_t sh_md_dqlp_mmr_yperr_regval; + struct { + mmr_t reserved_0 : 1; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t mybit : 8; + mmr_t unc : 1; + mmr_t cor : 1; + mmr_t priv : 1; + mmr_t prige : 1; + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t dir : 26; + } sh_md_dqlp_mmr_yperr_s; +} sh_md_dqlp_mmr_yperr_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_CMDTRIG" */ +/* cmd triggers */ +/* 
==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_cmdtrig_u { + mmr_t sh_md_dqlp_mmr_dir_cmdtrig_regval; + struct { + mmr_t cmd0 : 8; + mmr_t cmd1 : 8; + mmr_t cmd2 : 8; + mmr_t cmd3 : 8; + mmr_t reserved_0 : 32; + } sh_md_dqlp_mmr_dir_cmdtrig_s; +} sh_md_dqlp_mmr_dir_cmdtrig_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_cmdtrig_u { + mmr_t sh_md_dqlp_mmr_dir_cmdtrig_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t cmd3 : 8; + mmr_t cmd2 : 8; + mmr_t cmd1 : 8; + mmr_t cmd0 : 8; + } sh_md_dqlp_mmr_dir_cmdtrig_s; +} sh_md_dqlp_mmr_dir_cmdtrig_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_TBLTRIG" */ +/* dir table trigger */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_tbltrig_u { + mmr_t sh_md_dqlp_mmr_dir_tbltrig_regval; + struct { + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t acc : 2; + mmr_t prige : 1; + mmr_t dirst : 9; + mmr_t mybit : 8; + mmr_t reserved_0 : 22; + } sh_md_dqlp_mmr_dir_tbltrig_s; +} sh_md_dqlp_mmr_dir_tbltrig_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_tbltrig_u { + mmr_t sh_md_dqlp_mmr_dir_tbltrig_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t mybit : 8; + mmr_t dirst : 9; + mmr_t prige : 1; + mmr_t acc : 2; + mmr_t cmd : 8; + mmr_t src : 14; + } sh_md_dqlp_mmr_dir_tbltrig_s; +} sh_md_dqlp_mmr_dir_tbltrig_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_DIR_TBLMASK" */ +/* dir table trigger mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_dir_tblmask_u { + mmr_t sh_md_dqlp_mmr_dir_tblmask_regval; + struct { + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t acc : 2; + mmr_t prige : 1; + mmr_t dirst : 9; + mmr_t mybit : 8; + mmr_t reserved_0 : 22; + } sh_md_dqlp_mmr_dir_tblmask_s; +} sh_md_dqlp_mmr_dir_tblmask_u_t; +#else +typedef union sh_md_dqlp_mmr_dir_tblmask_u { + mmr_t sh_md_dqlp_mmr_dir_tblmask_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t mybit : 8; + mmr_t dirst : 9; + mmr_t prige : 1; + mmr_t acc : 2; + mmr_t cmd : 8; + mmr_t src : 14; + } sh_md_dqlp_mmr_dir_tblmask_s; +} sh_md_dqlp_mmr_dir_tblmask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xbist_h_u { + mmr_t sh_md_dqlp_mmr_xbist_h_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t arm : 1; + mmr_t reserved_1 : 21; + } sh_md_dqlp_mmr_xbist_h_s; +} sh_md_dqlp_mmr_xbist_h_u_t; +#else +typedef union sh_md_dqlp_mmr_xbist_h_u { + mmr_t sh_md_dqlp_mmr_xbist_h_regval; + struct { + mmr_t reserved_1 : 21; + mmr_t arm : 1; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqlp_mmr_xbist_h_s; +} sh_md_dqlp_mmr_xbist_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xbist_l_u { + mmr_t sh_md_dqlp_mmr_xbist_l_regval; + struct { + 
mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t reserved_1 : 22; + } sh_md_dqlp_mmr_xbist_l_s; +} sh_md_dqlp_mmr_xbist_l_u_t; +#else +typedef union sh_md_dqlp_mmr_xbist_l_u { + mmr_t sh_md_dqlp_mmr_xbist_l_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqlp_mmr_xbist_l_s; +} sh_md_dqlp_mmr_xbist_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xbist_err_h_u { + mmr_t sh_md_dqlp_mmr_xbist_err_h_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_1 : 22; + } sh_md_dqlp_mmr_xbist_err_h_s; +} sh_md_dqlp_mmr_xbist_err_h_u_t; +#else +typedef union sh_md_dqlp_mmr_xbist_err_h_u { + mmr_t sh_md_dqlp_mmr_xbist_err_h_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqlp_mmr_xbist_err_h_s; +} sh_md_dqlp_mmr_xbist_err_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_XBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_xbist_err_l_u { + mmr_t sh_md_dqlp_mmr_xbist_err_l_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_1 : 22; + } sh_md_dqlp_mmr_xbist_err_l_s; +} sh_md_dqlp_mmr_xbist_err_l_u_t; +#else +typedef union sh_md_dqlp_mmr_xbist_err_l_u { + mmr_t sh_md_dqlp_mmr_xbist_err_l_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqlp_mmr_xbist_err_l_s; +} sh_md_dqlp_mmr_xbist_err_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_ybist_h_u { + mmr_t sh_md_dqlp_mmr_ybist_h_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t arm : 1; + mmr_t reserved_1 : 21; + } sh_md_dqlp_mmr_ybist_h_s; +} sh_md_dqlp_mmr_ybist_h_u_t; +#else +typedef union sh_md_dqlp_mmr_ybist_h_u { + mmr_t sh_md_dqlp_mmr_ybist_h_regval; + struct { + mmr_t reserved_1 : 21; + mmr_t arm : 1; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqlp_mmr_ybist_h_s; +} sh_md_dqlp_mmr_ybist_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_ybist_l_u { + mmr_t sh_md_dqlp_mmr_ybist_l_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t reserved_1 : 22; + } sh_md_dqlp_mmr_ybist_l_s; +} sh_md_dqlp_mmr_ybist_l_u_t; +#else +typedef union sh_md_dqlp_mmr_ybist_l_u { + mmr_t sh_md_dqlp_mmr_ybist_l_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t rot : 1; + mmr_t inv : 1; + 
mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqlp_mmr_ybist_l_s; +} sh_md_dqlp_mmr_ybist_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_ybist_err_h_u { + mmr_t sh_md_dqlp_mmr_ybist_err_h_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_1 : 22; + } sh_md_dqlp_mmr_ybist_err_h_s; +} sh_md_dqlp_mmr_ybist_err_h_u_t; +#else +typedef union sh_md_dqlp_mmr_ybist_err_h_u { + mmr_t sh_md_dqlp_mmr_ybist_err_h_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqlp_mmr_ybist_err_h_s; +} sh_md_dqlp_mmr_ybist_err_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLP_MMR_YBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqlp_mmr_ybist_err_l_u { + mmr_t sh_md_dqlp_mmr_ybist_err_l_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_1 : 22; + } sh_md_dqlp_mmr_ybist_err_l_s; +} sh_md_dqlp_mmr_ybist_err_l_u_t; +#else +typedef union sh_md_dqlp_mmr_ybist_err_l_u { + mmr_t sh_md_dqlp_mmr_ybist_err_l_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqlp_mmr_ybist_err_l_s; +} sh_md_dqlp_mmr_ybist_err_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_xbist_h_u { + mmr_t sh_md_dqls_mmr_xbist_h_regval; + struct { + mmr_t pat : 40; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 21; + } sh_md_dqls_mmr_xbist_h_s; +} sh_md_dqls_mmr_xbist_h_u_t; +#else +typedef union sh_md_dqls_mmr_xbist_h_u { + mmr_t sh_md_dqls_mmr_xbist_h_regval; + struct { + mmr_t reserved_0 : 21; + mmr_t arm : 1; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t pat : 40; + } sh_md_dqls_mmr_xbist_h_s; +} sh_md_dqls_mmr_xbist_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_xbist_l_u { + mmr_t sh_md_dqls_mmr_xbist_l_regval; + struct { + mmr_t pat : 40; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t reserved_0 : 22; + } sh_md_dqls_mmr_xbist_l_s; +} sh_md_dqls_mmr_xbist_l_u_t; +#else +typedef union sh_md_dqls_mmr_xbist_l_u { + mmr_t sh_md_dqls_mmr_xbist_l_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t pat : 40; + } sh_md_dqls_mmr_xbist_l_s; +} sh_md_dqls_mmr_xbist_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_xbist_err_h_u { + 
mmr_t sh_md_dqls_mmr_xbist_err_h_regval; + struct { + mmr_t pat : 40; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 22; + } sh_md_dqls_mmr_xbist_err_h_s; +} sh_md_dqls_mmr_xbist_err_h_u_t; +#else +typedef union sh_md_dqls_mmr_xbist_err_h_u { + mmr_t sh_md_dqls_mmr_xbist_err_h_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t pat : 40; + } sh_md_dqls_mmr_xbist_err_h_s; +} sh_md_dqls_mmr_xbist_err_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_xbist_err_l_u { + mmr_t sh_md_dqls_mmr_xbist_err_l_regval; + struct { + mmr_t pat : 40; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 22; + } sh_md_dqls_mmr_xbist_err_l_s; +} sh_md_dqls_mmr_xbist_err_l_u_t; +#else +typedef union sh_md_dqls_mmr_xbist_err_l_u { + mmr_t sh_md_dqls_mmr_xbist_err_l_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t pat : 40; + } sh_md_dqls_mmr_xbist_err_l_s; +} sh_md_dqls_mmr_xbist_err_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_YBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_ybist_h_u { + mmr_t sh_md_dqls_mmr_ybist_h_regval; + struct { + mmr_t pat : 40; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 21; + } sh_md_dqls_mmr_ybist_h_s; +} sh_md_dqls_mmr_ybist_h_u_t; +#else +typedef union sh_md_dqls_mmr_ybist_h_u { + mmr_t sh_md_dqls_mmr_ybist_h_regval; + struct { + mmr_t reserved_0 : 21; + mmr_t arm : 1; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t pat : 40; + } sh_md_dqls_mmr_ybist_h_s; +} sh_md_dqls_mmr_ybist_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_YBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_ybist_l_u { + mmr_t sh_md_dqls_mmr_ybist_l_regval; + struct { + mmr_t pat : 40; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t reserved_0 : 22; + } sh_md_dqls_mmr_ybist_l_s; +} sh_md_dqls_mmr_ybist_l_u_t; +#else +typedef union sh_md_dqls_mmr_ybist_l_u { + mmr_t sh_md_dqls_mmr_ybist_l_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t pat : 40; + } sh_md_dqls_mmr_ybist_l_s; +} sh_md_dqls_mmr_ybist_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_YBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_ybist_err_h_u { + mmr_t sh_md_dqls_mmr_ybist_err_h_regval; + struct { + mmr_t pat : 40; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 22; + } sh_md_dqls_mmr_ybist_err_h_s; +} sh_md_dqls_mmr_ybist_err_h_u_t; +#else +typedef union sh_md_dqls_mmr_ybist_err_h_u { + mmr_t sh_md_dqls_mmr_ybist_err_h_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t pat : 40; + } sh_md_dqls_mmr_ybist_err_h_s; +} sh_md_dqls_mmr_ybist_err_h_u_t; +#endif + +/* 
==================================================================== */ +/* Register "SH_MD_DQLS_MMR_YBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_ybist_err_l_u { + mmr_t sh_md_dqls_mmr_ybist_err_l_regval; + struct { + mmr_t pat : 40; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 22; + } sh_md_dqls_mmr_ybist_err_l_s; +} sh_md_dqls_mmr_ybist_err_l_u_t; +#else +typedef union sh_md_dqls_mmr_ybist_err_l_u { + mmr_t sh_md_dqls_mmr_ybist_err_l_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t pat : 40; + } sh_md_dqls_mmr_ybist_err_l_s; +} sh_md_dqls_mmr_ybist_err_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_JNR_DEBUG" */ +/* joiner/fct debug configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_jnr_debug_u { + mmr_t sh_md_dqls_mmr_jnr_debug_regval; + struct { + mmr_t px : 1; + mmr_t rw : 1; + mmr_t reserved_0 : 62; + } sh_md_dqls_mmr_jnr_debug_s; +} sh_md_dqls_mmr_jnr_debug_u_t; +#else +typedef union sh_md_dqls_mmr_jnr_debug_u { + mmr_t sh_md_dqls_mmr_jnr_debug_regval; + struct { + mmr_t reserved_0 : 62; + mmr_t rw : 1; + mmr_t px : 1; + } sh_md_dqls_mmr_jnr_debug_s; +} sh_md_dqls_mmr_jnr_debug_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQLS_MMR_XAMOPW_ERR" */ +/* amo/partial rmw ecc error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqls_mmr_xamopw_err_u { + mmr_t sh_md_dqls_mmr_xamopw_err_regval; + struct { + mmr_t ssyn : 8; + mmr_t scor : 1; + mmr_t sunc : 1; + mmr_t reserved_0 : 6; + mmr_t rsyn : 8; + mmr_t rcor : 1; + mmr_t runc : 1; + mmr_t reserved_1 : 6; + mmr_t arm : 1; + mmr_t reserved_2 : 31; + } sh_md_dqls_mmr_xamopw_err_s; +} sh_md_dqls_mmr_xamopw_err_u_t; +#else +typedef union sh_md_dqls_mmr_xamopw_err_u { + mmr_t sh_md_dqls_mmr_xamopw_err_regval; + struct { + mmr_t reserved_2 : 31; + mmr_t arm : 1; + mmr_t reserved_1 : 6; + mmr_t runc : 1; + mmr_t rcor : 1; + mmr_t rsyn : 8; + mmr_t reserved_0 : 6; + mmr_t sunc : 1; + mmr_t scor : 1; + mmr_t ssyn : 8; + } sh_md_dqls_mmr_xamopw_err_s; +} sh_md_dqls_mmr_xamopw_err_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_CONFIG" */ +/* DQ directory config register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_config_u { + mmr_t sh_md_dqrp_mmr_dir_config_regval; + struct { + mmr_t sys_size : 3; + mmr_t en_direcc : 1; + mmr_t en_dirpois : 1; + mmr_t reserved_0 : 59; + } sh_md_dqrp_mmr_dir_config_s; +} sh_md_dqrp_mmr_dir_config_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_config_u { + mmr_t sh_md_dqrp_mmr_dir_config_regval; + struct { + mmr_t reserved_0 : 59; + mmr_t en_dirpois : 1; + mmr_t en_direcc : 1; + mmr_t sys_size : 3; + } sh_md_dqrp_mmr_dir_config_s; +} sh_md_dqrp_mmr_dir_config_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC0" */ +/* node [63:0] presence bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union 
sh_md_dqrp_mmr_dir_presvec0_u { + mmr_t sh_md_dqrp_mmr_dir_presvec0_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_presvec0_s; +} sh_md_dqrp_mmr_dir_presvec0_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_presvec0_u { + mmr_t sh_md_dqrp_mmr_dir_presvec0_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_presvec0_s; +} sh_md_dqrp_mmr_dir_presvec0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC1" */ +/* node [127:64] presence bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_presvec1_u { + mmr_t sh_md_dqrp_mmr_dir_presvec1_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_presvec1_s; +} sh_md_dqrp_mmr_dir_presvec1_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_presvec1_u { + mmr_t sh_md_dqrp_mmr_dir_presvec1_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_presvec1_s; +} sh_md_dqrp_mmr_dir_presvec1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC2" */ +/* node [191:128] presence bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_presvec2_u { + mmr_t sh_md_dqrp_mmr_dir_presvec2_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_presvec2_s; +} sh_md_dqrp_mmr_dir_presvec2_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_presvec2_u { + mmr_t sh_md_dqrp_mmr_dir_presvec2_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_presvec2_s; +} sh_md_dqrp_mmr_dir_presvec2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC3" */ +/* node [255:192] presence bits */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_presvec3_u { + mmr_t sh_md_dqrp_mmr_dir_presvec3_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_presvec3_s; +} sh_md_dqrp_mmr_dir_presvec3_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_presvec3_u { + mmr_t sh_md_dqrp_mmr_dir_presvec3_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_presvec3_s; +} sh_md_dqrp_mmr_dir_presvec3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC0" */ +/* local vector for acc=0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_locvec0_u { + mmr_t sh_md_dqrp_mmr_dir_locvec0_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec0_s; +} sh_md_dqrp_mmr_dir_locvec0_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_locvec0_u { + mmr_t sh_md_dqrp_mmr_dir_locvec0_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec0_s; +} sh_md_dqrp_mmr_dir_locvec0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC1" */ +/* local vector for acc=1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_locvec1_u { + mmr_t sh_md_dqrp_mmr_dir_locvec1_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec1_s; +} sh_md_dqrp_mmr_dir_locvec1_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_locvec1_u { + mmr_t sh_md_dqrp_mmr_dir_locvec1_regval; + struct { + mmr_t vec : 
64; + } sh_md_dqrp_mmr_dir_locvec1_s; +} sh_md_dqrp_mmr_dir_locvec1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC2" */ +/* local vector for acc=2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_locvec2_u { + mmr_t sh_md_dqrp_mmr_dir_locvec2_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec2_s; +} sh_md_dqrp_mmr_dir_locvec2_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_locvec2_u { + mmr_t sh_md_dqrp_mmr_dir_locvec2_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec2_s; +} sh_md_dqrp_mmr_dir_locvec2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC3" */ +/* local vector for acc=3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_locvec3_u { + mmr_t sh_md_dqrp_mmr_dir_locvec3_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec3_s; +} sh_md_dqrp_mmr_dir_locvec3_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_locvec3_u { + mmr_t sh_md_dqrp_mmr_dir_locvec3_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec3_s; +} sh_md_dqrp_mmr_dir_locvec3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC4" */ +/* local vector for acc=4 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_locvec4_u { + mmr_t sh_md_dqrp_mmr_dir_locvec4_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec4_s; +} sh_md_dqrp_mmr_dir_locvec4_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_locvec4_u { + mmr_t sh_md_dqrp_mmr_dir_locvec4_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec4_s; +} sh_md_dqrp_mmr_dir_locvec4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC5" */ +/* local vector for acc=5 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_locvec5_u { + mmr_t sh_md_dqrp_mmr_dir_locvec5_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec5_s; +} sh_md_dqrp_mmr_dir_locvec5_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_locvec5_u { + mmr_t sh_md_dqrp_mmr_dir_locvec5_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec5_s; +} sh_md_dqrp_mmr_dir_locvec5_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC6" */ +/* local vector for acc=6 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_locvec6_u { + mmr_t sh_md_dqrp_mmr_dir_locvec6_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec6_s; +} sh_md_dqrp_mmr_dir_locvec6_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_locvec6_u { + mmr_t sh_md_dqrp_mmr_dir_locvec6_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec6_s; +} sh_md_dqrp_mmr_dir_locvec6_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC7" */ +/* local vector for acc=7 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef 
union sh_md_dqrp_mmr_dir_locvec7_u { + mmr_t sh_md_dqrp_mmr_dir_locvec7_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec7_s; +} sh_md_dqrp_mmr_dir_locvec7_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_locvec7_u { + mmr_t sh_md_dqrp_mmr_dir_locvec7_regval; + struct { + mmr_t vec : 64; + } sh_md_dqrp_mmr_dir_locvec7_s; +} sh_md_dqrp_mmr_dir_locvec7_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC0" */ +/* privilege vector for acc=0 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_privec0_u { + mmr_t sh_md_dqrp_mmr_dir_privec0_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqrp_mmr_dir_privec0_s; +} sh_md_dqrp_mmr_dir_privec0_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_privec0_u { + mmr_t sh_md_dqrp_mmr_dir_privec0_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqrp_mmr_dir_privec0_s; +} sh_md_dqrp_mmr_dir_privec0_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC1" */ +/* privilege vector for acc=1 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_privec1_u { + mmr_t sh_md_dqrp_mmr_dir_privec1_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqrp_mmr_dir_privec1_s; +} sh_md_dqrp_mmr_dir_privec1_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_privec1_u { + mmr_t sh_md_dqrp_mmr_dir_privec1_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqrp_mmr_dir_privec1_s; +} sh_md_dqrp_mmr_dir_privec1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC2" */ +/* privilege vector for acc=2 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_privec2_u { + mmr_t sh_md_dqrp_mmr_dir_privec2_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqrp_mmr_dir_privec2_s; +} sh_md_dqrp_mmr_dir_privec2_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_privec2_u { + mmr_t sh_md_dqrp_mmr_dir_privec2_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqrp_mmr_dir_privec2_s; +} sh_md_dqrp_mmr_dir_privec2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC3" */ +/* privilege vector for acc=3 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_privec3_u { + mmr_t sh_md_dqrp_mmr_dir_privec3_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqrp_mmr_dir_privec3_s; +} sh_md_dqrp_mmr_dir_privec3_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_privec3_u { + mmr_t sh_md_dqrp_mmr_dir_privec3_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqrp_mmr_dir_privec3_s; +} sh_md_dqrp_mmr_dir_privec3_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC4" */ +/* privilege vector for acc=4 */ +/* ==================================================================== */ + +#ifdef 
LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_privec4_u { + mmr_t sh_md_dqrp_mmr_dir_privec4_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqrp_mmr_dir_privec4_s; +} sh_md_dqrp_mmr_dir_privec4_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_privec4_u { + mmr_t sh_md_dqrp_mmr_dir_privec4_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqrp_mmr_dir_privec4_s; +} sh_md_dqrp_mmr_dir_privec4_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC5" */ +/* privilege vector for acc=5 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_privec5_u { + mmr_t sh_md_dqrp_mmr_dir_privec5_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqrp_mmr_dir_privec5_s; +} sh_md_dqrp_mmr_dir_privec5_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_privec5_u { + mmr_t sh_md_dqrp_mmr_dir_privec5_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqrp_mmr_dir_privec5_s; +} sh_md_dqrp_mmr_dir_privec5_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC6" */ +/* privilege vector for acc=6 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_privec6_u { + mmr_t sh_md_dqrp_mmr_dir_privec6_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqrp_mmr_dir_privec6_s; +} sh_md_dqrp_mmr_dir_privec6_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_privec6_u { + mmr_t sh_md_dqrp_mmr_dir_privec6_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqrp_mmr_dir_privec6_s; +} sh_md_dqrp_mmr_dir_privec6_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC7" */ +/* privilege vector for acc=7 */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_privec7_u { + mmr_t sh_md_dqrp_mmr_dir_privec7_regval; + struct { + mmr_t in : 14; + mmr_t out : 14; + mmr_t reserved_0 : 36; + } sh_md_dqrp_mmr_dir_privec7_s; +} sh_md_dqrp_mmr_dir_privec7_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_privec7_u { + mmr_t sh_md_dqrp_mmr_dir_privec7_regval; + struct { + mmr_t reserved_0 : 36; + mmr_t out : 14; + mmr_t in : 14; + } sh_md_dqrp_mmr_dir_privec7_s; +} sh_md_dqrp_mmr_dir_privec7_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_TIMER" */ +/* MD SXRO timer */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_timer_u { + mmr_t sh_md_dqrp_mmr_dir_timer_regval; + struct { + mmr_t timer_div : 12; + mmr_t timer_en : 1; + mmr_t timer_cur : 9; + mmr_t reserved_0 : 42; + } sh_md_dqrp_mmr_dir_timer_s; +} sh_md_dqrp_mmr_dir_timer_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_timer_u { + mmr_t sh_md_dqrp_mmr_dir_timer_regval; + struct { + mmr_t reserved_0 : 42; + mmr_t timer_cur : 9; + mmr_t timer_en : 1; + mmr_t timer_div : 12; + } sh_md_dqrp_mmr_dir_timer_s; +} sh_md_dqrp_mmr_dir_timer_u_t; +#endif + +/* ==================================================================== */ +/* Register 
"SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY" */ +/* directory pio write data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_piowd_dir_entry_u { + mmr_t sh_md_dqrp_mmr_piowd_dir_entry_regval; + struct { + mmr_t dira : 26; + mmr_t dirb : 26; + mmr_t pri : 3; + mmr_t acc : 3; + mmr_t reserved_0 : 6; + } sh_md_dqrp_mmr_piowd_dir_entry_s; +} sh_md_dqrp_mmr_piowd_dir_entry_u_t; +#else +typedef union sh_md_dqrp_mmr_piowd_dir_entry_u { + mmr_t sh_md_dqrp_mmr_piowd_dir_entry_regval; + struct { + mmr_t reserved_0 : 6; + mmr_t acc : 3; + mmr_t pri : 3; + mmr_t dirb : 26; + mmr_t dira : 26; + } sh_md_dqrp_mmr_piowd_dir_entry_s; +} sh_md_dqrp_mmr_piowd_dir_entry_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_PIOWD_DIR_ECC" */ +/* directory ecc register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_piowd_dir_ecc_u { + mmr_t sh_md_dqrp_mmr_piowd_dir_ecc_regval; + struct { + mmr_t ecca : 7; + mmr_t eccb : 7; + mmr_t reserved_0 : 50; + } sh_md_dqrp_mmr_piowd_dir_ecc_s; +} sh_md_dqrp_mmr_piowd_dir_ecc_u_t; +#else +typedef union sh_md_dqrp_mmr_piowd_dir_ecc_u { + mmr_t sh_md_dqrp_mmr_piowd_dir_ecc_regval; + struct { + mmr_t reserved_0 : 50; + mmr_t eccb : 7; + mmr_t ecca : 7; + } sh_md_dqrp_mmr_piowd_dir_ecc_s; +} sh_md_dqrp_mmr_piowd_dir_ecc_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY" */ +/* x directory pio read data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xpiord_xdir_entry_u { + mmr_t sh_md_dqrp_mmr_xpiord_xdir_entry_regval; + struct { + mmr_t dira : 26; + mmr_t dirb : 26; + mmr_t pri : 3; + mmr_t acc : 3; + mmr_t cor : 1; + mmr_t unc : 1; + mmr_t reserved_0 : 4; + } sh_md_dqrp_mmr_xpiord_xdir_entry_s; +} sh_md_dqrp_mmr_xpiord_xdir_entry_u_t; +#else +typedef union sh_md_dqrp_mmr_xpiord_xdir_entry_u { + mmr_t sh_md_dqrp_mmr_xpiord_xdir_entry_regval; + struct { + mmr_t reserved_0 : 4; + mmr_t unc : 1; + mmr_t cor : 1; + mmr_t acc : 3; + mmr_t pri : 3; + mmr_t dirb : 26; + mmr_t dira : 26; + } sh_md_dqrp_mmr_xpiord_xdir_entry_s; +} sh_md_dqrp_mmr_xpiord_xdir_entry_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XPIORD_XDIR_ECC" */ +/* x directory ecc */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xpiord_xdir_ecc_u { + mmr_t sh_md_dqrp_mmr_xpiord_xdir_ecc_regval; + struct { + mmr_t ecca : 7; + mmr_t eccb : 7; + mmr_t reserved_0 : 50; + } sh_md_dqrp_mmr_xpiord_xdir_ecc_s; +} sh_md_dqrp_mmr_xpiord_xdir_ecc_u_t; +#else +typedef union sh_md_dqrp_mmr_xpiord_xdir_ecc_u { + mmr_t sh_md_dqrp_mmr_xpiord_xdir_ecc_regval; + struct { + mmr_t reserved_0 : 50; + mmr_t eccb : 7; + mmr_t ecca : 7; + } sh_md_dqrp_mmr_xpiord_xdir_ecc_s; +} sh_md_dqrp_mmr_xpiord_xdir_ecc_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY" */ +/* y directory pio read data */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_ypiord_ydir_entry_u { + mmr_t sh_md_dqrp_mmr_ypiord_ydir_entry_regval; + struct { + mmr_t 
dira : 26; + mmr_t dirb : 26; + mmr_t pri : 3; + mmr_t acc : 3; + mmr_t cor : 1; + mmr_t unc : 1; + mmr_t reserved_0 : 4; + } sh_md_dqrp_mmr_ypiord_ydir_entry_s; +} sh_md_dqrp_mmr_ypiord_ydir_entry_u_t; +#else +typedef union sh_md_dqrp_mmr_ypiord_ydir_entry_u { + mmr_t sh_md_dqrp_mmr_ypiord_ydir_entry_regval; + struct { + mmr_t reserved_0 : 4; + mmr_t unc : 1; + mmr_t cor : 1; + mmr_t acc : 3; + mmr_t pri : 3; + mmr_t dirb : 26; + mmr_t dira : 26; + } sh_md_dqrp_mmr_ypiord_ydir_entry_s; +} sh_md_dqrp_mmr_ypiord_ydir_entry_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YPIORD_YDIR_ECC" */ +/* y directory ecc */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_ypiord_ydir_ecc_u { + mmr_t sh_md_dqrp_mmr_ypiord_ydir_ecc_regval; + struct { + mmr_t ecca : 7; + mmr_t eccb : 7; + mmr_t reserved_0 : 50; + } sh_md_dqrp_mmr_ypiord_ydir_ecc_s; +} sh_md_dqrp_mmr_ypiord_ydir_ecc_u_t; +#else +typedef union sh_md_dqrp_mmr_ypiord_ydir_ecc_u { + mmr_t sh_md_dqrp_mmr_ypiord_ydir_ecc_regval; + struct { + mmr_t reserved_0 : 50; + mmr_t eccb : 7; + mmr_t ecca : 7; + } sh_md_dqrp_mmr_ypiord_ydir_ecc_s; +} sh_md_dqrp_mmr_ypiord_ydir_ecc_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XCERR1" */ +/* correctable dir ecc group 1 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xcerr1_u { + mmr_t sh_md_dqrp_mmr_xcerr1_regval; + struct { + mmr_t grp1 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 25; + } sh_md_dqrp_mmr_xcerr1_s; +} sh_md_dqrp_mmr_xcerr1_u_t; +#else +typedef union sh_md_dqrp_mmr_xcerr1_u { + mmr_t sh_md_dqrp_mmr_xcerr1_regval; + struct { + mmr_t reserved_0 : 25; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp1 : 36; + } sh_md_dqrp_mmr_xcerr1_s; +} sh_md_dqrp_mmr_xcerr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XCERR2" */ +/* correctable dir ecc group 2 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xcerr2_u { + mmr_t sh_md_dqrp_mmr_xcerr2_regval; + struct { + mmr_t grp2 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 26; + } sh_md_dqrp_mmr_xcerr2_s; +} sh_md_dqrp_mmr_xcerr2_u_t; +#else +typedef union sh_md_dqrp_mmr_xcerr2_u { + mmr_t sh_md_dqrp_mmr_xcerr2_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp2 : 36; + } sh_md_dqrp_mmr_xcerr2_s; +} sh_md_dqrp_mmr_xcerr2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XUERR1" */ +/* uncorrectable dir ecc group 1 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xuerr1_u { + mmr_t sh_md_dqrp_mmr_xuerr1_regval; + struct { + mmr_t grp1 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 25; + } sh_md_dqrp_mmr_xuerr1_s; +} sh_md_dqrp_mmr_xuerr1_u_t; +#else +typedef union sh_md_dqrp_mmr_xuerr1_u { + mmr_t sh_md_dqrp_mmr_xuerr1_regval; + struct { + mmr_t reserved_0 : 25; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp1 : 36; + } sh_md_dqrp_mmr_xuerr1_s; +} 
sh_md_dqrp_mmr_xuerr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XUERR2" */ +/* uncorrectable dir ecc group 2 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xuerr2_u { + mmr_t sh_md_dqrp_mmr_xuerr2_regval; + struct { + mmr_t grp2 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 26; + } sh_md_dqrp_mmr_xuerr2_s; +} sh_md_dqrp_mmr_xuerr2_u_t; +#else +typedef union sh_md_dqrp_mmr_xuerr2_u { + mmr_t sh_md_dqrp_mmr_xuerr2_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp2 : 36; + } sh_md_dqrp_mmr_xuerr2_s; +} sh_md_dqrp_mmr_xuerr2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XPERR" */ +/* protocol error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xperr_u { + mmr_t sh_md_dqrp_mmr_xperr_regval; + struct { + mmr_t dir : 26; + mmr_t cmd : 8; + mmr_t src : 14; + mmr_t prige : 1; + mmr_t priv : 1; + mmr_t cor : 1; + mmr_t unc : 1; + mmr_t mybit : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 1; + } sh_md_dqrp_mmr_xperr_s; +} sh_md_dqrp_mmr_xperr_u_t; +#else +typedef union sh_md_dqrp_mmr_xperr_u { + mmr_t sh_md_dqrp_mmr_xperr_regval; + struct { + mmr_t reserved_0 : 1; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t mybit : 8; + mmr_t unc : 1; + mmr_t cor : 1; + mmr_t priv : 1; + mmr_t prige : 1; + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t dir : 26; + } sh_md_dqrp_mmr_xperr_s; +} sh_md_dqrp_mmr_xperr_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YCERR1" */ +/* correctable dir ecc group 1 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_ycerr1_u { + mmr_t sh_md_dqrp_mmr_ycerr1_regval; + struct { + mmr_t grp1 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 25; + } sh_md_dqrp_mmr_ycerr1_s; +} sh_md_dqrp_mmr_ycerr1_u_t; +#else +typedef union sh_md_dqrp_mmr_ycerr1_u { + mmr_t sh_md_dqrp_mmr_ycerr1_regval; + struct { + mmr_t reserved_0 : 25; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp1 : 36; + } sh_md_dqrp_mmr_ycerr1_s; +} sh_md_dqrp_mmr_ycerr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YCERR2" */ +/* correctable dir ecc group 2 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_ycerr2_u { + mmr_t sh_md_dqrp_mmr_ycerr2_regval; + struct { + mmr_t grp2 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 26; + } sh_md_dqrp_mmr_ycerr2_s; +} sh_md_dqrp_mmr_ycerr2_u_t; +#else +typedef union sh_md_dqrp_mmr_ycerr2_u { + mmr_t sh_md_dqrp_mmr_ycerr2_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp2 : 36; + } sh_md_dqrp_mmr_ycerr2_s; +} sh_md_dqrp_mmr_ycerr2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YUERR1" */ +/* uncorrectable dir ecc group 1 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN 
+typedef union sh_md_dqrp_mmr_yuerr1_u { + mmr_t sh_md_dqrp_mmr_yuerr1_regval; + struct { + mmr_t grp1 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 25; + } sh_md_dqrp_mmr_yuerr1_s; +} sh_md_dqrp_mmr_yuerr1_u_t; +#else +typedef union sh_md_dqrp_mmr_yuerr1_u { + mmr_t sh_md_dqrp_mmr_yuerr1_regval; + struct { + mmr_t reserved_0 : 25; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp1 : 36; + } sh_md_dqrp_mmr_yuerr1_s; +} sh_md_dqrp_mmr_yuerr1_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YUERR2" */ +/* uncorrectable dir ecc group 2 error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_yuerr2_u { + mmr_t sh_md_dqrp_mmr_yuerr2_regval; + struct { + mmr_t grp2 : 36; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 26; + } sh_md_dqrp_mmr_yuerr2_s; +} sh_md_dqrp_mmr_yuerr2_u_t; +#else +typedef union sh_md_dqrp_mmr_yuerr2_u { + mmr_t sh_md_dqrp_mmr_yuerr2_regval; + struct { + mmr_t reserved_0 : 26; + mmr_t more : 1; + mmr_t val : 1; + mmr_t grp2 : 36; + } sh_md_dqrp_mmr_yuerr2_s; +} sh_md_dqrp_mmr_yuerr2_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YPERR" */ +/* protocol error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_yperr_u { + mmr_t sh_md_dqrp_mmr_yperr_regval; + struct { + mmr_t dir : 26; + mmr_t cmd : 8; + mmr_t src : 14; + mmr_t prige : 1; + mmr_t priv : 1; + mmr_t cor : 1; + mmr_t unc : 1; + mmr_t mybit : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 1; + } sh_md_dqrp_mmr_yperr_s; +} sh_md_dqrp_mmr_yperr_u_t; +#else +typedef union sh_md_dqrp_mmr_yperr_u { + mmr_t sh_md_dqrp_mmr_yperr_regval; + struct { + mmr_t reserved_0 : 1; + mmr_t arm : 1; + mmr_t more : 1; + mmr_t val : 1; + mmr_t mybit : 8; + mmr_t unc : 1; + mmr_t cor : 1; + mmr_t priv : 1; + mmr_t prige : 1; + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t dir : 26; + } sh_md_dqrp_mmr_yperr_s; +} sh_md_dqrp_mmr_yperr_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_CMDTRIG" */ +/* cmd triggers */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_cmdtrig_u { + mmr_t sh_md_dqrp_mmr_dir_cmdtrig_regval; + struct { + mmr_t cmd0 : 8; + mmr_t cmd1 : 8; + mmr_t cmd2 : 8; + mmr_t cmd3 : 8; + mmr_t reserved_0 : 32; + } sh_md_dqrp_mmr_dir_cmdtrig_s; +} sh_md_dqrp_mmr_dir_cmdtrig_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_cmdtrig_u { + mmr_t sh_md_dqrp_mmr_dir_cmdtrig_regval; + struct { + mmr_t reserved_0 : 32; + mmr_t cmd3 : 8; + mmr_t cmd2 : 8; + mmr_t cmd1 : 8; + mmr_t cmd0 : 8; + } sh_md_dqrp_mmr_dir_cmdtrig_s; +} sh_md_dqrp_mmr_dir_cmdtrig_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_TBLTRIG" */ +/* dir table trigger */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_tbltrig_u { + mmr_t sh_md_dqrp_mmr_dir_tbltrig_regval; + struct { + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t acc : 2; + mmr_t prige : 1; + mmr_t dirst : 9; + mmr_t mybit : 8; + mmr_t reserved_0 : 22; + } sh_md_dqrp_mmr_dir_tbltrig_s; +} 
sh_md_dqrp_mmr_dir_tbltrig_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_tbltrig_u { + mmr_t sh_md_dqrp_mmr_dir_tbltrig_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t mybit : 8; + mmr_t dirst : 9; + mmr_t prige : 1; + mmr_t acc : 2; + mmr_t cmd : 8; + mmr_t src : 14; + } sh_md_dqrp_mmr_dir_tbltrig_s; +} sh_md_dqrp_mmr_dir_tbltrig_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_DIR_TBLMASK" */ +/* dir table trigger mask */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_dir_tblmask_u { + mmr_t sh_md_dqrp_mmr_dir_tblmask_regval; + struct { + mmr_t src : 14; + mmr_t cmd : 8; + mmr_t acc : 2; + mmr_t prige : 1; + mmr_t dirst : 9; + mmr_t mybit : 8; + mmr_t reserved_0 : 22; + } sh_md_dqrp_mmr_dir_tblmask_s; +} sh_md_dqrp_mmr_dir_tblmask_u_t; +#else +typedef union sh_md_dqrp_mmr_dir_tblmask_u { + mmr_t sh_md_dqrp_mmr_dir_tblmask_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t mybit : 8; + mmr_t dirst : 9; + mmr_t prige : 1; + mmr_t acc : 2; + mmr_t cmd : 8; + mmr_t src : 14; + } sh_md_dqrp_mmr_dir_tblmask_s; +} sh_md_dqrp_mmr_dir_tblmask_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xbist_h_u { + mmr_t sh_md_dqrp_mmr_xbist_h_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t arm : 1; + mmr_t reserved_1 : 21; + } sh_md_dqrp_mmr_xbist_h_s; +} sh_md_dqrp_mmr_xbist_h_u_t; +#else +typedef union sh_md_dqrp_mmr_xbist_h_u { + mmr_t sh_md_dqrp_mmr_xbist_h_regval; + struct { + mmr_t reserved_1 : 21; + mmr_t arm : 1; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqrp_mmr_xbist_h_s; +} sh_md_dqrp_mmr_xbist_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xbist_l_u { + mmr_t sh_md_dqrp_mmr_xbist_l_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t reserved_1 : 22; + } sh_md_dqrp_mmr_xbist_l_s; +} sh_md_dqrp_mmr_xbist_l_u_t; +#else +typedef union sh_md_dqrp_mmr_xbist_l_u { + mmr_t sh_md_dqrp_mmr_xbist_l_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqrp_mmr_xbist_l_s; +} sh_md_dqrp_mmr_xbist_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xbist_err_h_u { + mmr_t sh_md_dqrp_mmr_xbist_err_h_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_1 : 22; + } sh_md_dqrp_mmr_xbist_err_h_s; +} sh_md_dqrp_mmr_xbist_err_h_u_t; +#else +typedef union sh_md_dqrp_mmr_xbist_err_h_u { + mmr_t sh_md_dqrp_mmr_xbist_err_h_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 
32; + } sh_md_dqrp_mmr_xbist_err_h_s; +} sh_md_dqrp_mmr_xbist_err_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_XBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_xbist_err_l_u { + mmr_t sh_md_dqrp_mmr_xbist_err_l_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_1 : 22; + } sh_md_dqrp_mmr_xbist_err_l_s; +} sh_md_dqrp_mmr_xbist_err_l_u_t; +#else +typedef union sh_md_dqrp_mmr_xbist_err_l_u { + mmr_t sh_md_dqrp_mmr_xbist_err_l_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqrp_mmr_xbist_err_l_s; +} sh_md_dqrp_mmr_xbist_err_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_ybist_h_u { + mmr_t sh_md_dqrp_mmr_ybist_h_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t arm : 1; + mmr_t reserved_1 : 21; + } sh_md_dqrp_mmr_ybist_h_s; +} sh_md_dqrp_mmr_ybist_h_u_t; +#else +typedef union sh_md_dqrp_mmr_ybist_h_u { + mmr_t sh_md_dqrp_mmr_ybist_h_regval; + struct { + mmr_t reserved_1 : 21; + mmr_t arm : 1; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqrp_mmr_ybist_h_s; +} sh_md_dqrp_mmr_ybist_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_ybist_l_u { + mmr_t sh_md_dqrp_mmr_ybist_l_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t reserved_1 : 22; + } sh_md_dqrp_mmr_ybist_l_s; +} sh_md_dqrp_mmr_ybist_l_u_t; +#else +typedef union sh_md_dqrp_mmr_ybist_l_u { + mmr_t sh_md_dqrp_mmr_ybist_l_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqrp_mmr_ybist_l_s; +} sh_md_dqrp_mmr_ybist_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_ybist_err_h_u { + mmr_t sh_md_dqrp_mmr_ybist_err_h_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_1 : 22; + } sh_md_dqrp_mmr_ybist_err_h_s; +} sh_md_dqrp_mmr_ybist_err_h_u_t; +#else +typedef union sh_md_dqrp_mmr_ybist_err_h_u { + mmr_t sh_md_dqrp_mmr_ybist_err_h_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqrp_mmr_ybist_err_h_s; +} sh_md_dqrp_mmr_ybist_err_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRP_MMR_YBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + 
+#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrp_mmr_ybist_err_l_u { + mmr_t sh_md_dqrp_mmr_ybist_err_l_regval; + struct { + mmr_t pat : 32; + mmr_t reserved_0 : 8; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_1 : 22; + } sh_md_dqrp_mmr_ybist_err_l_s; +} sh_md_dqrp_mmr_ybist_err_l_u_t; +#else +typedef union sh_md_dqrp_mmr_ybist_err_l_u { + mmr_t sh_md_dqrp_mmr_ybist_err_l_regval; + struct { + mmr_t reserved_1 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t reserved_0 : 8; + mmr_t pat : 32; + } sh_md_dqrp_mmr_ybist_err_l_s; +} sh_md_dqrp_mmr_ybist_err_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_XBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_xbist_h_u { + mmr_t sh_md_dqrs_mmr_xbist_h_regval; + struct { + mmr_t pat : 40; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 21; + } sh_md_dqrs_mmr_xbist_h_s; +} sh_md_dqrs_mmr_xbist_h_u_t; +#else +typedef union sh_md_dqrs_mmr_xbist_h_u { + mmr_t sh_md_dqrs_mmr_xbist_h_regval; + struct { + mmr_t reserved_0 : 21; + mmr_t arm : 1; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t pat : 40; + } sh_md_dqrs_mmr_xbist_h_s; +} sh_md_dqrs_mmr_xbist_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_XBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_xbist_l_u { + mmr_t sh_md_dqrs_mmr_xbist_l_regval; + struct { + mmr_t pat : 40; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t reserved_0 : 22; + } sh_md_dqrs_mmr_xbist_l_s; +} sh_md_dqrs_mmr_xbist_l_u_t; +#else +typedef union sh_md_dqrs_mmr_xbist_l_u { + mmr_t sh_md_dqrs_mmr_xbist_l_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t pat : 40; + } sh_md_dqrs_mmr_xbist_l_s; +} sh_md_dqrs_mmr_xbist_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_XBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_xbist_err_h_u { + mmr_t sh_md_dqrs_mmr_xbist_err_h_regval; + struct { + mmr_t pat : 40; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 22; + } sh_md_dqrs_mmr_xbist_err_h_s; +} sh_md_dqrs_mmr_xbist_err_h_u_t; +#else +typedef union sh_md_dqrs_mmr_xbist_err_h_u { + mmr_t sh_md_dqrs_mmr_xbist_err_h_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t pat : 40; + } sh_md_dqrs_mmr_xbist_err_h_s; +} sh_md_dqrs_mmr_xbist_err_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_XBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_xbist_err_l_u { + mmr_t sh_md_dqrs_mmr_xbist_err_l_regval; + struct { + mmr_t pat : 40; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 22; + } sh_md_dqrs_mmr_xbist_err_l_s; +} sh_md_dqrs_mmr_xbist_err_l_u_t; +#else +typedef union sh_md_dqrs_mmr_xbist_err_l_u { + mmr_t sh_md_dqrs_mmr_xbist_err_l_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t pat : 40; + } 
sh_md_dqrs_mmr_xbist_err_l_s; +} sh_md_dqrs_mmr_xbist_err_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YBIST_H" */ +/* rising edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_ybist_h_u { + mmr_t sh_md_dqrs_mmr_ybist_h_regval; + struct { + mmr_t pat : 40; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t arm : 1; + mmr_t reserved_0 : 21; + } sh_md_dqrs_mmr_ybist_h_s; +} sh_md_dqrs_mmr_ybist_h_u_t; +#else +typedef union sh_md_dqrs_mmr_ybist_h_u { + mmr_t sh_md_dqrs_mmr_ybist_h_regval; + struct { + mmr_t reserved_0 : 21; + mmr_t arm : 1; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t pat : 40; + } sh_md_dqrs_mmr_ybist_h_s; +} sh_md_dqrs_mmr_ybist_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YBIST_L" */ +/* falling edge bist/fill pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_ybist_l_u { + mmr_t sh_md_dqrs_mmr_ybist_l_regval; + struct { + mmr_t pat : 40; + mmr_t inv : 1; + mmr_t rot : 1; + mmr_t reserved_0 : 22; + } sh_md_dqrs_mmr_ybist_l_s; +} sh_md_dqrs_mmr_ybist_l_u_t; +#else +typedef union sh_md_dqrs_mmr_ybist_l_u { + mmr_t sh_md_dqrs_mmr_ybist_l_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t rot : 1; + mmr_t inv : 1; + mmr_t pat : 40; + } sh_md_dqrs_mmr_ybist_l_s; +} sh_md_dqrs_mmr_ybist_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YBIST_ERR_H" */ +/* rising edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_ybist_err_h_u { + mmr_t sh_md_dqrs_mmr_ybist_err_h_regval; + struct { + mmr_t pat : 40; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 22; + } sh_md_dqrs_mmr_ybist_err_h_s; +} sh_md_dqrs_mmr_ybist_err_h_u_t; +#else +typedef union sh_md_dqrs_mmr_ybist_err_h_u { + mmr_t sh_md_dqrs_mmr_ybist_err_h_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t pat : 40; + } sh_md_dqrs_mmr_ybist_err_h_s; +} sh_md_dqrs_mmr_ybist_err_h_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YBIST_ERR_L" */ +/* falling edge bist error pattern */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_ybist_err_l_u { + mmr_t sh_md_dqrs_mmr_ybist_err_l_regval; + struct { + mmr_t pat : 40; + mmr_t val : 1; + mmr_t more : 1; + mmr_t reserved_0 : 22; + } sh_md_dqrs_mmr_ybist_err_l_s; +} sh_md_dqrs_mmr_ybist_err_l_u_t; +#else +typedef union sh_md_dqrs_mmr_ybist_err_l_u { + mmr_t sh_md_dqrs_mmr_ybist_err_l_regval; + struct { + mmr_t reserved_0 : 22; + mmr_t more : 1; + mmr_t val : 1; + mmr_t pat : 40; + } sh_md_dqrs_mmr_ybist_err_l_s; +} sh_md_dqrs_mmr_ybist_err_l_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_JNR_DEBUG" */ +/* joiner/fct debug configuration */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_jnr_debug_u { + mmr_t sh_md_dqrs_mmr_jnr_debug_regval; + struct { + mmr_t px : 1; + mmr_t rw : 1; + mmr_t reserved_0 : 62; + } sh_md_dqrs_mmr_jnr_debug_s; 
+} sh_md_dqrs_mmr_jnr_debug_u_t; +#else +typedef union sh_md_dqrs_mmr_jnr_debug_u { + mmr_t sh_md_dqrs_mmr_jnr_debug_regval; + struct { + mmr_t reserved_0 : 62; + mmr_t rw : 1; + mmr_t px : 1; + } sh_md_dqrs_mmr_jnr_debug_s; +} sh_md_dqrs_mmr_jnr_debug_u_t; +#endif + +/* ==================================================================== */ +/* Register "SH_MD_DQRS_MMR_YAMOPW_ERR" */ +/* amo/partial rmw ecc error register */ +/* ==================================================================== */ + +#ifdef LITTLE_ENDIAN +typedef union sh_md_dqrs_mmr_yamopw_err_u { + mmr_t sh_md_dqrs_mmr_yamopw_err_regval; + struct { + mmr_t ssyn : 8; + mmr_t scor : 1; + mmr_t sunc : 1; + mmr_t reserved_0 : 6; + mmr_t rsyn : 8; + mmr_t rcor : 1; + mmr_t runc : 1; + mmr_t reserved_1 : 6; + mmr_t arm : 1; + mmr_t reserved_2 : 31; + } sh_md_dqrs_mmr_yamopw_err_s; +} sh_md_dqrs_mmr_yamopw_err_u_t; +#else +typedef union sh_md_dqrs_mmr_yamopw_err_u { + mmr_t sh_md_dqrs_mmr_yamopw_err_regval; + struct { + mmr_t reserved_2 : 31; + mmr_t arm : 1; + mmr_t reserved_1 : 6; + mmr_t runc : 1; + mmr_t rcor : 1; + mmr_t rsyn : 8; + mmr_t reserved_0 : 6; + mmr_t sunc : 1; + mmr_t scor : 1; + mmr_t ssyn : 8; + } sh_md_dqrs_mmr_yamopw_err_s; +} sh_md_dqrs_mmr_yamopw_err_u_t; +#endif + + +#endif /* _ASM_IA64_SN_SN2_SHUB_MMR_T_H */ diff -Nru a/include/asm-ia64/sn/sn2/shubio.h b/include/asm-ia64/sn/sn2/shubio.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/shubio.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,3639 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_SN2_SHUBIO_H +#define _ASM_IA64_SN_SN2_SHUBIO_H + +#include + +#define HUB_WIDGET_ID_MAX 0xf +#define IIO_NUM_ITTES 7 +#define HUB_NUM_BIG_WINDOW (IIO_NUM_ITTES - 1) + +#define IIO_WID 0x00400000 /* Crosstalk Widget Identification */ + /* This register is also accessible from + * Crosstalk at address 0x0. 
*/ +#define IIO_WSTAT 0x00400008 /* Crosstalk Widget Status */ +#define IIO_WCR 0x00400020 /* Crosstalk Widget Control Register */ +#define IIO_ILAPR 0x00400100 /* IO Local Access Protection Register */ +#define IIO_ILAPO 0x00400108 /* IO Local Access Protection Override */ +#define IIO_IOWA 0x00400110 /* IO Outbound Widget Access */ +#define IIO_IIWA 0x00400118 /* IO Inbound Widget Access */ +#define IIO_IIDEM 0x00400120 /* IO Inbound Device Error Mask */ +#define IIO_ILCSR 0x00400128 /* IO LLP Control and Status Register */ +#define IIO_ILLR 0x00400130 /* IO LLP Log Register */ +#define IIO_IIDSR 0x00400138 /* IO Interrupt Destination */ + +#define IIO_IGFX0 0x00400140 /* IO Graphics Node-Widget Map 0 */ +#define IIO_IGFX1 0x00400148 /* IO Graphics Node-Widget Map 1 */ + +#define IIO_ISCR0 0x00400150 /* IO Scratch Register 0 */ +#define IIO_ISCR1 0x00400158 /* IO Scratch Register 1 */ + +#define IIO_ITTE1 0x00400160 /* IO Translation Table Entry 1 */ +#define IIO_ITTE2 0x00400168 /* IO Translation Table Entry 2 */ +#define IIO_ITTE3 0x00400170 /* IO Translation Table Entry 3 */ +#define IIO_ITTE4 0x00400178 /* IO Translation Table Entry 4 */ +#define IIO_ITTE5 0x00400180 /* IO Translation Table Entry 5 */ +#define IIO_ITTE6 0x00400188 /* IO Translation Table Entry 6 */ +#define IIO_ITTE7 0x00400190 /* IO Translation Table Entry 7 */ + +#define IIO_IPRB0 0x00400198 /* IO PRB Entry 0 */ +#define IIO_IPRB8 0x004001A0 /* IO PRB Entry 8 */ +#define IIO_IPRB9 0x004001A8 /* IO PRB Entry 9 */ +#define IIO_IPRBA 0x004001B0 /* IO PRB Entry A */ +#define IIO_IPRBB 0x004001B8 /* IO PRB Entry B */ +#define IIO_IPRBC 0x004001C0 /* IO PRB Entry C */ +#define IIO_IPRBD 0x004001C8 /* IO PRB Entry D */ +#define IIO_IPRBE 0x004001D0 /* IO PRB Entry E */ +#define IIO_IPRBF 0x004001D8 /* IO PRB Entry F */ + +#define IIO_IXCC 0x004001E0 /* IO Crosstalk Credit Count Timeout */ +#define IIO_IMEM 0x004001E8 /* IO Miscellaneous Error Mask */ +#define IIO_IXTT 0x004001F0 /* IO Crosstalk Timeout Threshold */ +#define IIO_IECLR 0x004001F8 /* IO Error Clear Register */ +#define IIO_IBCR 0x00400200 /* IO BTE Control Register */ + +#define IIO_IXSM 0x00400208 /* IO Crosstalk Spurious Message */ +#define IIO_IXSS 0x00400210 /* IO Crosstalk Spurious Sideband */ + +#define IIO_ILCT 0x00400218 /* IO LLP Channel Test */ + +#define IIO_IIEPH1 0x00400220 /* IO Incoming Error Packet Header, Part 1 */ +#define IIO_IIEPH2 0x00400228 /* IO Incoming Error Packet Header, Part 2 */ + + +#define IIO_ISLAPR 0x00400230 /* IO SXB Local Access Protection Regster */ +#define IIO_ISLAPO 0x00400238 /* IO SXB Local Access Protection Override */ + +#define IIO_IWI 0x00400240 /* IO Wrapper Interrupt Register */ +#define IIO_IWEL 0x00400248 /* IO Wrapper Error Log Register */ +#define IIO_IWC 0x00400250 /* IO Wrapper Control Register */ +#define IIO_IWS 0x00400258 /* IO Wrapper Status Register */ +#define IIO_IWEIM 0x00400260 /* IO Wrapper Error Interrupt Masking Register */ + +#define IIO_IPCA 0x00400300 /* IO PRB Counter Adjust */ + +#define IIO_IPRTE0_A 0x00400308 /* IO PIO Read Address Table Entry 0, Part A */ +#define IIO_IPRTE1_A 0x00400310 /* IO PIO Read Address Table Entry 1, Part A */ +#define IIO_IPRTE2_A 0x00400318 /* IO PIO Read Address Table Entry 2, Part A */ +#define IIO_IPRTE3_A 0x00400320 /* IO PIO Read Address Table Entry 3, Part A */ +#define IIO_IPRTE4_A 0x00400328 /* IO PIO Read Address Table Entry 4, Part A */ +#define IIO_IPRTE5_A 0x00400330 /* IO PIO Read Address Table Entry 5, Part A */ +#define IIO_IPRTE6_A 0x00400338 
/* IO PIO Read Address Table Entry 6, Part A */ +#define IIO_IPRTE7_A 0x00400340 /* IO PIO Read Address Table Entry 7, Part A */ + +#define IIO_IPRTE0_B 0x00400348 /* IO PIO Read Address Table Entry 0, Part B */ +#define IIO_IPRTE1_B 0x00400350 /* IO PIO Read Address Table Entry 1, Part B */ +#define IIO_IPRTE2_B 0x00400358 /* IO PIO Read Address Table Entry 2, Part B */ +#define IIO_IPRTE3_B 0x00400360 /* IO PIO Read Address Table Entry 3, Part B */ +#define IIO_IPRTE4_B 0x00400368 /* IO PIO Read Address Table Entry 4, Part B */ +#define IIO_IPRTE5_B 0x00400370 /* IO PIO Read Address Table Entry 5, Part B */ +#define IIO_IPRTE6_B 0x00400378 /* IO PIO Read Address Table Entry 6, Part B */ +#define IIO_IPRTE7_B 0x00400380 /* IO PIO Read Address Table Entry 7, Part B */ + +#define IIO_IPDR 0x00400388 /* IO PIO Deallocation Register */ +#define IIO_ICDR 0x00400390 /* IO CRB Entry Deallocation Register */ +#define IIO_IFDR 0x00400398 /* IO IOQ FIFO Depth Register */ +#define IIO_IIAP 0x004003A0 /* IO IIQ Arbitration Parameters */ +#define IIO_ICMR 0x004003A8 /* IO CRB Management Register */ +#define IIO_ICCR 0x004003B0 /* IO CRB Control Register */ +#define IIO_ICTO 0x004003B8 /* IO CRB Timeout */ +#define IIO_ICTP 0x004003C0 /* IO CRB Timeout Prescalar */ + +#define IIO_ICRB0_A 0x00400400 /* IO CRB Entry 0_A */ +#define IIO_ICRB0_B 0x00400408 /* IO CRB Entry 0_B */ +#define IIO_ICRB0_C 0x00400410 /* IO CRB Entry 0_C */ +#define IIO_ICRB0_D 0x00400418 /* IO CRB Entry 0_D */ +#define IIO_ICRB0_E 0x00400420 /* IO CRB Entry 0_E */ + +#define IIO_ICRB1_A 0x00400430 /* IO CRB Entry 1_A */ +#define IIO_ICRB1_B 0x00400438 /* IO CRB Entry 1_B */ +#define IIO_ICRB1_C 0x00400440 /* IO CRB Entry 1_C */ +#define IIO_ICRB1_D 0x00400448 /* IO CRB Entry 1_D */ +#define IIO_ICRB1_E 0x00400450 /* IO CRB Entry 1_E */ + +#define IIO_ICRB2_A 0x00400460 /* IO CRB Entry 2_A */ +#define IIO_ICRB2_B 0x00400468 /* IO CRB Entry 2_B */ +#define IIO_ICRB2_C 0x00400470 /* IO CRB Entry 2_C */ +#define IIO_ICRB2_D 0x00400478 /* IO CRB Entry 2_D */ +#define IIO_ICRB2_E 0x00400480 /* IO CRB Entry 2_E */ + +#define IIO_ICRB3_A 0x00400490 /* IO CRB Entry 3_A */ +#define IIO_ICRB3_B 0x00400498 /* IO CRB Entry 3_B */ +#define IIO_ICRB3_C 0x004004a0 /* IO CRB Entry 3_C */ +#define IIO_ICRB3_D 0x004004a8 /* IO CRB Entry 3_D */ +#define IIO_ICRB3_E 0x004004b0 /* IO CRB Entry 3_E */ + +#define IIO_ICRB4_A 0x004004c0 /* IO CRB Entry 4_A */ +#define IIO_ICRB4_B 0x004004c8 /* IO CRB Entry 4_B */ +#define IIO_ICRB4_C 0x004004d0 /* IO CRB Entry 4_C */ +#define IIO_ICRB4_D 0x004004d8 /* IO CRB Entry 4_D */ +#define IIO_ICRB4_E 0x004004e0 /* IO CRB Entry 4_E */ + +#define IIO_ICRB5_A 0x004004f0 /* IO CRB Entry 5_A */ +#define IIO_ICRB5_B 0x004004f8 /* IO CRB Entry 5_B */ +#define IIO_ICRB5_C 0x00400500 /* IO CRB Entry 5_C */ +#define IIO_ICRB5_D 0x00400508 /* IO CRB Entry 5_D */ +#define IIO_ICRB5_E 0x00400510 /* IO CRB Entry 5_E */ + +#define IIO_ICRB6_A 0x00400520 /* IO CRB Entry 6_A */ +#define IIO_ICRB6_B 0x00400528 /* IO CRB Entry 6_B */ +#define IIO_ICRB6_C 0x00400530 /* IO CRB Entry 6_C */ +#define IIO_ICRB6_D 0x00400538 /* IO CRB Entry 6_D */ +#define IIO_ICRB6_E 0x00400540 /* IO CRB Entry 6_E */ + +#define IIO_ICRB7_A 0x00400550 /* IO CRB Entry 7_A */ +#define IIO_ICRB7_B 0x00400558 /* IO CRB Entry 7_B */ +#define IIO_ICRB7_C 0x00400560 /* IO CRB Entry 7_C */ +#define IIO_ICRB7_D 0x00400568 /* IO CRB Entry 7_D */ +#define IIO_ICRB7_E 0x00400570 /* IO CRB Entry 7_E */ + +#define IIO_ICRB8_A 0x00400580 /* IO CRB Entry 8_A */ +#define 
IIO_ICRB8_B 0x00400588 /* IO CRB Entry 8_B */ +#define IIO_ICRB8_C 0x00400590 /* IO CRB Entry 8_C */ +#define IIO_ICRB8_D 0x00400598 /* IO CRB Entry 8_D */ +#define IIO_ICRB8_E 0x004005a0 /* IO CRB Entry 8_E */ + +#define IIO_ICRB9_A 0x004005b0 /* IO CRB Entry 9_A */ +#define IIO_ICRB9_B 0x004005b8 /* IO CRB Entry 9_B */ +#define IIO_ICRB9_C 0x004005c0 /* IO CRB Entry 9_C */ +#define IIO_ICRB9_D 0x004005c8 /* IO CRB Entry 9_D */ +#define IIO_ICRB9_E 0x004005d0 /* IO CRB Entry 9_E */ + +#define IIO_ICRBA_A 0x004005e0 /* IO CRB Entry A_A */ +#define IIO_ICRBA_B 0x004005e8 /* IO CRB Entry A_B */ +#define IIO_ICRBA_C 0x004005f0 /* IO CRB Entry A_C */ +#define IIO_ICRBA_D 0x004005f8 /* IO CRB Entry A_D */ +#define IIO_ICRBA_E 0x00400600 /* IO CRB Entry A_E */ + +#define IIO_ICRBB_A 0x00400610 /* IO CRB Entry B_A */ +#define IIO_ICRBB_B 0x00400618 /* IO CRB Entry B_B */ +#define IIO_ICRBB_C 0x00400620 /* IO CRB Entry B_C */ +#define IIO_ICRBB_D 0x00400628 /* IO CRB Entry B_D */ +#define IIO_ICRBB_E 0x00400630 /* IO CRB Entry B_E */ + +#define IIO_ICRBC_A 0x00400640 /* IO CRB Entry C_A */ +#define IIO_ICRBC_B 0x00400648 /* IO CRB Entry C_B */ +#define IIO_ICRBC_C 0x00400650 /* IO CRB Entry C_C */ +#define IIO_ICRBC_D 0x00400658 /* IO CRB Entry C_D */ +#define IIO_ICRBC_E 0x00400660 /* IO CRB Entry C_E */ + +#define IIO_ICRBD_A 0x00400670 /* IO CRB Entry D_A */ +#define IIO_ICRBD_B 0x00400678 /* IO CRB Entry D_B */ +#define IIO_ICRBD_C 0x00400680 /* IO CRB Entry D_C */ +#define IIO_ICRBD_D 0x00400688 /* IO CRB Entry D_D */ +#define IIO_ICRBD_E 0x00400690 /* IO CRB Entry D_E */ + +#define IIO_ICRBE_A 0x004006a0 /* IO CRB Entry E_A */ +#define IIO_ICRBE_B 0x004006a8 /* IO CRB Entry E_B */ +#define IIO_ICRBE_C 0x004006b0 /* IO CRB Entry E_C */ +#define IIO_ICRBE_D 0x004006b8 /* IO CRB Entry E_D */ +#define IIO_ICRBE_E 0x004006c0 /* IO CRB Entry E_E */ + +#define IIO_ICSML 0x00400700 /* IO CRB Spurious Message Low */ +#define IIO_ICSMM 0x00400708 /* IO CRB Spurious Message Middle */ +#define IIO_ICSMH 0x00400710 /* IO CRB Spurious Message High */ + +#define IIO_IDBSS 0x00400718 /* IO Debug Submenu Select */ + +#define IIO_IBLS0 0x00410000 /* IO BTE Length Status 0 */ +#define IIO_IBSA0 0x00410008 /* IO BTE Source Address 0 */ +#define IIO_IBDA0 0x00410010 /* IO BTE Destination Address 0 */ +#define IIO_IBCT0 0x00410018 /* IO BTE Control Terminate 0 */ +#define IIO_IBNA0 0x00410020 /* IO BTE Notification Address 0 */ +#define IIO_IBIA0 0x00410028 /* IO BTE Interrupt Address 0 */ +#define IIO_IBLS1 0x00420000 /* IO BTE Length Status 1 */ +#define IIO_IBSA1 0x00420008 /* IO BTE Source Address 1 */ +#define IIO_IBDA1 0x00420010 /* IO BTE Destination Address 1 */ +#define IIO_IBCT1 0x00420018 /* IO BTE Control Terminate 1 */ +#define IIO_IBNA1 0x00420020 /* IO BTE Notification Address 1 */ +#define IIO_IBIA1 0x00420028 /* IO BTE Interrupt Address 1 */ + +#define IIO_IPCR 0x00430000 /* IO Performance Control */ +#define IIO_IPPR 0x00430008 /* IO Performance Profiling */ + + +#ifndef __ASSEMBLY__ + +/************************************************************************ + * * + * Description: This register echoes some information from the * + * LB_REV_ID register. It is available through Crosstalk as described * + * above. The REV_NUM and MFG_NUM fields receive their values from * + * the REVISION and MANUFACTURER fields in the LB_REV_ID register. * + * The PART_NUM field's value is the Crosstalk device ID number that * + * Steve Miller assigned to the SHub chip. 
* + * * + ************************************************************************/ + +typedef union ii_wid_u { + shubreg_t ii_wid_regval; + struct { + shubreg_t w_rsvd_1 : 1; + shubreg_t w_mfg_num : 11; + shubreg_t w_part_num : 16; + shubreg_t w_rev_num : 4; + shubreg_t w_rsvd : 32; + } ii_wid_fld_s; +} ii_wid_u_t; + + +/************************************************************************ + * * + * The fields in this register are set upon detection of an error * + * and cleared by various mechanisms, as explained in the * + * description. * + * * + ************************************************************************/ + +typedef union ii_wstat_u { + shubreg_t ii_wstat_regval; + struct { + shubreg_t w_pending : 4; + shubreg_t w_xt_crd_to : 1; + shubreg_t w_xt_tail_to : 1; + shubreg_t w_rsvd_3 : 3; + shubreg_t w_tx_mx_rty : 1; + shubreg_t w_rsvd_2 : 6; + shubreg_t w_llp_tx_cnt : 8; + shubreg_t w_rsvd_1 : 8; + shubreg_t w_crazy : 1; + shubreg_t w_rsvd : 31; + } ii_wstat_fld_s; +} ii_wstat_u_t; + + +/************************************************************************ + * * + * Description: This is a read-write enabled register. It controls * + * various aspects of the Crosstalk flow control. * + * * + ************************************************************************/ + +typedef union ii_wcr_u { + shubreg_t ii_wcr_regval; + struct { + shubreg_t w_wid : 4; + shubreg_t w_tag : 1; + shubreg_t w_rsvd_1 : 8; + shubreg_t w_dst_crd : 3; + shubreg_t w_f_bad_pkt : 1; + shubreg_t w_dir_con : 1; + shubreg_t w_e_thresh : 5; + shubreg_t w_rsvd : 41; + } ii_wcr_fld_s; +} ii_wcr_u_t; + + +/************************************************************************ + * * + * Description: This register's value is a bit vector that guards * + * access to local registers within the II as well as to external * + * Crosstalk widgets. Each bit in the register corresponds to a * + * particular region in the system; a region consists of one, two or * + * four nodes (depending on the value of the REGION_SIZE field in the * + * LB_REV_ID register, which is documented in Section 8.3.1.1). The * + * protection provided by this register applies to PIO read * + * operations as well as PIO write operations. The II will perform a * + * PIO read or write request only if the bit for the requestor's * + * region is set; otherwise, the II will not perform the requested * + * operation and will return an error response. When a PIO read or * + * write request targets an external Crosstalk widget, then not only * + * must the bit for the requestor's region be set in the ILAPR, but * + * also the target widget's bit in the IOWA register must be set in * + * order for the II to perform the requested operation; otherwise, * + * the II will return an error response. Hence, the protection * + * provided by the IOWA register supplements the protection provided * + * by the ILAPR for requests that target external Crosstalk widgets. * + * This register itself can be accessed only by the nodes whose * + * region ID bits are enabled in this same register. It can also be * + * accessed through the IAlias space by the local processors. * + * The reset value of this register allows access by all nodes. 
* + * * + ************************************************************************/ + +typedef union ii_ilapr_u { + shubreg_t ii_ilapr_regval; + struct { + shubreg_t i_region : 64; + } ii_ilapr_fld_s; +} ii_ilapr_u_t; + + + + +/************************************************************************ + * * + * Description: A write to this register of the 64-bit value * + * "SGIrules" in ASCII, will cause the bit in the ILAPR register * + * corresponding to the region of the requestor to be set (allow * + * access). A write of any other value will be ignored. Access * + * protection for this register is "SGIrules". * + * This register can also be accessed through the IAlias space. * + * However, this access will not change the access permissions in the * + * ILAPR. * + * * + ************************************************************************/ + +typedef union ii_ilapo_u { + shubreg_t ii_ilapo_regval; + struct { + shubreg_t i_io_ovrride : 64; + } ii_ilapo_fld_s; +} ii_ilapo_u_t; + + + +/************************************************************************ + * * + * This register qualifies all the PIO and Graphics writes launched * + * from the SHUB towards a widget. * + * * + ************************************************************************/ + +typedef union ii_iowa_u { + shubreg_t ii_iowa_regval; + struct { + shubreg_t i_w0_oac : 1; + shubreg_t i_rsvd_1 : 7; + shubreg_t i_wx_oac : 8; + shubreg_t i_rsvd : 48; + } ii_iowa_fld_s; +} ii_iowa_u_t; + + +/************************************************************************ + * * + * Description: This register qualifies all the requests launched * + * from a widget towards the Shub. This register is intended to be * + * used by software in case of misbehaving widgets. * + * * + * * + ************************************************************************/ + +typedef union ii_iiwa_u { + shubreg_t ii_iiwa_regval; + struct { + shubreg_t i_w0_iac : 1; + shubreg_t i_rsvd_1 : 7; + shubreg_t i_wx_iac : 8; + shubreg_t i_rsvd : 48; + } ii_iiwa_fld_s; +} ii_iiwa_u_t; + + + +/************************************************************************ + * * + * Description: This register qualifies all the operations launched * + * from a widget towards the SHub. It allows individual access * + * control for up to 8 devices per widget. A device refers to * + * individual DMA master hosted by a widget. * + * The bits in each field of this register are cleared by the Shub * + * upon detection of an error which requires the device to be * + * disabled. These fields assume that 0=TNUM=7 (i.e., Bridge-centric * + * Crosstalk). Whether or not a device has access rights to this * + * Shub is determined by an AND of the device enable bit in the * + * appropriate field of this register and the corresponding bit in * + * the Wx_IAC field (for the widget which this device belongs to). * + * The bits in this field are set by writing a 1 to them. Incoming * + * replies from Crosstalk are not subject to this access control * + * mechanism. 
* + * * + ************************************************************************/ + +typedef union ii_iidem_u { + shubreg_t ii_iidem_regval; + struct { + shubreg_t i_w8_dxs : 8; + shubreg_t i_w9_dxs : 8; + shubreg_t i_wa_dxs : 8; + shubreg_t i_wb_dxs : 8; + shubreg_t i_wc_dxs : 8; + shubreg_t i_wd_dxs : 8; + shubreg_t i_we_dxs : 8; + shubreg_t i_wf_dxs : 8; + } ii_iidem_fld_s; +} ii_iidem_u_t; + + +/************************************************************************ + * * + * This register contains the various programmable fields necessary * + * for controlling and observing the LLP signals. * + * * + ************************************************************************/ + +typedef union ii_ilcsr_u { + shubreg_t ii_ilcsr_regval; + struct { + shubreg_t i_nullto : 6; + shubreg_t i_rsvd_4 : 2; + shubreg_t i_wrmrst : 1; + shubreg_t i_rsvd_3 : 1; + shubreg_t i_llp_en : 1; + shubreg_t i_bm8 : 1; + shubreg_t i_llp_stat : 2; + shubreg_t i_remote_power : 1; + shubreg_t i_rsvd_2 : 1; + shubreg_t i_maxrtry : 10; + shubreg_t i_d_avail_sel : 2; + shubreg_t i_rsvd_1 : 4; + shubreg_t i_maxbrst : 10; + shubreg_t i_rsvd : 22; + + } ii_ilcsr_fld_s; +} ii_ilcsr_u_t; + + +/************************************************************************ + * * + * This is simply a status registers that monitors the LLP error * + * rate. * + * * + ************************************************************************/ + +typedef union ii_illr_u { + shubreg_t ii_illr_regval; + struct { + shubreg_t i_sn_cnt : 16; + shubreg_t i_cb_cnt : 16; + shubreg_t i_rsvd : 32; + } ii_illr_fld_s; +} ii_illr_u_t; + + +/************************************************************************ + * * + * Description: All II-detected non-BTE error interrupts are * + * specified via this register. * + * NOTE: The PI interrupt register address is hardcoded in the II. If * + * PI_ID==0, then the II sends an interrupt request (Duplonet PWRI * + * packet) to address offset 0x0180_0090 within the local register * + * address space of PI0 on the node specified by the NODE field. If * + * PI_ID==1, then the II sends the interrupt request to address * + * offset 0x01A0_0090 within the local register address space of PI1 * + * on the node specified by the NODE field. * + * * + ************************************************************************/ + +typedef union ii_iidsr_u { + shubreg_t ii_iidsr_regval; + struct { + shubreg_t i_level : 8; + shubreg_t i_pi_id : 1; + shubreg_t i_node : 11; + shubreg_t i_rsvd_3 : 4; + shubreg_t i_enable : 1; + shubreg_t i_rsvd_2 : 3; + shubreg_t i_int_sent : 1; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_pi0_forward_int : 1; + shubreg_t i_pi1_forward_int : 1; + shubreg_t i_rsvd : 30; + } ii_iidsr_fld_s; +} ii_iidsr_u_t; + + + +/************************************************************************ + * * + * There are two instances of this register. This register is used * + * for matching up the incoming responses from the graphics widget to * + * the processor that initiated the graphics operation. The * + * write-responses are converted to graphics credits and returned to * + * the processor so that the processor interface can manage the flow * + * control. 
* + * * + ************************************************************************/ + +typedef union ii_igfx0_u { + shubreg_t ii_igfx0_regval; + struct { + shubreg_t i_w_num : 4; + shubreg_t i_pi_id : 1; + shubreg_t i_n_num : 12; + shubreg_t i_p_num : 1; + shubreg_t i_rsvd : 46; + } ii_igfx0_fld_s; +} ii_igfx0_u_t; + + +/************************************************************************ + * * + * There are two instances of this register. This register is used * + * for matching up the incoming responses from the graphics widget to * + * the processor that initiated the graphics operation. The * + * write-responses are converted to graphics credits and returned to * + * the processor so that the processor interface can manage the flow * + * control. * + * * + ************************************************************************/ + +typedef union ii_igfx1_u { + shubreg_t ii_igfx1_regval; + struct { + shubreg_t i_w_num : 4; + shubreg_t i_pi_id : 1; + shubreg_t i_n_num : 12; + shubreg_t i_p_num : 1; + shubreg_t i_rsvd : 46; + } ii_igfx1_fld_s; +} ii_igfx1_u_t; + + +/************************************************************************ + * * + * There are two instances of this registers. These registers are * + * used as scratch registers for software use. * + * * + ************************************************************************/ + +typedef union ii_iscr0_u { + shubreg_t ii_iscr0_regval; + struct { + shubreg_t i_scratch : 64; + } ii_iscr0_fld_s; +} ii_iscr0_u_t; + + + +/************************************************************************ + * * + * There are two instances of this registers. These registers are * + * used as scratch registers for software use. * + * * + ************************************************************************/ + +typedef union ii_iscr1_u { + shubreg_t ii_iscr1_regval; + struct { + shubreg_t i_scratch : 64; + } ii_iscr1_fld_s; +} ii_iscr1_u_t; + + +/************************************************************************ + * * + * Description: There are seven instances of translation table entry * + * registers. Each register maps a Shub Big Window to a 48-bit * + * address on Crosstalk. * + * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window * + * number) are used to select one of these 7 registers. The Widget * + * number field is then derived from the W_NUM field for synthesizing * + * a Crosstalk packet. The 5 bits of OFFSET are concatenated with * + * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] * + * are padded with zeros. Although the maximum Crosstalk space * + * addressable by the SHub is thus the lower 16 GBytes per widget * + * (M-mode), however only 7/32nds of this * + * space can be accessed. * + * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big * + * Window number) are used to select one of these 7 registers. The * + * Widget number field is then derived from the W_NUM field for * + * synthesizing a Crosstalk packet. The 5 bits of OFFSET are * + * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP * + * field is used as Crosstalk[47], and remainder of the Crosstalk * + * address bits (Crosstalk[46:34]) are always zero. While the maximum * + * Crosstalk space addressable by the Shub is thus the lower * + * 8-GBytes per widget (N-mode), only 7/32nds * + * of this space can be accessed. 
* + * * + ************************************************************************/ + +typedef union ii_itte1_u { + shubreg_t ii_itte1_regval; + struct { + shubreg_t i_offset : 5; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_w_num : 4; + shubreg_t i_iosp : 1; + shubreg_t i_rsvd : 51; + } ii_itte1_fld_s; +} ii_itte1_u_t; + + +/************************************************************************ + * * + * Description: There are seven instances of translation table entry * + * registers. Each register maps a Shub Big Window to a 48-bit * + * address on Crosstalk. * + * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window * + * number) are used to select one of these 7 registers. The Widget * + * number field is then derived from the W_NUM field for synthesizing * + * a Crosstalk packet. The 5 bits of OFFSET are concatenated with * + * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] * + * are padded with zeros. Although the maximum Crosstalk space * + * addressable by the Shub is thus the lower 16 GBytes per widget * + * (M-mode), however only 7/32nds of this * + * space can be accessed. * + * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big * + * Window number) are used to select one of these 7 registers. The * + * Widget number field is then derived from the W_NUM field for * + * synthesizing a Crosstalk packet. The 5 bits of OFFSET are * + * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP * + * field is used as Crosstalk[47], and remainder of the Crosstalk * + * address bits (Crosstalk[46:34]) are always zero. While the maximum * + * Crosstalk space addressable by the Shub is thus the lower * + * 8-GBytes per widget (N-mode), only 7/32nds * + * of this space can be accessed. * + * * + ************************************************************************/ + +typedef union ii_itte2_u { + shubreg_t ii_itte2_regval; + struct { + shubreg_t i_offset : 5; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_w_num : 4; + shubreg_t i_iosp : 1; + shubreg_t i_rsvd : 51; + } ii_itte2_fld_s; +} ii_itte2_u_t; + + +/************************************************************************ + * * + * Description: There are seven instances of translation table entry * + * registers. Each register maps a Shub Big Window to a 48-bit * + * address on Crosstalk. * + * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window * + * number) are used to select one of these 7 registers. The Widget * + * number field is then derived from the W_NUM field for synthesizing * + * a Crosstalk packet. The 5 bits of OFFSET are concatenated with * + * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] * + * are padded with zeros. Although the maximum Crosstalk space * + * addressable by the Shub is thus the lower 16 GBytes per widget * + * (M-mode), however only 7/32nds of this * + * space can be accessed. * + * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big * + * Window number) are used to select one of these 7 registers. The * + * Widget number field is then derived from the W_NUM field for * + * synthesizing a Crosstalk packet. The 5 bits of OFFSET are * + * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP * + * field is used as Crosstalk[47], and remainder of the Crosstalk * + * address bits (Crosstalk[46:34]) are always zero. While the maximum * + * Crosstalk space addressable by the SHub is thus the lower * + * 8-GBytes per widget (N-mode), only 7/32nds * + * of this space can be accessed. 
* + * * + ************************************************************************/ + +typedef union ii_itte3_u { + shubreg_t ii_itte3_regval; + struct { + shubreg_t i_offset : 5; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_w_num : 4; + shubreg_t i_iosp : 1; + shubreg_t i_rsvd : 51; + } ii_itte3_fld_s; +} ii_itte3_u_t; + + +/************************************************************************ + * * + * Description: There are seven instances of translation table entry * + * registers. Each register maps a SHub Big Window to a 48-bit * + * address on Crosstalk. * + * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window * + * number) are used to select one of these 7 registers. The Widget * + * number field is then derived from the W_NUM field for synthesizing * + * a Crosstalk packet. The 5 bits of OFFSET are concatenated with * + * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] * + * are padded with zeros. Although the maximum Crosstalk space * + * addressable by the SHub is thus the lower 16 GBytes per widget * + * (M-mode), however only 7/32nds of this * + * space can be accessed. * + * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big * + * Window number) are used to select one of these 7 registers. The * + * Widget number field is then derived from the W_NUM field for * + * synthesizing a Crosstalk packet. The 5 bits of OFFSET are * + * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP * + * field is used as Crosstalk[47], and remainder of the Crosstalk * + * address bits (Crosstalk[46:34]) are always zero. While the maximum * + * Crosstalk space addressable by the SHub is thus the lower * + * 8-GBytes per widget (N-mode), only 7/32nds * + * of this space can be accessed. * + * * + ************************************************************************/ + +typedef union ii_itte4_u { + shubreg_t ii_itte4_regval; + struct { + shubreg_t i_offset : 5; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_w_num : 4; + shubreg_t i_iosp : 1; + shubreg_t i_rsvd : 51; + } ii_itte4_fld_s; +} ii_itte4_u_t; + + +/************************************************************************ + * * + * Description: There are seven instances of translation table entry * + * registers. Each register maps a SHub Big Window to a 48-bit * + * address on Crosstalk. * + * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window * + * number) are used to select one of these 7 registers. The Widget * + * number field is then derived from the W_NUM field for synthesizing * + * a Crosstalk packet. The 5 bits of OFFSET are concatenated with * + * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] * + * are padded with zeros. Although the maximum Crosstalk space * + * addressable by the Shub is thus the lower 16 GBytes per widget * + * (M-mode), however only 7/32nds of this * + * space can be accessed. * + * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big * + * Window number) are used to select one of these 7 registers. The * + * Widget number field is then derived from the W_NUM field for * + * synthesizing a Crosstalk packet. The 5 bits of OFFSET are * + * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP * + * field is used as Crosstalk[47], and remainder of the Crosstalk * + * address bits (Crosstalk[46:34]) are always zero. While the maximum * + * Crosstalk space addressable by the Shub is thus the lower * + * 8-GBytes per widget (N-mode), only 7/32nds * + * of this space can be accessed. 
* + * * + ************************************************************************/ + +typedef union ii_itte5_u { + shubreg_t ii_itte5_regval; + struct { + shubreg_t i_offset : 5; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_w_num : 4; + shubreg_t i_iosp : 1; + shubreg_t i_rsvd : 51; + } ii_itte5_fld_s; +} ii_itte5_u_t; + + +/************************************************************************ + * * + * Description: There are seven instances of translation table entry * + * registers. Each register maps a Shub Big Window to a 48-bit * + * address on Crosstalk. * + * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window * + * number) are used to select one of these 7 registers. The Widget * + * number field is then derived from the W_NUM field for synthesizing * + * a Crosstalk packet. The 5 bits of OFFSET are concatenated with * + * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] * + * are padded with zeros. Although the maximum Crosstalk space * + * addressable by the Shub is thus the lower 16 GBytes per widget * + * (M-mode), however only 7/32nds of this * + * space can be accessed. * + * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big * + * Window number) are used to select one of these 7 registers. The * + * Widget number field is then derived from the W_NUM field for * + * synthesizing a Crosstalk packet. The 5 bits of OFFSET are * + * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP * + * field is used as Crosstalk[47], and remainder of the Crosstalk * + * address bits (Crosstalk[46:34]) are always zero. While the maximum * + * Crosstalk space addressable by the Shub is thus the lower * + * 8-GBytes per widget (N-mode), only 7/32nds * + * of this space can be accessed. * + * * + ************************************************************************/ + +typedef union ii_itte6_u { + shubreg_t ii_itte6_regval; + struct { + shubreg_t i_offset : 5; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_w_num : 4; + shubreg_t i_iosp : 1; + shubreg_t i_rsvd : 51; + } ii_itte6_fld_s; +} ii_itte6_u_t; + + +/************************************************************************ + * * + * Description: There are seven instances of translation table entry * + * registers. Each register maps a Shub Big Window to a 48-bit * + * address on Crosstalk. * + * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window * + * number) are used to select one of these 7 registers. The Widget * + * number field is then derived from the W_NUM field for synthesizing * + * a Crosstalk packet. The 5 bits of OFFSET are concatenated with * + * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] * + * are padded with zeros. Although the maximum Crosstalk space * + * addressable by the Shub is thus the lower 16 GBytes per widget * + * (M-mode), however only 7/32nds of this * + * space can be accessed. * + * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big * + * Window number) are used to select one of these 7 registers. The * + * Widget number field is then derived from the W_NUM field for * + * synthesizing a Crosstalk packet. The 5 bits of OFFSET are * + * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP * + * field is used as Crosstalk[47], and remainder of the Crosstalk * + * address bits (Crosstalk[46:34]) are always zero. While the maximum * + * Crosstalk space addressable by the SHub is thus the lower * + * 8-GBytes per widget (N-mode), only 7/32nds * + * of this space can be accessed. 
* + * * + ************************************************************************/ + +typedef union ii_itte7_u { + shubreg_t ii_itte7_regval; + struct { + shubreg_t i_offset : 5; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_w_num : 4; + shubreg_t i_iosp : 1; + shubreg_t i_rsvd : 51; + } ii_itte7_fld_s; +} ii_itte7_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of SHub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * . * + * * + ************************************************************************/ + +typedef union ii_iprb0_u { + shubreg_t ii_iprb0_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprb0_fld_s; +} ii_iprb0_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of SHub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * . 
* + * * + ************************************************************************/ + +typedef union ii_iprb8_u { + shubreg_t ii_iprb8_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprb8_fld_s; +} ii_iprb8_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of SHub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * . * + * * + ************************************************************************/ + +typedef union ii_iprb9_u { + shubreg_t ii_iprb9_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprb9_fld_s; +} ii_iprb9_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of SHub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. 
Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * * + * * + ************************************************************************/ + +typedef union ii_iprba_u { + shubreg_t ii_iprba_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprba_fld_s; +} ii_iprba_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of SHub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * . * + * * + ************************************************************************/ + +typedef union ii_iprbb_u { + shubreg_t ii_iprbb_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprbb_fld_s; +} ii_iprbb_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of SHub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. 
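/*
 * Editor's illustration (not part of the patch): clearing a spurious-write
 * indication on PRB A, following the description above.  Even after the
 * matching IECLR[E_PRB_x] bit has been written, SPUR_WR persists until the
 * IPRBx register itself is rewritten with a corrected credit count.  The
 * volatile pointer and the helper name are assumptions, not existing
 * accessors.
 */
static inline void
prb_a_clear_spur_wr(volatile shubreg_t *iprba_reg, unsigned int credits)
{
        ii_iprba_u_t prb;

        prb.ii_iprba_regval = *iprba_reg;
        if (prb.ii_iprba_fld_s.i_spur_wr) {
                /* Rewriting the register recaptures the C field internally. */
                prb.ii_iprba_fld_s.i_c = credits;
                *iprba_reg = prb.ii_iprba_regval;
        }
}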
The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * . * + * * + ************************************************************************/ + +typedef union ii_iprbc_u { + shubreg_t ii_iprbc_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprbc_fld_s; +} ii_iprbc_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of SHub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * . * + * * + ************************************************************************/ + +typedef union ii_iprbd_u { + shubreg_t ii_iprbd_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprbd_fld_s; +} ii_iprbd_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of SHub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. 
The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * . * + * * + ************************************************************************/ + +typedef union ii_iprbe_u { + shubreg_t ii_iprbe_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprbe_fld_s; +} ii_iprbe_u_t; + + +/************************************************************************ + * * + * Description: There are 9 instances of this register, one per * + * actual widget in this implementation of Shub and Crossbow. * + * Note: Crossbow only has ports for Widgets 8 through F, widget 0 * + * refers to Crossbow's internal space. * + * This register contains the state elements per widget that are * + * necessary to manage the PIO flow control on Crosstalk and on the * + * Router Network. See the PIO Flow Control chapter for a complete * + * description of this register * + * The SPUR_WR bit requires some explanation. When this register is * + * written, the new value of the C field is captured in an internal * + * register so the hardware can remember what the programmer wrote * + * into the credit counter. The SPUR_WR bit sets whenever the C field * + * increments above this stored value, which indicates that there * + * have been more responses received than requests sent. The SPUR_WR * + * bit cannot be cleared until a value is written to the IPRBx * + * register; the write will correct the C field and capture its new * + * value in the internal register. Even if IECLR[E_PRB_x] is set, the * + * SPUR_WR bit will persist if IPRBx hasn't yet been written. * + * . * + * * + ************************************************************************/ + +typedef union ii_iprbf_u { + shubreg_t ii_iprbf_regval; + struct { + shubreg_t i_c : 8; + shubreg_t i_na : 14; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_nb : 14; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_m : 2; + shubreg_t i_f : 1; + shubreg_t i_of_cnt : 5; + shubreg_t i_error : 1; + shubreg_t i_rd_to : 1; + shubreg_t i_spur_wr : 1; + shubreg_t i_spur_rd : 1; + shubreg_t i_rsvd : 11; + shubreg_t i_mult_err : 1; + } ii_iprbe_fld_s; +} ii_iprbf_u_t; + + +/************************************************************************ + * * + * This register specifies the timeout value to use for monitoring * + * Crosstalk credits which are used outbound to Crosstalk. An * + * internal counter called the Crosstalk Credit Timeout Counter * + * increments every 128 II clocks. The counter starts counting * + * anytime the credit count drops below a threshold, and resets to * + * zero (stops counting) anytime the credit count is at or above the * + * threshold. The threshold is 1 credit in direct connect mode and 2 * + * in Crossbow connect mode. 
When the internal Crosstalk Credit * + * Timeout Counter reaches the value programmed in this register, a * + * Crosstalk Credit Timeout has occurred. The internal counter is not * + * readable from software, and stops counting at its maximum value, * + * so it cannot cause more than one interrupt. * + * * + ************************************************************************/ + +typedef union ii_ixcc_u { + shubreg_t ii_ixcc_regval; + struct { + shubreg_t i_time_out : 26; + shubreg_t i_rsvd : 38; + } ii_ixcc_fld_s; +} ii_ixcc_u_t; + + +/************************************************************************ + * * + * Description: This register qualifies all the PIO and DMA * + * operations launched from widget 0 towards the SHub. In * + * addition, it also qualifies accesses by the BTE streams. * + * The bits in each field of this register are cleared by the SHub * + * upon detection of an error which requires widget 0 or the BTE * + * streams to be terminated. Whether or not widget x has access * + * rights to this SHub is determined by an AND of the device * + * enable bit in the appropriate field of this register and bit 0 in * + * the Wx_IAC field. The bits in this field are set by writing a 1 to * + * them. Incoming replies from Crosstalk are not subject to this * + * access control mechanism. * + * * + ************************************************************************/ + +typedef union ii_imem_u { + shubreg_t ii_imem_regval; + struct { + shubreg_t i_w0_esd : 1; + shubreg_t i_rsvd_3 : 3; + shubreg_t i_b0_esd : 1; + shubreg_t i_rsvd_2 : 3; + shubreg_t i_b1_esd : 1; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_clr_precise : 1; + shubreg_t i_rsvd : 51; + } ii_imem_fld_s; +} ii_imem_u_t; + + + +/************************************************************************ + * * + * Description: This register specifies the timeout value to use for * + * monitoring Crosstalk tail flits coming into the Shub in the * + * TAIL_TO field. An internal counter associated with this register * + * is incremented every 128 II internal clocks (7 bits). The counter * + * starts counting anytime a header micropacket is received and stops * + * counting (and resets to zero) any time a micropacket with a Tail * + * bit is received. Once the counter reaches the threshold value * + * programmed in this register, it generates an interrupt to the * + * processor that is programmed into the IIDSR. The counter saturates * + * (does not roll over) at its maximum value, so it cannot cause * + * another interrupt until after it is cleared. * + * The register also contains the Read Response Timeout values. The * + * Prescalar is 23 bits, and counts II clocks. An internal counter * + * increments on every II clock and when it reaches the value in the * + * Prescalar field, all IPRTE registers with their valid bits set * + * have their Read Response timers bumped. Whenever any of them match * + * the value in the RRSP_TO field, a Read Response Timeout has * + * occurred, and error handling occurs as described in the Error * + * Handling section of this document. 
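/*
 * Editor's illustration (not part of the patch): programming the Crosstalk
 * Credit Timeout threshold.  The internal counter described above advances
 * once every 128 II clocks, so the programmed value is roughly the desired
 * number of II clocks divided by 128.  The helper and the raw register
 * pointer are assumptions for illustration only.
 */
static inline void
set_xtalk_credit_timeout(volatile shubreg_t *ixcc_reg, unsigned long ii_clocks)
{
        ii_ixcc_u_t ixcc;

        ixcc.ii_ixcc_regval = 0;
        ixcc.ii_ixcc_fld_s.i_time_out = ii_clocks / 128;        /* 26-bit TIME_OUT field */
        *ixcc_reg = ixcc.ii_ixcc_regval;
}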
* + * * + ************************************************************************/ + +typedef union ii_ixtt_u { + shubreg_t ii_ixtt_regval; + struct { + shubreg_t i_tail_to : 26; + shubreg_t i_rsvd_1 : 6; + shubreg_t i_rrsp_ps : 23; + shubreg_t i_rrsp_to : 5; + shubreg_t i_rsvd : 4; + } ii_ixtt_fld_s; +} ii_ixtt_u_t; + + +/************************************************************************ + * * + * Writing a 1 to the fields of this register clears the appropriate * + * error bits in other areas of SHub. Note that when the * + * E_PRB_x bits are used to clear error bits in PRB registers, * + * SPUR_RD and SPUR_WR may persist, because they require additional * + * action to clear them. See the IPRBx and IXSS Register * + * specifications. * + * * + ************************************************************************/ + +typedef union ii_ieclr_u { + shubreg_t ii_ieclr_regval; + struct { + shubreg_t i_e_prb_0 : 1; + shubreg_t i_rsvd : 7; + shubreg_t i_e_prb_8 : 1; + shubreg_t i_e_prb_9 : 1; + shubreg_t i_e_prb_a : 1; + shubreg_t i_e_prb_b : 1; + shubreg_t i_e_prb_c : 1; + shubreg_t i_e_prb_d : 1; + shubreg_t i_e_prb_e : 1; + shubreg_t i_e_prb_f : 1; + shubreg_t i_e_crazy : 1; + shubreg_t i_e_bte_0 : 1; + shubreg_t i_e_bte_1 : 1; + shubreg_t i_reserved_1 : 10; + shubreg_t i_spur_rd_hdr : 1; + shubreg_t i_cam_intr_to : 1; + shubreg_t i_cam_overflow : 1; + shubreg_t i_cam_read_miss : 1; + shubreg_t i_ioq_rep_underflow : 1; + shubreg_t i_ioq_req_underflow : 1; + shubreg_t i_ioq_rep_overflow : 1; + shubreg_t i_ioq_req_overflow : 1; + shubreg_t i_iiq_rep_overflow : 1; + shubreg_t i_iiq_req_overflow : 1; + shubreg_t i_ii_xn_rep_cred_overflow : 1; + shubreg_t i_ii_xn_req_cred_overflow : 1; + shubreg_t i_ii_xn_invalid_cmd : 1; + shubreg_t i_xn_ii_invalid_cmd : 1; + shubreg_t i_reserved_2 : 21; + } ii_ieclr_fld_s; +} ii_ieclr_u_t; + + +/************************************************************************ + * * + * This register controls both BTEs. SOFT_RESET is intended for * + * recovery after an error. COUNT controls the total number of CRBs * + * that both BTEs (combined) can use, which affects total BTE * + * bandwidth. * + * * + ************************************************************************/ + +typedef union ii_ibcr_u { + shubreg_t ii_ibcr_regval; + struct { + shubreg_t i_count : 4; + shubreg_t i_rsvd_1 : 4; + shubreg_t i_soft_reset : 1; + shubreg_t i_rsvd : 55; + } ii_ibcr_fld_s; +} ii_ibcr_u_t; + + +/************************************************************************ + * * + * This register contains the header of a spurious read response * + * received from Crosstalk. A spurious read response is defined as a * + * read response received by II from a widget for which (1) the SIDN * + * has a value between 1 and 7, inclusive (II never sends requests to * + * these widgets), (2) there is no valid IPRTE register which * + * corresponds to the TNUM, or (3) the widget indicated in SIDN is * + * not the same as the widget recorded in the IPRTE register * + * referenced by the TNUM. If this condition is true, and if the * + * IXSS[VALID] bit is clear, then the header of the spurious read * + * response is captured in IXSM and IXSS, and IXSS[VALID] is set. The * + * errant header is thereby captured, and no further spurious read * + * responses are captured until IXSS[VALID] is cleared by setting the * + * appropriate bit in IECLR. Every time a spurious read response is * + * detected, the SPUR_RD bit of the PRB corresponding to the incoming * + * message's SIDN field is set.
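/*
 * Editor's illustration (not part of the patch): acknowledging a widget 8
 * PRB error and a BTE 0 error by writing 1s to the corresponding IECLR
 * fields, as described above.  SPUR_RD/SPUR_WR in the PRB itself may still
 * need a follow-up IPRBx write.  The register pointer is a stand-in.
 */
static inline void
clear_prb8_and_bte0_errors(volatile shubreg_t *ieclr_reg)
{
        ii_ieclr_u_t ieclr;

        ieclr.ii_ieclr_regval = 0;
        ieclr.ii_ieclr_fld_s.i_e_prb_8 = 1;     /* clear widget 8 PRB error */
        ieclr.ii_ieclr_fld_s.i_e_bte_0 = 1;     /* clear BTE 0 error */
        *ieclr_reg = ieclr.ii_ieclr_regval;
}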
This always happens, regardless of * + * whether a header is captured. The programmer should check * + * IXSM[SIDN] to determine which widget sent the spurious response, * + * because there may be more than one SPUR_RD bit set in the PRB * + * registers. The widget indicated by IXSM[SIDN] sent the first * + * spurious read response received since the last time * + * IXSS[VALID] was clear. The SPUR_RD bit of the corresponding PRB * + * will be set. Any SPUR_RD bits in any other PRB registers indicate * + * spurious messages from other widgets which were detected after the * + * header was captured. * + * * + ************************************************************************/ + +typedef union ii_ixsm_u { + shubreg_t ii_ixsm_regval; + struct { + shubreg_t i_byte_en : 32; + shubreg_t i_reserved : 1; + shubreg_t i_tag : 3; + shubreg_t i_alt_pactyp : 4; + shubreg_t i_bo : 1; + shubreg_t i_error : 1; + shubreg_t i_vbpm : 1; + shubreg_t i_gbr : 1; + shubreg_t i_ds : 2; + shubreg_t i_ct : 1; + shubreg_t i_tnum : 5; + shubreg_t i_pactyp : 4; + shubreg_t i_sidn : 4; + shubreg_t i_didn : 4; + } ii_ixsm_fld_s; +} ii_ixsm_u_t; + + +/************************************************************************ + * * + * This register contains the sideband bits of a spurious read * + * response received from Crosstalk. * + * * + ************************************************************************/ + +typedef union ii_ixss_u { + shubreg_t ii_ixss_regval; + struct { + shubreg_t i_sideband : 8; + shubreg_t i_rsvd : 55; + shubreg_t i_valid : 1; + } ii_ixss_fld_s; +} ii_ixss_u_t; + + +/************************************************************************ + * * + * This register enables software to access the II LLP's test port. * + * Refer to the LLP 2.5 documentation for an explanation of the test * + * port. Software can write to this register to program the values * + * for the control fields (TestErrCapture, TestClear, TestFlit, * + * TestMask and TestSeed). Similarly, software can read from this * + * register to obtain the values of the test port's status outputs * + * (TestCBerr, TestValid and TestData). * + * * + ************************************************************************/ + +typedef union ii_ilct_u { + shubreg_t ii_ilct_regval; + struct { + shubreg_t i_test_seed : 20; + shubreg_t i_test_mask : 8; + shubreg_t i_test_data : 20; + shubreg_t i_test_valid : 1; + shubreg_t i_test_cberr : 1; + shubreg_t i_test_flit : 3; + shubreg_t i_test_clear : 1; + shubreg_t i_test_err_capture : 1; + shubreg_t i_rsvd : 9; + } ii_ilct_fld_s; +} ii_ilct_u_t; + + +/************************************************************************ + * * + * If the II detects an illegal incoming Duplonet packet (request or * + * reply) when VALID==0 in the IIEPH1 register, then it saves the * + * contents of the packet's header flit in the IIEPH1 and IIEPH2 * + * registers, sets the VALID bit in IIEPH1, clears the OVERRUN bit, * + * and assigns a value to the ERR_TYPE field which indicates the * + * specific nature of the error. The II recognizes four different * + * types of errors: short request packets (ERR_TYPE==2), short reply * + * packets (ERR_TYPE==3), long request packets (ERR_TYPE==4) and long * + * reply packets (ERR_TYPE==5). The encodings for these types of * + * errors were chosen to be consistent with the same types of errors * + * indicated by the ERR_TYPE field in the LB_ERROR_HDR1 register (in * + * the LB unit).
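/*
 * Editor's illustration (not part of the patch): inspecting a captured
 * spurious read response.  IXSM/IXSS only hold a captured header while
 * IXSS[VALID] is set; IXSM[SIDN] then identifies the widget that sent the
 * first spurious response.  The register pointers are stand-ins.
 */
static inline int
spurious_read_source(volatile shubreg_t *ixss_reg, volatile shubreg_t *ixsm_reg)
{
        ii_ixss_u_t ixss;
        ii_ixsm_u_t ixsm;

        ixss.ii_ixss_regval = *ixss_reg;
        if (!ixss.ii_ixss_fld_s.i_valid)
                return -1;                      /* nothing captured */

        ixsm.ii_ixsm_regval = *ixsm_reg;
        return ixsm.ii_ixsm_fld_s.i_sidn;       /* offending widget number */
}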
If the II detects an illegal incoming Duplonet * + * packet when VALID==1 in the IIEPH1 register, then it merely sets * + * the OVERRUN bit to indicate that a subsequent error has happened, * + * and does nothing further. * + * * + ************************************************************************/ + +typedef union ii_iieph1_u { + shubreg_t ii_iieph1_regval; + struct { + shubreg_t i_command : 7; + shubreg_t i_rsvd_5 : 1; + shubreg_t i_suppl : 14; + shubreg_t i_rsvd_4 : 1; + shubreg_t i_source : 14; + shubreg_t i_rsvd_3 : 1; + shubreg_t i_err_type : 4; + shubreg_t i_rsvd_2 : 4; + shubreg_t i_overrun : 1; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_valid : 1; + shubreg_t i_rsvd : 13; + } ii_iieph1_fld_s; +} ii_iieph1_u_t; + + +/************************************************************************ + * * + * This register holds the Address field from the header flit of an * + * incoming erroneous Duplonet packet, along with the tail bit which * + * accompanied this header flit. This register is essentially an * + * extension of IIEPH1. Two registers were necessary because the 64 * + * bits available in only a single register were insufficient to * + * capture the entire header flit of an erroneous packet. * + * * + ************************************************************************/ + +typedef union ii_iieph2_u { + shubreg_t ii_iieph2_regval; + struct { + shubreg_t i_rsvd_0 : 3; + shubreg_t i_address : 47; + shubreg_t i_rsvd_1 : 10; + shubreg_t i_tail : 1; + shubreg_t i_rsvd : 3; + } ii_iieph2_fld_s; +} ii_iieph2_u_t; + + +/******************************/ + + + +/************************************************************************ + * * + * This register's value is a bit vector that guards access from SXBs * + * to local registers within the II as well as to external Crosstalk * + * widgets. * + * * + ************************************************************************/ + +typedef union ii_islapr_u { + shubreg_t ii_islapr_regval; + struct { + shubreg_t i_region : 64; + } ii_islapr_fld_s; +} ii_islapr_u_t; + + +/************************************************************************ + * * + * A write to this register of the 56-bit value "Pup+Bun" will cause * + * the bit in the ISLAPR register corresponding to the region of the * + * requestor to be set (access allowed). * + * * + ************************************************************************/ + +typedef union ii_islapo_u { + shubreg_t ii_islapo_regval; + struct { + shubreg_t i_io_sbx_ovrride : 56; + shubreg_t i_rsvd : 8; + } ii_islapo_fld_s; +} ii_islapo_u_t; + +/************************************************************************ + * * + * Determines how long the wrapper will wait after an interrupt is * + * initially issued from the II before it times out the outstanding * + * interrupt and drops it from the interrupt queue. * + * * + ************************************************************************/ + +typedef union ii_iwi_u { + shubreg_t ii_iwi_regval; + struct { + shubreg_t i_prescale : 24; + shubreg_t i_rsvd : 8; + shubreg_t i_timeout : 8; + shubreg_t i_rsvd1 : 8; + shubreg_t i_intrpt_retry_period : 8; + shubreg_t i_rsvd2 : 8; + } ii_iwi_fld_s; +} ii_iwi_u_t; + +/************************************************************************ + * * + * Log errors which have occurred in the II wrapper. The errors are * + * cleared by writing to the IECLR register.
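/*
 * Editor's illustration (not part of the patch): setting the II wrapper's
 * interrupt timeout.  Only the TIMEOUT behaviour is spelled out above; the
 * reading of the PRESCALE and retry-period fields, the example values and
 * the raw register pointer are the editor's assumptions.
 */
static inline void
set_ii_wrapper_intr_timeout(volatile shubreg_t *iwi_reg)
{
        ii_iwi_u_t iwi;

        iwi.ii_iwi_regval = 0;
        iwi.ii_iwi_fld_s.i_prescale = 0x100000;         /* presumed timeout-tick prescaler */
        iwi.ii_iwi_fld_s.i_timeout = 8;                 /* ticks before the interrupt is dropped */
        iwi.ii_iwi_fld_s.i_intrpt_retry_period = 4;     /* presumed retry cadence */
        *iwi_reg = iwi.ii_iwi_regval;
}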
* + * * + ************************************************************************/ + +typedef union ii_iwel_u { + shubreg_t ii_iwel_regval; + struct { + shubreg_t i_intr_timed_out : 1; + shubreg_t i_rsvd : 7; + shubreg_t i_cam_overflow : 1; + shubreg_t i_cam_read_miss : 1; + shubreg_t i_rsvd1 : 2; + shubreg_t i_ioq_rep_underflow : 1; + shubreg_t i_ioq_req_underflow : 1; + shubreg_t i_ioq_rep_overflow : 1; + shubreg_t i_ioq_req_overflow : 1; + shubreg_t i_iiq_rep_overflow : 1; + shubreg_t i_iiq_req_overflow : 1; + shubreg_t i_rsvd2 : 6; + shubreg_t i_ii_xn_rep_cred_over_under: 1; + shubreg_t i_ii_xn_req_cred_over_under: 1; + shubreg_t i_rsvd3 : 6; + shubreg_t i_ii_xn_invalid_cmd : 1; + shubreg_t i_xn_ii_invalid_cmd : 1; + shubreg_t i_rsvd4 : 30; + } ii_iwel_fld_s; +} ii_iwel_u_t; + +/************************************************************************ + * * + * Controls the II wrapper. * + * * + ************************************************************************/ + +typedef union ii_iwc_u { + shubreg_t ii_iwc_regval; + struct { + shubreg_t i_dma_byte_swap : 1; + shubreg_t i_rsvd : 3; + shubreg_t i_cam_read_lines_reset : 1; + shubreg_t i_rsvd1 : 3; + shubreg_t i_ii_xn_cred_over_under_log: 1; + shubreg_t i_rsvd2 : 19; + shubreg_t i_xn_rep_iq_depth : 5; + shubreg_t i_rsvd3 : 3; + shubreg_t i_xn_req_iq_depth : 5; + shubreg_t i_rsvd4 : 3; + shubreg_t i_iiq_depth : 6; + shubreg_t i_rsvd5 : 12; + shubreg_t i_force_rep_cred : 1; + shubreg_t i_force_req_cred : 1; + } ii_iwc_fld_s; +} ii_iwc_u_t; + +/************************************************************************ + * * + * Status in the II wrapper. * + * * + ************************************************************************/ + +typedef union ii_iws_u { + shubreg_t ii_iws_regval; + struct { + shubreg_t i_xn_rep_iq_credits : 5; + shubreg_t i_rsvd : 3; + shubreg_t i_xn_req_iq_credits : 5; + shubreg_t i_rsvd1 : 51; + } ii_iws_fld_s; +} ii_iws_u_t; + +/************************************************************************ + * * + * Masks errors in the IWEL register. * + * * + ************************************************************************/ + +typedef union ii_iweim_u { + shubreg_t ii_iweim_regval; + struct { + shubreg_t i_intr_timed_out : 1; + shubreg_t i_rsvd : 7; + shubreg_t i_cam_overflow : 1; + shubreg_t i_cam_read_miss : 1; + shubreg_t i_rsvd1 : 2; + shubreg_t i_ioq_rep_underflow : 1; + shubreg_t i_ioq_req_underflow : 1; + shubreg_t i_ioq_rep_overflow : 1; + shubreg_t i_ioq_req_overflow : 1; + shubreg_t i_iiq_rep_overflow : 1; + shubreg_t i_iiq_req_overflow : 1; + shubreg_t i_rsvd2 : 6; + shubreg_t i_ii_xn_rep_cred_overflow : 1; + shubreg_t i_ii_xn_req_cred_overflow : 1; + shubreg_t i_rsvd3 : 6; + shubreg_t i_ii_xn_invalid_cmd : 1; + shubreg_t i_xn_ii_invalid_cmd : 1; + shubreg_t i_rsvd4 : 30; + } ii_iweim_fld_s; +} ii_iweim_u_t; + + +/************************************************************************ + * * + * A write to this register causes a particular field in the * + * corresponding widget's PRB entry to be adjusted up or down by 1. * + * This counter should be used when recovering from error and reset * + * conditions. Note that software would be capable of causing * + * inadvertent overflow or underflow of these counters. 
* + * * + ************************************************************************/ + +typedef union ii_ipca_u { + shubreg_t ii_ipca_regval; + struct { + shubreg_t i_wid : 4; + shubreg_t i_adjust : 1; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_field : 2; + shubreg_t i_rsvd : 54; + } ii_ipca_fld_s; +} ii_ipca_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + + +typedef union ii_iprte0a_u { + shubreg_t ii_iprte0a_regval; + struct { + shubreg_t i_rsvd_1 : 54; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } ii_iprte0a_fld_s; +} ii_iprte0a_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte1a_u { + shubreg_t ii_iprte1a_regval; + struct { + shubreg_t i_rsvd_1 : 54; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } ii_iprte1a_fld_s; +} ii_iprte1a_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte2a_u { + shubreg_t ii_iprte2a_regval; + struct { + shubreg_t i_rsvd_1 : 54; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } ii_iprte2a_fld_s; +} ii_iprte2a_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte3a_u { + shubreg_t ii_iprte3a_regval; + struct { + shubreg_t i_rsvd_1 : 54; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } ii_iprte3a_fld_s; +} ii_iprte3a_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. 
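/*
 * Editor's illustration (not part of the patch): checking whether PIO Read
 * Table Entry 0 still has a read outstanding and, if so, which widget it
 * targets.  The register pointer is a stand-in for the platform accessor.
 */
static inline int
iprte0_outstanding(volatile shubreg_t *iprte0a_reg, int *widget)
{
        ii_iprte0a_u_t prte;

        prte.ii_iprte0a_regval = *iprte0a_reg;
        if (!prte.ii_iprte0a_fld_s.i_vld)
                return 0;                               /* entry is free */
        *widget = (int) prte.ii_iprte0a_fld_s.i_widget; /* widget the read was sent to */
        return 1;
}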
* + * * + ************************************************************************/ + +typedef union ii_iprte4a_u { + shubreg_t ii_iprte4a_regval; + struct { + shubreg_t i_rsvd_1 : 54; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } ii_iprte4a_fld_s; +} ii_iprte4a_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte5a_u { + shubreg_t ii_iprte5a_regval; + struct { + shubreg_t i_rsvd_1 : 54; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } ii_iprte5a_fld_s; +} ii_iprte5a_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte6a_u { + shubreg_t ii_iprte6a_regval; + struct { + shubreg_t i_rsvd_1 : 54; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } ii_iprte6a_fld_s; +} ii_iprte6a_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte7a_u { + shubreg_t ii_iprte7a_regval; + struct { + shubreg_t i_rsvd_1 : 54; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } ii_iprtea7_fld_s; +} ii_iprte7a_u_t; + + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + + +typedef union ii_iprte0b_u { + shubreg_t ii_iprte0b_regval; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_address : 47; + shubreg_t i_init : 3; + shubreg_t i_source : 11; + } ii_iprte0b_fld_s; +} ii_iprte0b_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. 
* + * * + ************************************************************************/ + +typedef union ii_iprte1b_u { + shubreg_t ii_iprte1b_regval; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_address : 47; + shubreg_t i_init : 3; + shubreg_t i_source : 11; + } ii_iprte1b_fld_s; +} ii_iprte1b_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte2b_u { + shubreg_t ii_iprte2b_regval; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_address : 47; + shubreg_t i_init : 3; + shubreg_t i_source : 11; + } ii_iprte2b_fld_s; +} ii_iprte2b_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte3b_u { + shubreg_t ii_iprte3b_regval; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_address : 47; + shubreg_t i_init : 3; + shubreg_t i_source : 11; + } ii_iprte3b_fld_s; +} ii_iprte3b_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte4b_u { + shubreg_t ii_iprte4b_regval; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_address : 47; + shubreg_t i_init : 3; + shubreg_t i_source : 11; + } ii_iprte4b_fld_s; +} ii_iprte4b_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte5b_u { + shubreg_t ii_iprte5b_regval; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_address : 47; + shubreg_t i_init : 3; + shubreg_t i_source : 11; + } ii_iprte5b_fld_s; +} ii_iprte5b_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. 
* + * * + ************************************************************************/ + +typedef union ii_iprte6b_u { + shubreg_t ii_iprte6b_regval; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_address : 47; + shubreg_t i_init : 3; + shubreg_t i_source : 11; + + } ii_iprte6b_fld_s; +} ii_iprte6b_u_t; + + +/************************************************************************ + * * + * There are 8 instances of this register. This register contains * + * the information that the II has to remember once it has launched a * + * PIO Read operation. The contents are used to form the correct * + * Router Network packet and direct the Crosstalk reply to the * + * appropriate processor. * + * * + ************************************************************************/ + +typedef union ii_iprte7b_u { + shubreg_t ii_iprte7b_regval; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_address : 47; + shubreg_t i_init : 3; + shubreg_t i_source : 11; + } ii_iprte7b_fld_s; +} ii_iprte7b_u_t; + + +/************************************************************************ + * * + * Description: SHub II contains a feature which did not exist in * + * the Hub which automatically cleans up after a Read Response * + * timeout, including deallocation of the IPRTE and recovery of IBuf * + * space. The inclusion of this register in SHub is for backward * + * compatibility * + * A write to this register causes an entry from the table of * + * outstanding PIO Read Requests to be freed and returned to the * + * stack of free entries. This register is used in handling the * + * timeout errors that result in a PIO Reply never returning from * + * Crosstalk. * + * Note that this register does not affect the contents of the IPRTE * + * registers. The Valid bits in those registers have to be * + * specifically turned off by software. * + * * + ************************************************************************/ + +typedef union ii_ipdr_u { + shubreg_t ii_ipdr_regval; + struct { + shubreg_t i_te : 3; + shubreg_t i_rsvd_1 : 1; + shubreg_t i_pnd : 1; + shubreg_t i_init_rpcnt : 1; + shubreg_t i_rsvd : 58; + } ii_ipdr_fld_s; +} ii_ipdr_u_t; + + +/************************************************************************ + * * + * A write to this register causes a CRB entry to be returned to the * + * queue of free CRBs. The entry should have previously been cleared * + * (mark bit) via backdoor access to the pertinent CRB entry. This * + * register is used in the last step of handling the errors that are * + * captured and marked in CRB entries. Briefly: 1) first error for * + * DMA write from a particular device, and first error for a * + * particular BTE stream, lead to a marked CRB entry, and processor * + * interrupt, 2) software reads the error information captured in the * + * CRB entry, and presumably takes some corrective action, 3) * + * software clears the mark bit, and finally 4) software writes to * + * the ICDR register to return the CRB entry to the list of free CRB * + * entries. * + * * + ************************************************************************/ + +typedef union ii_icdr_u { + shubreg_t ii_icdr_regval; + struct { + shubreg_t i_crb_num : 4; + shubreg_t i_pnd : 1; + shubreg_t i_rsvd : 59; + } ii_icdr_fld_s; +} ii_icdr_u_t; + + +/************************************************************************ + * * + * This register provides debug access to two FIFOs inside of II. 
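/*
 * Editor's illustration (not part of the patch): step 4 of the error flow
 * described above - returning a CRB entry to the free list by writing its
 * number to ICDR.  Treating the PND bit as a "request still pending"
 * indication is the editor's reading of the field name; the register
 * pointer is a stand-in.
 */
static inline void
icdr_free_crb(volatile shubreg_t *icdr_reg, unsigned int crb_num)
{
        ii_icdr_u_t icdr;

        icdr.ii_icdr_regval = 0;
        icdr.ii_icdr_fld_s.i_crb_num = crb_num;         /* CRB entry to release */
        *icdr_reg = icdr.ii_icdr_regval;

        do {                                            /* wait for the release to complete */
                icdr.ii_icdr_regval = *icdr_reg;
        } while (icdr.ii_icdr_fld_s.i_pnd);
}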
* + * Both IOQ_MAX* fields of this register contain the instantaneous * + * depth (in units of the number of available entries) of the * + * associated IOQ FIFO. A read of this register will return the * + * number of free entries on each FIFO at the time of the read. So * + * when a FIFO is idle, the associated field contains the maximum * + * depth of the FIFO. This register is writable for debug reasons * + * and is intended to be written with the maximum desired FIFO depth * + * while the FIFO is idle. Software must assure that II is idle when * + * this register is written. If there are any active entries in any * + * of these FIFOs when this register is written, the results are * + * undefined. * + * * + ************************************************************************/ + +typedef union ii_ifdr_u { + shubreg_t ii_ifdr_regval; + struct { + shubreg_t i_ioq_max_rq : 7; + shubreg_t i_set_ioq_rq : 1; + shubreg_t i_ioq_max_rp : 7; + shubreg_t i_set_ioq_rp : 1; + shubreg_t i_rsvd : 48; + } ii_ifdr_fld_s; +} ii_ifdr_u_t; + + +/************************************************************************ + * * + * This register allows the II to become sluggish in removing * + * messages from its inbound queue (IIQ). This will cause messages to * + * back up in either virtual channel. Disabling the "molasses" mode * + * subsequently allows the II to be tested under stress. In the * + * sluggish ("Molasses") mode, the localized effects of congestion * + * can be observed. * + * * + ************************************************************************/ + +typedef union ii_iiap_u { + shubreg_t ii_iiap_regval; + struct { + shubreg_t i_rq_mls : 6; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_rp_mls : 6; + shubreg_t i_rsvd : 50; + } ii_iiap_fld_s; +} ii_iiap_u_t; + + +/************************************************************************ + * * + * This register allows several parameters of CRB operation to be * + * set. Note that writing to this register can have catastrophic side * + * effects, if the CRB is not quiescent, i.e. if the CRB is * + * processing protocol messages when the write occurs. * + * * + ************************************************************************/ + +typedef union ii_icmr_u { + shubreg_t ii_icmr_regval; + struct { + shubreg_t i_sp_msg : 1; + shubreg_t i_rd_hdr : 1; + shubreg_t i_rsvd_4 : 2; + shubreg_t i_c_cnt : 4; + shubreg_t i_rsvd_3 : 4; + shubreg_t i_clr_rqpd : 1; + shubreg_t i_clr_rppd : 1; + shubreg_t i_rsvd_2 : 2; + shubreg_t i_fc_cnt : 4; + shubreg_t i_crb_vld : 15; + shubreg_t i_crb_mark : 15; + shubreg_t i_rsvd_1 : 2; + shubreg_t i_precise : 1; + shubreg_t i_rsvd : 11; + } ii_icmr_fld_s; +} ii_icmr_u_t; + + +/************************************************************************ + * * + * This register allows control of the table portion of the CRB * + * logic via software. Control operations from this register have * + * priority over all incoming Crosstalk or BTE requests. * + * * + ************************************************************************/ + +typedef union ii_iccr_u { + shubreg_t ii_iccr_regval; + struct { + shubreg_t i_crb_num : 4; + shubreg_t i_rsvd_1 : 4; + shubreg_t i_cmd : 8; + shubreg_t i_pending : 1; + shubreg_t i_rsvd : 47; + } ii_iccr_fld_s; +} ii_iccr_u_t; + + +/************************************************************************ + * * + * This register allows the maximum timeout value to be programmed. 
* + * * + ************************************************************************/ + +typedef union ii_icto_u { + shubreg_t ii_icto_regval; + struct { + shubreg_t i_timeout : 8; + shubreg_t i_rsvd : 56; + } ii_icto_fld_s; +} ii_icto_u_t; + + +/************************************************************************ + * * + * This register allows the timeout prescalar to be programmed. An * + * internal counter is associated with this register. When the * + * internal counter reaches the value of the PRESCALE field, the * + * timer registers in all valid CRBs are incremented (CRBx_D[TIMEOUT] * + * field). The internal counter resets to zero, and then continues * + * counting. * + * * + ************************************************************************/ + +typedef union ii_ictp_u { + shubreg_t ii_ictp_regval; + struct { + shubreg_t i_prescale : 24; + shubreg_t i_rsvd : 40; + } ii_ictp_fld_s; +} ii_ictp_u_t; + + +/************************************************************************ + * * + * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are * + * used for Crosstalk operations (both cacheline and partial * + * operations) or BTE/IO. Because the CRB entries are very wide, five * + * registers (_A to _E) are required to read and write each entry. * + * The CRB Entry registers can be conceptualized as rows and columns * + * (illustrated in the table above). Each row contains the five * + * registers required for a single CRB Entry. The first doubleword * + * (column) for each entry is labeled A, and the second doubleword * + * (higher address) is labeled B, the third doubleword is labeled C, * + * the fourth doubleword is labeled D and the fifth doubleword is * + * labeled E. All CRB entries have their addresses on a quarter * + * cacheline aligned boundary. * + * Upon reset, only the following fields are initialized: valid * + * (VLD), priority count, timeout, timeout valid, and context valid. * + * All other bits should be cleared by software before use (after * + * recovering any potential error state from before the reset). * + * The following five tables summarize the format for the five * + * registers that are used for each ICRB# Entry. * + * * + ************************************************************************/ + +typedef union ii_icrb0_a_u { + shubreg_t ii_icrb0_a_regval; + struct { + shubreg_t ia_iow : 1; + shubreg_t ia_vld : 1; + shubreg_t ia_addr : 47; + shubreg_t ia_tnum : 5; + shubreg_t ia_sidn : 4; + shubreg_t ia_rsvd : 6; + } ii_icrb0_a_fld_s; +} ii_icrb0_a_u_t; + + +/************************************************************************ + * * + * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are * + * used for Crosstalk operations (both cacheline and partial * + * operations) or BTE/IO. Because the CRB entries are very wide, five * + * registers (_A to _E) are required to read and write each entry.
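/*
 * Editor's illustration (not part of the patch): decoding the _A doubleword
 * of a CRB entry with the union above - valid bit, Crosstalk address and
 * transaction number.  The helper name and the register pointer are
 * stand-ins for whatever accessor the platform provides.
 */
static inline int
crb_a_decode(volatile shubreg_t *icrb_a_reg, unsigned long *xtalk_addr, int *tnum)
{
        ii_icrb0_a_u_t a;

        a.ii_icrb0_a_regval = *icrb_a_reg;
        if (!a.ii_icrb0_a_fld_s.ia_vld)
                return 0;                               /* entry not in use */
        *xtalk_addr = (unsigned long) a.ii_icrb0_a_fld_s.ia_addr;
        *tnum = (int) a.ii_icrb0_a_fld_s.ia_tnum;
        return 1;
}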
* + * * + ************************************************************************/ + +typedef union ii_icrb0_b_u { + shubreg_t ii_icrb0_b_regval; + struct { + shubreg_t ib_xt_err : 1; + shubreg_t ib_mark : 1; + shubreg_t ib_ln_uce : 1; + shubreg_t ib_errcode : 3; + shubreg_t ib_error : 1; + shubreg_t ib_stall__bte_1 : 1; + shubreg_t ib_stall__bte_0 : 1; + shubreg_t ib_stall__intr : 1; + shubreg_t ib_stall_ib : 1; + shubreg_t ib_intvn : 1; + shubreg_t ib_wb : 1; + shubreg_t ib_hold : 1; + shubreg_t ib_ack : 1; + shubreg_t ib_resp : 1; + shubreg_t ib_ack_cnt : 11; + shubreg_t ib_rsvd : 7; + shubreg_t ib_exc : 5; + shubreg_t ib_init : 3; + shubreg_t ib_imsg : 8; + shubreg_t ib_imsgtype : 2; + shubreg_t ib_use_old : 1; + shubreg_t ib_rsvd_1 : 11; + } ii_icrb0_b_fld_s; +} ii_icrb0_b_u_t; + + +/************************************************************************ + * * + * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are * + * used for Crosstalk operations (both cacheline and partial * + * operations) or BTE/IO. Because the CRB entries are very wide, five * + * registers (_A to _E) are required to read and write each entry. * + * * + ************************************************************************/ + +typedef union ii_icrb0_c_u { + shubreg_t ii_icrb0_c_regval; + struct { + shubreg_t ic_source : 15; + shubreg_t ic_size : 2; + shubreg_t ic_ct : 1; + shubreg_t ic_bte_num : 1; + shubreg_t ic_gbr : 1; + shubreg_t ic_resprqd : 1; + shubreg_t ic_bo : 1; + shubreg_t ic_suppl : 15; + shubreg_t ic_rsvd : 27; + } ii_icrb0_c_fld_s; +} ii_icrb0_c_u_t; + + +/************************************************************************ + * * + * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are * + * used for Crosstalk operations (both cacheline and partial * + * operations) or BTE/IO. Because the CRB entries are very wide, five * + * registers (_A to _E) are required to read and write each entry. * + * * + ************************************************************************/ + +typedef union ii_icrb0_d_u { + shubreg_t ii_icrb0_d_regval; + struct { + shubreg_t id_pa_be : 43; + shubreg_t id_bte_op : 1; + shubreg_t id_pr_psc : 4; + shubreg_t id_pr_cnt : 4; + shubreg_t id_sleep : 1; + shubreg_t id_rsvd : 11; + } ii_icrb0_d_fld_s; +} ii_icrb0_d_u_t; + + +/************************************************************************ + * * + * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are * + * used for Crosstalk operations (both cacheline and partial * + * operations) or BTE/IO. Because the CRB entries are very wide, five * + * registers (_A to _E) are required to read and write each entry. * + * * + ************************************************************************/ + +typedef union ii_icrb0_e_u { + shubreg_t ii_icrb0_e_regval; + struct { + shubreg_t ie_timeout : 8; + shubreg_t ie_context : 15; + shubreg_t ie_rsvd : 1; + shubreg_t ie_tvld : 1; + shubreg_t ie_cvld : 1; + shubreg_t ie_rsvd_0 : 38; + } ii_icrb0_e_fld_s; +} ii_icrb0_e_u_t; + + +/************************************************************************ + * * + * This register contains the lower 64 bits of the header of the * + * spurious message captured by II. Valid when the SP_MSG bit in ICMR * + * register is set. 
* + * * + ************************************************************************/ + +typedef union ii_icsml_u { + shubreg_t ii_icsml_regval; + struct { + shubreg_t i_tt_addr : 47; + shubreg_t i_newsuppl_ex : 14; + shubreg_t i_reserved : 2; + shubreg_t i_overflow : 1; + } ii_icsml_fld_s; +} ii_icsml_u_t; + + +/************************************************************************ + * * + * This register contains the middle 64 bits of the header of the * + * spurious message captured by II. Valid when the SP_MSG bit in ICMR * + * register is set. * + * * + ************************************************************************/ + +typedef union ii_icsmm_u { + shubreg_t ii_icsmm_regval; + struct { + shubreg_t i_tt_ack_cnt : 11; + shubreg_t i_reserved : 53; + } ii_icsmm_fld_s; +} ii_icsmm_u_t; + + +/************************************************************************ + * * + * This register contains the microscopic state, all the inputs to * + * the protocol table, captured with the spurious message. Valid when * + * the SP_MSG bit in the ICMR register is set. * + * * + ************************************************************************/ + +typedef union ii_icsmh_u { + shubreg_t ii_icsmh_regval; + struct { + shubreg_t i_tt_vld : 1; + shubreg_t i_xerr : 1; + shubreg_t i_ft_cwact_o : 1; + shubreg_t i_ft_wact_o : 1; + shubreg_t i_ft_active_o : 1; + shubreg_t i_sync : 1; + shubreg_t i_mnusg : 1; + shubreg_t i_mnusz : 1; + shubreg_t i_plusz : 1; + shubreg_t i_plusg : 1; + shubreg_t i_tt_exc : 5; + shubreg_t i_tt_wb : 1; + shubreg_t i_tt_hold : 1; + shubreg_t i_tt_ack : 1; + shubreg_t i_tt_resp : 1; + shubreg_t i_tt_intvn : 1; + shubreg_t i_g_stall_bte1 : 1; + shubreg_t i_g_stall_bte0 : 1; + shubreg_t i_g_stall_il : 1; + shubreg_t i_g_stall_ib : 1; + shubreg_t i_tt_imsg : 8; + shubreg_t i_tt_imsgtype : 2; + shubreg_t i_tt_use_old : 1; + shubreg_t i_tt_respreqd : 1; + shubreg_t i_tt_bte_num : 1; + shubreg_t i_cbn : 1; + shubreg_t i_match : 1; + shubreg_t i_rpcnt_lt_34 : 1; + shubreg_t i_rpcnt_ge_34 : 1; + shubreg_t i_rpcnt_lt_18 : 1; + shubreg_t i_rpcnt_ge_18 : 1; + shubreg_t i_rpcnt_lt_2 : 1; + shubreg_t i_rpcnt_ge_2 : 1; + shubreg_t i_rqcnt_lt_18 : 1; + shubreg_t i_rqcnt_ge_18 : 1; + shubreg_t i_rqcnt_lt_2 : 1; + shubreg_t i_rqcnt_ge_2 : 1; + shubreg_t i_tt_device : 7; + shubreg_t i_tt_init : 3; + shubreg_t i_reserved : 5; + } ii_icsmh_fld_s; +} ii_icsmh_u_t; + + +/************************************************************************ + * * + * The Shub DEBUG unit provides a 3-bit selection signal to the * + * II core and a 3-bit selection signal to the fsbclk domain in the II * + * wrapper. * + * * + ************************************************************************/ + +typedef union ii_idbss_u { + shubreg_t ii_idbss_regval; + struct { + shubreg_t i_iioclk_core_submenu : 3; + shubreg_t i_rsvd : 5; + shubreg_t i_fsbclk_wrapper_submenu : 3; + shubreg_t i_rsvd_1 : 5; + shubreg_t i_iioclk_menu : 5; + shubreg_t i_rsvd_2 : 43; + } ii_idbss_fld_s; +} ii_idbss_u_t; + + +/************************************************************************ + * * + * Description: This register is used to set up the length for a * + * transfer and then to monitor the progress of that transfer. This * + * register needs to be initialized before a transfer is started. A * + * legitimate write to this register will set the Busy bit, clear the * + * Error bit, and initialize the length to the value desired. 
* + * While the transfer is in progress, hardware will decrement the * + * length field with each successful block that is copied. Once the * + * transfer completes, hardware will clear the Busy bit. The length * + * field will also contain the number of cache lines left to be * + * transferred. * + * * + ************************************************************************/ + +typedef union ii_ibls0_u { + shubreg_t ii_ibls0_regval; + struct { + shubreg_t i_length : 16; + shubreg_t i_error : 1; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_busy : 1; + shubreg_t i_rsvd : 43; + } ii_ibls0_fld_s; +} ii_ibls0_u_t; + + +/************************************************************************ + * * + * This register should be loaded before a transfer is started. The * + * address to be loaded in bits 39:0 is the 40-bit TRex+ physical * + * address as described in Section 1.3, Figure2 and Figure3. Since * + * the bottom 7 bits of the address are always taken to be zero, BTE * + * transfers are always cacheline-aligned. * + * * + ************************************************************************/ + +typedef union ii_ibsa0_u { + shubreg_t ii_ibsa0_regval; + struct { + shubreg_t i_rsvd_1 : 7; + shubreg_t i_addr : 42; + shubreg_t i_rsvd : 15; + } ii_ibsa0_fld_s; +} ii_ibsa0_u_t; + + +/************************************************************************ + * * + * This register should be loaded before a transfer is started. The * + * address to be loaded in bits 39:0 is the 40-bit TRex+ physical * + * address as described in Section 1.3, Figure2 and Figure3. Since * + * the bottom 7 bits of the address are always taken to be zero, BTE * + * transfers are always cacheline-aligned. * + * * + ************************************************************************/ + +typedef union ii_ibda0_u { + shubreg_t ii_ibda0_regval; + struct { + shubreg_t i_rsvd_1 : 7; + shubreg_t i_addr : 42; + shubreg_t i_rsvd : 15; + } ii_ibda0_fld_s; +} ii_ibda0_u_t; + + +/************************************************************************ + * * + * Writing to this register sets up the attributes of the transfer * + * and initiates the transfer operation. Reading this register has * + * the side effect of terminating any transfer in progress. Note: * + * stopping a transfer midstream could have an adverse impact on the * + * other BTE. If a BTE stream has to be stopped (due to error * + * handling for example), both BTE streams should be stopped and * + * their transfers discarded. * + * * + ************************************************************************/ + +typedef union ii_ibct0_u { + shubreg_t ii_ibct0_regval; + struct { + shubreg_t i_zerofill : 1; + shubreg_t i_rsvd_2 : 3; + shubreg_t i_notify : 1; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_poison : 1; + shubreg_t i_rsvd : 55; + } ii_ibct0_fld_s; +} ii_ibct0_u_t; + + +/************************************************************************ + * * + * This register contains the address to which the WINV is sent. * + * This address has to be cache line aligned. * + * * + ************************************************************************/ + +typedef union ii_ibna0_u { + shubreg_t ii_ibna0_regval; + struct { + shubreg_t i_rsvd_1 : 7; + shubreg_t i_addr : 42; + shubreg_t i_rsvd : 15; + } ii_ibna0_fld_s; +} ii_ibna0_u_t; + + +/************************************************************************ + * * + * This register contains the programmable level as well as the node * + * ID and PI unit of the processor to which the interrupt will be * + * sent. 
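[Editorial aside, not part of the patch: the ii_ibls0_u union above gives named access to the BTE length/status bits, so status checks need no hand-written shift/mask code. A minimal sketch of decoding a raw IBLS0 image, assuming only the definitions from this header; the helper name is illustrative and the MMIO read that produces the raw value is not shown.]

#include <asm/sn/sn2/shubio.h>

/* Hypothetical helper: returns -1 on error, the number of cache lines
 * still to be transferred while the BTE is busy, and 0 when idle. */
static int bte0_lines_remaining(shubreg_t raw_ibls0)
{
	ii_ibls0_u_t ibls;

	ibls.ii_ibls0_regval = raw_ibls0;
	if (ibls.ii_ibls0_fld_s.i_error)
		return -1;				/* transfer failed */
	if (ibls.ii_ibls0_fld_s.i_busy)
		return (int) ibls.ii_ibls0_fld_s.i_length; /* lines left */
	return 0;					/* transfer complete */
}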
* + * * + ************************************************************************/ + +typedef union ii_ibia0_u { + shubreg_t ii_ibia0_regval; + struct { + shubreg_t i_rsvd_2 : 1; + shubreg_t i_node_id : 11; + shubreg_t i_rsvd_1 : 4; + shubreg_t i_level : 7; + shubreg_t i_rsvd : 41; + } ii_ibia0_fld_s; +} ii_ibia0_u_t; + + +/************************************************************************ + * * + * Description: This register is used to set up the length for a * + * transfer and then to monitor the progress of that transfer. This * + * register needs to be initialized before a transfer is started. A * + * legitimate write to this register will set the Busy bit, clear the * + * Error bit, and initialize the length to the value desired. * + * While the transfer is in progress, hardware will decrement the * + * length field with each successful block that is copied. Once the * + * transfer completes, hardware will clear the Busy bit. The length * + * field will also contain the number of cache lines left to be * + * transferred. * + * * + ************************************************************************/ + +typedef union ii_ibls1_u { + shubreg_t ii_ibls1_regval; + struct { + shubreg_t i_length : 16; + shubreg_t i_error : 1; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_busy : 1; + shubreg_t i_rsvd : 43; + } ii_ibls1_fld_s; +} ii_ibls1_u_t; + + +/************************************************************************ + * * + * This register should be loaded before a transfer is started. The * + * address to be loaded in bits 39:0 is the 40-bit TRex+ physical * + * address as described in Section 1.3, Figure2 and Figure3. Since * + * the bottom 7 bits of the address are always taken to be zero, BTE * + * transfers are always cacheline-aligned. * + * * + ************************************************************************/ + +typedef union ii_ibsa1_u { + shubreg_t ii_ibsa1_regval; + struct { + shubreg_t i_rsvd_1 : 7; + shubreg_t i_addr : 33; + shubreg_t i_rsvd : 24; + } ii_ibsa1_fld_s; +} ii_ibsa1_u_t; + + +/************************************************************************ + * * + * This register should be loaded before a transfer is started. The * + * address to be loaded in bits 39:0 is the 40-bit TRex+ physical * + * address as described in Section 1.3, Figure2 and Figure3. Since * + * the bottom 7 bits of the address are always taken to be zero, BTE * + * transfers are always cacheline-aligned. * + * * + ************************************************************************/ + +typedef union ii_ibda1_u { + shubreg_t ii_ibda1_regval; + struct { + shubreg_t i_rsvd_1 : 7; + shubreg_t i_addr : 33; + shubreg_t i_rsvd : 24; + } ii_ibda1_fld_s; +} ii_ibda1_u_t; + + +/************************************************************************ + * * + * Writing to this register sets up the attributes of the transfer * + * and initiates the transfer operation. Reading this register has * + * the side effect of terminating any transfer in progress. Note: * + * stopping a transfer midstream could have an adverse impact on the * + * other BTE. If a BTE stream has to be stopped (due to error * + * handling for example), both BTE streams should be stopped and * + * their transfers discarded. 
* + * * + ************************************************************************/ + +typedef union ii_ibct1_u { + shubreg_t ii_ibct1_regval; + struct { + shubreg_t i_zerofill : 1; + shubreg_t i_rsvd_2 : 3; + shubreg_t i_notify : 1; + shubreg_t i_rsvd_1 : 3; + shubreg_t i_poison : 1; + shubreg_t i_rsvd : 55; + } ii_ibct1_fld_s; +} ii_ibct1_u_t; + + +/************************************************************************ + * * + * This register contains the address to which the WINV is sent. * + * This address has to be cache line aligned. * + * * + ************************************************************************/ + +typedef union ii_ibna1_u { + shubreg_t ii_ibna1_regval; + struct { + shubreg_t i_rsvd_1 : 7; + shubreg_t i_addr : 33; + shubreg_t i_rsvd : 24; + } ii_ibna1_fld_s; +} ii_ibna1_u_t; + + +/************************************************************************ + * * + * This register contains the programmable level as well as the node * + * ID and PI unit of the processor to which the interrupt will be * + * sent. * + * * + ************************************************************************/ + +typedef union ii_ibia1_u { + shubreg_t ii_ibia1_regval; + struct { + shubreg_t i_pi_id : 1; + shubreg_t i_node_id : 8; + shubreg_t i_rsvd_1 : 7; + shubreg_t i_level : 7; + shubreg_t i_rsvd : 41; + } ii_ibia1_fld_s; +} ii_ibia1_u_t; + + +/************************************************************************ + * * + * This register defines the resources that feed information into * + * the two performance counters located in the IO Performance * + * Profiling Register. There are 17 different quantities that can be * + * measured. Given these 17 different options, the two performance * + * counters have 15 of them in common; menu selections 0 through 0xE * + * are identical for each performance counter. As for the other two * + * options, one is available from one performance counter and the * + * other is available from the other performance counter. Hence, the * + * II supports all 17*16=272 possible combinations of quantities to * + * measure. * + * * + ************************************************************************/ + +typedef union ii_ipcr_u { + shubreg_t ii_ipcr_regval; + struct { + shubreg_t i_ippr0_c : 4; + shubreg_t i_ippr1_c : 4; + shubreg_t i_icct : 8; + shubreg_t i_rsvd : 48; + } ii_ipcr_fld_s; +} ii_ipcr_u_t; + + +/************************************************************************ + * * + * * + * * + ************************************************************************/ + +typedef union ii_ippr_u { + shubreg_t ii_ippr_regval; + struct { + shubreg_t i_ippr0 : 32; + shubreg_t i_ippr1 : 32; + } ii_ippr_fld_s; +} ii_ippr_u_t; + + +#endif /* __ASSEMBLY__ */ + +/************************************************************************** + * * + * The following defines which were not formed into structures are * + * probably indentical to another register, and the name of the * + * register is provided against each of these registers. 
This * + * information needs to be checked carefully * + * * + * IIO_ICRB1_A IIO_ICRB0_A * + * IIO_ICRB1_B IIO_ICRB0_B * + * IIO_ICRB1_C IIO_ICRB0_C * + * IIO_ICRB1_D IIO_ICRB0_D * + * IIO_ICRB1_E IIO_ICRB0_E * + * IIO_ICRB2_A IIO_ICRB0_A * + * IIO_ICRB2_B IIO_ICRB0_B * + * IIO_ICRB2_C IIO_ICRB0_C * + * IIO_ICRB2_D IIO_ICRB0_D * + * IIO_ICRB2_E IIO_ICRB0_E * + * IIO_ICRB3_A IIO_ICRB0_A * + * IIO_ICRB3_B IIO_ICRB0_B * + * IIO_ICRB3_C IIO_ICRB0_C * + * IIO_ICRB3_D IIO_ICRB0_D * + * IIO_ICRB3_E IIO_ICRB0_E * + * IIO_ICRB4_A IIO_ICRB0_A * + * IIO_ICRB4_B IIO_ICRB0_B * + * IIO_ICRB4_C IIO_ICRB0_C * + * IIO_ICRB4_D IIO_ICRB0_D * + * IIO_ICRB4_E IIO_ICRB0_E * + * IIO_ICRB5_A IIO_ICRB0_A * + * IIO_ICRB5_B IIO_ICRB0_B * + * IIO_ICRB5_C IIO_ICRB0_C * + * IIO_ICRB5_D IIO_ICRB0_D * + * IIO_ICRB5_E IIO_ICRB0_E * + * IIO_ICRB6_A IIO_ICRB0_A * + * IIO_ICRB6_B IIO_ICRB0_B * + * IIO_ICRB6_C IIO_ICRB0_C * + * IIO_ICRB6_D IIO_ICRB0_D * + * IIO_ICRB6_E IIO_ICRB0_E * + * IIO_ICRB7_A IIO_ICRB0_A * + * IIO_ICRB7_B IIO_ICRB0_B * + * IIO_ICRB7_C IIO_ICRB0_C * + * IIO_ICRB7_D IIO_ICRB0_D * + * IIO_ICRB7_E IIO_ICRB0_E * + * IIO_ICRB8_A IIO_ICRB0_A * + * IIO_ICRB8_B IIO_ICRB0_B * + * IIO_ICRB8_C IIO_ICRB0_C * + * IIO_ICRB8_D IIO_ICRB0_D * + * IIO_ICRB8_E IIO_ICRB0_E * + * IIO_ICRB9_A IIO_ICRB0_A * + * IIO_ICRB9_B IIO_ICRB0_B * + * IIO_ICRB9_C IIO_ICRB0_C * + * IIO_ICRB9_D IIO_ICRB0_D * + * IIO_ICRB9_E IIO_ICRB0_E * + * IIO_ICRBA_A IIO_ICRB0_A * + * IIO_ICRBA_B IIO_ICRB0_B * + * IIO_ICRBA_C IIO_ICRB0_C * + * IIO_ICRBA_D IIO_ICRB0_D * + * IIO_ICRBA_E IIO_ICRB0_E * + * IIO_ICRBB_A IIO_ICRB0_A * + * IIO_ICRBB_B IIO_ICRB0_B * + * IIO_ICRBB_C IIO_ICRB0_C * + * IIO_ICRBB_D IIO_ICRB0_D * + * IIO_ICRBB_E IIO_ICRB0_E * + * IIO_ICRBC_A IIO_ICRB0_A * + * IIO_ICRBC_B IIO_ICRB0_B * + * IIO_ICRBC_C IIO_ICRB0_C * + * IIO_ICRBC_D IIO_ICRB0_D * + * IIO_ICRBC_E IIO_ICRB0_E * + * IIO_ICRBD_A IIO_ICRB0_A * + * IIO_ICRBD_B IIO_ICRB0_B * + * IIO_ICRBD_C IIO_ICRB0_C * + * IIO_ICRBD_D IIO_ICRB0_D * + * IIO_ICRBD_E IIO_ICRB0_E * + * IIO_ICRBE_A IIO_ICRB0_A * + * IIO_ICRBE_B IIO_ICRB0_B * + * IIO_ICRBE_C IIO_ICRB0_C * + * IIO_ICRBE_D IIO_ICRB0_D * + * IIO_ICRBE_E IIO_ICRB0_E * + * * + **************************************************************************/ + + +/* + * Slightly friendlier names for some common registers. 
+ */ +#define IIO_WIDGET IIO_WID /* Widget identification */ +#define IIO_WIDGET_STAT IIO_WSTAT /* Widget status register */ +#define IIO_WIDGET_CTRL IIO_WCR /* Widget control register */ +#define IIO_PROTECT IIO_ILAPR /* IO interface protection */ +#define IIO_PROTECT_OVRRD IIO_ILAPO /* IO protect override */ +#define IIO_OUTWIDGET_ACCESS IIO_IOWA /* Outbound widget access */ +#define IIO_INWIDGET_ACCESS IIO_IIWA /* Inbound widget access */ +#define IIO_INDEV_ERR_MASK IIO_IIDEM /* Inbound device error mask */ +#define IIO_LLP_CSR IIO_ILCSR /* LLP control and status */ +#define IIO_LLP_LOG IIO_ILLR /* LLP log */ +#define IIO_XTALKCC_TOUT IIO_IXCC /* Xtalk credit count timeout*/ +#define IIO_XTALKTT_TOUT IIO_IXTT /* Xtalk tail timeout */ +#define IIO_IO_ERR_CLR IIO_IECLR /* IO error clear */ +#define IIO_IGFX_0 IIO_IGFX0 +#define IIO_IGFX_1 IIO_IGFX1 +#define IIO_IBCT_0 IIO_IBCT0 +#define IIO_IBCT_1 IIO_IBCT1 +#define IIO_IBLS_0 IIO_IBLS0 +#define IIO_IBLS_1 IIO_IBLS1 +#define IIO_IBSA_0 IIO_IBSA0 +#define IIO_IBSA_1 IIO_IBSA1 +#define IIO_IBDA_0 IIO_IBDA0 +#define IIO_IBDA_1 IIO_IBDA1 +#define IIO_IBNA_0 IIO_IBNA0 +#define IIO_IBNA_1 IIO_IBNA1 +#define IIO_IBIA_0 IIO_IBIA0 +#define IIO_IBIA_1 IIO_IBIA1 +#define IIO_IOPRB_0 IIO_IPRB0 + +#define IIO_PRTE_A(_x) (IIO_IPRTE0_A + (8 * (_x))) +#define IIO_PRTE_B(_x) (IIO_IPRTE0_B + (8 * (_x))) +#define IIO_NUM_PRTES 8 /* Total number of PRB table entries */ +#define IIO_WIDPRTE_A(x) IIO_PRTE_A(((x) - 8)) /* widget ID to its PRTE num */ +#define IIO_WIDPRTE_B(x) IIO_PRTE_B(((x) - 8)) /* widget ID to its PRTE num */ + +#define IIO_NUM_IPRBS (9) + +#define IIO_LLP_CSR_IS_UP 0x00002000 +#define IIO_LLP_CSR_LLP_STAT_MASK 0x00003000 +#define IIO_LLP_CSR_LLP_STAT_SHFT 12 + +#define IIO_LLP_CB_MAX 0xffff /* in ILLR CB_CNT, Max Check Bit errors */ +#define IIO_LLP_SN_MAX 0xffff /* in ILLR SN_CNT, Max Sequence Number errors */ + +/* key to IIO_PROTECT_OVRRD */ +#define IIO_PROTECT_OVRRD_KEY 0x53474972756c6573ull /* "SGIrules" */ + +/* BTE register names */ +#define IIO_BTE_STAT_0 IIO_IBLS_0 /* Also BTE length/status 0 */ +#define IIO_BTE_SRC_0 IIO_IBSA_0 /* Also BTE source address 0 */ +#define IIO_BTE_DEST_0 IIO_IBDA_0 /* Also BTE dest. address 0 */ +#define IIO_BTE_CTRL_0 IIO_IBCT_0 /* Also BTE control/terminate 0 */ +#define IIO_BTE_NOTIFY_0 IIO_IBNA_0 /* Also BTE notification 0 */ +#define IIO_BTE_INT_0 IIO_IBIA_0 /* Also BTE interrupt 0 */ +#define IIO_BTE_OFF_0 0 /* Base offset from BTE 0 regs. */ +#define IIO_BTE_OFF_1 (IIO_IBLS_1 - IIO_IBLS_0) /* Offset from base to BTE 1 */ + +/* BTE register offsets from base */ +#define BTEOFF_STAT 0 +#define BTEOFF_SRC (IIO_BTE_SRC_0 - IIO_BTE_STAT_0) +#define BTEOFF_DEST (IIO_BTE_DEST_0 - IIO_BTE_STAT_0) +#define BTEOFF_CTRL (IIO_BTE_CTRL_0 - IIO_BTE_STAT_0) +#define BTEOFF_NOTIFY (IIO_BTE_NOTIFY_0 - IIO_BTE_STAT_0) +#define BTEOFF_INT (IIO_BTE_INT_0 - IIO_BTE_STAT_0) + + +/* names used in shub diags */ +#define IIO_BASE_BTE0 IIO_IBLS_0 +#define IIO_BASE_BTE1 IIO_IBLS_1 + +/* + * Macro which takes the widget number, and returns the + * IO PRB address of that widget. + * value _x is expected to be a widget number in the range + * 0, 8 - 0xF + */ +#define IIO_IOPRB(_x) (IIO_IOPRB_0 + ( ( (_x) < HUB_WIDGET_ID_MIN ? \ + (_x) : \ + (_x) - (HUB_WIDGET_ID_MIN-1)) << 3) ) + + +/* GFX Flow Control Node/Widget Register */ +#define IIO_IGFX_W_NUM_BITS 4 /* size of widget num field */ +#define IIO_IGFX_W_NUM_MASK ((1<> IIO_WSTAT_TXRETRY_SHFT) & \ + IIO_WSTAT_TXRETRY_MASK) + +/* Number of II perf. 
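[Editorial aside, not part of the patch: the BTE aliases and BTEOFF_* offsets above describe one block-transfer engine as a small bank of registers starting at IIO_BASE_BTE0 (or IIO_BASE_BTE1). A rough sketch of the programming sequence suggested by the per-register descriptions earlier in this header; the mapped base pointer, the write helper and the exact initiation ordering are assumptions made for illustration, and the real BTE driver remains authoritative.]

#include <asm/sn/sn2/shubio.h>

/* Stand-in for the platform MMIO store; not part of shubio.h. */
static inline void bte_write(volatile char *bte_base, unsigned long off,
			     shubreg_t val)
{
	*(volatile shubreg_t *)(bte_base + off) = val;
}

/* Program one BTE: cache-line-aligned source, destination and
 * notification addresses, then the length and control/attribute
 * registers.  'bte_base' is assumed to be a kernel mapping of
 * IIO_BASE_BTE0 (add IIO_BTE_OFF_1 for the second engine). */
static void bte_start(volatile char *bte_base, shubreg_t src, shubreg_t dst,
		      shubreg_t notify, shubreg_t lines, shubreg_t ctrl)
{
	bte_write(bte_base, BTEOFF_SRC, src);
	bte_write(bte_base, BTEOFF_DEST, dst);
	bte_write(bte_base, BTEOFF_NOTIFY, notify);
	bte_write(bte_base, BTEOFF_STAT, lines);	/* length; sets Busy */
	bte_write(bte_base, BTEOFF_CTRL, ctrl);		/* attributes; initiates */
}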
counters we can multiplex at once */ + +#define IO_PERF_SETS 32 + +#if __KERNEL__ +#ifndef __ASSEMBLY__ +#include +#include +#include +#include + +/* Bit for the widget in inbound access register */ +#define IIO_IIWA_WIDGET(_w) ((uint64_t)(1ULL << _w)) +/* Bit for the widget in outbound access register */ +#define IIO_IOWA_WIDGET(_w) ((uint64_t)(1ULL << _w)) + +/* NOTE: The following define assumes that we are going to get + * widget numbers from 8 thru F and the device numbers within + * widget from 0 thru 7. + */ +#define IIO_IIDEM_WIDGETDEV_MASK(w, d) ((uint64_t)(1ULL << (8 * ((w) - 8) + (d)))) + +/* IO Interrupt Destination Register */ +#define IIO_IIDSR_SENT_SHIFT 28 +#define IIO_IIDSR_SENT_MASK 0x10000000 +#define IIO_IIDSR_ENB_SHIFT 24 +#define IIO_IIDSR_ENB_MASK 0x01000000 +#define IIO_IIDSR_NODE_SHIFT 8 +#define IIO_IIDSR_NODE_MASK 0x0000ff00 +#define IIO_IIDSR_PI_ID_SHIFT 8 +#define IIO_IIDSR_PI_ID_MASK 0x00000010 +#define IIO_IIDSR_LVL_SHIFT 0 +#define IIO_IIDSR_LVL_MASK 0x0000007f + +/* Xtalk timeout threshhold register (IIO_IXTT) */ +#define IXTT_RRSP_TO_SHFT 55 /* read response timeout */ +#define IXTT_RRSP_TO_MASK (0x1FULL << IXTT_RRSP_TO_SHFT) +#define IXTT_RRSP_PS_SHFT 32 /* read responsed TO prescalar */ +#define IXTT_RRSP_PS_MASK (0x7FFFFFULL << IXTT_RRSP_PS_SHFT) +#define IXTT_TAIL_TO_SHFT 0 /* tail timeout counter threshold */ +#define IXTT_TAIL_TO_MASK (0x3FFFFFFULL << IXTT_TAIL_TO_SHFT) + +/* + * The IO LLP control status register and widget control register + */ + +typedef union hubii_wcr_u { + uint64_t wcr_reg_value; + struct { + uint64_t wcr_widget_id: 4, /* LLP crossbar credit */ + wcr_tag_mode: 1, /* Tag mode */ + wcr_rsvd1: 8, /* Reserved */ + wcr_xbar_crd: 3, /* LLP crossbar credit */ + wcr_f_bad_pkt: 1, /* Force bad llp pkt enable */ + wcr_dir_con: 1, /* widget direct connect */ + wcr_e_thresh: 5, /* elasticity threshold */ + wcr_rsvd: 41; /* unused */ + } wcr_fields_s; +} hubii_wcr_t; + +#define iwcr_dir_con wcr_fields_s.wcr_dir_con + +/* The structures below are defined to extract and modify the ii +performance registers */ + +/* io_perf_sel allows the caller to specify what tests will be + performed */ + +typedef union io_perf_sel { + uint64_t perf_sel_reg; + struct { + uint64_t perf_ippr0 : 4, + perf_ippr1 : 4, + perf_icct : 8, + perf_rsvd : 48; + } perf_sel_bits; +} io_perf_sel_t; + +/* io_perf_cnt is to extract the count from the shub registers. Due to + hardware problems there is only one counter, not two. 
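[Editorial aside, not part of the patch: the bitfield unions just defined (hubii_wcr_t, io_perf_sel_t) exist so callers can build or pick apart raw register images by field name. Two small illustrative helpers, assuming only these definitions; the MMIO accesses that move the values to and from the hardware are not shown, and the field meanings follow the WCR/IPCR descriptions in this header.]

#include <asm/sn/sn2/shubio.h>

/* Is the widget attached to this II directly connected? */
static int wcr_is_direct_connect(uint64_t raw_wcr)
{
	hubii_wcr_t wcr;

	wcr.wcr_reg_value = raw_wcr;
	return wcr.wcr_fields_s.wcr_dir_con;
}

/* Compose a performance-selection image: menu selections for the two
 * counters plus the icct field, everything else left zero. */
static uint64_t perf_sel_image(unsigned sel_a, unsigned sel_b, unsigned icct)
{
	io_perf_sel_t sel;

	sel.perf_sel_reg = 0;
	sel.perf_sel_bits.perf_ippr0 = sel_a;
	sel.perf_sel_bits.perf_ippr1 = sel_b;
	sel.perf_sel_bits.perf_icct = icct;
	return sel.perf_sel_reg;
}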
*/ + +typedef union io_perf_cnt { + uint64_t perf_cnt; + struct { + uint64_t perf_cnt : 20, + perf_rsvd2 : 12, + perf_rsvd1 : 32; + } perf_cnt_bits; + +} io_perf_cnt_t; + +typedef union iprte_a { + shubreg_t entry; + struct { + shubreg_t i_rsvd_1 : 3; + shubreg_t i_addr : 38; + shubreg_t i_init : 3; + shubreg_t i_source : 8; + shubreg_t i_rsvd : 2; + shubreg_t i_widget : 4; + shubreg_t i_to_cnt : 5; + shubreg_t i_vld : 1; + } iprte_fields; +} iprte_a_t; + + +/* PIO MANAGEMENT */ +typedef struct hub_piomap_s *hub_piomap_t; + +extern hub_piomap_t +hub_piomap_alloc(devfs_handle_t dev, /* set up mapping for this device */ + device_desc_t dev_desc, /* device descriptor */ + iopaddr_t xtalk_addr, /* map for this xtalk_addr range */ + size_t byte_count, + size_t byte_count_max, /* maximum size of a mapping */ + unsigned flags); /* defined in sys/pio.h */ + +extern void hub_piomap_free(hub_piomap_t hub_piomap); + +extern caddr_t +hub_piomap_addr(hub_piomap_t hub_piomap, /* mapping resources */ + iopaddr_t xtalk_addr, /* map for this xtalk addr */ + size_t byte_count); /* map this many bytes */ + +extern void +hub_piomap_done(hub_piomap_t hub_piomap); + +extern caddr_t +hub_piotrans_addr( devfs_handle_t dev, /* translate to this device */ + device_desc_t dev_desc, /* device descriptor */ + iopaddr_t xtalk_addr, /* Crosstalk address */ + size_t byte_count, /* map this many bytes */ + unsigned flags); /* (currently unused) */ + +/* DMA MANAGEMENT */ +typedef struct hub_dmamap_s *hub_dmamap_t; + +extern hub_dmamap_t +hub_dmamap_alloc( devfs_handle_t dev, /* set up mappings for dev */ + device_desc_t dev_desc, /* device descriptor */ + size_t byte_count_max, /* max size of a mapping */ + unsigned flags); /* defined in dma.h */ + +extern void +hub_dmamap_free(hub_dmamap_t dmamap); + +extern iopaddr_t +hub_dmamap_addr( hub_dmamap_t dmamap, /* use mapping resources */ + paddr_t paddr, /* map for this address */ + size_t byte_count); /* map this many bytes */ + +extern alenlist_t +hub_dmamap_list( hub_dmamap_t dmamap, /* use mapping resources */ + alenlist_t alenlist, /* map this Addr/Length List */ + unsigned flags); + +extern void +hub_dmamap_done( hub_dmamap_t dmamap); /* done w/ mapping resources */ + +extern iopaddr_t +hub_dmatrans_addr( devfs_handle_t dev, /* translate for this device */ + device_desc_t dev_desc, /* device descriptor */ + paddr_t paddr, /* system physical address */ + size_t byte_count, /* length */ + unsigned flags); /* defined in dma.h */ + +extern alenlist_t +hub_dmatrans_list( devfs_handle_t dev, /* translate for this device */ + device_desc_t dev_desc, /* device descriptor */ + alenlist_t palenlist, /* system addr/length list */ + unsigned flags); /* defined in dma.h */ + +extern void +hub_dmamap_drain( hub_dmamap_t map); + +extern void +hub_dmaaddr_drain( devfs_handle_t vhdl, + paddr_t addr, + size_t bytes); + +extern void +hub_dmalist_drain( devfs_handle_t vhdl, + alenlist_t list); + + +/* INTERRUPT MANAGEMENT */ +typedef struct hub_intr_s *hub_intr_t; + +extern hub_intr_t +hub_intr_alloc( devfs_handle_t dev, /* which device */ + device_desc_t dev_desc, /* device descriptor */ + devfs_handle_t owner_dev); /* owner of this interrupt */ + +extern hub_intr_t +hub_intr_alloc_nothd(devfs_handle_t dev, /* which device */ + device_desc_t dev_desc, /* device descriptor */ + devfs_handle_t owner_dev); /* owner of this interrupt */ + +extern void +hub_intr_free(hub_intr_t intr_hdl); + +extern int +hub_intr_connect( hub_intr_t intr_hdl, /* xtalk intr resource hndl */ + 
xtalk_intr_setfunc_t setfunc, + /* func to set intr hw */ + void *setfunc_arg); /* arg to setfunc */ + +extern void +hub_intr_disconnect(hub_intr_t intr_hdl); + +extern devfs_handle_t +hub_intr_cpu_get(hub_intr_t intr_hdl); + +/* CONFIGURATION MANAGEMENT */ + +extern void +hub_provider_startup(devfs_handle_t hub); + +extern void +hub_provider_shutdown(devfs_handle_t hub); + +#define HUB_PIO_CONVEYOR 0x1 /* PIO in conveyor belt mode */ +#define HUB_PIO_FIRE_N_FORGET 0x2 /* PIO in fire-and-forget mode */ + +/* Flags that make sense to hub_widget_flags_set */ +#define HUB_WIDGET_FLAGS ( \ + HUB_PIO_CONVEYOR | \ + HUB_PIO_FIRE_N_FORGET \ + ) + + +typedef int hub_widget_flags_t; + +/* Set the PIO mode for a widget. These two functions perform the + * same operation, but hub_device_flags_set() takes a hardware graph + * vertex while hub_widget_flags_set() takes a nasid and widget + * number. In most cases, hub_device_flags_set() should be used. + */ +extern int hub_widget_flags_set(nasid_t nasid, + xwidgetnum_t widget_num, + hub_widget_flags_t flags); + +/* Depending on the flags set take the appropriate actions */ +extern int hub_device_flags_set(devfs_handle_t widget_dev, + hub_widget_flags_t flags); + + +/* Error Handling. */ +extern int hub_ioerror_handler(devfs_handle_t, int, int, struct io_error_s *); +extern int kl_ioerror_handler(cnodeid_t, cnodeid_t, cpuid_t, + int, paddr_t, caddr_t, ioerror_mode_t); +extern void hub_widget_reset(devfs_handle_t, xwidgetnum_t); +extern int hub_error_devenable(devfs_handle_t, int, int); +extern void hub_widgetdev_enable(devfs_handle_t, int); +extern void hub_widgetdev_shutdown(devfs_handle_t, int); +extern int hub_dma_enabled(devfs_handle_t); + +/* hubdev */ +extern void hubdev_init(void); +extern void hubdev_register(int (*attach_method)(devfs_handle_t)); +extern int hubdev_unregister(int (*attach_method)(devfs_handle_t)); +extern int hubdev_docallouts(devfs_handle_t hub); + +extern caddr_t hubdev_prombase_get(devfs_handle_t hub); +extern cnodeid_t hubdev_cnodeid_get(devfs_handle_t hub); + +#endif /* __ASSEMBLY__ */ +#endif /* _KERNEL */ +#endif /* _ASM_IA64_SN_SN2_SHUBIO_H */ + diff -Nru a/include/asm-ia64/sn/sn2/slotnum.h b/include/asm-ia64/sn/sn2/slotnum.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/slotnum.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,41 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (c) 1992 - 1997,2001 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_SN2_SLOTNUM_H +#define _ASM_IA64_SN_SN2_SLOTNUM_H + +#define SLOTNUM_MAXLENGTH 16 + +/* + * This file defines IO widget to slot/device assignments. + */ + + +/* This determines module to pnode mapping. 
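[Editorial aside, not part of the patch: the PIO management declarations above spell out an allocate/translate/release life cycle. A hedged sketch of that calling pattern; the device handle, descriptor, crosstalk address and flags are placeholders obtained elsewhere (hwgraph, device descriptors), and error handling is reduced to the minimum.]

#include <asm/sn/sn2/shubio.h>

/* Map 'len' bytes of a widget's crosstalk space for PIO and return a
 * kernel virtual address, or NULL.  The caller is expected to call
 * hub_piomap_done()/hub_piomap_free() when the mapping is no longer
 * needed. */
static caddr_t map_widget_window(devfs_handle_t dev, device_desc_t desc,
				 iopaddr_t xtalk_addr, size_t len)
{
	hub_piomap_t map;
	caddr_t kva;

	map = hub_piomap_alloc(dev, desc, xtalk_addr, len, len, 0);
	if (map == NULL)
		return NULL;

	kva = hub_piomap_addr(map, xtalk_addr, len);
	if (kva == NULL)
		hub_piomap_free(map);
	return kva;
}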
*/ + +#define NODESLOTS_PER_MODULE 1 +#define NODESLOTS_PER_MODULE_SHFT 1 + +#define SLOTNUM_NODE_CLASS 0x00 /* Node */ +#define SLOTNUM_ROUTER_CLASS 0x10 /* Router */ +#define SLOTNUM_XTALK_CLASS 0x20 /* Xtalk */ +#define SLOTNUM_MIDPLANE_CLASS 0x30 /* Midplane */ +#define SLOTNUM_XBOW_CLASS 0x40 /* Xbow */ +#define SLOTNUM_KNODE_CLASS 0x50 /* Kego node */ +#define SLOTNUM_PCI_CLASS 0x60 /* PCI widgets on XBridge */ +#define SLOTNUM_INVALID_CLASS 0xf0 /* Invalid */ + +#define SLOTNUM_CLASS_MASK 0xf0 +#define SLOTNUM_SLOT_MASK 0x0f + +#define SLOTNUM_GETCLASS(_sn) ((_sn) & SLOTNUM_CLASS_MASK) +#define SLOTNUM_GETSLOT(_sn) ((_sn) & SLOTNUM_SLOT_MASK) + + +#endif /* _ASM_IA64_SN_SN2_SLOTNUM_H */ diff -Nru a/include/asm-ia64/sn/sn2/sn_private.h b/include/asm-ia64/sn/sn2/sn_private.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn2/sn_private.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,251 @@ +/* $Id$ + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. + */ +#ifndef _ASM_IA64_SN_SN2_SN_PRIVATE_H +#define _ASM_IA64_SN_SN2_SN_PRIVATE_H + +#include +#include +#include +#include + +extern nasid_t master_nasid; + +/* promif.c */ +extern void he_arcs_set_vectors(void); +extern void mem_init(void); +extern void cpu_unenable(cpuid_t); +extern nasid_t get_lowest_nasid(void); +extern __psunsigned_t get_master_bridge_base(void); +extern void set_master_bridge_base(void); +extern int check_nasid_equiv(nasid_t, nasid_t); +extern nasid_t get_console_nasid(void); +extern char get_console_pcislot(void); + +extern int is_master_nasid_widget(nasid_t test_nasid, xwidgetnum_t test_wid); + +/* memsupport.c */ +extern void poison_state_alter_range(__psunsigned_t start, int len, int poison); +extern int memory_present(paddr_t); +extern int memory_read_accessible(paddr_t); +extern int memory_write_accessible(paddr_t); +extern void memory_set_access(paddr_t, int, int); +extern void show_dir_state(paddr_t, void (*)(char *, ...)); +extern void check_dir_state(nasid_t, int, void (*)(char *, ...)); +extern void set_dir_owner(paddr_t, int); +extern void set_dir_state(paddr_t, int); +extern void set_dir_state_POISONED(paddr_t); +extern void set_dir_state_UNOWNED(paddr_t); +extern int is_POISONED_dir_state(paddr_t); +extern int is_UNOWNED_dir_state(paddr_t); +extern void get_dir_ent(paddr_t paddr, int *state, + uint64_t *vec_ptr, hubreg_t *elo); + +/* intr.c */ +extern int intr_reserve_level(cpuid_t cpu, int level, int err, devfs_handle_t owner_dev, char *name); +extern void intr_unreserve_level(cpuid_t cpu, int level); +extern int intr_connect_level(cpuid_t cpu, int bit, ilvl_t mask_no, + intr_func_t intr_prefunc); +extern int intr_disconnect_level(cpuid_t cpu, int bit); +extern cpuid_t intr_heuristic(devfs_handle_t dev, device_desc_t dev_desc, + int req_bit,int intr_resflags,devfs_handle_t owner_dev, + char *intr_name,int *resp_bit); +extern void intr_block_bit(cpuid_t cpu, int bit); +extern void intr_unblock_bit(cpuid_t cpu, int bit); +extern void setrtvector(intr_func_t); +extern void install_cpuintr(cpuid_t cpu); +extern void install_dbgintr(cpuid_t cpu); +extern void install_tlbintr(cpuid_t cpu); +extern void hub_migrintr_init(cnodeid_t /*cnode*/); +extern int cause_intr_connect(int level, intr_func_t handler, uint intr_spl_mask); +extern int cause_intr_disconnect(int level); +extern void 
intr_dumpvec(cnodeid_t cnode, void (*pf)(char *, ...)); + +/* error_dump.c */ +extern char *hub_rrb_err_type[]; +extern char *hub_wrb_err_type[]; + +void nmi_dump(void); +void install_cpu_nmi_handler(int slice); + +/* klclock.c */ +extern void hub_rtc_init(cnodeid_t); + +/* bte.c */ +void bte_lateinit(void); +void bte_wait_for_xfer_completion(void *); + +/* klgraph.c */ +void klhwg_add_all_nodes(devfs_handle_t); +void klhwg_add_all_modules(devfs_handle_t); + +/* klidbg.c */ +void install_klidbg_functions(void); + +/* klnuma.c */ +extern void replicate_kernel_text(int numnodes); +extern __psunsigned_t get_freemem_start(cnodeid_t cnode); +extern void setup_replication_mask(int maxnodes); + +/* init.c */ +extern cnodeid_t get_compact_nodeid(void); /* get compact node id */ +extern void init_platform_nodepda(nodepda_t *npda, cnodeid_t node); +extern void init_platform_pda(cpuid_t cpu); +extern void per_cpu_init(void); +extern int is_fine_dirmode(void); +extern void update_node_information(cnodeid_t); + +/* shubio.c */ +extern void hubio_init(void); +extern void hub_merge_clean(nasid_t nasid); +extern void hub_set_piomode(nasid_t nasid, int conveyor); + +/* shuberror.c */ +extern void hub_error_init(cnodeid_t); +extern void dump_error_spool(cpuid_t cpu, void (*pf)(char *, ...)); +extern void hubni_error_handler(char *, int); +extern int check_ni_errors(void); + +/* Used for debugger to signal upper software a breakpoint has taken place */ + +extern void *debugger_update; +extern __psunsigned_t debugger_stopped; + +/* + * piomap, created by shub_pio_alloc. + * xtalk_info MUST BE FIRST, since this structure is cast to a + * xtalk_piomap_s by generic xtalk routines. + */ +struct hub_piomap_s { + struct xtalk_piomap_s hpio_xtalk_info;/* standard crosstalk pio info */ + devfs_handle_t hpio_hub; /* which shub's mapping registers are set up */ + short hpio_holdcnt; /* count of current users of bigwin mapping */ + char hpio_bigwin_num;/* if big window map, which one */ + int hpio_flags; /* defined below */ +}; +/* hub_piomap flags */ +#define HUB_PIOMAP_IS_VALID 0x1 +#define HUB_PIOMAP_IS_BIGWINDOW 0x2 +#define HUB_PIOMAP_IS_FIXED 0x4 + +#define hub_piomap_xt_piomap(hp) (&hp->hpio_xtalk_info) +#define hub_piomap_hub_v(hp) (hp->hpio_hub) +#define hub_piomap_winnum(hp) (hp->hpio_bigwin_num) + +/* + * dmamap, created by shub_pio_alloc. + * xtalk_info MUST BE FIRST, since this structure is cast to a + * xtalk_dmamap_s by generic xtalk routines. + */ +struct hub_dmamap_s { + struct xtalk_dmamap_s hdma_xtalk_info;/* standard crosstalk dma info */ + devfs_handle_t hdma_hub; /* which shub we go through */ + int hdma_flags; /* defined below */ +}; +/* shub_dmamap flags */ +#define HUB_DMAMAP_IS_VALID 0x1 +#define HUB_DMAMAP_USED 0x2 +#define HUB_DMAMAP_IS_FIXED 0x4 + +/* + * interrupt handle, created by shub_intr_alloc. + * xtalk_info MUST BE FIRST, since this structure is cast to a + * xtalk_intr_s by generic xtalk routines. 
+ */ +struct hub_intr_s { + struct xtalk_intr_s i_xtalk_info; /* standard crosstalk intr info */ + ilvl_t i_swlevel; /* software level for blocking intr */ + cpuid_t i_cpuid; /* which cpu */ + int i_bit; /* which bit */ + int i_flags; +}; +/* flag values */ +#define HUB_INTR_IS_ALLOCED 0x1 /* for debug: allocated */ +#define HUB_INTR_IS_CONNECTED 0x4 /* for debug: connected to a software driver */ + +typedef struct hubinfo_s { + nodepda_t *h_nodepda; /* pointer to node's private data area */ + cnodeid_t h_cnodeid; /* compact nodeid */ + nasid_t h_nasid; /* nasid */ + + /* structures for PIO management */ + xwidgetnum_t h_widgetid; /* my widget # (as viewed from xbow) */ + struct hub_piomap_s h_small_window_piomap[HUB_WIDGET_ID_MAX+1]; + sv_t h_bwwait; /* wait for big window to free */ + spinlock_t h_bwlock; /* guard big window piomap's */ + spinlock_t h_crblock; /* gaurd CRB error handling */ + int h_num_big_window_fixed; /* count number of FIXED maps */ + struct hub_piomap_s h_big_window_piomap[HUB_NUM_BIG_WINDOW]; + hub_intr_t hub_ii_errintr; +} *hubinfo_t; + +#define hubinfo_get(vhdl, infoptr) ((void)hwgraph_info_get_LBL \ + (vhdl, INFO_LBL_NODE_INFO, (arbitrary_info_t *)infoptr)) + +#define hubinfo_set(vhdl, infoptr) (void)hwgraph_info_add_LBL \ + (vhdl, INFO_LBL_NODE_INFO, (arbitrary_info_t)infoptr) + +#define hubinfo_to_hubv(hinfo, hub_v) (hinfo->h_nodepda->node_vertex) + +/* + * Hub info PIO map access functions. + */ +#define hubinfo_bwin_piomap_get(hinfo, win) \ + (&hinfo->h_big_window_piomap[win]) +#define hubinfo_swin_piomap_get(hinfo, win) \ + (&hinfo->h_small_window_piomap[win]) + +/* cpu-specific information stored under INFO_LBL_CPU_INFO */ +typedef struct cpuinfo_s { + cpuid_t ci_cpuid; /* CPU ID */ +} *cpuinfo_t; + +#define cpuinfo_get(vhdl, infoptr) ((void)hwgraph_info_get_LBL \ + (vhdl, INFO_LBL_CPU_INFO, (arbitrary_info_t *)infoptr)) + +#define cpuinfo_set(vhdl, infoptr) (void)hwgraph_info_add_LBL \ + (vhdl, INFO_LBL_CPU_INFO, (arbitrary_info_t)infoptr) + +/* Special initialization function for xswitch vertices created during startup. */ +extern void xswitch_vertex_init(devfs_handle_t xswitch); + +extern xtalk_provider_t hub_provider; + +/* du.c */ +int ducons_write(char *buf, int len); + +/* memerror.c */ + +extern void install_eccintr(cpuid_t cpu); +extern void memerror_get_stats(cnodeid_t cnode, + int *bank_stats, int *bank_stats_max); +extern void probe_md_errors(nasid_t); +/* sysctlr.c */ +extern void sysctlr_init(void); +extern void sysctlr_power_off(int sdonly); +extern void sysctlr_keepalive(void); + +#define valid_cpuid(_x) (((_x) >= 0) && ((_x) < maxcpus)) + +/* Useful definitions to get the memory dimm given a physical + * address. + */ +#define paddr_dimm(_pa) ((_pa & MD_BANK_MASK) >> MD_BANK_SHFT) +#define paddr_cnode(_pa) (NASID_TO_COMPACT_NODEID(NASID_GET(_pa))) +extern void membank_pathname_get(paddr_t,char *); + +/* To redirect the output into the error buffer */ +#define errbuf_print(_s) printf("#%s",_s) + +extern void crbx(nasid_t nasid, void (*pf)(char *, ...)); +void bootstrap(void); + +/* sndrv.c */ +extern int sndrv_attach(devfs_handle_t vertex); + +#endif /* _ASM_IA64_SN_SN2_SN_PRIVATE_H */ diff -Nru a/include/asm-ia64/sn/sn_cpuid.h b/include/asm-ia64/sn/sn_cpuid.h --- a/include/asm-ia64/sn/sn_cpuid.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn_cpuid.h Tue Mar 12 13:58:15 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 2000 Silicon Graphics, Inc. 
- * Copyright (C) 2000 by Jack Steiner (steiner@sgi.com) + * Copyright (C) 2000-2002 Silicon Graphics, Inc. All rights reserved. */ @@ -13,8 +12,13 @@ #define _ASM_IA64_SN_SN_CPUID_H #include -#include -#include +#include +#include +#include +#include +#include +#include + /* * Functions for converting between cpuids, nodeids and NASIDs. @@ -46,10 +50,15 @@ * * not real efficient - dont use in perf critical code * * LID - processor defined register (see PRM V2). + * + * On SN1 * 31:24 - id Contains the NASID * 23:16 - eid Contains 0-3 to identify the cpu on the node * bit 17 - synergy number * bit 16 - FSB slot number + * On SN2 + * 31:28 - id Contains 0-3 to identify the cpu on the node + * 27:16 - eid Contains the NASID * * * @@ -70,15 +79,15 @@ * | | * ------- ------- * | | | | - * | 0 | | 1 | SYNERGY + * | 0 | | 1 | SYNERGY (SN1 only) * | | | | * ------- ------- * | | * | | * ------------------------------- * | | - * | BEDROCK | NASID (0..127) - * | | CNODEID (0..numnodes-1) + * | BEDROCK / SHUB | NASID (0..MAX_NASIDS) + * | | CNODEID (0..num_compact_nodes-1) * | | * | | * ------------------------------- @@ -91,10 +100,25 @@ #define cpu_physical_id(cpuid) ((ia64_get_lid() >> 16) & 0xffff) #endif +#ifdef CONFIG_IA64_SGI_SN1 +/* + * macros for some of these exist in sn/addrs.h & sn/arch.h, etc. However, + * trying #include these files here causes circular dependencies. + */ #define cpu_physical_id_to_nasid(cpi) ((cpi) >> 8) #define cpu_physical_id_to_synergy(cpi) (((cpi) >> 1) & 1) #define cpu_physical_id_to_fsb_slot(cpi) ((cpi) & 1) #define cpu_physical_id_to_slice(cpi) ((cpi) & 3) +#define get_nasid() ((ia64_get_lid() >> 24)) +#define get_slice() ((ia64_get_lid() >> 16) & 3) +#define get_node_number(addr) (((unsigned long)(addr)>>33) & 0x7f) +#else +#define cpu_physical_id_to_nasid(cpi) ((cpi) &0xfff) +#define cpu_physical_id_to_slice(cpi) ((cpi>>12) & 3) +#define get_nasid() ((ia64_get_lid() >> 16) & 0xfff) +#define get_slice() ((ia64_get_lid() >> 28) & 0xf) +#define get_node_number(addr) (((unsigned long)(addr)>>38) & 0x7ff) +#endif /* * NOTE: id & eid refer to Intels definitions of the LID register @@ -118,15 +142,12 @@ +#ifdef CONFIG_IA64_SGI_SN1 /* * cpuid_to_fsb_slot - convert a cpuid to the fsb slot number that it is in. * (there are 2 cpus per FSB. This function returns 0 or 1) */ -static __inline__ int -cpuid_to_fsb_slot(int cpuid) -{ - return cpu_physical_id_to_fsb_slot(cpu_physical_id(cpuid)); -} +#define cpuid_to_fsb_slot(cpuid) (cpu_physical_id_to_fsb_slot(cpu_physical_id(cpuid))) /* @@ -134,108 +155,75 @@ * (there are 2 synergies per node. Function returns 0 or 1 to * specify which synergy the cpu is on) */ -static __inline__ int -cpuid_to_synergy(int cpuid) -{ - return cpu_physical_id_to_synergy(cpu_physical_id(cpuid)); -} +#define cpuid_to_synergy(cpuid) (cpu_physical_id_to_synergy(cpu_physical_id(cpuid))) +#endif /* * cpuid_to_slice - convert a cpuid to the slice that it resides on * There are 4 cpus per node. This function returns 0 .. 
3) */ -static __inline__ int -cpuid_to_slice(int cpuid) -{ - return cpu_physical_id_to_slice(cpu_physical_id(cpuid)); -} +#define cpuid_to_slice(cpuid) (cpu_physical_id_to_slice(cpu_physical_id(cpuid))) /* * cpuid_to_nasid - convert a cpuid to the NASID that it resides on */ -static __inline__ int -cpuid_to_nasid(int cpuid) -{ - return cpu_physical_id_to_nasid(cpu_physical_id(cpuid)); -} +#define cpuid_to_nasid(cpuid) (cpu_physical_id_to_nasid(cpu_physical_id(cpuid))) /* * cpuid_to_cnodeid - convert a cpuid to the cnode that it resides on */ -static __inline__ int -cpuid_to_cnodeid(int cpuid) -{ - return nasid_map[cpuid_to_nasid(cpuid)]; -} +#define cpuid_to_cnodeid(cpuid) (local_node_data->physical_node_map[cpuid_to_nasid(cpuid)]) + /* * cnodeid_to_nasid - convert a cnodeid to a NASID + * Macro relies on pg_data for a node being on the node itself. + * Just extract the NASID from the pointer. + * */ -static __inline__ int -cnodeid_to_nasid(int cnodeid) -{ - if (nasid_map[cnodeid_map[cnodeid]] != cnodeid) - panic("cnodeid_to_nasid, cnode = %d", cnodeid); - return cnodeid_map[cnodeid]; -} +#define cnodeid_to_nasid(cnodeid) (get_node_number(local_node_data->pg_data_ptrs[cnodeid])) + /* * nasid_to_cnodeid - convert a NASID to a cnodeid */ -static __inline__ int -nasid_to_cnodeid(int nasid) -{ - if (cnodeid_map[nasid_map[nasid]] != nasid) - panic("nasid_to_cnodeid"); - return nasid_map[nasid]; -} +#define nasid_to_cnodeid(nasid) (local_node_data->physical_node_map[nasid]) /* * cnode_slice_to_cpuid - convert a codeid & slice to a cpuid */ -static __inline__ int -cnode_slice_to_cpuid(int cnodeid, int slice) { - return(id_eid_to_cpuid(cnodeid_to_nasid(cnodeid),slice)); -} +#define cnode_slice_to_cpuid(cnodeid,slice) (id_eid_to_cpuid(cnodeid_to_nasid(cnodeid),(slice))) + /* * cpuid_to_subnode - convert a cpuid to the subnode it resides on. * slice 0 & 1 are on subnode 0 * slice 2 & 3 are on subnode 1. */ -static __inline__ int -cpuid_to_subnode(int cpuid) { - int ret = cpuid_to_slice(cpuid); - if (ret < 2) return 0; - else return 1; -} +#define cpuid_to_subnode(cpuid) ((cpuid_to_slice(cpuid)<2) ? 0 : 1) + /* * cpuid_to_localslice - convert a cpuid to a local slice * slice 0 & 2 are local slice 0 * slice 1 & 3 are local slice 1 */ -static __inline__ int -cpuid_to_localslice(int cpuid) { - return(cpuid_to_slice(cpuid) & 1); -} - -static __inline__ int -cnodeid_to_cpuid(int cnode) { - int cpu; - - for (cpu = 0; cpu < smp_num_cpus; cpu++) { - if (cpuid_to_cnodeid(cpu) == cnode) { - break; - } - } - if (cpu == smp_num_cpus) cpu = -1; - return cpu; -} +#define cpuid_to_localslice(cpuid) (cpuid_to_slice(cpuid) & 1) + + +#define smp_physical_node_id() (cpuid_to_nasid(smp_processor_id())) + + +/* + * cnodeid_to_cpuid - convert a cnode to a cpuid of a cpu on the node. + * returns -1 if no cpus exist on the node + */ +extern int cnodeid_to_cpuid(int cnode); #endif /* _ASM_IA64_SN_SN_CPUID_H */ + diff -Nru a/include/asm-ia64/sn/sn_fru.h b/include/asm-ia64/sn/sn_fru.h --- a/include/asm-ia64/sn/sn_fru.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/sn_fru.h Tue Mar 12 13:58:14 2002 @@ -4,11 +4,11 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 1999-2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Patrick Gefre + * Copyright (C) 1992 - 1997, 1999-2001 Silicon Graphics, Inc. + * All rights reserved. 
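[Editorial aside, not part of the patch: the conversion macros above (cpuid_to_nasid, cpuid_to_slice, cpuid_to_cnodeid and friends) are meant to be composed freely. A small illustrative helper, assuming the int-sized results the macros produce on this platform.]

#include <linux/kernel.h>
#include <asm/sn/sn_cpuid.h>

/* Print where a cpu sits in the SN topology. */
static void report_cpu_location(int cpuid)
{
	int nasid = cpuid_to_nasid(cpuid);
	int slice = cpuid_to_slice(cpuid);
	int cnode = cpuid_to_cnodeid(cpuid);

	printk(KERN_DEBUG "cpu %d: nasid %d, slice %d, compact node %d\n",
	       cpuid, nasid, slice, cnode);
}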
*/ -#ifndef _ASM_SN_SN_FRU_H -#define _ASM_SN_SN_FRU_H +#ifndef _ASM_IA64_SN_SN_FRU_H +#define _ASM_IA64_SN_SN_FRU_H #define MAX_DIMMS 8 /* max # of dimm banks */ #define MAX_PCIDEV 8 /* max # of pci devices on a pci bus */ @@ -42,5 +42,5 @@ } kf_pci_bus_t; -#endif /* _ASM_SN_SN_FRU_H */ +#endif /* _ASM_IA64_SN_SN_FRU_H */ diff -Nru a/include/asm-ia64/sn/sn_pio_sync.h b/include/asm-ia64/sn/sn_pio_sync.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sn_pio_sync.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,53 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2001-2002 Silicon Graphics, Inc. All rights reserved. + */ + + +#ifndef _ASM_IA64_SN_SN_PIO_WRITE_SYNC_H +#define _ASM_IA64_SN_SN_PIO_WRITE_SYNC_H + +#include +#ifdef CONFIG_IA64_SGI_SN2 +#include +#include +#include +#include + +/* + * This macro flushes all outstanding PIOs performed by this cpu to the + * intended destination SHUB. This in essence ensures that all PIOs + * issued by this cpu have landed at their destination. + * + * This macro expects the caller to ensure that: + * 1. The thread is locked. + * 2. All prior PIO operations have been fenced with __ia64_mf_a(). + * + * The expectation is that get_slice() will return either 0 or 2. + * When we have multi-core cpus, the expectation is that get_slice() will + * return either 0,1 or 2,3. + */ + +#define SN_PIO_WRITE_SYNC \ + { \ + volatile unsigned long sn_pio_writes_done; \ + do { \ + sn_pio_writes_done = (volatile unsigned long) (SH_PIO_WRITE_STATUS_0_WRITES_OK_MASK & HUB_L( (unsigned long *)GLOBAL_MMR_ADDR(get_nasid(), (get_slice() < 2) ? SH_PIO_WRITE_STATUS_0 : SH_PIO_WRITE_STATUS_1 ))); \ + } while (!sn_pio_writes_done); \ + __ia64_mf_a(); \ + } +#else + +/* + * For all other architecture types, this is a NOOP. + */ + +#define SN_PIO_WRITE_SYNC + +#endif + +#endif /* _ASM_IA64_SN_SN_PIO_WRITE_SYNC_H */ diff -Nru a/include/asm-ia64/sn/sn_private.h b/include/asm-ia64/sn/sn_private.h --- a/include/asm-ia64/sn/sn_private.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/sn_private.h Tue Mar 12 13:58:14 2002 @@ -4,299 +4,20 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved.
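[Editorial aside, not part of the patch: the SN_PIO_WRITE_SYNC macro above documents its own calling contract, namely that the caller fences earlier PIO stores and then invokes the macro to spin until the shub reports them delivered. A minimal sketch of that call-site shape; the register pointer is a placeholder and the locking requirement mentioned in the macro's comment is assumed to be handled by the caller.]

#include <asm/sn/sn_pio_sync.h>

/* Post a PIO write and wait until the shub has accepted it. */
static void post_pio_write(volatile unsigned long *mmio_reg, unsigned long val)
{
	*mmio_reg = val;	/* the PIO store we want pushed out */
	__ia64_mf_a();		/* fence, per the macro's contract */
	SN_PIO_WRITE_SYNC;	/* spin until the write status says OK */
}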
*/ -#ifndef _ASM_SN_PRIVATE_H -#define _ASM_SN_PRIVATE_H +#ifndef _ASM_IA64_SN_SN_PRIVATE_H +#define _ASM_IA64_SN_SN_PRIVATE_H +#include #include #include #include -extern nasid_t master_nasid; - -extern hubreg_t get_region(cnodeid_t); -extern hubreg_t nasid_to_region(nasid_t); -/* promif.c */ -#ifdef LATER -extern cpuid_t cpu_node_probe(cpumask_t *cpumask, int *numnodes); -#endif -extern void he_arcs_set_vectors(void); -extern void mem_init(void); -#ifdef LATER -extern int cpu_enabled(cpuid_t); -#endif -extern void cpu_unenable(cpuid_t); -extern nasid_t get_lowest_nasid(void); -extern __psunsigned_t get_master_bridge_base(void); -extern void set_master_bridge_base(void); -extern int check_nasid_equiv(nasid_t, nasid_t); -extern nasid_t get_console_nasid(void); -extern char get_console_pcislot(void); -#ifdef LATER -extern void intr_init_vecblk(nodepda_t *npda, cnodeid_t, int); -#endif - -extern int is_master_nasid_widget(nasid_t test_nasid, xwidgetnum_t test_wid); - -/* memsupport.c */ -extern void poison_state_alter_range(__psunsigned_t start, int len, int poison); -extern int memory_present(paddr_t); -extern int memory_read_accessible(paddr_t); -extern int memory_write_accessible(paddr_t); -extern void memory_set_access(paddr_t, int, int); -extern void show_dir_state(paddr_t, void (*)(char *, ...)); -extern void check_dir_state(nasid_t, int, void (*)(char *, ...)); -extern void set_dir_owner(paddr_t, int); -extern void set_dir_state(paddr_t, int); -extern void set_dir_state_POISONED(paddr_t); -extern void set_dir_state_UNOWNED(paddr_t); -extern int is_POISONED_dir_state(paddr_t); -extern int is_UNOWNED_dir_state(paddr_t); -extern void get_dir_ent(paddr_t paddr, int *state, - uint64_t *vec_ptr, hubreg_t *elo); - -/* intr.c */ -#if defined(NEW_INTERRUPTS) -extern int intr_reserve_level(cpuid_t cpu, int level, int err, devfs_handle_t owner_dev, char *name); -extern void intr_unreserve_level(cpuid_t cpu, int level); -extern int intr_connect_level(cpuid_t cpu, int bit, ilvl_t mask_no, - intr_func_t intr_func, void *intr_arg, - intr_func_t intr_prefunc); -extern int intr_disconnect_level(cpuid_t cpu, int bit); -extern cpuid_t intr_heuristic(devfs_handle_t dev, device_desc_t dev_desc, - int req_bit,int intr_resflags,devfs_handle_t owner_dev, - char *intr_name,int *resp_bit); -#endif /* NEW_INTERRUPTS */ -extern void intr_block_bit(cpuid_t cpu, int bit); -extern void intr_unblock_bit(cpuid_t cpu, int bit); -extern void setrtvector(intr_func_t); -extern void install_cpuintr(cpuid_t cpu); -extern void install_dbgintr(cpuid_t cpu); -extern void install_tlbintr(cpuid_t cpu); -extern void hub_migrintr_init(cnodeid_t /*cnode*/); -extern int cause_intr_connect(int level, intr_func_t handler, uint intr_spl_mask); -extern int cause_intr_disconnect(int level); -extern void intr_reserve_hardwired(cnodeid_t); -extern void intr_clear_all(nasid_t); -extern void intr_dumpvec(cnodeid_t cnode, void (*pf)(char *, ...)); -extern int protected_broadcast(hubreg_t intrbit); - -/* error_dump.c */ -extern char *hub_rrb_err_type[]; -extern char *hub_wrb_err_type[]; - -void nmi_dump(void); -void install_cpu_nmi_handler(int slice); - -/* klclock.c */ -extern void hub_rtc_init(cnodeid_t); - -/* bte.c */ -void bte_lateinit(void); -void bte_wait_for_xfer_completion(void *); - -/* klgraph.c */ -void klhwg_add_all_nodes(devfs_handle_t); -void klhwg_add_all_modules(devfs_handle_t); - -/* klidbg.c */ -void install_klidbg_functions(void); - -/* klnuma.c */ -extern void replicate_kernel_text(int numnodes); -extern __psunsigned_t 
get_freemem_start(cnodeid_t cnode); -extern void setup_replication_mask(int maxnodes); - -/* init.c */ -extern cnodeid_t get_compact_nodeid(void); /* get compact node id */ -#ifdef LATER -extern void init_platform_nodepda(nodepda_t *npda, cnodeid_t node); -extern void init_platform_pda(pda_t *ppda, cpuid_t cpu); +#if defined(CONFIG_IA64_SGI_SN1) +#include +#elif defined(CONFIG_IA64_SGI_SN2) +#include #endif -extern void per_cpu_init(void); -extern void per_hub_init(cnodeid_t); -#ifdef LATER -extern cpumask_t boot_cpumask; -#endif -extern int is_fine_dirmode(void); -extern void update_node_information(cnodeid_t); - -#ifdef LATER -/* clksupport.c */ -extern void early_counter_intr(eframe_t *); -#endif - -/* hubio.c */ -extern void hubio_init(void); -extern void hub_merge_clean(nasid_t nasid); -extern void hub_set_piomode(nasid_t nasid, int conveyor); - -/* huberror.c */ -extern void hub_error_init(cnodeid_t); -extern void dump_error_spool(cpuid_t cpu, void (*pf)(char *, ...)); -extern void hubni_error_handler(char *, int); -extern int check_ni_errors(void); - -/* Used for debugger to signal upper software a breakpoint has taken place */ - -extern void *debugger_update; -extern __psunsigned_t debugger_stopped; - -/* - * IP27 piomap, created by hub_pio_alloc. - * xtalk_info MUST BE FIRST, since this structure is cast to a - * xtalk_piomap_s by generic xtalk routines. - */ -struct hub_piomap_s { - struct xtalk_piomap_s hpio_xtalk_info;/* standard crosstalk pio info */ - devfs_handle_t hpio_hub; /* which hub's mapping registers are set up */ - short hpio_holdcnt; /* count of current users of bigwin mapping */ - char hpio_bigwin_num;/* if big window map, which one */ - int hpio_flags; /* defined below */ -}; -/* hub_piomap flags */ -#define HUB_PIOMAP_IS_VALID 0x1 -#define HUB_PIOMAP_IS_BIGWINDOW 0x2 -#define HUB_PIOMAP_IS_FIXED 0x4 - -#define hub_piomap_xt_piomap(hp) (&hp->hpio_xtalk_info) -#define hub_piomap_hub_v(hp) (hp->hpio_hub) -#define hub_piomap_winnum(hp) (hp->hpio_bigwin_num) - -#if TBD - /* Ensure that hpio_xtalk_info is first */ - #assert (&(((struct hub_piomap_s *)0)->hpio_xtalk_info) == 0) -#endif - - -/* - * IP27 dmamap, created by hub_pio_alloc. - * xtalk_info MUST BE FIRST, since this structure is cast to a - * xtalk_dmamap_s by generic xtalk routines. - */ -struct hub_dmamap_s { - struct xtalk_dmamap_s hdma_xtalk_info;/* standard crosstalk dma info */ - devfs_handle_t hdma_hub; /* which hub we go through */ - int hdma_flags; /* defined below */ -}; -/* hub_dmamap flags */ -#define HUB_DMAMAP_IS_VALID 0x1 -#define HUB_DMAMAP_USED 0x2 -#define HUB_DMAMAP_IS_FIXED 0x4 - -#if TBD - /* Ensure that hdma_xtalk_info is first */ - #assert (&(((struct hub_dmamap_s *)0)->hdma_xtalk_info) == 0) -#endif - -/* - * IP27 interrupt handle, created by hub_intr_alloc. - * xtalk_info MUST BE FIRST, since this structure is cast to a - * xtalk_intr_s by generic xtalk routines. 
- */ -struct hub_intr_s { - struct xtalk_intr_s i_xtalk_info; /* standard crosstalk intr info */ - ilvl_t i_swlevel; /* software level for blocking intr */ - cpuid_t i_cpuid; /* which cpu */ - int i_bit; /* which bit */ - int i_flags; -}; -/* flag values */ -#define HUB_INTR_IS_ALLOCED 0x1 /* for debug: allocated */ -#define HUB_INTR_IS_CONNECTED 0x4 /* for debug: connected to a software driver */ - -#if TBD - /* Ensure that i_xtalk_info is first */ - #assert (&(((struct hub_intr_s *)0)->i_xtalk_info) == 0) -#endif - - -/* IP27 hub-specific information stored under INFO_LBL_HUB_INFO */ -/* TBD: IP27-dependent stuff currently in nodepda.h should be here */ -typedef struct hubinfo_s { - nodepda_t *h_nodepda; /* pointer to node's private data area */ - cnodeid_t h_cnodeid; /* compact nodeid */ - nasid_t h_nasid; /* nasid */ - - /* structures for PIO management */ - xwidgetnum_t h_widgetid; /* my widget # (as viewed from xbow) */ - struct hub_piomap_s h_small_window_piomap[HUB_WIDGET_ID_MAX+1]; - sv_t h_bwwait; /* wait for big window to free */ - spinlock_t h_bwlock; /* guard big window piomap's */ - spinlock_t h_crblock; /* gaurd CRB error handling */ - int h_num_big_window_fixed; /* count number of FIXED maps */ - struct hub_piomap_s h_big_window_piomap[HUB_NUM_BIG_WINDOW]; - hub_intr_t hub_ii_errintr; -} *hubinfo_t; - -#define hubinfo_get(vhdl, infoptr) ((void)hwgraph_info_get_LBL \ - (vhdl, INFO_LBL_NODE_INFO, (arbitrary_info_t *)infoptr)) - -#define hubinfo_set(vhdl, infoptr) (void)hwgraph_info_add_LBL \ - (vhdl, INFO_LBL_NODE_INFO, (arbitrary_info_t)infoptr) - -#define hubinfo_to_hubv(hinfo, hub_v) (hinfo->h_nodepda->node_vertex) - -/* - * Hub info PIO map access functions. - */ -#define hubinfo_bwin_piomap_get(hinfo, win) \ - (&hinfo->h_big_window_piomap[win]) -#define hubinfo_swin_piomap_get(hinfo, win) \ - (&hinfo->h_small_window_piomap[win]) - -/* IP27 cpu-specific information stored under INFO_LBL_CPU_INFO */ -/* TBD: IP27-dependent stuff currently in pda.h should be here */ -typedef struct cpuinfo_s { -#ifdef LATER - pda_t *ci_cpupda; /* pointer to CPU's private data area */ -#endif - cpuid_t ci_cpuid; /* CPU ID */ -} *cpuinfo_t; - -#define cpuinfo_get(vhdl, infoptr) ((void)hwgraph_info_get_LBL \ - (vhdl, INFO_LBL_CPU_INFO, (arbitrary_info_t *)infoptr)) - -#define cpuinfo_set(vhdl, infoptr) (void)hwgraph_info_add_LBL \ - (vhdl, INFO_LBL_CPU_INFO, (arbitrary_info_t)infoptr) - -/* Special initialization function for xswitch vertices created during startup. */ -extern void xswitch_vertex_init(devfs_handle_t xswitch); - -extern xtalk_provider_t hub_provider; - -/* du.c */ -int ducons_write(char *buf, int len); - -/* memerror.c */ - -extern void install_eccintr(cpuid_t cpu); -extern void memerror_get_stats(cnodeid_t cnode, - int *bank_stats, int *bank_stats_max); -extern void probe_md_errors(nasid_t); -/* sysctlr.c */ -extern void sysctlr_init(void); -extern void sysctlr_power_off(int sdonly); -extern void sysctlr_keepalive(void); - -#define valid_cpuid(_x) (((_x) >= 0) && ((_x) < maxcpus)) - -/* Useful definitions to get the memory dimm given a physical - * address. 
- */ -#define paddr_dimm(_pa) ((_pa & MD_BANK_MASK) >> MD_BANK_SHFT) -#define paddr_cnode(_pa) (NASID_TO_COMPACT_NODEID(NASID_GET(_pa))) -extern void membank_pathname_get(paddr_t,char *); - -/* To redirect the output into the error buffer */ -#define errbuf_print(_s) printf("#%s",_s) - -extern void crbx(nasid_t nasid, void (*pf)(char *, ...)); -void bootstrap(void); - -/* sndrv.c */ -extern int sndrv_attach(devfs_handle_t vertex); -#endif /* _ASM_SN_PRIVATE_H */ +#endif /* _ASM_IA64_SN_SN_PRIVATE_H */ diff -Nru a/include/asm-ia64/sn/sn_sal.h b/include/asm-ia64/sn/sn_sal.h --- a/include/asm-ia64/sn/sn_sal.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sn_sal.h Tue Mar 12 13:58:15 2002 @@ -1,25 +1,81 @@ -#ifndef _ASM_IA64_SN_SAL_H -#define _ASM_IA64_SN_SAL_H +#ifndef _ASM_IA64_SN_SN_SAL_H +#define _ASM_IA64_SN_SN_SAL_H /* * System Abstraction Layer definitions for IA64 * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. * - * Copyright (C) 2000, Silicon Graphics. - * Copyright (C) 2000. Jack Steiner (steiner@sgi.com) + * Copyright (c) 2000-2002 Silicon Graphics, Inc. All rights reserved. */ #include +#include // SGI Specific Calls #define SN_SAL_POD_MODE 0x02000001 #define SN_SAL_SYSTEM_RESET 0x02000002 #define SN_SAL_PROBE 0x02000003 +#define SN_SAL_GET_CONSOLE_NASID 0x02000004 +#define SN_SAL_GET_KLCONFIG_ADDR 0x02000005 +#define SN_SAL_LOG_CE 0x02000006 +#define SN_SAL_REGISTER_CE 0x02000007 u64 ia64_sn_probe_io_slot(long paddr, long size, void *data_ptr); +/* + * Returns the master console nasid, if the call fails, return an illegal + * value. + */ +static inline u64 +ia64_sn_get_console_nasid(void) +{ + struct ia64_sal_retval ret_stuff; + + ret_stuff.status = (uint64_t)0; + ret_stuff.v0 = (uint64_t)0; + ret_stuff.v1 = (uint64_t)0; + ret_stuff.v2 = (uint64_t)0; + SAL_CALL(ret_stuff, SN_SAL_GET_CONSOLE_NASID, 0, 0, 0, 0, 0, 0, 0); + + if (ret_stuff.status < 0) + return ret_stuff.status; + + /* Master console nasid is in 'v0' */ + return ret_stuff.v0; +} + +static inline u64 +ia64_sn_get_klconfig_addr(nasid_t nasid) +{ + struct ia64_sal_retval ret_stuff; + extern u64 klgraph_addr[]; + int cnodeid; + + cnodeid = nasid_to_cnodeid(nasid); + if (klgraph_addr[cnodeid] == 0) { + ret_stuff.status = (uint64_t)0; + ret_stuff.v0 = (uint64_t)0; + ret_stuff.v1 = (uint64_t)0; + ret_stuff.v2 = (uint64_t)0; + SAL_CALL(ret_stuff, SN_SAL_GET_KLCONFIG_ADDR, (u64)nasid, 0, 0, 0, 0, 0, 0); + + /* + * We should panic if a valid cnode nasid does not produce + * a klconfig address. + */ + if (ret_stuff.status != 0) { + panic("ia64_sn_get_klconfig_addr: Returned error %lx\n", ret_stuff.status); + } + + klgraph_addr[cnodeid] = ret_stuff.v0; + } + return(klgraph_addr[cnodeid]); -#endif /* _ASM_IA64_SN_SN1_SAL_H */ +} +#endif /* _ASM_IA64_SN_SN_SAL_H */ diff -Nru a/include/asm-ia64/sn/snconfig.h b/include/asm-ia64/sn/snconfig.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/snconfig.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,18 @@ +/* + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 2000-2001 Silicon Graphics, Inc. 
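[Editorial aside, not part of the patch: ia64_sn_get_console_nasid() above follows the usual SN SAL convention of returning a negative status on failure and the answer in v0 otherwise. A hedged example of a caller combining it with get_nasid() from sn_cpuid.h; the helper itself is illustrative only.]

#include <asm/sn/sn_sal.h>
#include <asm/sn/sn_cpuid.h>

/* Return 1 if this cpu is on the node that owns the system console. */
static int on_console_node(void)
{
	long console_nasid = (long) ia64_sn_get_console_nasid();

	if (console_nasid < 0)
		return 0;	/* SAL call failed */
	return console_nasid == (long) get_nasid();
}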
+ */ +#ifndef _ASM_IA64_SN_SNCONFIG_H +#define _ASM_IA64_SN_SNCONFIG_H + +#include + +#if defined(CONFIG_IA64_SGI_SN1) +#include +#elif defined(CONFIG_IA64_SGI_SN2) +#endif + +#endif /* _ASM_IA64_SN_SNCONFIG_H */ diff -Nru a/include/asm-ia64/sn/sndrv.h b/include/asm-ia64/sn/sndrv.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/sndrv.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,39 @@ +#ifndef _ASM_IA64_SN_SNDRV_H +#define _ASM_IA64_SN_SNDRV_H + +/* ioctl commands */ +#define SNDRV_GET_ROUTERINFO 1 +#define SNDRV_GET_INFOSIZE 2 +#define SNDRV_GET_HUBINFO 3 +#define SNDRV_GET_FLASHLOGSIZE 4 +#define SNDRV_SET_FLASHSYNC 5 +#define SNDRV_GET_FLASHLOGDATA 6 +#define SNDRV_GET_FLASHLOGALL 7 + +#define SNDRV_SET_HISTOGRAM_TYPE 14 + +#define SNDRV_ELSC_COMMAND 19 +#define SNDRV_CLEAR_LOG 20 +#define SNDRV_INIT_LOG 21 +#define SNDRV_GET_PIMM_PSC 22 +#define SNDRV_SET_PARTITION 23 +#define SNDRV_GET_PARTITION 24 + +/* see synergy_perf_ioctl() */ +#define SNDRV_GET_SYNERGY_VERSION 30 +#define SNDRV_GET_SYNERGY_STATUS 31 +#define SNDRV_GET_SYNERGYINFO 32 +#define SNDRV_SYNERGY_APPEND 33 +#define SNDRV_SYNERGY_ENABLE 34 +#define SNDRV_SYNERGY_FREQ 35 + +/* Devices */ +#define SNDRV_UKNOWN_DEVICE -1 +#define SNDRV_ROUTER_DEVICE 1 +#define SNDRV_HUB_DEVICE 2 +#define SNDRV_ELSC_NVRAM_DEVICE 3 +#define SNDRV_ELSC_CONTROLLER_DEVICE 4 +#define SNDRV_SYSCTL_SUBCH 5 +#define SNDRV_SYNERGY_DEVICE 6 + +#endif /* _ASM_IA64_SN_SNDRV_H */ diff -Nru a/include/asm-ia64/sn/sv.h b/include/asm-ia64/sn/sv.h --- a/include/asm-ia64/sn/sv.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/sv.h Tue Mar 12 13:58:15 2002 @@ -3,7 +3,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 2000 Silicon Graphics, Inc. All rights reserved + * Copyright (C) 2000-2001 Silicon Graphics, Inc. All rights reserved * * This implemenation of synchronization variables is heavily based on * one done by Steve Lord @@ -11,8 +11,8 @@ * Paul Cassella */ -#ifndef SV_H -#define SV_H +#ifndef _ASM_IA64_SN_SV_H +#define _ASM_IA64_SN_SV_H #include #include @@ -150,4 +150,4 @@ #undef _SV_ASSERT #endif -#endif +#endif /* _ASM_IA64_SN_SV_H */ diff -Nru a/include/asm-ia64/sn/synergy.h b/include/asm-ia64/sn/synergy.h --- a/include/asm-ia64/sn/synergy.h Tue Mar 12 13:58:15 2002 +++ /dev/null Wed Dec 31 16:00:00 1969 @@ -1,168 +0,0 @@ -#ifndef ASM_IA64_SN_SYNERGY_H -#define ASM_IA64_SN_SYNERGY_H - -#include - -#include "asm/io.h" -#include "asm/sn/nodepda.h" -#include "asm/sn/intr_public.h" - - -/* - * Definitions for the synergy asic driver - * - * These are for SGI platforms only. 
- * - * Copyright (C) 2000 Silicon Graphics, Inc - * Copyright (C) 2000 Alan Mayer (ajm@sgi.com) - */ - - -#define SSPEC_BASE (0xe0000000000) -#define LB_REG_BASE (SSPEC_BASE + 0x0) - -#define VEC_MASK3A_ADDR (0x2a0 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) -#define VEC_MASK3B_ADDR (0x2a8 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) -#define VEC_MASK3A (0x2a0) -#define VEC_MASK3B (0x2a8) - -#define VEC_MASK2A_ADDR (0x2b0 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) -#define VEC_MASK2B_ADDR (0x2b8 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) -#define VEC_MASK2A (0x2b0) -#define VEC_MASK2B (0x2b8) - -#define VEC_MASK1A_ADDR (0x2c0 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) -#define VEC_MASK1B_ADDR (0x2c8 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) -#define VEC_MASK1A (0x2c0) -#define VEC_MASK1B (0x2c8) - -#define VEC_MASK0A_ADDR (0x2d0 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) -#define VEC_MASK0B_ADDR (0x2d8 + LB_REG_BASE + __IA64_UNCACHED_OFFSET) -#define VEC_MASK0A (0x2d0) -#define VEC_MASK0B (0x2d8) - -#define WRITE_LOCAL_SYNERGY_REG(addr, value) __synergy_out(addr, value) - -#define HUBREG_CAST (volatile hubreg_t *) -#define HUB_L(_a) *(_a) -#define HUB_S(_a, _d) *(_a) = (_d) - -#define HSPEC_SYNERGY0_0 0x04000000 /* Synergy0 Registers */ -#define HSPEC_SYNERGY1_0 0x05000000 /* Synergy1 Registers */ -#define HS_SYNERGY_STRIDE (HSPEC_SYNERGY1_0 - HSPEC_SYNERGY0_0) -#define REMOTE_HSPEC(_n, _x) (HUBREG_CAST (RREG_BASE(_n) + (_x))) - -#define RREG_BASE(_n) (NODE_LREG_BASE(_n)) -#define NODE_LREG_BASE(_n) (NODE_HSPEC_BASE(_n) + 0x30000000) -#define NODE_HSPEC_BASE(_n) (HSPEC_BASE + NODE_OFFSET(_n)) -#ifndef HSPEC_BASE -#define HSPEC_BASE (SYN_UNCACHED_SPACE | HSPEC_BASE_SYN) -#endif -#define SYN_UNCACHED_SPACE 0xc000000000000000 -#define HSPEC_BASE_SYN 0x00000b0000000000 -#define NODE_OFFSET(_n) (UINT64_CAST (_n) << NODE_SIZE_BITS) -#define NODE_SIZE_BITS 33 - - -#define RSYN_REG_OFFSET(fsb, reg) (((fsb) ? HSPEC_SYNERGY1_0 : HSPEC_SYNERGY0_0) | (reg)) - -#define REMOTE_SYNERGY_LOAD(nasid, fsb, reg) __remote_synergy_in(nasid, fsb, reg) -#define REMOTE_SYNERGY_STORE(nasid, fsb, reg, val) __remote_synergy_out(nasid, fsb, reg, val) - -extern inline uint64_t -__remote_synergy_in(int nasid, int fsb, uint64_t reg) { - volatile uint64_t *addr; - - addr = (uint64_t *)(RREG_BASE(nasid) + RSYN_REG_OFFSET(fsb, reg)); - return (*addr); -} - -extern inline void -__remote_synergy_out(int nasid, int fsb, uint64_t reg, uint64_t value) { - volatile uint64_t *addr; - - addr = (uint64_t *)(RREG_BASE(nasid) + RSYN_REG_OFFSET(fsb, (reg<<2))); - *(addr+0) = value >> 48; - *(addr+1) = value >> 32; - *(addr+2) = value >> 16; - *(addr+3) = value; - __ia64_mf_a(); -} - -/* XX this doesn't make a lot of sense. Which fsb? */ -extern inline void -__synergy_out(unsigned long addr, unsigned long value) -{ - volatile unsigned long *adr = (unsigned long *) - (addr | __IA64_UNCACHED_OFFSET); - - *adr = value; - __ia64_mf_a(); -} - -#define READ_LOCAL_SYNERGY_REG(addr) __synergy_in(addr) - -/* XX this doesn't make a lot of sense. Which fsb? 
*/ -extern inline unsigned long -__synergy_in(unsigned long addr) -{ - unsigned long ret, *adr = (unsigned long *) - (addr | __IA64_UNCACHED_OFFSET); - - ret = *adr; - __ia64_mf_a(); - return ret; -} - -struct sn1_intr_action { - void (*handler)(int, void *, struct pt_regs *); - void *intr_arg; - unsigned long flags; - struct sn1_intr_action * next; -}; - -typedef struct synergy_da_s { - hub_intmasks_t s_intmasks; -}synergy_da_t; - -struct sn1_cnode_action_list { - spinlock_t action_list_lock; - struct sn1_intr_action *action_list; -}; - -#if defined(CONFIG_IA64_SGI_SYNERGY_PERF) - -/* multiplex the counters every 10 timer interrupts */ -#define SYNERGY_PERF_FREQ_DEFAULT 10 - -/* synergy perf control registers */ -#define PERF_CNTL0_A 0xab0UL /* control A on FSB0 */ -#define PERF_CNTL0_B 0xab8UL /* control B on FSB0 */ -#define PERF_CNTL1_A 0xac0UL /* control A on FSB1 */ -#define PERF_CNTL1_B 0xac8UL /* control B on FSB1 */ - -/* synergy perf counters */ -#define PERF_CNTR0_A 0xad0UL /* counter A on FSB0 */ -#define PERF_CNTR0_B 0xad8UL /* counter B on FSB0 */ -#define PERF_CNTR1_A 0xaf0UL /* counter A on FSB1 */ -#define PERF_CNTR1_B 0xaf8UL /* counter B on FSB1 */ - -/* Synergy perf data. Each nodepda keeps a list of these */ -struct synergy_perf_s { - uint64_t intervals; /* count of active intervals for this event */ - uint64_t modesel; /* mode and sel bits, both A and B registers */ - struct synergy_perf_s *next; /* next in circular linked list */ - uint64_t counts[2]; /* [0] is synergy-A counter, [1] synergy-B counter */ -}; - -typedef struct synergy_perf_s synergy_perf_t; - -extern void synergy_perf_init(void); -extern void synergy_perf_update(int); - -#endif /* CONFIG_IA64_SGI_SYNERGY_PERF */ - - -/* Temporary defintions for testing: */ - -#endif ASM_IA64_SN_SYNERGY_H diff -Nru a/include/asm-ia64/sn/systeminfo.h b/include/asm-ia64/sn/systeminfo.h --- a/include/asm-ia64/sn/systeminfo.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/systeminfo.h Tue Mar 12 13:58:15 2002 @@ -4,11 +4,12 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_SYSTEMINFO_H -#define _ASM_SN_SYSTEMINFO_H +#ifndef _ASM_IA64_SN_SYSTEMINFO_H +#define _ASM_IA64_SN_SYSTEMINFO_H + +#include #ifdef __cplusplus extern "C" { @@ -69,4 +70,4 @@ } #endif -#endif /* _ASM_SN_SYSTEMINFO_H */ +#endif /* _ASM_IA64_SN_SYSTEMINFO_H */ diff -Nru a/include/asm-ia64/sn/types.h b/include/asm-ia64/sn/types.h --- a/include/asm-ia64/sn/types.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/types.h Tue Mar 12 13:58:15 2002 @@ -3,30 +3,27 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1999 Silicon Graphics, Inc. + * Copyright (C) 1999,2001-2002 Silicon Graphics, Inc. All Rights Reserved. 
* Copyright (C) 1999 by Ralf Baechle */ -#ifndef _ASM_SN_TYPES_H -#define _ASM_SN_TYPES_H +#ifndef _ASM_IA64_SN_TYPES_H +#define _ASM_IA64_SN_TYPES_H #include typedef unsigned long cpuid_t; typedef unsigned long cpumask_t; -/* typedef unsigned long cnodemask_t; */ typedef signed short nasid_t; /* node id in numa-as-id space */ -typedef signed short cnodeid_t; /* node id in compact-id space */ typedef signed char partid_t; /* partition ID type */ typedef signed short moduleid_t; /* user-visible module number type */ typedef signed short cmoduleid_t; /* kernel compact module id type */ typedef unsigned char clusterid_t; /* Clusterid of the cell */ -#define __psunsigned_t uint64_t -#define lock_t uint64_t +typedef uint64_t __psunsigned_t; typedef unsigned long iopaddr_t; typedef unsigned char uchar_t; typedef unsigned long paddr_t; typedef unsigned long pfn_t; -#endif /* _ASM_SN_TYPES_H */ +#endif /* _ASM_IA64_SN_TYPES_H */ diff -Nru a/include/asm-ia64/sn/uart16550.h b/include/asm-ia64/sn/uart16550.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/sn/uart16550.h Tue Mar 12 13:58:15 2002 @@ -0,0 +1,227 @@ +/* + * + * This file is subject to the terms and conditions of the GNU General Public + * License. See the file "COPYING" in the main directory of this archive + * for more details. + * + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. All rights reserved. + */ + +#ifndef _ASM_IA64_SN_UART16550_H +#define _ASM_IA64_SN_UART16550_H + + +/* + * Definitions for 16550 chip + */ + + /* defined as offsets from the data register */ +#define REG_DAT 0 /* receive/transmit data */ +#define REG_ICR 1 /* interrupt control register */ +#define REG_ISR 2 /* interrupt status register */ +#define REG_FCR 2 /* fifo control register */ +#define REG_LCR 3 /* line control register */ +#define REG_MCR 4 /* modem control register */ +#define REG_LSR 5 /* line status register */ +#define REG_MSR 6 /* modem status register */ +#define REG_SCR 7 /* Scratch register */ +#define REG_DLL 0 /* divisor latch (lsb) */ +#define REG_DLH 1 /* divisor latch (msb) */ +#define REG_EFR 2 /* 16650 enhanced feature register */ + +/* + * 16450/16550 Registers Structure. 
+ */ + +/* Line Control Register */ +#define LCR_WLS0 0x01 /* word length select bit 0 */ +#define LCR_WLS1 0x02 /* word length select bit 1 */ +#define LCR_STB 0x04 /* number of stop bits */ +#define LCR_PEN 0x08 /* parity enable */ +#define LCR_EPS 0x10 /* even parity select */ +#define LCR_SETBREAK 0x40 /* break key */ +#define LCR_DLAB 0x80 /* divisor latch access bit */ +#define LCR_RXLEN 0x03 /* # of data bits per received/xmitted char */ +#define LCR_STOP1 0x00 +#define LCR_STOP2 0x04 +#define LCR_PAREN 0x08 +#define LCR_PAREVN 0x10 +#define LCR_PARMARK 0x20 +#define LCR_SNDBRK 0x40 +#define LCR_DLAB 0x80 + + +#define LCR_BITS5 0x00 /* 5 bits per char */ +#define LCR_BITS6 0x01 /* 6 bits per char */ +#define LCR_BITS7 0x02 /* 7 bits per char */ +#define LCR_BITS8 0x03 /* 8 bits per char */ + +#define LCR_MASK_BITS_CHAR 0x03 +#define LCR_MASK_STOP_BITS 0x04 +#define LCR_MASK_PARITY_BITS 0x18 + + +/* Line Status Register */ +#define LSR_RCA 0x01 /* data ready */ +#define LSR_OVRRUN 0x02 /* overrun error */ +#define LSR_PARERR 0x04 /* parity error */ +#define LSR_FRMERR 0x08 /* framing error */ +#define LSR_BRKDET 0x10 /* a break has arrived */ +#define LSR_XHRE 0x20 /* tx hold reg is now empty */ +#define LSR_XSRE 0x40 /* tx shift reg is now empty */ +#define LSR_RFBE 0x80 /* rx FIFO Buffer error */ + +/* Interrupt Status Register */ +#define ISR_MSTATUS 0x00 +#define ISR_TxRDY 0x02 +#define ISR_RxRDY 0x04 +#define ISR_ERROR_INTR 0x08 +#define ISR_FFTMOUT 0x0c /* FIFO Timeout */ +#define ISR_RSTATUS 0x06 /* Receiver Line status */ + +/* Interrupt Enable Register */ +#define ICR_RIEN 0x01 /* Received Data Ready */ +#define ICR_TIEN 0x02 /* Tx Hold Register Empty */ +#define ICR_SIEN 0x04 /* Receiver Line Status */ +#define ICR_MIEN 0x08 /* Modem Status */ + +/* Modem Control Register */ +#define MCR_DTR 0x01 /* Data Terminal Ready */ +#define MCR_RTS 0x02 /* Request To Send */ +#define MCR_OUT1 0x04 /* Aux output - not used */ +#define MCR_OUT2 0x08 /* turns intr to 386 on/off */ +#define MCR_LOOP 0x10 /* loopback for diagnostics */ +#define MCR_AFE 0x20 /* Auto flow control enable */ + +/* Modem Status Register */ +#define MSR_DCTS 0x01 /* Delta Clear To Send */ +#define MSR_DDSR 0x02 /* Delta Data Set Ready */ +#define MSR_DRI 0x04 /* Trailing Edge Ring Indicator */ +#define MSR_DDCD 0x08 /* Delta Data Carrier Detect */ +#define MSR_CTS 0x10 /* Clear To Send */ +#define MSR_DSR 0x20 /* Data Set Ready */ +#define MSR_RI 0x40 /* Ring Indicator */ +#define MSR_DCD 0x80 /* Data Carrier Detect */ + +#define DELTAS(x) ((x)&(MSR_DCTS|MSR_DDSR|MSR_DRI|MSR_DDCD)) +#define STATES(x) ((x)&(MSR_CTS|MSR_DSR|MSR_RI|MSR_DCD)) + + +#define FCR_FIFOEN 0x01 /* enable receive/transmit fifo */ +#define FCR_RxFIFO 0x02 /* enable receive fifo */ +#define FCR_TxFIFO 0x04 /* enable transmit fifo */ +#define FCR_MODE1 0x08 /* change to mode 1 */ +#define RxLVL0 0x00 /* Rx fifo level at 1 */ +#define RxLVL1 0x40 /* Rx fifo level at 4 */ +#define RxLVL2 0x80 /* Rx fifo level at 8 */ +#define RxLVL3 0xc0 /* Rx fifo level at 14 */ + +#define FIFOEN (FCR_FIFOEN | FCR_RxFIFO | FCR_TxFIFO | RxLVL3 | FCR_MODE1) + +#define FCT_TxMASK 0x30 /* mask for Tx trigger */ +#define FCT_RxMASK 0xc0 /* mask for Rx trigger */ + +/* enhanced features register */ +#define EFR_SFLOW 0x0f /* various S/w Flow Controls */ +#define EFR_EIC 0x10 /* Enhanced Interrupt Control bit */ +#define EFR_SCD 0x20 /* Special Character Detect */ +#define EFR_RTS 0x40 /* RTS flow control */ +#define EFR_CTS 0x80 /* CTS flow control */ + +/* Rx Tx software 
flow controls in 16650 enhanced mode */ +#define SFLOW_Tx0 0x00 /* no Xmit flow control */ +#define SFLOW_Tx1 0x08 /* Transmit Xon1, Xoff1 */ +#define SFLOW_Tx2 0x04 /* Transmit Xon2, Xoff2 */ +#define SFLOW_Tx3 0x0c /* Transmit Xon1,Xon2, Xoff1,Xoff2 */ +#define SFLOW_Rx0 0x00 /* no Rcv flow control */ +#define SFLOW_Rx1 0x02 /* Receiver compares Xon1, Xoff1 */ +#define SFLOW_Rx2 0x01 /* Receiver compares Xon2, Xoff2 */ + +#define ASSERT_DTR(x) (x |= MCR_DTR) +#define ASSERT_RTS(x) (x |= MCR_RTS) +#define DU_RTS_ASSERTED(x) (((x) & MCR_RTS) != 0) +#define DU_RTS_ASSERT(x) ((x) |= MCR_RTS) +#define DU_RTS_DEASSERT(x) ((x) &= ~MCR_RTS) + + +/* + * ioctl(fd, I_STR, arg) + * use the SIOC_RS422 and SIOC_EXTCLK combination to support MIDI + */ +#define SIOC ('z' << 8) /* z for z85130 */ +#define SIOC_EXTCLK (SIOC | 1) /* select/de-select external clock */ +#define SIOC_RS422 (SIOC | 2) /* select/de-select RS422 protocol */ +#define SIOC_ITIMER (SIOC | 3) /* upstream timer adjustment */ +#define SIOC_LOOPBACK (SIOC | 4) /* diagnostic loopback test mode */ + + +/* channel control register */ +#define DMA_INT_MASK 0xe0 /* ring intr mask */ +#define DMA_INT_TH25 0x20 /* 25% threshold */ +#define DMA_INT_TH50 0x40 /* 50% threshold */ +#define DMA_INT_TH75 0x60 /* 75% threshold */ +#define DMA_INT_EMPTY 0x80 /* ring buffer empty */ +#define DMA_INT_NEMPTY 0xa0 /* ring buffer not empty */ +#define DMA_INT_FULL 0xc0 /* ring buffer full */ +#define DMA_INT_NFULL 0xe0 /* ring buffer not full */ + +#define DMA_CHANNEL_RESET 0x400 /* reset dma channel */ +#define DMA_ENABLE 0x200 /* enable DMA */ + +/* peripheral controller intr status bits applicable to serial ports */ +#define ISA_SERIAL0_MASK 0x03f00000 /* mask for port #1 intrs */ +#define ISA_SERIAL0_DIR 0x00100000 /* device intr request */ +#define ISA_SERIAL0_Tx_THIR 0x00200000 /* Transmit DMA threshold */ +#define ISA_SERIAL0_Tx_PREQ 0x00400000 /* Transmit DMA pair req */ +#define ISA_SERIAL0_Tx_MEMERR 0x00800000 /* Transmit DMA memory err */ +#define ISA_SERIAL0_Rx_THIR 0x01000000 /* Receive DMA threshold */ +#define ISA_SERIAL0_Rx_OVERRUN 0x02000000 /* Receive DMA over-run */ + +#define ISA_SERIAL1_MASK 0xfc000000 /* mask for port #1 intrs */ +#define ISA_SERIAL1_DIR 0x04000000 /* device intr request */ +#define ISA_SERIAL1_Tx_THIR 0x08000000 /* Transmit DMA threshold */ +#define ISA_SERIAL1_Tx_PREQ 0x10000000 /* Transmit DMA pair req */ +#define ISA_SERIAL1_Tx_MEMERR 0x20000000 /* Transmit DMA memory err */ +#define ISA_SERIAL1_Rx_THIR 0x40000000 /* Receive DMA threshold */ +#define ISA_SERIAL1_Rx_OVERRUN 0x80000000 /* Receive DMA over-run */ + +#define MAX_RING_BLOCKS 128 /* 4096/32 */ +#define MAX_RING_SIZE 4096 + +/* DMA Input Control Byte */ +#define DMA_IC_OVRRUN 0x01 /* overrun error */ +#define DMA_IC_PARERR 0x02 /* parity error */ +#define DMA_IC_FRMERR 0x04 /* framing error */ +#define DMA_IC_BRKDET 0x08 /* a break has arrived */ +#define DMA_IC_VALID 0x80 /* pair is valid */ + +/* DMA Output Control Byte */ +#define DMA_OC_TxINTR 0x20 /* set Tx intr after processing byte */ +#define DMA_OC_INVALID 0x00 /* invalid pair */ +#define DMA_OC_WTHR 0x40 /* Write byte to THR */ +#define DMA_OC_WMCR 0x80 /* Write byte to MCR */ +#define DMA_OC_DELAY 0xc0 /* time delay before next xmit */ + +/* ring id's */ +#define RID_SERIAL0_TX 0x4 /* serial port 0, transmit ring buffer */ +#define RID_SERIAL0_RX 0x5 /* serial port 0, receive ring buffer */ +#define RID_SERIAL1_TX 0x6 /* serial port 1, transmit ring buffer */ +#define RID_SERIAL1_RX 0x7 /* 
serial port 1, receive ring buffer */ + +#define CLOCK_XIN 22 +#define PRESCALER_DIVISOR 3 +#define CLOCK_ACE 7333333 + +/* + * increment the ring offset. One way to do this would be to add b'100000. + * this would let the offset value roll over automatically when it reaches + * its maximum value (127). However when we use the offset, we must use + * the appropriate bits only by masking with 0xfe0. + * The other option is to shift the offset right by 5 bits and look at its + * value. Then increment if required and shift back + * note: 127 * 2^5 = 4064 + */ +#define INC_RING_POINTER(x) \ + ( ((x & 0xffe0) < 4064) ? (x += 32) : 0 ) + +#endif /* _ASM_IA64_SN_UART16550_H */ diff -Nru a/include/asm-ia64/sn/vector.h b/include/asm-ia64/sn/vector.h --- a/include/asm-ia64/sn/vector.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/vector.h Tue Mar 12 13:58:14 2002 @@ -4,11 +4,10 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992 - 1997, 2000-2002 Silicon Graphics, Inc. All rights reserved. */ -#ifndef _ASM_SN_VECTOR_H -#define _ASM_SN_VECTOR_H +#ifndef _ASM_IA64_SN_VECTOR_H +#define _ASM_IA64_SN_VECTOR_H #include @@ -37,7 +36,7 @@ #endif /* RTL */ -#if defined(CONFIG_SGI_IP35) || defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) +#if defined(CONFIG_IA64_SGI_SN1) || defined(CONFIG_IA64_GENERIC) #define VECTOR_PARMS LB_VECTOR_PARMS #define VECTOR_ROUTE LB_VECTOR_ROUTE #define VECTOR_DATA LB_VECTOR_DATA @@ -66,19 +65,22 @@ #define VS_ERROR_MASK LVS_ERROR_MASK #endif -#define NET_ERROR_NONE 0 /* No error */ -#define NET_ERROR_HARDWARE -1 /* Hardware error */ -#define NET_ERROR_OVERRUN -2 /* Extra response(s) */ -#define NET_ERROR_REPLY -3 /* Reply parms mismatch */ -#define NET_ERROR_ADDRESS -4 /* Addr error response */ -#define NET_ERROR_COMMAND -5 /* Cmd error response */ -#define NET_ERROR_PROT -6 /* Prot error response */ -#define NET_ERROR_TIMEOUT -7 /* Too many retries */ -#define NET_ERROR_VECTOR -8 /* Invalid vector/path */ -#define NET_ERROR_ROUTERLOCK -9 /* Timeout locking rtr */ -#define NET_ERROR_INVAL -10 /* Invalid vector request */ +#define NET_ERROR_NONE 0 /* No error */ +#define NET_ERROR_HARDWARE (-1) /* Hardware error */ +#define NET_ERROR_OVERRUN (-2) /* Extra response(s) */ +#define NET_ERROR_REPLY (-3) /* Reply parms mismatch */ +#define NET_ERROR_ADDRESS (-4) /* Addr error response */ +#define NET_ERROR_COMMAND (-5) /* Cmd error response */ +#define NET_ERROR_PROT (-6) /* Prot error response */ +#define NET_ERROR_TIMEOUT (-7) /* Too many retries */ +#define NET_ERROR_VECTOR (-8) /* Invalid vector/path */ +#define NET_ERROR_ROUTERLOCK (-9) /* Timeout locking rtr */ +#define NET_ERROR_INVAL (-10) /* Invalid vector request */ + +#ifndef __ASSEMBLY__ +#include +#include -#if defined(_LANGUAGE_C) || defined(_LANGUAGE_C_PLUS_PLUS) typedef uint64_t net_reg_t; typedef uint64_t net_vec_t; @@ -114,6 +116,6 @@ int addr, net_reg_t *value); #endif -#endif /* _LANGUAGE_C || _LANGUAGE_C_PLUS_PLUS */ +#endif /* __ASSEMBLY__ */ -#endif /* _ASM_SN_VECTOR_H */ +#endif /* _ASM_IA64_SN_VECTOR_H */ diff -Nru a/include/asm-ia64/sn/xtalk/xbow.h b/include/asm-ia64/sn/xtalk/xbow.h --- a/include/asm-ia64/sn/xtalk/xbow.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/xtalk/xbow.h Tue Mar 12 13:58:15 2002 @@ -4,7 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. 
* - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. * Copyright (C) 2000 by Colin Ngam */ #ifndef _ASM_SN_SN_XTALK_XBOW_H @@ -17,7 +17,7 @@ #include #include #include -#ifdef LANGUAGE_C +#ifndef __ASSEMBLY__ #include #endif @@ -46,7 +46,7 @@ #define MAX_XBOW_NAME 16 -#if LANGUAGE_C +#ifndef __ASSEMBLY__ typedef uint32_t xbowreg_t; #define XBOWCONST (xbowreg_t) @@ -236,7 +236,7 @@ /* offset of arbitration register, given source widget id */ #define XBOW_ARB_OFF(wid) (XBOW_ARB_IS_UPPER(wid) ? 0x1c : 0x24) -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #define XBOW_WID_ID WIDGET_ID #define XBOW_WID_STAT WIDGET_STATUS @@ -402,7 +402,7 @@ (XWIDGET_PART_NUM(XWIDGET_ID_READ(nasid, 0)) == XXBOW_WIDGET_PART_NUM) -#ifdef _LANGUAGE_C +#ifndef __ASSEMBLY__ /* * XBOW Widget 0 Register formats. * Format for many of these registers are similar to the standard @@ -891,5 +891,5 @@ #endif /* MACROFIELD_LINE */ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #endif /* _ASM_SN_SN_XTALK_XBOW_H */ diff -Nru a/include/asm-ia64/sn/xtalk/xbow_info.h b/include/asm-ia64/sn/xtalk/xbow_info.h --- a/include/asm-ia64/sn/xtalk/xbow_info.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/xtalk/xbow_info.h Tue Mar 12 13:58:14 2002 @@ -4,11 +4,13 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997,2000-2002 Silicon Graphics, Inc. All Rights Reserved. */ #ifndef _ASM_SN_XTALK_XBOW_INFO_H #define _ASM_SN_XTALK_XBOW_INFO_H + +#include +#include #define XBOW_PERF_MODES 0x03 #define XBOW_PERF_COUNTERS 0x02 diff -Nru a/include/asm-ia64/sn/xtalk/xswitch.h b/include/asm-ia64/sn/xtalk/xswitch.h --- a/include/asm-ia64/sn/xtalk/xswitch.h Tue Mar 12 13:58:16 2002 +++ b/include/asm-ia64/sn/xtalk/xswitch.h Tue Mar 12 13:58:16 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997,2000-2002 Silicon Graphics, Inc. All Rights Reserved. */ #ifndef _ASM_SN_XTALK_XSWITCH_H #define _ASM_SN_XTALK_XSWITCH_H @@ -16,7 +15,10 @@ * xtalk bus providers. */ -#if LANGUAGE_C +#ifndef __ASSEMBLY__ + +#include +#include typedef struct xswitch_info_s *xswitch_info_t; @@ -54,6 +56,6 @@ extern int xswitch_id_get(devfs_handle_t vhdl); extern void xswitch_id_set(devfs_handle_t vhdl,int xbow_num); -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #endif /* _ASM_SN_XTALK_XSWITCH_H */ diff -Nru a/include/asm-ia64/sn/xtalk/xtalk.h b/include/asm-ia64/sn/xtalk/xtalk.h --- a/include/asm-ia64/sn/xtalk/xtalk.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/xtalk/xtalk.h Tue Mar 12 13:58:14 2002 @@ -4,8 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All Rights Reserved. 
*/ #ifndef _ASM_SN_XTALK_XTALK_H #define _ASM_SN_XTALK_XTALK_H @@ -18,19 +17,19 @@ */ typedef char xwidgetnum_t; /* xtalk widget number (0..15) */ -#define XWIDGET_NONE -1 +#define XWIDGET_NONE (-1) typedef int xwidget_part_num_t; /* xtalk widget part number */ -#define XWIDGET_PART_NUM_NONE -1 +#define XWIDGET_PART_NUM_NONE (-1) typedef int xwidget_rev_num_t; /* xtalk widget revision number */ -#define XWIDGET_REV_NUM_NONE -1 +#define XWIDGET_REV_NUM_NONE (-1) typedef int xwidget_mfg_num_t; /* xtalk widget manufacturing ID */ -#define XWIDGET_MFG_NUM_NONE -1 +#define XWIDGET_MFG_NUM_NONE (-1) typedef struct xtalk_piomap_s *xtalk_piomap_t; @@ -57,7 +56,7 @@ #include #include #include -#include +#include #include struct xwidget_hwid_s; @@ -205,14 +204,8 @@ typedef int xtalk_intr_connect_f (xtalk_intr_t intr_hdl, /* xtalk intr resource handle */ - intr_func_t intr_func, /* xtalk intr handler */ - void *intr_arg, /* arg to intr handler */ xtalk_intr_setfunc_f *setfunc, /* func to set intr hw */ - void *setfunc_arg, /* arg to setfunc. This must be */ - /* sufficient to determine which */ - /* interrupt on which board needs */ - /* to be set. */ - void *thread); /* which intr thread to use */ + void *setfunc_arg); /* arg to setfunc */ typedef void xtalk_intr_disconnect_f (xtalk_intr_t intr_hdl); @@ -400,7 +393,6 @@ extern int xtalk_device_powerup(devfs_handle_t, xwidgetnum_t); extern int xtalk_device_shutdown(devfs_handle_t, xwidgetnum_t); -extern int xtalk_device_inquiry(devfs_handle_t, xwidgetnum_t); #endif /* __KERNEL__ */ #endif /* _ASM_SN_XTALK_XTALK_H */ diff -Nru a/include/asm-ia64/sn/xtalk/xtalk_private.h b/include/asm-ia64/sn/xtalk/xtalk_private.h --- a/include/asm-ia64/sn/xtalk/xtalk_private.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/xtalk/xtalk_private.h Tue Mar 12 13:58:15 2002 @@ -4,13 +4,15 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. - * Copyright (C) 2000 by Colin Ngam + * Copyright (C) 1992-1997, 2000-2002 Silicon Graphics, Inc. All Rights Reserved. */ #ifndef _ASM_SN_XTALK_XTALK_PRIVATE_H #define _ASM_SN_XTALK_XTALK_PRIVATE_H #include /* for error function and arg types */ +#include +#include +#include /* * xtalk_private.h -- private definitions for xtalk diff -Nru a/include/asm-ia64/sn/xtalk/xtalkaddrs.h b/include/asm-ia64/sn/xtalk/xtalkaddrs.h --- a/include/asm-ia64/sn/xtalk/xtalkaddrs.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/sn/xtalk/xtalkaddrs.h Tue Mar 12 13:58:15 2002 @@ -4,13 +4,12 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. * Copyright (C) 2000 by Colin Ngam */ #ifndef _ASM_SN_XTALK_XTALKADDRS_H #define _ASM_SN_XTALK_XTALKADDRS_H -#include /* * CrossTalk to SN0 Hub addressing support @@ -60,19 +59,15 @@ * This looks very much like a REMOTE_HUB access, except the nodeID * is in a different place, and the highest xtalk bit is set. 
*/ - /* Hub-specific xtalk definitions */ #define HX_MEM_BIT 0L /* Hub's idea of xtalk memory access */ #define HX_IO_BIT 1L /* Hub's idea of xtalk register access */ #define HX_ACCTYPE_SHIFT 47 -#if CONFIG_SGI_IP35 || CONFIG_IA64_SGI_SN1 || CONFIG_IA64_GENERIC #define HX_NODE_SHIFT 39 -#endif #define HX_BIGWIN_SHIFT 28 - #define HX_SWIN_SHIFT 23 #define HX_LOCACC 0L /* local access */ diff -Nru a/include/asm-ia64/sn/xtalk/xwidget.h b/include/asm-ia64/sn/xtalk/xwidget.h --- a/include/asm-ia64/sn/xtalk/xwidget.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/sn/xtalk/xwidget.h Tue Mar 12 13:58:14 2002 @@ -4,7 +4,7 @@ * License. See the file "COPYING" in the main directory of this archive * for more details. * - * Copyright (C) 1992 - 1997, 2000 Silicon Graphics, Inc. + * Copyright (C) 1992 - 1997, 2000-2001 Silicon Graphics, Inc. * Copyright (C) 2000 by Colin Ngam */ #ifndef __ASM_SN_XTALK_XWIDGET_H__ @@ -15,9 +15,9 @@ */ #include -#if LANGUAGE_C +#ifndef __ASSEMBLY__ #include -#endif /* LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #ifdef LITTLE_ENDIAN #define WIDGET_ID 0x00 @@ -115,7 +115,7 @@ * widget target flush register are widget dependent thus will not be * defined here */ -#if _LANGUAGE_C +#ifndef __ASSEMBLY__ typedef uint32_t widgetreg_t; /* widget configuration registers */ @@ -267,9 +267,6 @@ async_attach_t aa); extern int xwidget_unregister(devfs_handle_t); -extern void xwidget_error_register(devfs_handle_t xwidget, - error_handler_f * efunc, - error_handler_arg_t einfo); extern void xwidget_reset(devfs_handle_t xwidget); extern void xwidget_gfx_reset(devfs_handle_t xwidget); @@ -289,6 +286,9 @@ extern xwidget_rev_num_t xwidget_info_rev_num_get(xwidget_info_t xwidget_info); extern xwidget_mfg_num_t xwidget_info_mfg_num_get(xwidget_info_t xwidget_info); +extern xwidgetnum_t hub_widget_id(nasid_t); + + /* * TBD: DELETE THIS ENTIRE STRUCTURE! 
Equivalent is now in @@ -303,6 +303,6 @@ } v_widget_t; #endif /* _KERNEL */ -#endif /* _LANGUAGE_C */ +#endif /* __ASSEMBLY__ */ #endif /* __ASM_SN_XTALK_XWIDGET_H__ */ diff -Nru a/include/asm-ia64/softirq.h b/include/asm-ia64/softirq.h --- a/include/asm-ia64/softirq.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/softirq.h Tue Mar 12 13:58:15 2002 @@ -3,21 +3,21 @@ /* * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang + * David Mosberger-Tang */ #include -#define __local_bh_enable() do { barrier(); local_bh_count()--; } while (0) +#define __local_bh_enable() do { barrier(); really_local_bh_count()--; } while (0) -#define local_bh_disable() do { local_bh_count()++; barrier(); } while (0) +#define local_bh_disable() do { really_local_bh_count()++; barrier(); } while (0) #define local_bh_enable() \ do { \ __local_bh_enable(); \ - if (__builtin_expect(local_softirq_pending(), 0) && local_bh_count() == 0) \ + if (__builtin_expect(local_softirq_pending(), 0) && really_local_bh_count() == 0) \ do_softirq(); \ } while (0) -#define in_softirq() (local_bh_count() != 0) +#define in_softirq() (really_local_bh_count() != 0) #endif /* _ASM_IA64_SOFTIRQ_H */ diff -Nru a/include/asm-ia64/spinlock.h b/include/asm-ia64/spinlock.h --- a/include/asm-ia64/spinlock.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/spinlock.h Tue Mar 12 13:58:14 2002 @@ -2,8 +2,8 @@ #define _ASM_IA64_SPINLOCK_H /* - * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang * Copyright (C) 1999 Walt Drummond * * This file is used for SMP configurations only. @@ -31,7 +31,7 @@ * rather than a simple xchg to avoid writing the cache-line when * there is contention. */ -#define spin_lock(x) \ +#define _raw_spin_lock(x) \ { \ register char *addr __asm__ ("r31") = (char *) &(x)->lock; \ \ @@ -49,7 +49,7 @@ : "ar.ccv", "ar.pfs", "b7", "p15", "r28", "r29", "r30", "memory"); \ } -#define spin_trylock(x) \ +#define _raw_spin_trylock(x) \ ({ \ register long result; \ \ @@ -62,7 +62,7 @@ }) #define spin_is_locked(x) ((x)->lock != 0) -#define spin_unlock(x) do { barrier(); ((spinlock_t *) x)->lock = 0;} while (0) +#define _raw_spin_unlock(x) do { barrier(); ((spinlock_t *) x)->lock = 0;} while (0) #define spin_unlock_wait(x) do { barrier(); } while ((x)->lock) #else /* !NEW_LOCK */ @@ -79,7 +79,7 @@ * rather than a simple xchg to avoid writing the cache-line when * there is contention. 
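The renaming in this spinlock.h hunk (spin_lock, spin_trylock and spin_unlock becoming their _raw_ counterparts) assumes a generic wrapper layer, not shown in this patch, that supplies spin_lock() itself. A minimal sketch of that assumption, layering preemption control over the raw primitive:

/* Sketch only: the real wrappers live in include/linux/spinlock.h and may
 * differ in detail; preempt_disable()/preempt_enable() are the kernel's
 * preemption-count helpers. */
#define example_spin_lock(lp)		\
do {					\
	preempt_disable();		\
	_raw_spin_lock(lp);		\
} while (0)

#define example_spin_unlock(lp)		\
do {					\
	_raw_spin_unlock(lp);		\
	preempt_enable();		\
} while (0)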
*/ -#define spin_lock(x) __asm__ __volatile__ ( \ +#define _raw_spin_lock(x) __asm__ __volatile__ ( \ "mov ar.ccv = r0\n" \ "mov r29 = 1\n" \ ";;\n" \ @@ -93,11 +93,11 @@ "cmp4.eq p0,p7 = r0, r2\n" \ "(p7) br.cond.spnt.few 1b\n" \ ";;\n" \ - :: "r"(&(x)->lock) : "r2", "r29", "memory") + :: "r"(&(x)->lock) : "ar.ccv", "p7", "r2", "r29", "memory") #define spin_is_locked(x) ((x)->lock != 0) -#define spin_unlock(x) do { barrier(); ((spinlock_t *) x)->lock = 0; } while (0) -#define spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0) +#define _raw_spin_unlock(x) do { barrier(); ((spinlock_t *) x)->lock = 0; } while (0) +#define _raw_spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0) #define spin_unlock_wait(x) do { barrier(); } while ((x)->lock) #endif /* !NEW_LOCK */ @@ -110,7 +110,7 @@ #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) -#define read_lock(rw) \ +#define _raw_read_lock(rw) \ do { \ int tmp = 0; \ __asm__ __volatile__ ("1:\tfetchadd4.acq %0 = [%1], 1\n" \ @@ -128,10 +128,10 @@ ";;\n" \ ".previous\n" \ : "=&r" (tmp) \ - : "r" (rw): "memory"); \ + : "r" (rw) : "p6", "memory"); \ } while(0) -#define read_unlock(rw) \ +#define _raw_read_unlock(rw) \ do { \ int tmp = 0; \ __asm__ __volatile__ ("fetchadd4.rel %0 = [%1], -1\n" \ @@ -140,7 +140,7 @@ : "memory"); \ } while(0) -#define write_lock(rw) \ +#define _raw_write_lock(rw) \ do { \ __asm__ __volatile__ ( \ "mov ar.ccv = r0\n" \ @@ -156,13 +156,13 @@ "cmp4.eq p0,p7 = r0, r2\n" \ "(p7) br.cond.spnt.few 1b\n" \ ";;\n" \ - :: "r"(rw) : "r2", "r29", "memory"); \ + :: "r"(rw) : "ar.ccv", "p7", "r2", "r29", "memory"); \ } while(0) -/* - * clear_bit() has "acq" semantics; we're really need "rel" semantics, - * but for simplicity, we simply do a fence for now... - */ -#define write_unlock(x) ({clear_bit(31, (x)); mb();}) +#define _raw_write_unlock(x) \ +({ \ + smp_mb__before_clear_bit(); /* need barrier before releasing lock... */ \ + clear_bit(31, (x)); \ +}) #endif /* _ASM_IA64_SPINLOCK_H */ diff -Nru a/include/asm-ia64/system.h b/include/asm-ia64/system.h --- a/include/asm-ia64/system.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/system.h Tue Mar 12 13:58:15 2002 @@ -7,8 +7,8 @@ * on information published in the Processor Abstraction Layer * and the System Abstraction Layer manual. * - * Copyright (C) 1998-2001 Hewlett-Packard Co - * Copyright (C) 1998-2001 David Mosberger-Tang + * Copyright (C) 1998-2002 Hewlett-Packard Co + * David Mosberger-Tang * Copyright (C) 1999 Asit Mallick * Copyright (C) 1999 Don Dugger */ @@ -232,7 +232,7 @@ _tmp = __bad_increment_for_ia64_fetch_and_add(); \ break; \ } \ - (__typeof__(*v)) (_tmp + (i)); /* return new value */ \ + (__typeof__(*(v))) (_tmp + (i)); /* return new value */ \ }) /* @@ -373,19 +373,27 @@ * newly created thread returns directly to * ia64_ret_from_syscall_clear_r8. 
*/ -extern struct task_struct *ia64_switch_to (void *next_task); +extern void ia64_switch_to (void *next_task); + +struct task_struct; extern void ia64_save_extra (struct task_struct *task); extern void ia64_load_extra (struct task_struct *task); -#define __switch_to(prev,next,last) do { \ +#if defined(CONFIG_SMP) && defined(CONFIG_PERFMON) +# define PERFMON_IS_SYSWIDE() (local_cpu_data->pfm_syst_wide != 0) +#else +# define PERFMON_IS_SYSWIDE() (0) +#endif + +#define __switch_to(prev,next) do { \ if (((prev)->thread.flags & (IA64_THREAD_DBG_VALID|IA64_THREAD_PM_VALID)) \ - || IS_IA32_PROCESS(ia64_task_regs(prev))) \ + || IS_IA32_PROCESS(ia64_task_regs(prev)) || PERFMON_IS_SYSWIDE()) \ ia64_save_extra(prev); \ if (((next)->thread.flags & (IA64_THREAD_DBG_VALID|IA64_THREAD_PM_VALID)) \ - || IS_IA32_PROCESS(ia64_task_regs(next))) \ + || IS_IA32_PROCESS(ia64_task_regs(next)) || PERFMON_IS_SYSWIDE()) \ ia64_load_extra(next); \ - (last) = ia64_switch_to((next)); \ + ia64_switch_to((next)); \ } while (0) #ifdef CONFIG_SMP @@ -396,19 +404,19 @@ * task->thread.fph, avoiding the complication of having to fetch * the latest fph state from another CPU. */ -# define switch_to(prev,next,last) do { \ +# define switch_to(prev,next) do { \ if (ia64_psr(ia64_task_regs(prev))->mfh) { \ ia64_psr(ia64_task_regs(prev))->mfh = 0; \ (prev)->thread.flags |= IA64_THREAD_FPH_VALID; \ __ia64_save_fpu((prev)->thread.fph); \ } \ ia64_psr(ia64_task_regs(prev))->dfh = 1; \ - __switch_to(prev,next,last); \ + __switch_to(prev,next); \ } while (0) #else -# define switch_to(prev,next,last) do { \ +# define switch_to(prev,next) do { \ ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next)); \ - __switch_to(prev,next,last); \ + __switch_to(prev,next); \ } while (0) #endif diff -Nru a/include/asm-ia64/thread_info.h b/include/asm-ia64/thread_info.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/asm-ia64/thread_info.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,69 @@ +/* + * Copyright (C) 2002 Hewlett-Packard Co + * David Mosberger-Tang + */ +#ifndef _ASM_IA64_THREAD_INFO_H +#define _ASM_IA64_THREAD_INFO_H + +#include +#include +#include + +#define TI_EXEC_DOMAIN 0x00 +#define TI_FLAGS 0x08 +#define TI_CPU 0x0c +#define TI_ADDR_LIMI 0x10 + +#ifndef __ASSEMBLY__ + +/* + * On IA-64, we want to keep the task structure and kernel stack together, so they can be + * mapped by a single TLB entry and so they can be addressed by the "current" pointer + * without having to do pointer masking. 
+ */ +struct thread_info { + struct exec_domain *exec_domain;/* execution domain */ + __u32 flags; /* thread_info flags (see TIF_*) */ + __u32 cpu; /* current CPU */ + mm_segment_t addr_limit; /* user-level address space limit */ +}; + +#define INIT_THREAD_SIZE /* tell sched.h not to declare the thread_union */ +#define THREAD_SIZE KERNEL_STACK_SIZE + +#define INIT_THREAD_INFO(ti) \ +{ \ + exec_domain: &default_exec_domain, \ + flags: 0, \ + cpu: 0, \ + addr_limit: KERNEL_DS, \ +} + +/* how to get the thread information struct from C */ +#define current_thread_info() ((struct thread_info *) ((char *) current + IA64_TASK_SIZE)) + +#endif /* !__ASSEMBLY */ + +/* + * thread information flags + * - these are process state flags that various assembly files may need to access + * - pending work-to-be-done flags are in least-significant 16 bits, other flags + * in top 16 bits + */ +#define TIF_NOTIFY_RESUME 0 /* resumption notification requested */ +#define TIF_SIGPENDING 1 /* signal pending */ +#define TIF_NEED_RESCHED 2 /* rescheduling necessary */ +#define TIF_SYSCALL_TRACE 3 /* syscall trace active */ +#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */ + +#define TIF_WORK_MASK 0x7 /* like TIF_ALLWORK_BITS but sans TIF_SYSCALL_TRACE */ +#define TIF_ALLWORK_MASK 0xf /* bits 0..3 are "work to do on user-return" bits */ + +#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) +#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) +#define _TIF_SIGPENDING (1 << TIF_SIGPENDING) +#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) +#define _TIF_USEDFPU (1 << TIF_USEDFPU) +#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) + +#endif /* _ASM_IA64_THREAD_INFO_H */ diff -Nru a/include/asm-ia64/uaccess.h b/include/asm-ia64/uaccess.h --- a/include/asm-ia64/uaccess.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ia64/uaccess.h Tue Mar 12 13:58:15 2002 @@ -26,8 +26,8 @@ * associated and, if so, sets r8 to -EFAULT and clears r9 to 0 and * then resumes execution at the continuation point. * - * Copyright (C) 1998, 1999, 2001 Hewlett-Packard Co - * Copyright (C) 1998, 1999, 2001 David Mosberger-Tang + * Copyright (C) 1998, 1999, 2001-2002 Hewlett-Packard Co + * David Mosberger-Tang */ #include @@ -45,8 +45,8 @@ #define VERIFY_WRITE 1 #define get_ds() (KERNEL_DS) -#define get_fs() (current->addr_limit) -#define set_fs(x) (current->addr_limit = (x)) +#define get_fs() (current_thread_info()->addr_limit) +#define set_fs(x) (current_thread_info()->addr_limit = (x)) #define segment_eq(a,b) ((a).seg == (b).seg) diff -Nru a/include/asm-ia64/unistd.h b/include/asm-ia64/unistd.h --- a/include/asm-ia64/unistd.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-ia64/unistd.h Tue Mar 12 13:58:14 2002 @@ -4,7 +4,7 @@ /* * IA-64 Linux syscall numbers and inline-functions. 
* - * Copyright (C) 1998-2001 Hewlett-Packard Co + * Copyright (C) 1998-2002 Hewlett-Packard Co * David Mosberger-Tang */ @@ -109,9 +109,9 @@ #define __NR_syslog 1117 #define __NR_setitimer 1118 #define __NR_getitimer 1119 -#define __NR_old_stat 1120 -#define __NR_old_lstat 1121 -#define __NR_old_fstat 1122 +/* 1120 was __NR_old_stat */ +/* 1121 was __NR_old_lstat */ +/* 1122 was __NR_old_fstat */ #define __NR_vhangup 1123 #define __NR_lchown 1124 #define __NR_vm86 1125 @@ -206,7 +206,19 @@ #define __NR_getdents64 1214 #define __NR_getunwind 1215 #define __NR_readahead 1216 -#define __NR_tkill 1217 +#define __NR_setxattr 1217 +#define __NR_lsetxattr 1218 +#define __NR_fsetxattr 1219 +#define __NR_getxattr 1220 +#define __NR_lgetxattr 1221 +#define __NR_fgetxattr 1222 +#define __NR_listxattr 1223 +#define __NR_llistxattr 1224 +#define __NR_flistxattr 1225 +#define __NR_removexattr 1226 +#define __NR_lremovexattr 1227 +#define __NR_fremovexattr 1228 +#define __NR_tkill 1229 #if !defined(__ASSEMBLY__) && !defined(ASSEMBLER) @@ -281,6 +293,8 @@ } #ifdef __KERNEL_SYSCALLS__ + +struct rusage; static inline _syscall0(int,sync) static inline _syscall0(pid_t,setsid) diff -Nru a/include/asm-ppc/mman.h b/include/asm-ppc/mman.h --- a/include/asm-ppc/mman.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ppc/mman.h Tue Mar 12 13:58:15 2002 @@ -7,6 +7,7 @@ #define PROT_READ 0x1 /* page can be read */ #define PROT_WRITE 0x2 /* page can be written */ #define PROT_EXEC 0x4 /* page can be executed */ +#define PROT_SEM 0x8 /* page may be used for atomic ops */ #define PROT_NONE 0x0 /* page can not be accessed */ #define MAP_SHARED 0x01 /* Share changes */ diff -Nru a/include/asm-ppc/unistd.h b/include/asm-ppc/unistd.h --- a/include/asm-ppc/unistd.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-ppc/unistd.h Tue Mar 12 13:58:15 2002 @@ -228,6 +228,7 @@ #define __NR_removexattr 218 #define __NR_lremovexattr 219 #define __NR_fremovexattr 220 +#define __NR_futex 221 #define __NR(n) #n diff -Nru a/include/asm-sparc/siginfo.h b/include/asm-sparc/siginfo.h --- a/include/asm-sparc/siginfo.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-sparc/siginfo.h Tue Mar 12 13:58:14 2002 @@ -113,6 +113,7 @@ #define SI_ASYNCIO -4 /* sent by AIO completion */ #define SI_SIGIO -5 /* sent by queued SIGIO */ #define SI_TKILL -6 /* sent by tkill system call */ +#define SI_DETHREAD -7 /* sent by execve() killing subsidiary threads */ #define SI_FROMUSER(siptr) ((siptr)->si_code <= 0) #define SI_FROMKERNEL(siptr) ((siptr)->si_code > 0) diff -Nru a/include/asm-sparc/unistd.h b/include/asm-sparc/unistd.h --- a/include/asm-sparc/unistd.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-sparc/unistd.h Tue Mar 12 13:58:14 2002 @@ -155,7 +155,7 @@ #define __NR_rmdir 137 /* Common */ #define __NR_utimes 138 /* SunOS Specific */ #define __NR_stat64 139 /* Linux sparc32 Specific */ -/* #define __NR_adjtime 140 SunOS Specific */ +#define __NR_sendfile64 140 /* adjtime under SunOS */ #define __NR_getpeername 141 /* Common */ /* #define __NR_gethostid 142 SunOS Specific */ #define __NR_gettid 143 /* ENOSYS under SunOS */ diff -Nru a/include/asm-sparc64/bitops.h b/include/asm-sparc64/bitops.h --- a/include/asm-sparc64/bitops.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/bitops.h Tue Mar 12 13:58:15 2002 @@ -7,6 +7,7 @@ #ifndef _SPARC64_BITOPS_H #define _SPARC64_BITOPS_H +#include #include extern long ___test_and_set_bit(unsigned long nr, volatile void *addr); @@ -100,6 +101,23 @@ } #ifdef __KERNEL__ + +/* + * Every architecture must define this function. 
It's the fastest + * way of searching a 140-bit bitmap where the first 100 bits are + * unlikely to be set. It's guaranteed that at least one of the 140 + * bits is cleared. + */ +static inline int sched_find_first_bit(unsigned long *b) +{ + if (unlikely(b[0])) + return __ffs(b[0]); + if (unlikely(((unsigned int)b[1]))) + return __ffs(b[1]) + 64; + if (b[1] >> 32) + return __ffs(b[1] >> 32) + 96; + return __ffs(b[2]) + 128; +} /* * ffs: find first bit set. This is defined the same way as diff -Nru a/include/asm-sparc64/fhc.h b/include/asm-sparc64/fhc.h --- a/include/asm-sparc64/fhc.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/fhc.h Tue Mar 12 13:58:15 2002 @@ -86,10 +86,10 @@ #define FHC_BSR_NIA 0x0000001c /* Jumper, bit 18 in PROM space */ #define FHC_BSR_SI 0x00000001 /* Spare input pin value */ #define FHC_PREGS_ECC 0x40UL /* FHC ECC Control Register (16 bits) */ -#define FHC_PREGS_JCTRL 0x50UL /* FHC JTAG Control Register */ +#define FHC_PREGS_JCTRL 0xf0UL /* FHC JTAG Control Register */ #define FHC_JTAG_CTRL_MENAB 0x80000000 /* Indicates this is JTAG Master */ #define FHC_JTAG_CTRL_MNONE 0x40000000 /* Indicates no JTAG Master present */ -#define FHC_PREGS_JCMD 0x60UL /* FHC JTAG Command Register */ +#define FHC_PREGS_JCMD 0x100UL /* FHC JTAG Command Register */ unsigned long ireg; /* FHC IGN reg */ #define FHC_IREG_IGN 0x00UL /* This FHC's IGN */ unsigned long ffregs; /* FHC fanfail regs */ diff -Nru a/include/asm-sparc64/head.h b/include/asm-sparc64/head.h --- a/include/asm-sparc64/head.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/head.h Tue Mar 12 13:58:15 2002 @@ -4,6 +4,8 @@ #include -#define KERNBASE 0x400000 +#define KERNBASE 0x400000 + +#define PTREGS_OFF (STACK_BIAS + REGWIN_SZ) #endif /* !(_SPARC64_HEAD_H) */ diff -Nru a/include/asm-sparc64/mmu_context.h b/include/asm-sparc64/mmu_context.h --- a/include/asm-sparc64/mmu_context.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/mmu_context.h Tue Mar 12 13:58:15 2002 @@ -27,25 +27,6 @@ #include #include -/* - * Every architecture must define this function. It's the fastest - * way of searching a 168-bit bitmap where the first 128 bits are - * unlikely to be set. It's guaranteed that at least one of the 168 - * bits is cleared. - */ -#if MAX_RT_PRIO != 128 || MAX_PRIO != 168 -# error update this function. -#endif - -static inline int sched_find_first_bit(unsigned long *b) -{ - if (unlikely(b[0])) - return __ffs(b[0]); - if (unlikely(b[1])) - return __ffs(b[1]) + 64; - return __ffs(b[2]) + 128; -} - static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk, unsigned cpu) { } diff -Nru a/include/asm-sparc64/pgalloc.h b/include/asm-sparc64/pgalloc.h --- a/include/asm-sparc64/pgalloc.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/pgalloc.h Tue Mar 12 13:58:15 2002 @@ -10,95 +10,6 @@ #include #include -/* Cache and TLB flush operations. */ - -/* These are the same regardless of whether this is an SMP kernel or not. */ -#define flush_cache_mm(__mm) \ - do { if ((__mm) == current->mm) flushw_user(); } while(0) -extern void flush_cache_range(struct vm_area_struct *, unsigned long, unsigned long); -#define flush_cache_page(vma, page) \ - flush_cache_mm((vma)->vm_mm) - -/* This is unnecessary on the SpitFire since D-CACHE is write-through. */ -#define flush_page_to_ram(page) do { } while (0) - -/* - * On spitfire, the icache doesn't snoop local stores and we don't - * use block commit stores (which invalidate icache lines) during - * module load, so we need this. 
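A usage sketch only (not part of this patch) for the sparc64 sched_find_first_bit() added above: the caller maintains a priority bitmap, sets the bit for each runnable priority, and asks for the lowest-numbered one. EX_MAX_PRIO and the bitmap layout are illustrative assumptions, not taken from the scheduler itself.

/* Illustrative priority bitmap; three longs cover 140 priorities on 64-bit. */
#define EX_MAX_PRIO 140
static unsigned long ex_prio_bitmap[3];

static void ex_mark_runnable(int prio)
{
	ex_prio_bitmap[prio / 64] |= 1UL << (prio % 64);
}

static int ex_highest_prio(void)
{
	/* Assumes at least one priority bit is set when this is called. */
	return sched_find_first_bit(ex_prio_bitmap);
}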
- */ -extern void flush_icache_range(unsigned long start, unsigned long end); - -extern void __flush_dcache_page(void *addr, int flush_icache); -extern void __flush_icache_page(unsigned long); -extern void flush_dcache_page_impl(struct page *page); -#ifdef CONFIG_SMP -extern void smp_flush_dcache_page_impl(struct page *page, int cpu); -extern void flush_dcache_page_all(struct mm_struct *mm, struct page *page); -#else -#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) -#define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) -#endif - -extern void flush_dcache_page(struct page *page); - -extern void __flush_dcache_range(unsigned long start, unsigned long end); - -extern void __flush_cache_all(void); - -extern void __flush_tlb_all(void); -extern void __flush_tlb_mm(unsigned long context, unsigned long r); -extern void __flush_tlb_range(unsigned long context, unsigned long start, - unsigned long r, unsigned long end, - unsigned long pgsz, unsigned long size); -extern void __flush_tlb_page(unsigned long context, unsigned long page, unsigned long r); - -#ifndef CONFIG_SMP - -#define flush_cache_all() __flush_cache_all() -#define flush_tlb_all() __flush_tlb_all() - -#define flush_tlb_mm(__mm) \ -do { if(CTX_VALID((__mm)->context)) \ - __flush_tlb_mm(CTX_HWBITS((__mm)->context), SECONDARY_CONTEXT); \ -} while(0) - -#define flush_tlb_range(__vma, start, end) \ -do { if(CTX_VALID((__vma)->vm_mm->context)) { \ - unsigned long __start = (start)&PAGE_MASK; \ - unsigned long __end = PAGE_ALIGN(end); \ - __flush_tlb_range(CTX_HWBITS((__vma)->vm_mm->context), __start, \ - SECONDARY_CONTEXT, __end, PAGE_SIZE, \ - (__end - __start)); \ - } \ -} while(0) - -#define flush_tlb_page(vma, page) \ -do { struct mm_struct *__mm = (vma)->vm_mm; \ - if(CTX_VALID(__mm->context)) \ - __flush_tlb_page(CTX_HWBITS(__mm->context), (page)&PAGE_MASK, \ - SECONDARY_CONTEXT); \ -} while(0) - -#else /* CONFIG_SMP */ - -extern void smp_flush_cache_all(void); -extern void smp_flush_tlb_all(void); -extern void smp_flush_tlb_mm(struct mm_struct *mm); -extern void smp_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, - unsigned long end); -extern void smp_flush_tlb_page(struct mm_struct *mm, unsigned long page); - -#define flush_cache_all() smp_flush_cache_all() -#define flush_tlb_all() smp_flush_tlb_all() -#define flush_tlb_mm(mm) smp_flush_tlb_mm(mm) -#define flush_tlb_range(vma, start, end) \ - smp_flush_tlb_range(vma, start, end) -#define flush_tlb_page(vma, page) \ - smp_flush_tlb_page((vma)->vm_mm, page) - -#endif /* ! CONFIG_SMP */ - #define VPTE_BASE_SPITFIRE 0xfffffffe00000000 #if 1 #define VPTE_BASE_CHEETAH VPTE_BASE_SPITFIRE @@ -106,7 +17,7 @@ #define VPTE_BASE_CHEETAH 0xffe0000000000000 #endif -extern __inline__ void flush_tlb_pgtables(struct mm_struct *mm, unsigned long start, +static __inline__ void flush_tlb_pgtables(struct mm_struct *mm, unsigned long start, unsigned long end) { /* Note the signed type. 
*/ @@ -154,7 +65,7 @@ #ifndef CONFIG_SMP -extern __inline__ void free_pgd_fast(pgd_t *pgd) +static __inline__ void free_pgd_fast(pgd_t *pgd) { struct page *page = virt_to_page(pgd); @@ -169,7 +80,7 @@ preempt_enable(); } -extern __inline__ pgd_t *get_pgd_fast(void) +static __inline__ pgd_t *get_pgd_fast(void) { struct page *ret; @@ -212,7 +123,7 @@ #else /* CONFIG_SMP */ -extern __inline__ void free_pgd_fast(pgd_t *pgd) +static __inline__ void free_pgd_fast(pgd_t *pgd) { preempt_disable(); *(unsigned long *)pgd = (unsigned long) pgd_quicklist; @@ -221,7 +132,7 @@ preempt_enable(); } -extern __inline__ pgd_t *get_pgd_fast(void) +static __inline__ pgd_t *get_pgd_fast(void) { unsigned long *ret; @@ -240,7 +151,7 @@ return (pgd_t *)ret; } -extern __inline__ void free_pgd_slow(pgd_t *pgd) +static __inline__ void free_pgd_slow(pgd_t *pgd) { free_page((unsigned long)pgd); } @@ -257,23 +168,15 @@ #define pgd_populate(MM, PGD, PMD) pgd_set(PGD, PMD) -extern __inline__ pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) -{ - pmd_t *pmd = (pmd_t *)__get_free_page(GFP_KERNEL); - if (pmd) - memset(pmd, 0, PAGE_SIZE); - return pmd; -} - -extern __inline__ pmd_t *pmd_alloc_one_fast(struct mm_struct *mm, unsigned long address) +static __inline__ pmd_t *pmd_alloc_one_fast(struct mm_struct *mm, unsigned long address) { unsigned long *ret; int color = 0; + preempt_disable(); if (pte_quicklist[color] == NULL) color = 1; - preempt_disable(); if((ret = (unsigned long *)pte_quicklist[color]) != NULL) { pte_quicklist[color] = (unsigned long *)(*ret); ret[0] = 0; @@ -284,7 +187,20 @@ return (pmd_t *)ret; } -extern __inline__ void free_pmd_fast(pmd_t *pmd) +static __inline__ pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) +{ + pmd_t *pmd; + + pmd = pmd_alloc_one_fast(mm, address); + if (!pmd) { + pmd = (pmd_t *)__get_free_page(GFP_KERNEL); + if (pmd) + memset(pmd, 0, PAGE_SIZE); + } + return pmd; +} + +static __inline__ void free_pmd_fast(pmd_t *pmd) { unsigned long color = DCACHE_COLOR((unsigned long)pmd); @@ -295,16 +211,19 @@ preempt_enable(); } -extern __inline__ void free_pmd_slow(pmd_t *pmd) +static __inline__ void free_pmd_slow(pmd_t *pmd) { free_page((unsigned long)pmd); } -#define pmd_populate(MM, PMD, PTE) pmd_set(PMD, PTE) +#define pmd_populate_kernel(MM, PMD, PTE) pmd_set(PMD, PTE) +#define pmd_populate(MM,PMD,PTE_PAGE) \ + pmd_populate_kernel(MM,PMD,page_address(PTE_PAGE)) -extern pte_t *pte_alloc_one(struct mm_struct *mm, unsigned long address); +extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address); +#define pte_alloc_one(MM,ADDR) virt_to_page(pte_alloc_one_kernel(MM,ADDR)) -extern __inline__ pte_t *pte_alloc_one_fast(struct mm_struct *mm, unsigned long address) +static __inline__ pte_t *pte_alloc_one_fast(struct mm_struct *mm, unsigned long address) { unsigned long color = VPTE_COLOR(address); unsigned long *ret; @@ -319,7 +238,7 @@ return (pte_t *)ret; } -extern __inline__ void free_pte_fast(pte_t *pte) +static __inline__ void free_pte_fast(pte_t *pte) { unsigned long color = DCACHE_COLOR((unsigned long)pte); @@ -330,16 +249,15 @@ preempt_enable(); } -extern __inline__ void free_pte_slow(pte_t *pte) +static __inline__ void free_pte_slow(pte_t *pte) { free_page((unsigned long)pte); } -#define pte_free(pte) free_pte_fast(pte) +#define pte_free_kernel(pte) free_pte_fast(pte) +#define pte_free(pte) free_pte_fast(page_address(pte)) #define pmd_free(pmd) free_pmd_fast(pmd) #define pgd_free(pgd) free_pgd_fast(pgd) #define pgd_alloc(mm) 
get_pgd_fast() - -extern int do_check_pgt_cache(int, int); #endif /* _SPARC64_PGALLOC_H */ diff -Nru a/include/asm-sparc64/pgtable.h b/include/asm-sparc64/pgtable.h --- a/include/asm-sparc64/pgtable.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-sparc64/pgtable.h Tue Mar 12 13:58:14 2002 @@ -36,6 +36,95 @@ #define LOW_OBP_ADDRESS 0x00000000f0000000 #define HI_OBP_ADDRESS 0x0000000100000000 +#ifndef __ASSEMBLY__ + +/* Cache and TLB flush operations. */ + +/* These are the same regardless of whether this is an SMP kernel or not. */ +#define flush_cache_mm(__mm) \ + do { if ((__mm) == current->mm) flushw_user(); } while(0) +extern void flush_cache_range(struct vm_area_struct *, unsigned long, unsigned long); +#define flush_cache_page(vma, page) \ + flush_cache_mm((vma)->vm_mm) + + +/* + * On spitfire, the icache doesn't snoop local stores and we don't + * use block commit stores (which invalidate icache lines) during + * module load, so we need this. + */ +extern void flush_icache_range(unsigned long start, unsigned long end); + +extern void __flush_dcache_page(void *addr, int flush_icache); +extern void __flush_icache_page(unsigned long); +extern void flush_dcache_page_impl(struct page *page); +#ifdef CONFIG_SMP +extern void smp_flush_dcache_page_impl(struct page *page, int cpu); +extern void flush_dcache_page_all(struct mm_struct *mm, struct page *page); +#else +#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) +#define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) +#endif + +extern void __flush_dcache_range(unsigned long start, unsigned long end); + +extern void __flush_cache_all(void); + +extern void __flush_tlb_all(void); +extern void __flush_tlb_mm(unsigned long context, unsigned long r); +extern void __flush_tlb_range(unsigned long context, unsigned long start, + unsigned long r, unsigned long end, + unsigned long pgsz, unsigned long size); +extern void __flush_tlb_page(unsigned long context, unsigned long page, unsigned long r); + +#ifndef CONFIG_SMP + +#define flush_cache_all() __flush_cache_all() +#define flush_tlb_all() __flush_tlb_all() + +#define flush_tlb_mm(__mm) \ +do { if(CTX_VALID((__mm)->context)) \ + __flush_tlb_mm(CTX_HWBITS((__mm)->context), SECONDARY_CONTEXT); \ +} while(0) + +#define flush_tlb_range(__vma, start, end) \ +do { if(CTX_VALID((__vma)->vm_mm->context)) { \ + unsigned long __start = (start)&PAGE_MASK; \ + unsigned long __end = PAGE_ALIGN(end); \ + __flush_tlb_range(CTX_HWBITS((__vma)->vm_mm->context), __start, \ + SECONDARY_CONTEXT, __end, PAGE_SIZE, \ + (__end - __start)); \ + } \ +} while(0) + +#define flush_tlb_page(vma, page) \ +do { struct mm_struct *__mm = (vma)->vm_mm; \ + if(CTX_VALID(__mm->context)) \ + __flush_tlb_page(CTX_HWBITS(__mm->context), (page)&PAGE_MASK, \ + SECONDARY_CONTEXT); \ +} while(0) + +#else /* CONFIG_SMP */ + +extern void smp_flush_cache_all(void); +extern void smp_flush_tlb_all(void); +extern void smp_flush_tlb_mm(struct mm_struct *mm); +extern void smp_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, + unsigned long end); +extern void smp_flush_tlb_page(struct mm_struct *mm, unsigned long page); + +#define flush_cache_all() smp_flush_cache_all() +#define flush_tlb_all() smp_flush_tlb_all() +#define flush_tlb_mm(mm) smp_flush_tlb_mm(mm) +#define flush_tlb_range(vma, start, end) \ + smp_flush_tlb_range(vma, start, end) +#define flush_tlb_page(vma, page) \ + smp_flush_tlb_page((vma)->vm_mm, page) + +#endif /* ! CONFIG_SMP */ + +#endif /* ! 
__ASSEMBLY__ */ + /* XXX All of this needs to be rethought so we can take advantage * XXX cheetah's full 64-bit virtual address space, ie. no more hole * XXX in the middle like on spitfire. -DaveM @@ -215,7 +304,8 @@ (pmd_val(*(pmdp)) = (__pa((unsigned long) (ptep)) >> 11UL)) #define pgd_set(pgdp, pmdp) \ (pgd_val(*(pgdp)) = (__pa((unsigned long) (pmdp)) >> 11UL)) -#define pmd_page(pmd) ((unsigned long) __va((pmd_val(pmd)<<11UL))) +#define __pmd_page(pmd) ((unsigned long) __va((pmd_val(pmd)<<11UL))) +#define pmd_page(pmd) virt_to_page((void *)__pmd_page(pmd)) #define pgd_page(pgd) ((unsigned long) __va((pgd_val(pgd)<<11UL))) #define pte_none(pte) (!pte_val(pte)) #define pte_present(pte) (pte_val(pte) & _PAGE_PRESENT) @@ -264,8 +354,13 @@ ((address >> PMD_SHIFT) & (REAL_PTRS_PER_PMD-1))) /* Find an entry in the third-level page table.. */ -#define pte_offset(dir, address) ((pte_t *) pmd_page(*(dir)) + \ +#define __pte_offset(dir, address) ((pte_t *) __pmd_page(*(dir)) + \ ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))) +#define pte_offset_kernel __pte_offset +#define pte_offset_map __pte_offset +#define pte_offset_map_nested __pte_offset +#define pte_unmap(pte) do { } while (0) +#define pte_unmap_nested(pte) do { } while (0) extern pgd_t swapper_pg_dir[1]; @@ -312,10 +407,10 @@ return addr & _PAGE_PADDR; if ((addr >= LOW_OBP_ADDRESS) && (addr < HI_OBP_ADDRESS)) return prom_virt_to_phys(addr, 0); - pgdp = pgd_offset_k (addr); - pmdp = pmd_offset (pgdp, addr); - ptep = pte_offset (pmdp, addr); - return pte_val (*ptep) & _PAGE_PADDR; + pgdp = pgd_offset_k(addr); + pmdp = pmd_offset(pgdp, addr); + ptep = pte_offset_kernel(pmdp, addr); + return pte_val(*ptep) & _PAGE_PADDR; } extern __inline__ unsigned long @@ -350,11 +445,18 @@ extern unsigned long get_fb_unmapped_area(struct file *filp, unsigned long, unsigned long, unsigned long, unsigned long); #define HAVE_ARCH_FB_UNMAPPED_AREA -#endif /* !(__ASSEMBLY__) */ - /* * No page table caches to initialise */ #define pgtable_cache_init() do { } while (0) + +extern void check_pgt_cache(void); + +extern void flush_dcache_page(struct page *page); + +/* This is unnecessary on the SpitFire since D-CACHE is write-through. */ +#define flush_page_to_ram(page) do { } while (0) + +#endif /* !(__ASSEMBLY__) */ #endif /* !(_SPARC64_PGTABLE_H) */ diff -Nru a/include/asm-sparc64/pil.h b/include/asm-sparc64/pil.h --- a/include/asm-sparc64/pil.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/pil.h Tue Mar 12 13:58:15 2002 @@ -3,19 +3,24 @@ #define _SPARC64_PIL_H /* To avoid some locking problems, we hard allocate certain PILs - * for SMP cross call messages. cli() does not block the cross - * call delivery, so when SMP locking is an issue we reschedule - * the event into a PIL interrupt which is blocked by cli(). + * for SMP cross call messages that must do a etrap/rtrap. * - * XXX In fact the whole set of PILs used for hardware interrupts - * XXX may be allocated in this manner. All of the devices can - * XXX happily sit at the same PIL. We would then need only two - * XXX PILs, one for devices and one for the CPU local timer tick. + * A cli() does not block the cross call delivery, so when SMP + * locking is an issue we reschedule the event into a PIL interrupt + * which is blocked by cli(). + * + * In fact any XCALL which has to etrap/rtrap has a problem because + * it is difficult to prevent rtrap from running BH's, and that would + * need to be done if the XCALL arrived while %pil==15. 
*/ -#define PIL_MIGRATE 1 +#define PIL_SMP_CALL_FUNC 1 +#define PIL_SMP_RECEIVE_SIGNAL 2 +#define PIL_SMP_CAPTURE 3 #ifndef __ASSEMBLY__ -#define PIL_RESERVED(PIL) ((PIL) == PIL_MIGRATE) +#define PIL_RESERVED(PIL) ((PIL) == PIL_SMP_CALL_FUNC || \ + (PIL) == PIL_SMP_RECEIVE_SIGNAL || \ + (PIL) == PIL_SMP_CAPTURE) #endif #endif /* !(_SPARC64_PIL_H) */ diff -Nru a/include/asm-sparc64/siginfo.h b/include/asm-sparc64/siginfo.h --- a/include/asm-sparc64/siginfo.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/siginfo.h Tue Mar 12 13:58:15 2002 @@ -173,6 +173,7 @@ #define SI_ASYNCIO -4 /* sent by AIO completion */ #define SI_SIGIO -5 /* sent by queued SIGIO */ #define SI_TKILL -6 /* sent by tkill system call */ +#define SI_DETHREAD -7 /* sent by execve() killing subsidiary threads */ #define SI_FROMUSER(siptr) ((siptr)->si_code <= 0) #define SI_FROMKERNEL(siptr) ((siptr)->si_code > 0) diff -Nru a/include/asm-sparc64/system.h b/include/asm-sparc64/system.h --- a/include/asm-sparc64/system.h Tue Mar 12 13:58:14 2002 +++ b/include/asm-sparc64/system.h Tue Mar 12 13:58:14 2002 @@ -172,7 +172,7 @@ * not preserve it's value. Hairy, but it lets us remove 2 loads * and 2 stores in this critical code path. -DaveM */ -#define switch_to(prev, next, last) \ +#define switch_to(prev, next) \ do { CHECK_LOCKS(prev); \ if (test_thread_flag(TIF_PERFCTR)) { \ unsigned long __tmp; \ @@ -193,16 +193,16 @@ "stx %%i6, [%%sp + 2047 + 0x70]\n\t" \ "stx %%i7, [%%sp + 2047 + 0x78]\n\t" \ "rdpr %%wstate, %%o5\n\t" \ - "stx %%o6, [%%g6 + %3]\n\t" \ - "stb %%o5, [%%g6 + %2]\n\t" \ + "stx %%o6, [%%g6 + %2]\n\t" \ + "stb %%o5, [%%g6 + %1]\n\t" \ "rdpr %%cwp, %%o5\n\t" \ - "stb %%o5, [%%g6 + %5]\n\t" \ - "mov %1, %%g6\n\t" \ - "ldub [%1 + %5], %%g1\n\t" \ + "stb %%o5, [%%g6 + %4]\n\t" \ + "mov %0, %%g6\n\t" \ + "ldub [%0 + %4], %%g1\n\t" \ "wrpr %%g1, %%cwp\n\t" \ - "ldx [%%g6 + %3], %%o6\n\t" \ - "ldub [%%g6 + %2], %%o5\n\t" \ - "ldx [%%g6 + %4], %%o7\n\t" \ + "ldx [%%g6 + %2], %%o6\n\t" \ + "ldub [%%g6 + %1], %%o5\n\t" \ + "ldx [%%g6 + %3], %%o7\n\t" \ "mov %%g6, %%l2\n\t" \ "wrpr %%o5, 0x0, %%wstate\n\t" \ "ldx [%%sp + 2047 + 0x70], %%i6\n\t" \ @@ -210,13 +210,13 @@ "wrpr %%g0, 0x94, %%pstate\n\t" \ "mov %%l2, %%g6\n\t" \ "wrpr %%g0, 0x96, %%pstate\n\t" \ - "andcc %%o7, %6, %%g0\n\t" \ + "andcc %%o7, %5, %%g0\n\t" \ "bne,pn %%icc, ret_from_syscall\n\t" \ - " ldx [%%g5 + %7], %0\n\t" \ - : "=&r" (last) \ + " nop\n\t" \ + : /* no outputs */ \ : "r" (next->thread_info), \ "i" (TI_WSTATE), "i" (TI_KSP), "i" (TI_FLAGS), "i" (TI_CWP), \ - "i" (_TIF_NEWCHILD), "i" (TI_TASK) \ + "i" (_TIF_NEWCHILD) \ : "cc", "g1", "g2", "g3", "g5", "g7", \ "l2", "l3", "l4", "l5", "l6", "l7", \ "i0", "i1", "i2", "i3", "i4", "i5", \ diff -Nru a/include/asm-sparc64/thread_info.h b/include/asm-sparc64/thread_info.h --- a/include/asm-sparc64/thread_info.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/thread_info.h Tue Mar 12 13:58:15 2002 @@ -28,6 +28,7 @@ #include #include +#include struct task_struct; struct exec_domain; diff -Nru a/include/asm-sparc64/unistd.h b/include/asm-sparc64/unistd.h --- a/include/asm-sparc64/unistd.h Tue Mar 12 13:58:15 2002 +++ b/include/asm-sparc64/unistd.h Tue Mar 12 13:58:15 2002 @@ -155,7 +155,7 @@ #define __NR_rmdir 137 /* Common */ #define __NR_utimes 138 /* SunOS Specific */ /* #define __NR_stat64 139 Linux sparc32 Specific */ -/* #define __NR_adjtime 140 SunOS Specific */ +#define __NR_sendfile64 140 /* adjtime under SunOS */ #define __NR_getpeername 141 /* Common */ /* #define __NR_gethostid 142 SunOS 
Specific */ #define __NR_gettid 143 /* ENOSYS under SunOS */ diff -Nru a/include/linux/cramfs_fs_sb.h b/include/linux/cramfs_fs_sb.h --- a/include/linux/cramfs_fs_sb.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/cramfs_fs_sb.h Tue Mar 12 13:58:15 2002 @@ -12,4 +12,9 @@ unsigned long flags; }; +static inline struct cramfs_sb_info *CRAMFS_SB(struct super_block *sb) +{ + return sb->u.generic_sbp; +} + #endif diff -Nru a/include/linux/dnotify.h b/include/linux/dnotify.h --- a/include/linux/dnotify.h Tue Mar 12 13:58:14 2002 +++ b/include/linux/dnotify.h Tue Mar 12 13:58:14 2002 @@ -13,6 +13,7 @@ see linux/fcntl.h */ int dn_fd; struct file * dn_filp; + fl_owner_t dn_owner; }; #define DNOTIFY_MAGIC 0x444E4F54 diff -Nru a/include/linux/fs.h b/include/linux/fs.h --- a/include/linux/fs.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/fs.h Tue Mar 12 13:58:15 2002 @@ -92,7 +92,6 @@ * FS_NO_DCACHE is not set. */ #define FS_NOMOUNT 16 /* Never mount from userland */ -#define FS_LITTER 32 /* Keeps the tree in dcache */ #define FS_ODD_RENAME 32768 /* Temporary stuff; will go away as soon * as nfs_rename() will be cleaned up */ @@ -291,7 +290,6 @@ #include /* #include */ #include -#include /* * Attribute flags. These should be or-ed together to figure out what @@ -650,7 +648,6 @@ #define MNT_FORCE 0x00000001 /* Attempt to forcibily umount */ #define MNT_DETACH 0x00000002 /* Just detach from the tree */ -#include #include #include #include @@ -671,7 +668,6 @@ #include #include #include -#include #include extern struct list_head super_blocks; @@ -708,7 +704,6 @@ char s_id[32]; /* Informational name */ union { - struct minix_sb_info minix_sb; struct ext2_sb_info ext2_sb; struct ext3_sb_info ext3_sb; struct hpfs_sb_info hpfs_sb; @@ -731,7 +726,6 @@ struct udf_sb_info udf_sb; struct ncp_sb_info ncpfs_sb; struct jffs2_sb_info jffs2_sb; - struct cramfs_sb_info cramfs_sb; void *generic_sbp; } u; /* @@ -944,6 +938,7 @@ const char *name; int fs_flags; struct super_block *(*get_sb) (struct file_system_type *, int, char *, void *); + void (*kill_sb) (struct super_block *); struct module *owner; struct file_system_type * next; struct list_head fs_supers; @@ -958,6 +953,10 @@ struct super_block *get_sb_nodev(struct file_system_type *fs_type, int flags, void *data, int (*fill_super)(struct super_block *, void *, int)); +void kill_block_super(struct super_block *sb); +void kill_anon_super(struct super_block *sb); +void kill_litter_super(struct super_block *sb); +void deactivate_super(struct super_block *sb); /* Alas, no aliases. Too much hassle with bringing module.h everywhere */ #define fops_get(fops) \ diff -Nru a/include/linux/futex.h b/include/linux/futex.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/linux/futex.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,8 @@ +#ifndef _LINUX_FUTEX_H +#define _LINUX_FUTEX_H + +/* Second argument to futex syscall */ +#define FUTEX_UP (0) +#define FUTEX_DOWN (1) + +#endif diff -Nru a/include/linux/hash.h b/include/linux/hash.h --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/include/linux/hash.h Tue Mar 12 13:58:16 2002 @@ -0,0 +1,58 @@ +#ifndef _LINUX_HASH_H +#define _LINUX_HASH_H +/* Fast hashing routine for a long. + (C) 2002 William Lee Irwin III, IBM */ + +/* + * Knuth recommends primes in approximately golden ratio to the maximum + * integer representable by a machine word for multiplicative hashing. 
+ * Chuck Lever verified the effectiveness of this technique: + * http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf + * + * These primes are chosen to be bit-sparse, that is operations on + * them can use shifts and additions instead of multiplications for + * machines where multiplications are slow. + */ +#if BITS_PER_LONG == 32 +/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */ +#define GOLDEN_RATIO_PRIME 0x9e370001UL +#elif BITS_PER_LONG == 64 +/* 2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */ +#define GOLDEN_RATIO_PRIME 0x9e37fffffffc0001UL +#else +#error Define GOLDEN_RATIO_PRIME for your wordsize. +#endif + +static inline unsigned long hash_long(unsigned long val, unsigned int bits) +{ + unsigned long hash = val; + +#if BITS_PER_LONG == 64 + /* Sigh, gcc can't optimise this alone like it does for 32 bits. */ + unsigned long n = hash; + n <<= 18; + hash -= n; + n <<= 33; + hash -= n; + n <<= 3; + hash += n; + n <<= 3; + hash -= n; + n <<= 4; + hash += n; + n <<= 2; + hash += n; +#else + /* On some cpus multiply is faster, on others gcc will do shifts */ + hash *= GOLDEN_RATIO_PRIME; +#endif + + /* High bits are more random, so use them. */ + return hash >> (BITS_PER_LONG - bits); +} + +static inline unsigned long hash_ptr(void *ptr, unsigned int bits) +{ + return hash_long((unsigned long)ptr, bits); +} +#endif /* _LINUX_HASH_H */ diff -Nru a/include/linux/hdreg.h b/include/linux/hdreg.h --- a/include/linux/hdreg.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/hdreg.h Tue Mar 12 13:58:15 2002 @@ -368,9 +368,7 @@ #define HDIO_SET_NOWERR 0x0325 /* change ignore-write-error flag */ #define HDIO_SET_DMA 0x0326 /* change use-dma flag */ #define HDIO_SET_PIO_MODE 0x0327 /* reconfig interface to new speed */ -#define HDIO_SCAN_HWIF 0x0328 /* register and (re)scan interface */ #define HDIO_SET_NICE 0x0329 /* set nice flags */ -#define HDIO_UNREGISTER_HWIF 0x032a /* unregister interface */ #define HDIO_SET_WCACHE 0x032b /* change write cache enable-disable */ #define HDIO_SET_ACOUSTIC 0x032c /* change acoustic behavior */ #define HDIO_SET_BUSSTATE 0x032d /* set the bus state of the hwif */ @@ -644,18 +642,5 @@ #define IDE_NICE_0 (2) /* when sure that it won't affect us */ #define IDE_NICE_1 (3) /* when probably won't affect us much */ #define IDE_NICE_2 (4) /* when we know it's on our expense */ - -#ifdef __KERNEL__ -/* - * These routines are used for kernel command line parameters from main.c: - */ -#include - -#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_IDE_MODULE) -int ide_register(int io_port, int ctl_port, int irq); -void ide_unregister(unsigned int); -#endif /* CONFIG_BLK_DEV_IDE || CONFIG_BLK_DEV_IDE_MODULE */ - -#endif /* __KERNEL__ */ #endif /* _LINUX_HDREG_H */ diff -Nru a/include/linux/ide.h b/include/linux/ide.h --- a/include/linux/ide.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/ide.h Tue Mar 12 13:58:15 2002 @@ -240,11 +240,6 @@ } hw_regs_t; /* - * Register new hardware with ide - */ -int ide_register_hw(hw_regs_t *hw, struct hwif_s **hwifp); - -/* * Set up hw_regs_t structure before calling ide_register_hw (optional) */ void ide_setup_ports(hw_regs_t *hw, @@ -337,6 +332,7 @@ unsigned autotune : 2; /* 1=autotune, 2=noautotune, 0=default */ unsigned remap_0_to_1 : 2; /* 0=remap if ezdrive, 1=remap, 2=noremap */ unsigned ata_flash : 1; /* 1=present, 0=default */ + unsigned blocked : 1; /* 1=powermanagment told us not to do anything, so sleep nicely */ unsigned addressing; /* : 2; 0=28-bit, 1=48-bit, 2=64-bit */ byte scsi; /* 0=default, 1=skip current 
ide-subdriver for ide-scsi emulation */ select_t select; /* basic drive/head select reg value */ @@ -505,6 +501,12 @@ byte bus_state; /* power state of the IDE bus */ struct device device; /* global device tree handle */ } ide_hwif_t; + +/* + * Register new hardware with ide + */ +extern int ide_register_hw(hw_regs_t *hw, struct hwif_s **hwifp); +extern void ide_unregister(ide_hwif_t *hwif); /* * Status returned from various ide_ functions diff -Nru a/include/linux/if_ec.h b/include/linux/if_ec.h --- a/include/linux/if_ec.h Tue Mar 12 13:58:14 2002 +++ b/include/linux/if_ec.h Tue Mar 12 13:58:14 2002 @@ -53,6 +53,7 @@ unsigned char port; unsigned char station; unsigned char net; + unsigned short num; }; #define ec_sk(__sk) ((struct econet_opt *)(__sk)->protinfo) diff -Nru a/include/linux/if_pppox.h b/include/linux/if_pppox.h --- a/include/linux/if_pppox.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/if_pppox.h Tue Mar 12 13:58:15 2002 @@ -127,6 +127,7 @@ union { struct pppoe_opt pppoe; } proto; + unsigned short num; }; #define pppoe_dev proto.pppoe.dev #define pppoe_pa proto.pppoe.pa diff -Nru a/include/linux/if_vlan.h b/include/linux/if_vlan.h --- a/include/linux/if_vlan.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/if_vlan.h Tue Mar 12 13:58:15 2002 @@ -52,70 +52,16 @@ unsigned short h_vlan_encapsulated_proto; /* packet type ID field (or len) */ }; -/* Find a VLAN device by the MAC address of it's Ethernet device, and - * it's VLAN ID. The default configuration is to have VLAN's scope - * to be box-wide, so the MAC will be ignored. The mac will only be - * looked at if we are configured to have a seperate set of VLANs per - * each MAC addressable interface. Note that this latter option does - * NOT follow the spec for VLANs, but may be useful for doing very - * large quantities of VLAN MUX/DEMUX onto FrameRelay or ATM PVCs. - */ -struct net_device *find_802_1Q_vlan_dev(struct net_device* real_dev, - unsigned short VID); /* vlan.c */ +#define VLAN_VID_MASK 0xfff /* found in af_inet.c */ extern int (*vlan_ioctl_hook)(unsigned long arg); -/* found in vlan_dev.c */ -struct net_device_stats* vlan_dev_get_stats(struct net_device* dev); -int vlan_dev_rebuild_header(struct sk_buff *skb); -int vlan_skb_recv(struct sk_buff *skb, struct net_device *dev, - struct packet_type* ptype); -int vlan_dev_hard_header(struct sk_buff *skb, struct net_device *dev, - unsigned short type, void *daddr, void *saddr, - unsigned len); -int vlan_dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev); -int vlan_dev_change_mtu(struct net_device *dev, int new_mtu); -int vlan_dev_set_mac_address(struct net_device *dev, void* addr); -int vlan_dev_open(struct net_device* dev); -int vlan_dev_stop(struct net_device* dev); -int vlan_dev_init(struct net_device* dev); -void vlan_dev_destruct(struct net_device* dev); -void vlan_dev_copy_and_sum(struct sk_buff *dest, unsigned char *src, - int length, int base); -int vlan_dev_set_ingress_priority(char* dev_name, __u32 skb_prio, short vlan_prio); -int vlan_dev_set_egress_priority(char* dev_name, __u32 skb_prio, short vlan_prio); -int vlan_dev_set_vlan_flag(char* dev_name, __u32 flag, short flag_val); - -/* VLAN multicast stuff */ -/* Delete all of the MC list entries from this vlan device. Also deals - * with the underlying device... - */ -void vlan_flush_mc_list(struct net_device* dev); -/* copy the mc_list into the vlan_info structure. 
*/ -void vlan_copy_mc_list(struct dev_mc_list* mc_list, struct vlan_dev_info* vlan_info); -/** dmi is a single entry into a dev_mc_list, a single node. mc_list is - * an entire list, and we'll iterate through it. - */ -int vlan_should_add_mc(struct dev_mc_list *dmi, struct dev_mc_list *mc_list); -/** Taken from Gleb + Lennert's VLAN code, and modified... */ -void vlan_dev_set_multicast_list(struct net_device *vlan_dev); - -int vlan_collection_add_vlan(struct vlan_collection* vc, unsigned short vlan_id, - unsigned short flags); -int vlan_collection_remove_vlan(struct vlan_collection* vc, - struct net_device* vlan_dev); -int vlan_collection_remove_vlan_id(struct vlan_collection* vc, unsigned short vlan_id); - -/* found in vlan.c */ -/* Our listing of VLAN group(s) */ -extern struct vlan_group* p802_1Q_vlan_list; - #define VLAN_NAME "vlan" /* if this changes, algorithm will have to be reworked because this * depends on completely exhausting the VLAN identifier space. Thus - * it gives constant time look-up, but it many cases it wastes memory. + * it gives constant time look-up, but in many cases it wastes memory. */ #define VLAN_GROUP_ARRAY_LEN 4096 @@ -170,56 +116,73 @@ /* inline functions */ -/* Used in vlan_skb_recv */ -static inline struct sk_buff *vlan_check_reorder_header(struct sk_buff *skb) +static inline struct net_device_stats *vlan_dev_get_stats(struct net_device *dev) { - if (VLAN_DEV_INFO(skb->dev)->flags & 1) { - skb = skb_share_check(skb, GFP_ATOMIC); - if (skb) { - /* Lifted from Gleb's VLAN code... */ - memmove(skb->data - ETH_HLEN, - skb->data - VLAN_ETH_HLEN, 12); - skb->mac.raw += VLAN_HLEN; - } - } - - return skb; + return &(VLAN_DEV_INFO(dev)->dev_stats); } -static inline unsigned short vlan_dev_get_egress_qos_mask(struct net_device* dev, - struct sk_buff* skb) +static inline __u32 vlan_get_ingress_priority(struct net_device *dev, + unsigned short vlan_tag) { - struct vlan_priority_tci_mapping *mp = - VLAN_DEV_INFO(dev)->egress_priority_map[(skb->priority & 0xF)]; + struct vlan_dev_info *vip = VLAN_DEV_INFO(dev); - while (mp) { - if (mp->priority == skb->priority) { - return mp->vlan_qos; /* This should already be shifted to mask - * correctly with the VLAN's TCI - */ - } - mp = mp->next; - } - return 0; + return vip->ingress_priority_map[(vlan_tag >> 13) & 0x7]; } -static inline int vlan_dmi_equals(struct dev_mc_list *dmi1, - struct dev_mc_list *dmi2) -{ - return ((dmi1->dmi_addrlen == dmi2->dmi_addrlen) && - (memcmp(dmi1->dmi_addr, dmi2->dmi_addr, dmi1->dmi_addrlen) == 0)); -} +/* VLAN tx hw acceleration helpers. */ +struct vlan_skb_tx_cookie { + u32 magic; + u32 vlan_tag; +}; -static inline void vlan_destroy_mc_list(struct dev_mc_list *mc_list) +#define VLAN_TX_COOKIE_MAGIC 0x564c414e /* "VLAN" in ascii. */ +#define VLAN_TX_SKB_CB(__skb) ((struct vlan_skb_tx_cookie *)&((__skb)->cb[0])) +#define vlan_tx_tag_present(__skb) \ + (VLAN_TX_SKB_CB(__skb)->magic == VLAN_TX_COOKIE_MAGIC) +#define vlan_tx_tag_get(__skb) (VLAN_TX_SKB_CB(__skb)->vlan_tag) + +/* VLAN rx hw acceleration helper. This acts like netif_rx(). */ +static inline int vlan_hwaccel_rx(struct sk_buff *skb, struct vlan_group *grp, + unsigned short vlan_tag) { - struct dev_mc_list *dmi = mc_list; - struct dev_mc_list *next; + struct net_device_stats *stats; - while(dmi) { - next = dmi->next; - kfree(dmi); - dmi = next; + skb->dev = grp->vlan_devices[vlan_tag & VLAN_VID_MASK]; + if (skb->dev == NULL) { + kfree_skb(skb); + + /* Not NET_RX_DROP, this is not being dropped + * due to congestion. 
+ */ + return 0; } + + skb->dev->last_rx = jiffies; + + stats = vlan_dev_get_stats(skb->dev); + stats->rx_packets++; + stats->rx_bytes += skb->len; + + skb->priority = vlan_get_ingress_priority(skb->dev, vlan_tag); + switch (skb->pkt_type) { + case PACKET_BROADCAST: + break; + + case PACKET_MULTICAST: + stats->multicast++; + break; + + case PACKET_OTHERHOST: + /* Our lower layer thinks this is not local, let's make sure. + * This allows the VLAN to have a different MAC than the underlying + * device, and still route correctly. + */ + if (!memcmp(skb->mac.ethernet->h_dest, skb->dev->dev_addr, ETH_ALEN)) + skb->pkt_type = PACKET_HOST; + break; + }; + + return netif_rx(skb); } #endif /* __KERNEL__ */ diff -Nru a/include/linux/ip.h b/include/linux/ip.h --- a/include/linux/ip.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/ip.h Tue Mar 12 13:58:15 2002 @@ -116,17 +116,24 @@ #define optlength(opt) (sizeof(struct ip_options) + opt->optlen) struct inet_opt { + /* Socket demultiplex comparisons on incoming packets. */ + __u32 daddr; /* Foreign IPv4 addr */ + __u32 rcv_saddr; /* Bound local IPv4 addr */ + __u16 dport; /* Destination port */ + __u16 num; /* Local port */ + __u32 saddr; /* Sending source */ int ttl; /* TTL setting */ int tos; /* TOS */ unsigned cmsg_flags; struct ip_options *opt; + __u16 sport; /* Source port */ unsigned char hdrincl; /* Include headers ? */ __u8 mc_ttl; /* Multicasting TTL */ __u8 mc_loop; /* Loopback */ + __u8 pmtudisc; + __u16 id; /* ID counter for DF pkts */ unsigned recverr : 1, freebind : 1; - __u16 id; /* ID counter for DF pkts */ - __u8 pmtudisc; int mc_index; /* Multicast device index */ __u32 mc_addr; struct ip_mc_socklist *mc_list; /* Group array */ diff -Nru a/include/linux/jffs2.h b/include/linux/jffs2.h --- a/include/linux/jffs2.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/jffs2.h Tue Mar 12 13:58:15 2002 @@ -1,7 +1,7 @@ /* * JFFS2 -- Journalling Flash File System, Version 2. * - * Copyright (C) 2001 Red Hat, Inc. + * Copyright (C) 2001, 2002 Red Hat, Inc. * * Created by David Woodhouse * @@ -31,14 +31,13 @@ * provisions above, a recipient may use your version of this file * under either the RHEPL or the GPL. * - * $Id: jffs2.h,v 1.19 2001/10/09 13:20:23 dwmw2 Exp $ + * $Id: jffs2.h,v 1.23 2002/02/21 17:03:45 dwmw2 Exp $ * */ #ifndef __LINUX_JFFS2_H__ #define __LINUX_JFFS2_H__ -#include #define JFFS2_SUPER_MAGIC 0x72b6 /* Values we may expect to find in the 'magic' field */ @@ -78,16 +77,12 @@ #define JFFS2_NODETYPE_DIRENT (JFFS2_FEATURE_INCOMPAT | JFFS2_NODE_ACCURATE | 1) #define JFFS2_NODETYPE_INODE (JFFS2_FEATURE_INCOMPAT | JFFS2_NODE_ACCURATE | 2) #define JFFS2_NODETYPE_CLEANMARKER (JFFS2_FEATURE_RWCOMPAT_DELETE | JFFS2_NODE_ACCURATE | 3) +#define JFFS2_NODETYPE_PADDING (JFFS2_FEATURE_RWCOMPAT_DELETE | JFFS2_NODE_ACCURATE | 4) // Maybe later... //#define JFFS2_NODETYPE_CHECKPOINT (JFFS2_FEATURE_RWCOMPAT_DELETE | JFFS2_NODE_ACCURATE | 3) //#define JFFS2_NODETYPE_OPTIONS (JFFS2_FEATURE_RWCOMPAT_COPY | JFFS2_NODE_ACCURATE | 4) -/* Same as the non_ECC versions, but with extra space for real - * ECC instead of just the checksum. 
For use on NAND flash - */ -//#define JFFS2_NODETYPE_DIRENT_ECC (JFFS2_FEATURE_INCOMPAT | JFFS2_NODE_ACCURATE | 5) -//#define JFFS2_NODETYPE_INODE_ECC (JFFS2_FEATURE_INCOMPAT | JFFS2_NODE_ACCURATE | 6) #define JFFS2_INO_FLAG_PREREAD 1 /* Do read_inode() for this one at mount time, don't wait for it to @@ -99,28 +94,28 @@ struct jffs2_unknown_node { /* All start like this */ - __u16 magic; - __u16 nodetype; - __u32 totlen; /* So we can skip over nodes we don't grok */ - __u32 hdr_crc; + uint16_t magic; + uint16_t nodetype; + uint32_t totlen; /* So we can skip over nodes we don't grok */ + uint32_t hdr_crc; } __attribute__((packed)); struct jffs2_raw_dirent { - __u16 magic; - __u16 nodetype; /* == JFFS_NODETYPE_DIRENT */ - __u32 totlen; - __u32 hdr_crc; - __u32 pino; - __u32 version; - __u32 ino; /* == zero for unlink */ - __u32 mctime; - __u8 nsize; - __u8 type; - __u8 unused[2]; - __u32 node_crc; - __u32 name_crc; - __u8 name[0]; + uint16_t magic; + uint16_t nodetype; /* == JFFS_NODETYPE_DIRENT */ + uint32_t totlen; + uint32_t hdr_crc; + uint32_t pino; + uint32_t version; + uint32_t ino; /* == zero for unlink */ + uint32_t mctime; + uint8_t nsize; + uint8_t type; + uint8_t unused[2]; + uint32_t node_crc; + uint32_t name_crc; + uint8_t name[0]; } __attribute__((packed)); /* The JFFS2 raw inode structure: Used for storage on physical media. */ @@ -131,28 +126,28 @@ */ struct jffs2_raw_inode { - __u16 magic; /* A constant magic number. */ - __u16 nodetype; /* == JFFS_NODETYPE_INODE */ - __u32 totlen; /* Total length of this node (inc data, etc.) */ - __u32 hdr_crc; - __u32 ino; /* Inode number. */ - __u32 version; /* Version number. */ - __u32 mode; /* The file's type or mode. */ - __u16 uid; /* The file's owner. */ - __u16 gid; /* The file's group. */ - __u32 isize; /* Total resultant size of this inode (used for truncations) */ - __u32 atime; /* Last access time. */ - __u32 mtime; /* Last modification time. */ - __u32 ctime; /* Change time. */ - __u32 offset; /* Where to begin to write. */ - __u32 csize; /* (Compressed) data size */ - __u32 dsize; /* Size of the node's data. (after decompression) */ - __u8 compr; /* Compression algorithm used */ - __u8 usercompr; /* Compression algorithm requested by the user */ - __u16 flags; /* See JFFS2_INO_FLAG_* */ - __u32 data_crc; /* CRC for the (compressed) data. */ - __u32 node_crc; /* CRC for the raw inode (excluding data) */ -// __u8 data[dsize]; + uint16_t magic; /* A constant magic number. */ + uint16_t nodetype; /* == JFFS_NODETYPE_INODE */ + uint32_t totlen; /* Total length of this node (inc data, etc.) */ + uint32_t hdr_crc; + uint32_t ino; /* Inode number. */ + uint32_t version; /* Version number. */ + uint32_t mode; /* The file's type or mode. */ + uint16_t uid; /* The file's owner. */ + uint16_t gid; /* The file's group. */ + uint32_t isize; /* Total resultant size of this inode (used for truncations) */ + uint32_t atime; /* Last access time. */ + uint32_t mtime; /* Last modification time. */ + uint32_t ctime; /* Change time. */ + uint32_t offset; /* Where to begin to write. */ + uint32_t csize; /* (Compressed) data size */ + uint32_t dsize; /* Size of the node's data. (after decompression) */ + uint8_t compr; /* Compression algorithm used */ + uint8_t usercompr; /* Compression algorithm requested by the user */ + uint16_t flags; /* See JFFS2_INO_FLAG_* */ + uint32_t data_crc; /* CRC for the (compressed) data. 
*/ + uint32_t node_crc; /* CRC for the raw inode (excluding data) */ +// uint8_t data[dsize]; } __attribute__((packed)); union jffs2_node_union { diff -Nru a/include/linux/jffs2_fs_i.h b/include/linux/jffs2_fs_i.h --- a/include/linux/jffs2_fs_i.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/jffs2_fs_i.h Tue Mar 12 13:58:15 2002 @@ -1,22 +1,11 @@ -/* $Id: jffs2_fs_i.h,v 1.8 2001/04/18 13:05:28 dwmw2 Exp $ */ +/* $Id: jffs2_fs_i.h,v 1.12 2002/03/06 13:59:21 dwmw2 Exp $ */ #ifndef _JFFS2_FS_I #define _JFFS2_FS_I -/* Include the pipe_inode_info at the beginning so that we can still - use the storage space in the inode when we have a pipe inode. - This sucks. -*/ - -#undef THISSUCKS /* Only for 2.2 */ -#ifdef THISSUCKS -#include -#endif +#include struct jffs2_inode_info { -#ifdef THISSUCKS - struct pipe_inode_info pipecrap; -#endif /* We need an internal semaphore similar to inode->i_sem. Unfortunately, we can't used the existing one, because either the GC would deadlock, or we'd have to release it @@ -26,7 +15,7 @@ struct semaphore sem; /* The highest (datanode) version number used for this ino */ - __u32 highest_version; + uint32_t highest_version; /* List of data fragments which make up the file */ struct jffs2_node_frag *fraglist; @@ -44,23 +33,11 @@ /* Some stuff we just have to keep in-core at all times, for each inode. */ struct jffs2_inode_cache *inocache; - /* Keep a pointer to the last physical node in the list. We don't - use the doubly-linked lists because we don't want to increase - the memory usage that much. This is simpler */ - // struct jffs2_raw_node_ref *lastnode; - __u16 flags; - __u8 usercompr; + uint16_t flags; + uint8_t usercompr; +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,5,2) struct inode vfs_inode; -}; - -#ifdef JFFS2_OUT_OF_KERNEL -#define JFFS2_INODE_INFO(i) ((struct jffs2_inode_info *) &(i)->u) -#else -static inline struct jffs2_inode_info *JFFS2_INODE_INFO(struct inode *inode) -{ - return list_entry(inode, struct jffs2_inode_info, vfs_inode); -} #endif +}; #endif /* _JFFS2_FS_I */ - diff -Nru a/include/linux/jffs2_fs_sb.h b/include/linux/jffs2_fs_sb.h --- a/include/linux/jffs2_fs_sb.h Tue Mar 12 13:58:14 2002 +++ b/include/linux/jffs2_fs_sb.h Tue Mar 12 13:58:14 2002 @@ -1,4 +1,4 @@ -/* $Id: jffs2_fs_sb.h,v 1.16.2.1 2002/02/23 14:13:34 dwmw2 Exp $ */ +/* $Id: jffs2_fs_sb.h,v 1.25 2002/03/08 15:11:24 dwmw2 Exp $ */ #ifndef _JFFS2_FS_SB #define _JFFS2_FS_SB @@ -9,7 +9,7 @@ #include #include -#define INOCACHE_HASHSIZE 1 +#define INOCACHE_HASHSIZE 14 #define JFFS2_SB_FLAG_RO 1 #define JFFS2_SB_FLAG_MOUNTING 2 @@ -21,36 +21,30 @@ struct jffs2_sb_info { struct mtd_info *mtd; - __u32 highest_ino; + uint32_t highest_ino; unsigned int flags; - spinlock_t nodelist_lock; - // pid_t thread_pid; /* GC thread's PID */ struct task_struct *gc_task; /* GC task struct */ struct semaphore gc_thread_start; /* GC thread start mutex */ struct completion gc_thread_exit; /* GC thread exit completion port */ - // __u32 gc_minfree_threshold; /* GC trigger thresholds */ - // __u32 gc_maxdirty_threshold; struct semaphore alloc_sem; /* Used to protect all the following fields, and also to protect against out-of-order writing of nodes. And GC. 
*/ - __u32 flash_size; - __u32 used_size; - __u32 dirty_size; - __u32 free_size; - __u32 erasing_size; - __u32 bad_size; - __u32 sector_size; - // __u32 min_free_size; - // __u32 max_chunk_size; + uint32_t flash_size; + uint32_t used_size; + uint32_t dirty_size; + uint32_t free_size; + uint32_t erasing_size; + uint32_t bad_size; + uint32_t sector_size; - __u32 nr_free_blocks; - __u32 nr_erasing_blocks; + uint32_t nr_free_blocks; + uint32_t nr_erasing_blocks; - __u32 nr_blocks; + uint32_t nr_blocks; struct jffs2_eraseblock *blocks; /* The whole array of blocks. Used for getting blocks * from the offset (blocks[ofs / sector_size]) */ struct jffs2_eraseblock *nextblock; /* The block we're currently filling */ @@ -59,8 +53,10 @@ struct list_head clean_list; /* Blocks 100% full of clean data */ struct list_head dirty_list; /* Blocks with some dirty space */ + struct list_head erasable_list; /* Blocks which are completely dirty, and need erasing */ + struct list_head erasable_pending_wbuf_list; /* Blocks which need erasing but only after the current wbuf is flushed */ struct list_head erasing_list; /* Blocks which are currently erasing */ - struct list_head erase_pending_list; /* Blocks which need erasing */ + struct list_head erase_pending_list; /* Blocks which need erasing now */ struct list_head erase_complete_list; /* Blocks which are erased and need the clean marker written to them */ struct list_head free_list; /* Blocks which are free and ready to be used */ struct list_head bad_list; /* Bad blocks. */ @@ -69,16 +65,22 @@ spinlock_t erase_completion_lock; /* Protect free_list and erasing_list against erase completion handler */ wait_queue_head_t erase_wait; /* For waiting for erases to complete */ + struct jffs2_inode_cache *inocache_list[INOCACHE_HASHSIZE]; spinlock_t inocache_lock; + /* This _really_ speeds up mounts. */ + struct jffs2_inode_cache *inocache_last; + + /* Sem to allow jffs2_garbage_collect_deletion_dirent to + drop the erase_completion_lock while it's holding a pointer + to an obsoleted node. I don't like this. Alternatives welcomed. */ + struct semaphore erase_free_sem; + + /* Write-behind buffer for NAND flash */ + unsigned char *wbuf; + uint32_t wbuf_ofs; + uint32_t wbuf_len; + uint32_t wbuf_pagesize; }; - -#ifdef JFFS2_OUT_OF_KERNEL -#define JFFS2_SB_INFO(sb) ((struct jffs2_sb_info *) &(sb)->u) -#else -#define JFFS2_SB_INFO(sb) (&sb->u.jffs2_sb) -#endif - -#define OFNI_BS_2SFFJ(c) ((struct super_block *) ( ((char *)c) - ((char *)(&((struct super_block *)NULL)->u)) ) ) #endif /* _JFFS2_FB_SB */ diff -Nru a/include/linux/minix_fs.h b/include/linux/minix_fs.h --- a/include/linux/minix_fs.h Tue Mar 12 13:58:14 2002 +++ b/include/linux/minix_fs.h Tue Mar 12 13:58:14 2002 @@ -32,7 +32,7 @@ #define MINIX_V1 0x0001 /* original minix fs */ #define MINIX_V2 0x0002 /* minix V2 fs */ -#define INODE_VERSION(inode) inode->i_sb->u.minix_sb.s_version +#define INODE_VERSION(inode) minix_sb(inode->i_sb)->s_version /* * This is the original minix inode layout on disk. 
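The minix change above (and the cramfs change earlier in this patch) replaces direct use of a dedicated member of the super_block union with a small inline accessor that returns sb->u.generic_sbp. A minimal sketch of that pattern for a hypothetical "foofs" — the names foofs_sb_info, foofs_sb() and foofs_fill_super() are illustrative only and not part of this patch:

	#include <linux/fs.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	struct foofs_sb_info {
		unsigned long s_some_field;	/* whatever the fs keeps per mount */
	};

	static inline struct foofs_sb_info *foofs_sb(struct super_block *sb)
	{
		/* private info hangs off the generic pointer, not a union member */
		return sb->u.generic_sbp;
	}

	static int foofs_fill_super(struct super_block *sb, void *data, int silent)
	{
		struct foofs_sb_info *sbi;

		sbi = kmalloc(sizeof(*sbi), GFP_KERNEL);
		if (!sbi)
			return -ENOMEM;
		memset(sbi, 0, sizeof(*sbi));
		sb->u.generic_sbp = sbi;

		/* from here on, use foofs_sb(sb)->s_some_field instead of
		 * sb->u.foofs_sb.s_some_field */
		return 0;
	}

The INODE_VERSION() conversion above shows the same idea in practice: macros and helpers that used to dereference the union member now go through the accessor, which is what lets the per-filesystem union entries be removed from struct super_block later in this patch.
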
@@ -90,6 +90,7 @@ #ifdef __KERNEL__ #include +#include /* * change the define below to 0 if you want names > info->s_namelen chars to be @@ -130,6 +131,11 @@ extern struct file_operations minix_file_operations; extern struct file_operations minix_dir_operations; extern struct dentry_operations minix_dentry_operations; + +static inline struct minix_sb_info *minix_sb(struct super_block *sb) +{ + return sb->u.generic_sbp; +} static inline struct minix_inode_info *minix_i(struct inode *inode) { diff -Nru a/include/linux/mm.h b/include/linux/mm.h --- a/include/linux/mm.h Tue Mar 12 13:58:14 2002 +++ b/include/linux/mm.h Tue Mar 12 13:58:14 2002 @@ -104,7 +104,7 @@ #define VM_DONTEXPAND 0x00040000 /* Cannot expand with mremap() */ #define VM_RESERVED 0x00080000 /* Don't unmap it from swap_out */ -#define VM_STACK_FLAGS 0x00000177 +#define VM_STACK_FLAGS (0x00000100 | VM_DATA_DEFAULT_FLAGS) #define VM_READHINTMASK (VM_SEQ_READ | VM_RAND_READ) #define VM_ClearReadHint(v) (v)->vm_flags &= ~VM_READHINTMASK diff -Nru a/include/linux/mmzone.h b/include/linux/mmzone.h --- a/include/linux/mmzone.h Tue Mar 12 13:58:14 2002 +++ b/include/linux/mmzone.h Tue Mar 12 13:58:14 2002 @@ -51,8 +51,7 @@ /* * wait_table -- the array holding the hash table * wait_table_size -- the size of the hash table array - * wait_table_shift -- wait_table_size - * == BITS_PER_LONG (1 << wait_table_bits) + * wait_table_bits -- wait_table_size == (1 << wait_table_bits) * * The purpose of all these is to keep track of the people * waiting for a page to become available and make them @@ -75,7 +74,7 @@ */ wait_queue_head_t * wait_table; unsigned long wait_table_size; - unsigned long wait_table_shift; + unsigned long wait_table_bits; /* * Discontig memory support fields. diff -Nru a/include/linux/netdevice.h b/include/linux/netdevice.h --- a/include/linux/netdevice.h Tue Mar 12 13:58:16 2002 +++ b/include/linux/netdevice.h Tue Mar 12 13:58:16 2002 @@ -40,6 +40,7 @@ #endif struct divert_blk; +struct vlan_group; #define HAVE_ALLOC_NETDEV /* feature macro: alloc_xxxdev functions are available. */ @@ -357,6 +358,10 @@ #define NETIF_F_DYNALLOC 16 /* Self-dectructable device. */ #define NETIF_F_HIGHDMA 32 /* Can DMA to high memory. */ #define NETIF_F_FRAGLIST 64 /* Scatter/gather IO. */ +#define NETIF_F_HW_VLAN_TX 128 /* Transmit VLAN hw acceleration */ +#define NETIF_F_HW_VLAN_RX 256 /* Receive VLAN hw acceleration */ +#define NETIF_F_HW_VLAN_FILTER 512 /* Receive filtering on VLAN */ +#define NETIF_F_VLAN_CHALLENGED 1024 /* Device cannot handle VLAN packets */ /* Called after device is detached from network. 
*/ void (*uninit)(struct net_device *dev); @@ -397,6 +402,13 @@ #define HAVE_TX_TIMEOUT void (*tx_timeout) (struct net_device *dev); + + void (*vlan_rx_register)(struct net_device *dev, + struct vlan_group *grp); + void (*vlan_rx_add_vid)(struct net_device *dev, + unsigned short vid); + void (*vlan_rx_kill_vid)(struct net_device *dev, + unsigned short vid); int (*hard_header_parse)(struct sk_buff *skb, unsigned char *haddr); diff -Nru a/include/linux/netfilter_ipv4/ip_conntrack.h b/include/linux/netfilter_ipv4/ip_conntrack.h --- a/include/linux/netfilter_ipv4/ip_conntrack.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/netfilter_ipv4/ip_conntrack.h Tue Mar 12 13:58:15 2002 @@ -82,10 +82,7 @@ #endif #include - -#if defined(CONFIG_IP_NF_IRC) || defined(CONFIG_IP_NF_IRC_MODULE) #include -#endif struct ip_conntrack { @@ -125,9 +122,7 @@ union { struct ip_ct_ftp ct_ftp_info; -#if defined(CONFIG_IP_NF_IRC) || defined(CONFIG_IP_NF_IRC_MODULE) struct ip_ct_irc ct_irc_info; -#endif } help; #ifdef CONFIG_IP_NF_NAT_NEEDED diff -Nru a/include/linux/nfsd/export.h b/include/linux/nfsd/export.h --- a/include/linux/nfsd/export.h Tue Mar 12 13:58:14 2002 +++ b/include/linux/nfsd/export.h Tue Mar 12 13:58:14 2002 @@ -39,7 +39,8 @@ #define NFSEXP_NOSUBTREECHECK 0x0400 #define NFSEXP_NOAUTHNLM 0x0800 /* Don't authenticate NLM requests - just trust */ #define NFSEXP_MSNFS 0x1000 /* do silly things that MS clients expect */ -#define NFSEXP_ALLFLAGS 0x1FFF +#define NFSEXP_FSID 0x2000 +#define NFSEXP_ALLFLAGS 0x3FFF #ifdef __KERNEL__ @@ -55,11 +56,13 @@ struct in_addr cl_addr[NFSCLNT_ADDRMAX]; struct svc_uidmap * cl_umap; struct list_head cl_export[NFSCLNT_EXPMAX]; + struct list_head cl_expfsid[NFSCLNT_EXPMAX]; struct list_head cl_list; }; struct svc_export { struct list_head ex_hash; + struct list_head ex_fsid_hash; struct list_head ex_list; char ex_path[NFS_MAXPATHLEN+1]; struct svc_export * ex_parent; @@ -71,6 +74,7 @@ ino_t ex_ino; uid_t ex_anon_uid; gid_t ex_anon_gid; + int ex_fsid; }; #define EX_SECURE(exp) (!((exp)->ex_flags & NFSEXP_INSECURE_PORT)) @@ -91,6 +95,7 @@ struct svc_client * exp_getclient(struct sockaddr_in *sin); void exp_putclient(struct svc_client *clp); struct svc_export * exp_get(struct svc_client *clp, kdev_t dev, ino_t ino); +struct svc_export * exp_get_fsid(struct svc_client *clp, int fsid); struct svc_export * exp_get_by_name(struct svc_client *clp, struct vfsmount *mnt, struct dentry *dentry); diff -Nru a/include/linux/pci_ids.h b/include/linux/pci_ids.h --- a/include/linux/pci_ids.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/pci_ids.h Tue Mar 12 13:58:15 2002 @@ -383,11 +383,14 @@ #define PCI_DEVICE_ID_AMD_VIPER_7411 0x7411 #define PCI_DEVICE_ID_AMD_VIPER_7413 0x7413 #define PCI_DEVICE_ID_AMD_VIPER_7414 0x7414 -#define PCI_DEVICE_ID_AMD_VIPER_7440 0x7440 -#define PCI_DEVICE_ID_AMD_VIPER_7441 0x7441 -#define PCI_DEVICE_ID_AMD_VIPER_7443 0x7443 -#define PCI_DEVICE_ID_AMD_VIPER_7448 0x7448 -#define PCI_DEVICE_ID_AMD_VIPER_7449 0x7449 +#define PCI_DEVICE_ID_AMD_OPUS_7440 0x7440 +#define PCI_DEVICE_ID_AMD_OPUS_7441 0x7441 +#define PCI_DEVICE_ID_AMD_OPUS_7443 0x7443 +#define PCI_DEVICE_ID_AMD_OPUS_7448 0x7448 +#define PCI_DEVICE_ID_AMD_OPUS_7449 0x7449 +#define PCI_DEVICE_ID_AMD_8111_LAN 0x7462 +#define PCI_DEVICE_ID_AMD_8111_IDE 0x7469 +#define PCI_DEVICE_ID_AMD_8111_AUDIO 0x746d #define PCI_VENDOR_ID_TRIDENT 0x1023 #define PCI_DEVICE_ID_TRIDENT_4DWAVE_DX 0x2000 diff -Nru a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h --- a/include/linux/rtnetlink.h Tue Mar 12 13:58:16 2002 +++ 
b/include/linux/rtnetlink.h Tue Mar 12 13:58:16 2002 @@ -440,12 +440,14 @@ #define IFLA_COST IFLA_COST IFLA_PRIORITY, #define IFLA_PRIORITY IFLA_PRIORITY - IFLA_MASTER + IFLA_MASTER, #define IFLA_MASTER IFLA_MASTER + IFLA_WIRELESS, /* Wireless Extension event - see wireless.h */ +#define IFLA_WIRELESS IFLA_WIRELESS }; -#define IFLA_MAX IFLA_MASTER +#define IFLA_MAX IFLA_WIRELESS #define IFLA_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct ifinfomsg)))) #define IFLA_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct ifinfomsg)) diff -Nru a/include/linux/videodev.h b/include/linux/videodev.h --- a/include/linux/videodev.h Tue Mar 12 13:58:15 2002 +++ b/include/linux/videodev.h Tue Mar 12 13:58:15 2002 @@ -4,6 +4,18 @@ #include #include +#if 0 +/* + * v4l2 is still work-in-progress, integration planed for 2.5.x + * v4l2 project homepage: http://www.thedirks.org/v4l2/ + * patches available from: http://bytesex.org/patches/ + */ +# define HAVE_V4L2 1 +# include +#else +# undef HAVE_V4L2 +#endif + #ifdef __KERNEL__ #include @@ -13,24 +25,25 @@ struct video_device { struct module *owner; - char name[32]; - int type; + char name[32]; + int type; /* v4l1 */ + int type2; /* v4l2 */ int hardware; + int minor; - int (*open)(struct video_device *, int mode); - void (*close)(struct video_device *); - long (*read)(struct video_device *, char *, unsigned long, int noblock); - /* Do we need a write method ? */ - long (*write)(struct video_device *, const char *, unsigned long, int noblock); -#if LINUX_VERSION_CODE >= 0x020100 - unsigned int (*poll)(struct video_device *, struct file *, poll_table *); -#endif - int (*ioctl)(struct video_device *, unsigned int , void *); - int (*mmap)(struct vm_area_struct *vma, struct video_device *, const char *, unsigned long); - int (*initialize)(struct video_device *); + /* new interface -- we will use file_operations directly + * like soundcore does. + * kernel_ioctl() will be called by video_generic_ioctl. 
+ * video_generic_ioctl() does the userspace copying of the + * ioctl arguments */ + struct file_operations *fops; + int (*kernel_ioctl)(struct inode *inode, struct file *file, + unsigned int cmd, void *arg); void *priv; /* Used to be 'private' but that upsets C++ */ - int busy; - int minor; + + /* for videodev.c intenal usage -- don't touch */ + int users; + struct semaphore lock; devfs_handle_t devfs_handle; }; @@ -43,8 +56,13 @@ #define VFL_TYPE_VTX 3 extern void video_unregister_device(struct video_device *); -#endif +extern struct video_device* video_devdata(struct file*); +extern int video_exclusive_open(struct inode *inode, struct file *file); +extern int video_exclusive_release(struct inode *inode, struct file *file); +extern int video_generic_ioctl(struct inode *inode, struct file *file, + unsigned int cmd, unsigned long arg); +#endif /* __KERNEL__ */ #define VID_TYPE_CAPTURE 1 /* Can capture */ #define VID_TYPE_TUNER 2 /* Can tune */ @@ -150,6 +168,7 @@ #define VIDEO_AUDIO_VOLUME 4 #define VIDEO_AUDIO_BASS 8 #define VIDEO_AUDIO_TREBLE 16 +#define VIDEO_AUDIO_BALANCE 32 char name[16]; #define VIDEO_SOUND_MONO 1 #define VIDEO_SOUND_STEREO 2 @@ -379,4 +398,10 @@ #define VID_HARDWARE_MEYE 32 /* Sony Vaio MotionEye cameras */ #define VID_HARDWARE_CPIA2 33 -#endif +#endif /* __LINUX_VIDEODEV_H */ + +/* + * Local variables: + * c-basic-offset: 8 + * End: + */ diff -Nru a/include/linux/wireless.h b/include/linux/wireless.h --- a/include/linux/wireless.h Tue Mar 12 13:58:14 2002 +++ b/include/linux/wireless.h Tue Mar 12 13:58:14 2002 @@ -1,10 +1,10 @@ /* * This file define a set of standard wireless extensions * - * Version : 13 6.12.01 + * Version : 14 25.1.02 * * Authors : Jean Tourrilhes - HPL - - * Copyright (c) 1997-2001 Jean Tourrilhes, All Rights Reserved. + * Copyright (c) 1997-2002 Jean Tourrilhes, All Rights Reserved. */ #ifndef _LINUX_WIRELESS_H @@ -40,7 +40,7 @@ * # include/linux/netdevice.h (one place) * # include/linux/proc_fs.h (one place) * - * New driver API (2001 -> onward) : + * New driver API (2002 -> onward) : * ------------------------------- * This file is only concerned with the user space API and common definitions. * The new driver API is defined and documented in : @@ -49,6 +49,11 @@ * Note as well that /proc/net/wireless implementation has now moved in : * # include/linux/wireless.c * + * Wireless Events (2002 -> onward) : + * -------------------------------- + * Events are defined at the end of this file, and implemented in : + * # include/linux/wireless.c + * * Other comments : * -------------- * Do not add here things that are redundant with other mechanisms @@ -75,7 +80,7 @@ * (there is some stuff that will be added in the future...) * I just plan to increment with each new version. */ -#define WIRELESS_EXT 13 +#define WIRELESS_EXT 14 /* * Changes : @@ -141,6 +146,13 @@ * - Document creation of new driver API. * - Extract union iwreq_data from struct iwreq (for new driver API). 
* - Rename SIOCSIWNAME as SIOCSIWCOMMIT + * + * V13 to V14 + * ---------- + * - Wireless Events support : define struct iw_event + * - Define additional specific event numbers + * - Add "addr" and "param" fields in union iwreq_data + * - AP scanning stuff (SIOCSIWSCAN and friends) */ /**************************** CONSTANTS ****************************/ @@ -175,6 +187,8 @@ #define SIOCSIWAP 0x8B14 /* set access point MAC addresses */ #define SIOCGIWAP 0x8B15 /* get access point MAC addresses */ #define SIOCGIWAPLIST 0x8B17 /* get list of access point in range */ +#define SIOCSIWSCAN 0x8B18 /* trigger scanning */ +#define SIOCGIWSCAN 0x8B19 /* get scanning results */ /* 802.11 specific support */ #define SIOCSIWESSID 0x8B1A /* set ESSID (network name) */ @@ -238,6 +252,15 @@ #define IW_IS_SET(cmd) (!((cmd) & 0x1)) #define IW_IS_GET(cmd) ((cmd) & 0x1) +/* ----------------------- WIRELESS EVENTS ----------------------- */ +/* Those are *NOT* ioctls, do not issue request on them !!! */ +/* Most events use the same identifier as ioctl requests */ + +#define IWEVTXDROP 0x8C00 /* Packet dropped to excessive retry */ +#define IWEVQUAL 0x8C01 /* Quality part of statistics */ + +#define IWEVFIRST 0x8C00 + /* ------------------------- PRIVATE INFO ------------------------- */ /* * The following is used with SIOCGIWPRIV. It allow a driver to define @@ -340,6 +363,19 @@ #define IW_RETRY_MAX 0x0002 /* Value is a maximum */ #define IW_RETRY_RELATIVE 0x0004 /* Value is not in seconds/ms/us */ +/* Scanning request flags */ +#define IW_SCAN_DEFAULT 0x0000 /* Default scan of the driver */ +#define IW_SCAN_ALL_ESSID 0x0001 /* Scan all ESSIDs */ +#define IW_SCAN_THIS_ESSID 0x0002 /* Scan only this ESSID */ +#define IW_SCAN_ALL_FREQ 0x0004 /* Scan all Frequencies */ +#define IW_SCAN_THIS_FREQ 0x0008 /* Scan only this Frequency */ +#define IW_SCAN_ALL_MODE 0x0010 /* Scan all Modes */ +#define IW_SCAN_THIS_MODE 0x0020 /* Scan only this Mode */ +#define IW_SCAN_ALL_RATE 0x0040 /* Scan all Bit-Rates */ +#define IW_SCAN_THIS_RATE 0x0080 /* Scan only this Bit-Rate */ +/* Maximum size of returned data */ +#define IW_SCAN_MAX_DATA 4096 /* In bytes */ + /****************************** TYPES ******************************/ /* --------------------------- SUBTYPES --------------------------- */ @@ -466,9 +502,12 @@ struct iw_point encoding; /* Encoding stuff : tokens */ struct iw_param power; /* PM duration/timeout */ + struct iw_quality qual; /* Quality part of statistics */ struct sockaddr ap_addr; /* Access point address */ + struct sockaddr addr; /* Destination address (hw) */ + struct iw_param param; /* Other small parameters */ struct iw_point data; /* Other large parameters */ }; @@ -595,5 +634,36 @@ __u16 get_args; /* Type and number of args */ char name[IFNAMSIZ]; /* Name of the extension */ }; + +/* ----------------------- WIRELESS EVENTS ----------------------- */ +/* + * Wireless events are carried through the rtnetlink socket to user + * space. They are encapsulated in the IFLA_WIRELESS field of + * a RTM_NEWLINK message. + */ + +/* + * A Wireless Event. Contains basically the same data as the ioctl... 
+ */ +struct iw_event +{ + __u16 len; /* Real lenght of this stuff */ + __u16 cmd; /* Wireless IOCTL */ + union iwreq_data u; /* IOCTL fixed payload */ +}; + +/* Size of the Event prefix (including padding and alignement junk) */ +#define IW_EV_LCP_LEN (sizeof(struct iw_event) - sizeof(union iwreq_data)) +/* Size of the various events */ +#define IW_EV_CHAR_LEN (IW_EV_LCP_LEN + IFNAMSIZ) +#define IW_EV_UINT_LEN (IW_EV_LCP_LEN + sizeof(__u32)) +#define IW_EV_FREQ_LEN (IW_EV_LCP_LEN + sizeof(struct iw_freq)) +#define IW_EV_POINT_LEN (IW_EV_LCP_LEN + sizeof(struct iw_point)) +#define IW_EV_PARAM_LEN (IW_EV_LCP_LEN + sizeof(struct iw_param)) +#define IW_EV_ADDR_LEN (IW_EV_LCP_LEN + sizeof(struct sockaddr)) +#define IW_EV_QUAL_LEN (IW_EV_LCP_LEN + sizeof(struct iw_quality)) + +/* Note : in the case of iw_point, the extra data will come at the + * end of the event */ #endif /* _LINUX_WIRELESS_H */ diff -Nru a/include/net/ip.h b/include/net/ip.h --- a/include/net/ip.h Tue Mar 12 13:58:15 2002 +++ b/include/net/ip.h Tue Mar 12 13:58:15 2002 @@ -197,7 +197,8 @@ * does not change, they drop every other packet in * a TCP stream using header compression. */ - iph->id = (sk && sk->daddr) ? htons(inet_sk(sk)->id++) : 0; + iph->id = (sk && inet_sk(sk)->daddr) ? + htons(inet_sk(sk)->id++) : 0; } else __ip_select_ident(iph, dst); } diff -Nru a/include/net/iw_handler.h b/include/net/iw_handler.h --- a/include/net/iw_handler.h Tue Mar 12 13:58:15 2002 +++ b/include/net/iw_handler.h Tue Mar 12 13:58:15 2002 @@ -1,10 +1,10 @@ /* * This file define the new driver API for Wireless Extensions * - * Version : 2 6.12.01 + * Version : 3 17.1.02 * * Authors : Jean Tourrilhes - HPL - - * Copyright (c) 2001 Jean Tourrilhes, All Rights Reserved. + * Copyright (c) 2001-2002 Jean Tourrilhes, All Rights Reserved. */ #ifndef _IW_HANDLER_H @@ -33,7 +33,7 @@ * o The user space interface is tied to ioctl because of the use * copy_to/from_user. * - * New driver API (2001 -> onward) : + * New driver API (2002 -> onward) : * ------------------------------- * The new driver API is just a bunch of standard functions (handlers), * each handling a specific Wireless Extension. The driver just export @@ -206,7 +206,18 @@ * will be needed... * I just plan to increment with each new version. */ -#define IW_HANDLER_VERSION 2 +#define IW_HANDLER_VERSION 3 + +/* + * Changes : + * + * V2 to V3 + * -------- + * - Move event definition in + * - Add Wireless Event support : + * o wireless_send_event() prototype + * o iwe_stream_add_event/point() inline functions + */ /**************************** CONSTANTS ****************************/ @@ -225,6 +236,7 @@ #define IW_HEADER_TYPE_POINT 6 /* struct iw_point */ #define IW_HEADER_TYPE_PARAM 7 /* struct iw_param */ #define IW_HEADER_TYPE_ADDR 8 /* struct sockaddr */ +#define IW_HEADER_TYPE_QUAL 9 /* struct iw_quality */ /* Handling flags */ /* Most are not implemented. I just use them as a reminder of some @@ -303,25 +315,6 @@ * 'struct net_device' to here, to minimise bloat. */ }; -/* ----------------------- WIRELESS EVENTS ----------------------- */ -/* - * Currently we don't support events, so let's just plan for the - * future... - */ - -/* - * A Wireless Event. - */ -// How do we define short header ? We don't want a flag on length. -// Probably a flag on event ? Highest bit to zero... 
-struct iw_event -{ - __u16 length; /* Lenght of this stuff */ - __u16 event; /* Wireless IOCTL */ - union iwreq_data header; /* IOCTL fixed payload */ - char extra[0]; /* Optional IOCTL data */ -}; - /* ---------------------- IOCTL DESCRIPTION ---------------------- */ /* * One of the main goal of the new interface is to deal entirely with @@ -369,6 +362,88 @@ extern int wireless_process_ioctl(struct ifreq *ifr, unsigned int cmd); /* Second : functions that may be called by driver modules */ -/* None yet */ -#endif /* _LINUX_WIRELESS_H */ +/* Send a single event to user space */ +extern void wireless_send_event(struct net_device * dev, + unsigned int cmd, + union iwreq_data * wrqu, + char * extra); + +/* We may need a function to send a stream of events to user space. + * More on that later... */ + +/************************* INLINE FUNTIONS *************************/ +/* + * Function that are so simple that it's more efficient inlining them + */ + +/*------------------------------------------------------------------*/ +/* + * Wrapper to add an Wireless Event to a stream of events. + */ +static inline char * +iwe_stream_add_event(char * stream, /* Stream of events */ + char * ends, /* End of stream */ + struct iw_event *iwe, /* Payload */ + int event_len) /* Real size of payload */ +{ + /* Check if it's possible */ + if((stream + event_len) < ends) { + iwe->len = event_len; + memcpy(stream, (char *) iwe, event_len); + stream += event_len; + } + return stream; +} + +/*------------------------------------------------------------------*/ +/* + * Wrapper to add an short Wireless Event containing a pointer to a + * stream of events. + */ +static inline char * +iwe_stream_add_point(char * stream, /* Stream of events */ + char * ends, /* End of stream */ + struct iw_event *iwe, /* Payload */ + char * extra) +{ + int event_len = IW_EV_POINT_LEN + iwe->u.data.length; + /* Check if it's possible */ + if((stream + event_len) < ends) { + iwe->len = event_len; + memcpy(stream, (char *) iwe, IW_EV_POINT_LEN); + memcpy(stream + IW_EV_POINT_LEN, extra, iwe->u.data.length); + stream += event_len; + } + return stream; +} + +/*------------------------------------------------------------------*/ +/* + * Wrapper to add a value to a Wireless Event in a stream of events. + * Be careful, this one is tricky to use properly : + * At the first run, you need to have (value = event + IW_EV_LCP_LEN). + */ +static inline char * +iwe_stream_add_value(char * event, /* Event in the stream */ + char * value, /* Value in event */ + char * ends, /* End of stream */ + struct iw_event *iwe, /* Payload */ + int event_len) /* Real size of payload */ +{ + /* Don't duplicate LCP */ + event_len -= IW_EV_LCP_LEN; + + /* Check if it's possible */ + if((value + event_len) < ends) { + /* Add new value */ + memcpy(value, (char *) iwe + IW_EV_LCP_LEN, event_len); + value += event_len; + /* Patch LCP */ + iwe->len = value - event; + memcpy(event, (char *) iwe, IW_EV_LCP_LEN); + } + return value; +} + +#endif /* _IW_HANDLER_H */ diff -Nru a/include/net/sock.h b/include/net/sock.h --- a/include/net/sock.h Tue Mar 12 13:58:15 2002 +++ b/include/net/sock.h Tue Mar 12 13:58:15 2002 @@ -83,28 +83,22 @@ } while(0); struct sock { - /* Socket demultiplex comparisons on incoming packets. 
*/ - __u32 daddr; /* Foreign IPv4 addr */ - __u32 rcv_saddr; /* Bound local IPv4 addr */ - __u16 dport; /* Destination port */ - unsigned short num; /* Local port */ - int bound_dev_if; /* Bound device index if != 0 */ - + /* Begin of struct sock/struct tcp_tw_bucket shared layout */ + volatile unsigned char state, /* Connection state */ + zapped; /* ax25 & ipx means !linked */ + unsigned char reuse; /* SO_REUSEADDR setting */ + unsigned char shutdown; + int bound_dev_if; /* Bound device index if != 0 */ /* Main hash linkage for various protocol lookup tables. */ struct sock *next; struct sock **pprev; struct sock *bind_next; struct sock **bind_pprev; - - volatile unsigned char state, /* Connection state */ - zapped; /* In ax25 & ipx means not linked */ - __u16 sport; /* Source port */ - - unsigned short family; /* Address family */ - unsigned char reuse; /* SO_REUSEADDR setting */ - unsigned char shutdown; atomic_t refcnt; /* Reference count */ - + unsigned short family; /* Address family */ + /* End of struct sock/struct tcp_tw_bucket shared layout */ + unsigned char use_write_queue; + unsigned char userlocks; socket_lock_t lock; /* Synchronizer... */ int rcvbuf; /* Size of receive buffer in bytes */ @@ -118,7 +112,6 @@ atomic_t omem_alloc; /* "o" is "option" or "other" */ int wmem_queued; /* Persistent queue size */ int forward_alloc; /* Space allocated forward. */ - __u32 saddr; /* Sending source */ unsigned int allocation; /* Allocation mode */ int sndbuf; /* Size of send buffer in bytes */ struct sock *prev; @@ -137,9 +130,7 @@ bsdism; unsigned char debug; unsigned char rcvtstamp; - unsigned char use_write_queue; - unsigned char userlocks; - /* Hole of 3 bytes. Try to pack. */ + /* Hole of 1 byte. Try to pack. */ int route_caps; int proc; unsigned long lingertime; @@ -759,16 +750,13 @@ #define SOCK_MIN_SNDBUF 2048 #define SOCK_MIN_RCVBUF 256 -/* Must be less or equal SOCK_MIN_SNDBUF */ -#define SOCK_MIN_WRITE_SPACE SOCK_MIN_SNDBUF /* * Default write policy as shown to user space via poll/select/SIGIO - * Kernel internally doesn't use the MIN_WRITE_SPACE threshold. */ static inline int sock_writeable(struct sock *sk) { - return sock_wspace(sk) >= SOCK_MIN_WRITE_SPACE; + return atomic_read(&sk->wmem_alloc) < (sk->sndbuf / 2); } static inline int gfp_any(void) diff -Nru a/include/net/tcp.h b/include/net/tcp.h --- a/include/net/tcp.h Tue Mar 12 13:58:14 2002 +++ b/include/net/tcp.h Tue Mar 12 13:58:14 2002 @@ -53,7 +53,7 @@ * 2) If all sockets have sk->reuse set, and none of them are in * TCP_LISTEN state, the port may be shared. * Failing that, goto test 3. - * 3) If all sockets are bound to a specific sk->rcv_saddr local + * 3) If all sockets are bound to a specific inet_sk(sk)->rcv_saddr local * address, and none of them are the same, the port may be * shared. * Failing this, the port cannot be shared. @@ -162,23 +162,26 @@ * XXX Yes I know this is gross, but I'd have to edit every single * XXX networking file if I created a "struct sock_header". -DaveM */ - __u32 daddr; - __u32 rcv_saddr; - __u16 dport; - unsigned short num; + volatile unsigned char state, /* Connection state */ + substate; /* "zapped" -> "substate" */ + unsigned char reuse; /* SO_REUSEADDR setting */ + unsigned char rcv_wscale; /* also TW bucket specific */ int bound_dev_if; + /* Main hash linkage for various protocol lookup tables. 
*/ struct sock *next; struct sock **pprev; struct sock *bind_next; struct sock **bind_pprev; - unsigned char state, - substate; /* "zapped" is replaced with "substate" */ - __u16 sport; - unsigned short family; - unsigned char reuse, - rcv_wscale; /* It is also TW bucket specific */ atomic_t refcnt; - + unsigned short family; + /* End of struct sock/struct tcp_tw_bucket shared layout */ + __u16 sport; + /* Socket demultiplex comparisons on incoming packets. */ + /* these five are in inet_opt */ + __u32 daddr; + __u32 rcv_saddr; + __u16 dport; + __u16 num; /* And these are ours. */ int hashent; int timeout; @@ -236,20 +239,20 @@ __u64 __name = (((__u64)(__daddr))<<32)|((__u64)(__saddr)); #endif /* __BIG_ENDIAN */ #define TCP_IPV4_MATCH(__sk, __cookie, __saddr, __daddr, __ports, __dif)\ - (((*((__u64 *)&((__sk)->daddr)))== (__cookie)) && \ - ((*((__u32 *)&((__sk)->dport)))== (__ports)) && \ + (((*((__u64 *)&(inet_sk(__sk)->daddr)))== (__cookie)) && \ + ((*((__u32 *)&(inet_sk(__sk)->dport)))== (__ports)) && \ (!((__sk)->bound_dev_if) || ((__sk)->bound_dev_if == (__dif)))) #else /* 32-bit arch */ #define TCP_V4_ADDR_COOKIE(__name, __saddr, __daddr) #define TCP_IPV4_MATCH(__sk, __cookie, __saddr, __daddr, __ports, __dif)\ - (((__sk)->daddr == (__saddr)) && \ - ((__sk)->rcv_saddr == (__daddr)) && \ - ((*((__u32 *)&((__sk)->dport)))== (__ports)) && \ + ((inet_sk(__sk)->daddr == (__saddr)) && \ + (inet_sk(__sk)->rcv_saddr == (__daddr)) && \ + ((*((__u32 *)&(inet_sk(__sk)->dport)))== (__ports)) && \ (!((__sk)->bound_dev_if) || ((__sk)->bound_dev_if == (__dif)))) #endif /* 64-bit arch */ #define TCP_IPV6_MATCH(__sk, __saddr, __daddr, __ports, __dif) \ - (((*((__u32 *)&((__sk)->dport)))== (__ports)) && \ + (((*((__u32 *)&(inet_sk(__sk)->dport)))== (__ports)) && \ ((__sk)->family == AF_INET6) && \ !ipv6_addr_cmp(&inet6_sk(__sk)->daddr, (__saddr)) && \ !ipv6_addr_cmp(&inet6_sk(__sk)->rcv_saddr, (__daddr)) && \ @@ -263,7 +266,7 @@ static __inline__ int tcp_sk_listen_hashfn(struct sock *sk) { - return tcp_lhashfn(sk->num); + return tcp_lhashfn(inet_sk(sk)->num); } #define MAX_TCP_HEADER (128 + MAX_HEADER) diff -Nru a/include/net/udp.h b/include/net/udp.h --- a/include/net/udp.h Tue Mar 12 13:58:16 2002 +++ b/include/net/udp.h Tue Mar 12 13:58:16 2002 @@ -23,6 +23,7 @@ #define _UDP_H #include +#include #include #define UDP_HTABLE_SIZE 128 @@ -41,7 +42,7 @@ struct sock *sk = udp_hash[num & (UDP_HTABLE_SIZE - 1)]; for(; sk != NULL; sk = sk->next) { - if(sk->num == num) + if (inet_sk(sk)->num == num) return 1; } return 0; diff -Nru a/kernel/Makefile b/kernel/Makefile --- a/kernel/Makefile Tue Mar 12 13:58:15 2002 +++ b/kernel/Makefile Tue Mar 12 13:58:15 2002 @@ -15,7 +15,7 @@ obj-y = sched.o dma.o fork.o exec_domain.o panic.o printk.o \ module.o exit.o itimer.o info.o time.o softirq.o resource.o \ sysctl.o acct.o capability.o ptrace.o timer.o user.o \ - signal.o sys.o kmod.o context.o + signal.o sys.o kmod.o context.o futex.o obj-$(CONFIG_UID16) += uid16.o obj-$(CONFIG_MODULES) += ksyms.o diff -Nru a/kernel/acct.c b/kernel/acct.c --- a/kernel/acct.c Tue Mar 12 13:58:14 2002 +++ b/kernel/acct.c Tue Mar 12 13:58:14 2002 @@ -72,14 +72,30 @@ /* * External references and all of the globals. */ - -static volatile int acct_active; -static volatile int acct_needcheck; -static struct file *acct_file; -static struct timer_list acct_timer; static void do_acct_process(long, struct file *); /* + * This structure is used so that all the data protected by lock + * can be placed in the same cache line as the lock. 
This primes + * the cache line to have the data after getting the lock. + */ +struct acct_glbs { + spinlock_t lock; + volatile int active; + volatile int needcheck; + struct file *file; + struct timer_list timer; +}; + +static struct acct_glbs acct_globals __cacheline_aligned = {SPIN_LOCK_UNLOCKED}; + +#define acct_lock acct_globals.lock +#define acct_active acct_globals.active +#define acct_needcheck acct_globals.needcheck +#define acct_file acct_globals.file +#define acct_timer acct_globals.timer + +/* * Called whenever the timer says to check the free space. */ static void acct_timeout(unsigned long unused) @@ -96,11 +112,11 @@ int res; int act; - lock_kernel(); + spin_lock(&acct_lock); res = acct_active; if (!file || !acct_needcheck) goto out; - unlock_kernel(); + spin_unlock(&acct_lock); /* May block */ if (vfs_statfs(file->f_dentry->d_inode->i_sb, &sbuf)) @@ -117,7 +133,7 @@ * If some joker switched acct_file under us we'ld better be * silent and _not_ touch anything. */ - lock_kernel(); + spin_lock(&acct_lock); if (file != acct_file) { if (act) res = act>0; @@ -142,22 +158,26 @@ add_timer(&acct_timer); res = acct_active; out: - unlock_kernel(); + spin_unlock(&acct_lock); return res; } /* - * sys_acct() is the only system call needed to implement process - * accounting. It takes the name of the file where accounting records - * should be written. If the filename is NULL, accounting will be - * shutdown. + * acct_common() is the main routine that implements process accounting. + * It takes the name of the file where accounting records should be + * written. If the filename is NULL, accounting will be shutdown. */ -asmlinkage long sys_acct(const char *name) +long acct_common(const char *name, int locked) { struct file *file = NULL, *old_acct = NULL; char *tmp; int error; + /* + * Should only have locked set when name is NULL (enforce this). + */ + BUG_ON(locked && name); + if (!capable(CAP_SYS_PACCT)) return -EPERM; @@ -183,7 +203,8 @@ } error = 0; - lock_kernel(); + if (!locked) + spin_lock(&acct_lock); if (acct_file) { old_acct = acct_file; del_timer(&acct_timer); @@ -201,7 +222,7 @@ acct_timer.expires = jiffies + ACCT_TIMEOUT*HZ; add_timer(&acct_timer); } - unlock_kernel(); + spin_unlock(&acct_lock); if (old_acct) { do_acct_process(0,old_acct); filp_close(old_acct, NULL); @@ -213,12 +234,25 @@ goto out; } +/* + * sys_acct() is the only system call needed to implement process + * accounting. It takes the name of the file where accounting records + * should be written. If the filename is NULL, accounting will be + * shutdown. + */ +asmlinkage long sys_acct(const char *name) +{ + return (acct_common(name, 0)); +} + void acct_auto_close(struct super_block *sb) { - lock_kernel(); - if (acct_file && acct_file->f_dentry->d_inode->i_sb == sb) - sys_acct(NULL); - unlock_kernel(); + spin_lock(&acct_lock); + if (acct_file && acct_file->f_dentry->d_inode->i_sb == sb) { + (void) acct_common(NULL, 1); + } else { + spin_unlock(&acct_lock); + } } /* @@ -277,6 +311,7 @@ struct acct ac; mm_segment_t fs; unsigned long vsize; + unsigned long flim; /* * First check to see if there is enough free_space to continue @@ -338,8 +373,14 @@ */ fs = get_fs(); set_fs(KERNEL_DS); + /* + * Accounting records are not subject to resource limits. 
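
The acct_glbs layout introduced above is an instance of a general pattern: keep a lock and the data it protects on one cache line, so taking the lock also warms the protected fields. A generic sketch of the same idea (illustrative only; stats_glbs and its fields are invented):

struct stats_glbs {
	spinlock_t	lock;		/* taken first...                 */
	unsigned long	events;		/* ...and these ride along in the */
	unsigned long	errors;		/* same cache line                */
};

static struct stats_glbs stats_globals __cacheline_aligned =
	{SPIN_LOCK_UNLOCKED};

static void stats_bump(int err)
{
	spin_lock(&stats_globals.lock);
	stats_globals.events++;
	if (err)
		stats_globals.errors++;
	spin_unlock(&stats_globals.lock);
}
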
+ */ + flim = current->rlim[RLIMIT_FSIZE].rlim_cur; + current->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY; file->f_op->write(file, (char *)&ac, sizeof(struct acct), &file->f_pos); + current->rlim[RLIMIT_FSIZE].rlim_cur = flim; set_fs(fs); } @@ -349,15 +390,15 @@ int acct_process(long exitcode) { struct file *file = NULL; - lock_kernel(); + spin_lock(&acct_lock); if (acct_file) { file = acct_file; get_file(file); - unlock_kernel(); - do_acct_process(exitcode, acct_file); + spin_unlock(&acct_lock); + do_acct_process(exitcode, file); fput(file); } else - unlock_kernel(); + spin_unlock(&acct_lock); return 0; } diff -Nru a/kernel/futex.c b/kernel/futex.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/kernel/futex.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,232 @@ +/* + * Fast Userspace Mutexes (which I call "Futexes!"). + * (C) Rusty Russell, IBM 2002 + * + * Thanks to Ben LaHaise for yelling "hashed waitqueues" loudly + * enough at me, Linus for the original (flawed) idea, Matthew + * Kirkwood for proof-of-concept implementation. + * + * "The futexes are also cursed." + * "But they come in a choice of three flavours!" + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* These mutexes are a very simple counter: the winner is the one who + decrements from 1 to 0. The counter starts at 1 when the lock is + free. A value other than 0 or 1 means someone may be sleeping. + This is simple enough to work on all architectures, but has the + problem that if we never "up" the semaphore it could eventually + wrap around. */ + +/* FIXME: This may be way too small. --RR */ +#define FUTEX_HASHBITS 6 + +/* We use this instead of a normal wait_queue_t, so we can wake only + the relevent ones (hashed queues may be shared) */ +struct futex_q { + struct list_head list; + struct task_struct *task; + /* Page struct and offset within it. */ + struct page *page; + unsigned int offset; +}; + +/* The key for the hash is the address + index + offset within page */ +static struct list_head futex_queues[1<page == page && this->offset == offset) { + wake_up_process(this->task); + break; + } + } + spin_unlock(&futex_lock); +} + +/* Add at end to avoid starvation */ +static inline void queue_me(struct list_head *head, + struct futex_q *q, + struct page *page, + unsigned int offset) +{ + q->task = current; + q->page = page; + q->offset = offset; + + spin_lock(&futex_lock); + list_add_tail(&q->list, head); + spin_unlock(&futex_lock); +} + +static inline void unqueue_me(struct futex_q *q) +{ + spin_lock(&futex_lock); + list_del(&q->list); + spin_unlock(&futex_lock); +} + +/* Get kernel address of the user page and pin it. 
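
Half of the counter protocol described above lives in userspace: the uncontended case never enters the kernel. A rough sketch of the expected user side follows; the futex() syscall wrapper is an assumption, FUTEX_DOWN/FUTEX_UP are the constants used by the patch, and atomic_t/atomic_dec_and_test stand in for whatever atomic primitive the C library really provides:

static void futex_lock(atomic_t *f)	/* counter starts at 1 when free */
{
	if (atomic_dec_and_test(f))	/* 1 -> 0: we own it, no syscall  */
		return;
	futex(f, FUTEX_DOWN);		/* contended: sleep in the kernel */
}

static void futex_unlock(atomic_t *f)
{
	futex(f, FUTEX_UP);		/* kernel resets the count to 1
					   and wakes one waiter           */
}

A real library would also short-circuit the uncontended unlock in userspace; the sketch calls FUTEX_UP unconditionally for brevity.
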
*/ +static struct page *pin_page(unsigned long page_start) +{ + struct mm_struct *mm = current->mm; + struct page *page; + int err; + + down_read(&mm->mmap_sem); + err = get_user_pages(current, current->mm, page_start, + 1 /* one page */, + 1 /* writable */, + 0 /* don't force */, + &page, + NULL /* don't return vmas */); + up_read(&mm->mmap_sem); + + if (err < 0) + return ERR_PTR(err); + return page; +} + +/* Try to decrement the user count to zero. */ +static int decrement_to_zero(struct page *page, unsigned int offset) +{ + atomic_t *count; + int ret = 0; + + count = kmap(page) + offset; + /* If we take the semaphore from 1 to 0, it's ours. If it's + zero, decrement anyway, to indicate we are waiting. If + it's negative, don't decrement so we don't wrap... */ + if (atomic_read(count) >= 0 && atomic_dec_and_test(count)) + ret = 1; + kunmap(page); + return ret; +} + +/* Simplified from arch/ppc/kernel/semaphore.c: Paul M. is a genius. */ +static int futex_down(struct list_head *head, struct page *page, int offset) +{ + int retval = 0; + struct futex_q q; + + current->state = TASK_INTERRUPTIBLE; + queue_me(head, &q, page, offset); + + while (!decrement_to_zero(page, offset)) { + if (signal_pending(current)) { + retval = -EINTR; + break; + } + schedule(); + current->state = TASK_INTERRUPTIBLE; + } + current->state = TASK_RUNNING; + unqueue_me(&q); + /* If we were signalled, we might have just been woken: we + must wake another one. Otherwise we need to wake someone + else (if they are waiting) so they drop the count below 0, + and when we "up" in userspace, we know there is a + waiter. */ + wake_one_waiter(head, page, offset); + return retval; +} + +static int futex_up(struct list_head *head, struct page *page, int offset) +{ + atomic_t *count; + + count = kmap(page) + offset; + atomic_set(count, 1); + smp_wmb(); + kunmap(page); + wake_one_waiter(head, page, offset); + return 0; +} + +asmlinkage int sys_futex(void *uaddr, int op) +{ + int ret; + unsigned long pos_in_page; + struct list_head *head; + struct page *page; + + pos_in_page = ((unsigned long)uaddr) % PAGE_SIZE; + + /* Must be "naturally" aligned, and not on page boundary. */ + if ((pos_in_page % __alignof__(atomic_t)) != 0 + || pos_in_page + sizeof(atomic_t) > PAGE_SIZE) + return -EINVAL; + + /* Simpler if it doesn't vanish underneath us. */ + page = pin_page((unsigned long)uaddr - pos_in_page); + if (IS_ERR(page)) + return PTR_ERR(page); + + head = hash_futex(page, pos_in_page); + switch (op) { + case FUTEX_UP: + ret = futex_up(head, page, pos_in_page); + break; + case FUTEX_DOWN: + ret = futex_down(head, page, pos_in_page); + break; + /* Add other lock types here... */ + default: + ret = -EINVAL; + } + put_page(page); + + return ret; +} + +static int __init init(void) +{ + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(futex_queues); i++) + INIT_LIST_HEAD(&futex_queues[i]); + return 0; +} +__initcall(init); diff -Nru a/kernel/ksyms.c b/kernel/ksyms.c --- a/kernel/ksyms.c Tue Mar 12 13:58:14 2002 +++ b/kernel/ksyms.c Tue Mar 12 13:58:14 2002 @@ -286,8 +286,12 @@ EXPORT_SYMBOL(fd_install); EXPORT_SYMBOL(put_unused_fd); EXPORT_SYMBOL(get_sb_bdev); +EXPORT_SYMBOL(kill_block_super); EXPORT_SYMBOL(get_sb_nodev); EXPORT_SYMBOL(get_sb_single); +EXPORT_SYMBOL(kill_anon_super); +EXPORT_SYMBOL(kill_litter_super); +EXPORT_SYMBOL(deactivate_super); /* for stackable file systems (lofs, wrapfs, cryptfs, etc.) 
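
The kill_block_super/kill_anon_super/kill_litter_super exports added above exist so modular filesystems can wire up the new ->kill_sb() method. A hypothetical module would now spell its file_system_type roughly as below ("examplefs", example_get_sb and example_fill_super are invented names; get_sb_nodev's signature is taken from fs/super.c of this kernel):

static int example_fill_super(struct super_block *sb, void *data, int silent);

static struct super_block *example_get_sb(struct file_system_type *fs_type,
					  int flags, char *dev_name, void *data)
{
	return get_sb_nodev(fs_type, flags, data, example_fill_super);
}

static struct file_system_type example_fs_type = {
	owner:		THIS_MODULE,
	name:		"examplefs",
	get_sb:		example_get_sb,
	kill_sb:	kill_anon_super,	/* no backing device, no litter */
};
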
*/ EXPORT_SYMBOL(default_llseek); diff -Nru a/kernel/printk.c b/kernel/printk.c --- a/kernel/printk.c Tue Mar 12 13:58:15 2002 +++ b/kernel/printk.c Tue Mar 12 13:58:15 2002 @@ -29,7 +29,7 @@ #include -#ifdef CONFIG_MULTIQUAD +#if defined(CONFIG_MULTIQUAD) || defined(CONFIG_IA64) #define LOG_BUF_LEN (65536) #elif defined(CONFIG_SMP) #define LOG_BUF_LEN (32768) diff -Nru a/kernel/sched.c b/kernel/sched.c --- a/kernel/sched.c Tue Mar 12 13:58:15 2002 +++ b/kernel/sched.c Tue Mar 12 13:58:15 2002 @@ -140,6 +140,7 @@ */ struct runqueue { spinlock_t lock; + spinlock_t frozen; unsigned long nr_running, nr_switches, expired_timestamp; task_t *curr, *idle; prio_array_t *active, *expired, arrays[2]; @@ -400,7 +401,7 @@ #if CONFIG_SMP || CONFIG_PREEMPT asmlinkage void schedule_tail(void) { - spin_unlock_irq(&this_rq()->lock); + spin_unlock_irq(&this_rq()->frozen); } #endif @@ -518,12 +519,14 @@ busiest = NULL; max_load = 1; for (i = 0; i < smp_num_cpus; i++) { - rq_src = cpu_rq(cpu_logical_map(i)); - if (idle || (rq_src->nr_running < this_rq->prev_nr_running[i])) + int logical = cpu_logical_map(i); + + rq_src = cpu_rq(logical); + if (idle || (rq_src->nr_running < this_rq->prev_nr_running[logical])) load = rq_src->nr_running; else - load = this_rq->prev_nr_running[i]; - this_rq->prev_nr_running[i] = rq_src->nr_running; + load = this_rq->prev_nr_running[logical]; + this_rq->prev_nr_running[logical] = rq_src->nr_running; if ((load > max_load) && (rq_src != this_rq)) { busiest = rq_src; @@ -590,7 +593,7 @@ #define CAN_MIGRATE_TASK(p,rq,this_cpu) \ ((jiffies - (p)->sleep_timestamp > cache_decay_ticks) && \ ((p) != (rq)->curr) && \ - (tmp->cpus_allowed & (1 << (this_cpu)))) + ((p)->cpus_allowed & (1 << (this_cpu)))) if (!CAN_MIGRATE_TASK(tmp, busiest, this_cpu)) { curr = curr->next; @@ -808,16 +811,22 @@ if (likely(prev != next)) { rq->nr_switches++; rq->curr = next; + spin_lock(&rq->frozen); + spin_unlock(&rq->lock); + context_switch(prev, next); + /* * The runqueue pointer might be from another CPU * if the new task was last running on a different * CPU - thus re-load it. 
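
The new rq->frozen lock changes what is held across the context switch. Condensed from the hunks above and from schedule_tail(), the intended sequence on the outgoing CPU is roughly the following (a summary sketch, not additional code in the patch):

	/* Inside schedule(), irqs disabled, rq->lock held: */
	spin_lock(&rq->frozen);		/* pin the outgoing task's context    */
	spin_unlock(&rq->lock);		/* runqueue is usable by other CPUs   */
	context_switch(prev, next);
	rq = this_rq();			/* we may resume on another runqueue  */
	spin_unlock_irq(&rq->frozen);	/* a freshly forked child does this
					   in schedule_tail() instead         */
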
*/ - barrier(); + mb(); rq = this_rq(); + spin_unlock_irq(&rq->frozen); + } else { + spin_unlock_irq(&rq->lock); } - spin_unlock_irq(&rq->lock); reacquire_kernel_lock(current); preempt_enable_no_resched(); @@ -1463,6 +1472,7 @@ rq->active = rq->arrays; rq->expired = rq->arrays + 1; spin_lock_init(&rq->lock); + spin_lock_init(&rq->frozen); INIT_LIST_HEAD(&rq->migration_queue); for (j = 0; j < 2; j++) { @@ -1649,19 +1659,31 @@ void __init migration_init(void) { + unsigned long tmp, orig_cache_decay_ticks; int cpu; - for (cpu = 0; cpu < smp_num_cpus; cpu++) + tmp = 0; + for (cpu = 0; cpu < smp_num_cpus; cpu++) { if (kernel_thread(migration_thread, NULL, CLONE_FS | CLONE_FILES | CLONE_SIGNAL) < 0) BUG(); + tmp |= (1UL << cpu_logical_map(cpu)); + } + + migration_mask = tmp; + + orig_cache_decay_ticks = cache_decay_ticks; + cache_decay_ticks = 0; - migration_mask = (1 << smp_num_cpus) - 1; + for (cpu = 0; cpu < smp_num_cpus; cpu++) { + int logical = cpu_logical_map(cpu); - for (cpu = 0; cpu < smp_num_cpus; cpu++) - while (!cpu_rq(cpu)->migration_thread) + while (!cpu_rq(logical)->migration_thread) schedule_timeout(2); + } if (migration_mask) BUG(); + + cache_decay_ticks = orig_cache_decay_ticks; } #endif diff -Nru a/mm/Makefile b/mm/Makefile --- a/mm/Makefile Tue Mar 12 13:58:15 2002 +++ b/mm/Makefile Tue Mar 12 13:58:15 2002 @@ -14,6 +14,6 @@ obj-y := memory.o mmap.o filemap.o mprotect.o mlock.o mremap.o \ vmalloc.o slab.o bootmem.o swap.o vmscan.o page_io.o \ page_alloc.o swap_state.o swapfile.o numa.o oom_kill.o \ - shmem.o highmem.o mempool.o + shmem.o highmem.o mempool.o msync.o mincore.o include $(TOPDIR)/Rules.make diff -Nru a/mm/filemap.c b/mm/filemap.c --- a/mm/filemap.c Tue Mar 12 13:58:15 2002 +++ b/mm/filemap.c Tue Mar 12 13:58:15 2002 @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -550,7 +551,7 @@ spin_lock(&pagecache_lock); while (!list_empty(&mapping->dirty_pages)) { - struct page *page = list_entry(mapping->dirty_pages.next, struct page, list); + struct page *page = list_entry(mapping->dirty_pages.prev, struct page, list); list_del(&page->list); list_add(&page->list, &mapping->locked_pages); @@ -773,32 +774,8 @@ static inline wait_queue_head_t *page_waitqueue(struct page *page) { const zone_t *zone = page_zone(page); - wait_queue_head_t *wait = zone->wait_table; - unsigned long hash = (unsigned long)page; -#if BITS_PER_LONG == 64 - /* Sigh, gcc can't optimise this alone like it does for 32 bits. */ - unsigned long n = hash; - n <<= 18; - hash -= n; - n <<= 33; - hash -= n; - n <<= 3; - hash += n; - n <<= 3; - hash -= n; - n <<= 4; - hash += n; - n <<= 2; - hash += n; -#else - /* On some cpus multiply is faster, on others gcc will do shifts */ - hash *= GOLDEN_RATIO_PRIME; -#endif - - hash >>= zone->wait_table_shift; - - return &wait[hash]; + return &zone->wait_table[hash_ptr(page, zone->wait_table_bits)]; } /* @@ -2082,107 +2059,6 @@ return NULL; } -/* Called with mm->page_table_lock held to protect against other - * threads/the swapper from ripping pte's out from under us. 
- */ -static inline int filemap_sync_pte(pte_t *ptep, pmd_t *pmdp, struct vm_area_struct *vma, - unsigned long address, unsigned int flags) -{ - pte_t pte = *ptep; - - if (pte_present(pte)) { - struct page *page = pte_page(pte); - if (VALID_PAGE(page) && !PageReserved(page) && ptep_test_and_clear_dirty(ptep)) { - flush_tlb_page(vma, address); - set_page_dirty(page); - } - } - return 0; -} - -static inline int filemap_sync_pte_range(pmd_t * pmd, - unsigned long address, unsigned long end, - struct vm_area_struct *vma, unsigned int flags) -{ - pte_t *pte; - int error; - - if (pmd_none(*pmd)) - return 0; - if (pmd_bad(*pmd)) { - pmd_ERROR(*pmd); - pmd_clear(pmd); - return 0; - } - pte = pte_offset_map(pmd, address); - if ((address & PMD_MASK) != (end & PMD_MASK)) - end = (address & PMD_MASK) + PMD_SIZE; - error = 0; - do { - error |= filemap_sync_pte(pte, pmd, vma, address, flags); - address += PAGE_SIZE; - pte++; - } while (address && (address < end)); - - pte_unmap(pte - 1); - - return error; -} - -static inline int filemap_sync_pmd_range(pgd_t * pgd, - unsigned long address, unsigned long end, - struct vm_area_struct *vma, unsigned int flags) -{ - pmd_t * pmd; - int error; - - if (pgd_none(*pgd)) - return 0; - if (pgd_bad(*pgd)) { - pgd_ERROR(*pgd); - pgd_clear(pgd); - return 0; - } - pmd = pmd_offset(pgd, address); - if ((address & PGDIR_MASK) != (end & PGDIR_MASK)) - end = (address & PGDIR_MASK) + PGDIR_SIZE; - error = 0; - do { - error |= filemap_sync_pte_range(pmd, address, end, vma, flags); - address = (address + PMD_SIZE) & PMD_MASK; - pmd++; - } while (address && (address < end)); - return error; -} - -int filemap_sync(struct vm_area_struct * vma, unsigned long address, - size_t size, unsigned int flags) -{ - pgd_t * dir; - unsigned long end = address + size; - int error = 0; - - /* Aquire the lock early; it may be possible to avoid dropping - * and reaquiring it repeatedly. - */ - spin_lock(&vma->vm_mm->page_table_lock); - - dir = pgd_offset(vma->vm_mm, address); - flush_cache_range(vma, address, end); - if (address >= end) - BUG(); - do { - error |= filemap_sync_pmd_range(dir, address, end, vma, flags); - address = (address + PGDIR_SIZE) & PGDIR_MASK; - dir++; - } while (address && (address < end)); - flush_tlb_range(vma, end - size, end); - - spin_unlock(&vma->vm_mm->page_table_lock); - - return error; -} - static struct vm_operations_struct generic_file_vm_ops = { nopage: filemap_nopage, }; @@ -2205,107 +2081,6 @@ return 0; } -/* - * The msync() system call. - */ - -/* - * MS_SYNC syncs the entire file - including mappings. - * - * MS_ASYNC initiates writeout of just the dirty mapped data. - * This provides no guarantee of file integrity - things like indirect - * blocks may not have started writeout. MS_ASYNC is primarily useful - * where the application knows that it has finished with the data and - * wishes to intelligently schedule its own I/O traffic. 
- */ -static int msync_interval(struct vm_area_struct * vma, - unsigned long start, unsigned long end, int flags) -{ - int ret = 0; - struct file * file = vma->vm_file; - - if (file && (vma->vm_flags & VM_SHARED)) { - ret = filemap_sync(vma, start, end-start, flags); - - if (!ret && (flags & (MS_SYNC|MS_ASYNC))) { - struct inode * inode = file->f_dentry->d_inode; - - down(&inode->i_sem); - ret = filemap_fdatasync(inode->i_mapping); - if (flags & MS_SYNC) { - int err; - - if (file->f_op && file->f_op->fsync) { - err = file->f_op->fsync(file, file->f_dentry, 1); - if (err && !ret) - ret = err; - } - err = filemap_fdatawait(inode->i_mapping); - if (err && !ret) - ret = err; - } - up(&inode->i_sem); - } - } - return ret; -} - -asmlinkage long sys_msync(unsigned long start, size_t len, int flags) -{ - unsigned long end; - struct vm_area_struct * vma; - int unmapped_error, error = -EINVAL; - - down_read(¤t->mm->mmap_sem); - if (start & ~PAGE_MASK) - goto out; - len = (len + ~PAGE_MASK) & PAGE_MASK; - end = start + len; - if (end < start) - goto out; - if (flags & ~(MS_ASYNC | MS_INVALIDATE | MS_SYNC)) - goto out; - error = 0; - if (end == start) - goto out; - /* - * If the interval [start,end) covers some unmapped address ranges, - * just ignore them, but return -EFAULT at the end. - */ - vma = find_vma(current->mm, start); - unmapped_error = 0; - for (;;) { - /* Still start < end. */ - error = -EFAULT; - if (!vma) - goto out; - /* Here start < vma->vm_end. */ - if (start < vma->vm_start) { - unmapped_error = -EFAULT; - start = vma->vm_start; - } - /* Here vma->vm_start <= start < vma->vm_end. */ - if (end <= vma->vm_end) { - if (start < end) { - error = msync_interval(vma, start, end, flags); - if (error) - goto out; - } - error = unmapped_error; - goto out; - } - /* Here vma->vm_start <= start < vma->vm_end < end. */ - error = msync_interval(vma, start, vma->vm_end, flags); - if (error) - goto out; - start = vma->vm_end; - vma = vma->vm_next; - } -out: - up_read(¤t->mm->mmap_sem); - return error; -} - static inline void setup_read_behavior(struct vm_area_struct * vma, int behavior) { @@ -2654,160 +2429,6 @@ out: up_write(¤t->mm->mmap_sem); - return error; -} - -/* - * Later we can get more picky about what "in core" means precisely. - * For now, simply check to see if the page is in the page cache, - * and is up to date; i.e. that no page-in operation would be required - * at this time if an application were to map and access this page. 
- */ -static unsigned char mincore_page(struct vm_area_struct * vma, - unsigned long pgoff) -{ - unsigned char present = 0; - struct address_space * as = vma->vm_file->f_dentry->d_inode->i_mapping; - struct page * page, ** hash = page_hash(as, pgoff); - - spin_lock(&pagecache_lock); - page = __find_page_nolock(as, pgoff, *hash); - if ((page) && (Page_Uptodate(page))) - present = 1; - spin_unlock(&pagecache_lock); - - return present; -} - -static long mincore_vma(struct vm_area_struct * vma, - unsigned long start, unsigned long end, unsigned char * vec) -{ - long error, i, remaining; - unsigned char * tmp; - - error = -ENOMEM; - if (!vma->vm_file) - return error; - - start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; - if (end > vma->vm_end) - end = vma->vm_end; - end = ((end - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; - - error = -EAGAIN; - tmp = (unsigned char *) __get_free_page(GFP_KERNEL); - if (!tmp) - return error; - - /* (end - start) is # of pages, and also # of bytes in "vec */ - remaining = (end - start), - - error = 0; - for (i = 0; remaining > 0; remaining -= PAGE_SIZE, i++) { - int j = 0; - long thispiece = (remaining < PAGE_SIZE) ? - remaining : PAGE_SIZE; - - while (j < thispiece) - tmp[j++] = mincore_page(vma, start++); - - if (copy_to_user(vec + PAGE_SIZE * i, tmp, thispiece)) { - error = -EFAULT; - break; - } - } - - free_page((unsigned long) tmp); - return error; -} - -/* - * The mincore(2) system call. - * - * mincore() returns the memory residency status of the pages in the - * current process's address space specified by [addr, addr + len). - * The status is returned in a vector of bytes. The least significant - * bit of each byte is 1 if the referenced page is in memory, otherwise - * it is zero. - * - * Because the status of a page can change after mincore() checks it - * but before it returns to the application, the returned vector may - * contain stale information. Only locked pages are guaranteed to - * remain in memory. - * - * return values: - * zero - success - * -EFAULT - vec points to an illegal address - * -EINVAL - addr is not a multiple of PAGE_CACHE_SIZE, - * or len has a nonpositive value - * -ENOMEM - Addresses in the range [addr, addr + len] are - * invalid for the address space of this process, or - * specify one or more pages which are not currently - * mapped - * -EAGAIN - A kernel resource was temporarily unavailable. - */ -asmlinkage long sys_mincore(unsigned long start, size_t len, - unsigned char * vec) -{ - int index = 0; - unsigned long end; - struct vm_area_struct * vma; - int unmapped_error = 0; - long error = -EINVAL; - - down_read(¤t->mm->mmap_sem); - - if (start & ~PAGE_CACHE_MASK) - goto out; - len = (len + ~PAGE_CACHE_MASK) & PAGE_CACHE_MASK; - end = start + len; - if (end < start) - goto out; - - error = 0; - if (end == start) - goto out; - - /* - * If the interval [start,end) covers some unmapped address - * ranges, just ignore them, but return -ENOMEM at the end. - */ - vma = find_vma(current->mm, start); - for (;;) { - /* Still start < end. */ - error = -ENOMEM; - if (!vma) - goto out; - - /* Here start < vma->vm_end. */ - if (start < vma->vm_start) { - unmapped_error = -ENOMEM; - start = vma->vm_start; - } - - /* Here vma->vm_start <= start < vma->vm_end. */ - if (end <= vma->vm_end) { - if (start < end) { - error = mincore_vma(vma, start, end, - &vec[index]); - if (error) - goto out; - } - error = unmapped_error; - goto out; - } - - /* Here vma->vm_start <= start < vma->vm_end < end. 
*/ - error = mincore_vma(vma, start, vma->vm_end, &vec[index]); - if (error) - goto out; - index += (vma->vm_end - start) >> PAGE_CACHE_SHIFT; - start = vma->vm_end; - vma = vma->vm_next; - } - -out: - up_read(¤t->mm->mmap_sem); return error; } diff -Nru a/mm/memory.c b/mm/memory.c --- a/mm/memory.c Tue Mar 12 13:58:15 2002 +++ b/mm/memory.c Tue Mar 12 13:58:15 2002 @@ -140,6 +140,9 @@ page_dir++; } while (--nr); spin_unlock(&mm->page_table_lock); + + /* keep the page table cache within bounds */ + check_pgt_cache(); } pte_t * pte_alloc_map(struct mm_struct *mm, pmd_t *pmd, unsigned long address) diff -Nru a/mm/mincore.c b/mm/mincore.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/mm/mincore.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,175 @@ +/* + * linux/mm/mincore.c + * + * Copyright (C) 1994-1999 Linus Torvalds + */ + +/* + * The mincore() system call. + */ +#include +#include +#include +#include + +#include +#include +#include + +/* + * Later we can get more picky about what "in core" means precisely. + * For now, simply check to see if the page is in the page cache, + * and is up to date; i.e. that no page-in operation would be required + * at this time if an application were to map and access this page. + */ +static unsigned char mincore_page(struct vm_area_struct * vma, + unsigned long pgoff) +{ + unsigned char present = 0; + struct address_space * as = vma->vm_file->f_dentry->d_inode->i_mapping; + struct page * page, ** hash = page_hash(as, pgoff); + + page = __find_get_page(as, pgoff, hash); + if (page) { + present = Page_Uptodate(page); + page_cache_release(page); + } + + return present; +} + +static long mincore_vma(struct vm_area_struct * vma, + unsigned long start, unsigned long end, unsigned char * vec) +{ + long error, i, remaining; + unsigned char * tmp; + + error = -ENOMEM; + if (!vma->vm_file) + return error; + + start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; + if (end > vma->vm_end) + end = vma->vm_end; + end = ((end - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; + + error = -EAGAIN; + tmp = (unsigned char *) __get_free_page(GFP_KERNEL); + if (!tmp) + return error; + + /* (end - start) is # of pages, and also # of bytes in "vec */ + remaining = (end - start), + + error = 0; + for (i = 0; remaining > 0; remaining -= PAGE_SIZE, i++) { + int j = 0; + long thispiece = (remaining < PAGE_SIZE) ? + remaining : PAGE_SIZE; + + while (j < thispiece) + tmp[j++] = mincore_page(vma, start++); + + if (copy_to_user(vec + PAGE_SIZE * i, tmp, thispiece)) { + error = -EFAULT; + break; + } + } + + free_page((unsigned long) tmp); + return error; +} + +/* + * The mincore(2) system call. + * + * mincore() returns the memory residency status of the pages in the + * current process's address space specified by [addr, addr + len). + * The status is returned in a vector of bytes. The least significant + * bit of each byte is 1 if the referenced page is in memory, otherwise + * it is zero. + * + * Because the status of a page can change after mincore() checks it + * but before it returns to the application, the returned vector may + * contain stale information. Only locked pages are guaranteed to + * remain in memory. 
+ * + * return values: + * zero - success + * -EFAULT - vec points to an illegal address + * -EINVAL - addr is not a multiple of PAGE_CACHE_SIZE, + * or len has a nonpositive value + * -ENOMEM - Addresses in the range [addr, addr + len] are + * invalid for the address space of this process, or + * specify one or more pages which are not currently + * mapped + * -EAGAIN - A kernel resource was temporarily unavailable. + */ +asmlinkage long sys_mincore(unsigned long start, size_t len, + unsigned char * vec) +{ + int index = 0; + unsigned long end; + struct vm_area_struct * vma; + int unmapped_error = 0; + long error = -EINVAL; + + down_read(¤t->mm->mmap_sem); + + if (start & ~PAGE_CACHE_MASK) + goto out; + len = (len + ~PAGE_CACHE_MASK) & PAGE_CACHE_MASK; + end = start + len; + if (end < start) + goto out; + + error = -EFAULT; + if (!access_ok(VERIFY_WRITE, (unsigned long) vec, len >> PAGE_SHIFT)) + goto out; + + error = 0; + if (end == start) + goto out; + + /* + * If the interval [start,end) covers some unmapped address + * ranges, just ignore them, but return -ENOMEM at the end. + */ + vma = find_vma(current->mm, start); + for (;;) { + /* Still start < end. */ + error = -ENOMEM; + if (!vma) + goto out; + + /* Here start < vma->vm_end. */ + if (start < vma->vm_start) { + unmapped_error = -ENOMEM; + start = vma->vm_start; + } + + /* Here vma->vm_start <= start < vma->vm_end. */ + if (end <= vma->vm_end) { + if (start < end) { + error = mincore_vma(vma, start, end, + &vec[index]); + if (error) + goto out; + } + error = unmapped_error; + goto out; + } + + /* Here vma->vm_start <= start < vma->vm_end < end. */ + error = mincore_vma(vma, start, vma->vm_end, &vec[index]); + if (error) + goto out; + index += (vma->vm_end - start) >> PAGE_CACHE_SHIFT; + start = vma->vm_end; + vma = vma->vm_next; + } + +out: + up_read(¤t->mm->mmap_sem); + return error; +} diff -Nru a/mm/mprotect.c b/mm/mprotect.c --- a/mm/mprotect.c Tue Mar 12 13:58:15 2002 +++ b/mm/mprotect.c Tue Mar 12 13:58:15 2002 @@ -280,7 +280,7 @@ end = start + len; if (end < start) return -EINVAL; - if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC)) + if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) return -EINVAL; if (end == start) return 0; diff -Nru a/mm/msync.c b/mm/msync.c --- /dev/null Wed Dec 31 16:00:00 1969 +++ b/mm/msync.c Tue Mar 12 13:58:16 2002 @@ -0,0 +1,217 @@ +/* + * linux/mm/msync.c + * + * Copyright (C) 1994-1999 Linus Torvalds + */ + +/* + * The msync() system call. + */ +#include +#include +#include +#include + +#include +#include + +/* + * Called with mm->page_table_lock held to protect against other + * threads/the swapper from ripping pte's out from under us. 
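
For reference against the semantics documented in mm/mincore.c above, a typical userspace probe of an existing mapping looks like this (illustrative fragment only; addr and length come from a prior mmap):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

	size_t page = getpagesize();
	size_t npages = (length + page - 1) / page;
	unsigned char *vec = malloc(npages);
	size_t i, resident = 0;

	if (vec && mincore(addr, length, vec) == 0) {
		for (i = 0; i < npages; i++)
			if (vec[i] & 1)		/* bit 0: page is in core */
				resident++;
		printf("%lu of %lu pages resident\n",
		       (unsigned long) resident, (unsigned long) npages);
	}
	free(vec);
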
+ */ +static int filemap_sync_pte(pte_t *ptep, struct vm_area_struct *vma, + unsigned long address, unsigned int flags) +{ + pte_t pte = *ptep; + + if (pte_present(pte) && pte_dirty(pte)) { + struct page *page = pte_page(pte); + if (VALID_PAGE(page) && !PageReserved(page) && ptep_test_and_clear_dirty(ptep)) { + flush_tlb_page(vma, address); + set_page_dirty(page); + } + } + return 0; +} + +static inline int filemap_sync_pte_range(pmd_t * pmd, + unsigned long address, unsigned long end, + struct vm_area_struct *vma, unsigned int flags) +{ + pte_t *pte; + int error; + + if (pmd_none(*pmd)) + return 0; + if (pmd_bad(*pmd)) { + pmd_ERROR(*pmd); + pmd_clear(pmd); + return 0; + } + pte = pte_offset_map(pmd, address); + if ((address & PMD_MASK) != (end & PMD_MASK)) + end = (address & PMD_MASK) + PMD_SIZE; + error = 0; + do { + error |= filemap_sync_pte(pte, vma, address, flags); + address += PAGE_SIZE; + pte++; + } while (address && (address < end)); + + pte_unmap(pte - 1); + + return error; +} + +static inline int filemap_sync_pmd_range(pgd_t * pgd, + unsigned long address, unsigned long end, + struct vm_area_struct *vma, unsigned int flags) +{ + pmd_t * pmd; + int error; + + if (pgd_none(*pgd)) + return 0; + if (pgd_bad(*pgd)) { + pgd_ERROR(*pgd); + pgd_clear(pgd); + return 0; + } + pmd = pmd_offset(pgd, address); + if ((address & PGDIR_MASK) != (end & PGDIR_MASK)) + end = (address & PGDIR_MASK) + PGDIR_SIZE; + error = 0; + do { + error |= filemap_sync_pte_range(pmd, address, end, vma, flags); + address = (address + PMD_SIZE) & PMD_MASK; + pmd++; + } while (address && (address < end)); + return error; +} + +int filemap_sync(struct vm_area_struct * vma, unsigned long address, + size_t size, unsigned int flags) +{ + pgd_t * dir; + unsigned long end = address + size; + int error = 0; + + /* Aquire the lock early; it may be possible to avoid dropping + * and reaquiring it repeatedly. + */ + spin_lock(&vma->vm_mm->page_table_lock); + + dir = pgd_offset(vma->vm_mm, address); + flush_cache_range(vma, address, end); + if (address >= end) + BUG(); + do { + error |= filemap_sync_pmd_range(dir, address, end, vma, flags); + address = (address + PGDIR_SIZE) & PGDIR_MASK; + dir++; + } while (address && (address < end)); + flush_tlb_range(vma, end - size, end); + + spin_unlock(&vma->vm_mm->page_table_lock); + + return error; +} + +/* + * MS_SYNC syncs the entire file - including mappings. + * + * MS_ASYNC initiates writeout of just the dirty mapped data. + * This provides no guarantee of file integrity - things like indirect + * blocks may not have started writeout. MS_ASYNC is primarily useful + * where the application knows that it has finished with the data and + * wishes to intelligently schedule its own I/O traffic. 
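
The MS_ASYNC/MS_SYNC distinction spelled out above maps onto userspace as follows (illustrative fragment; addr and length describe a MAP_SHARED file mapping that has been written to):

#include <stdio.h>
#include <sys/mman.h>

	if (msync(addr, length, MS_ASYNC) < 0)	/* start writeout, do not wait */
		perror("msync(MS_ASYNC)");

	/* ... later, when the data must be in the file before continuing ... */
	if (msync(addr, length, MS_SYNC) < 0)	/* wait for writeout to finish */
		perror("msync(MS_SYNC)");
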
+ */ +static int msync_interval(struct vm_area_struct * vma, + unsigned long start, unsigned long end, int flags) +{ + int ret = 0; + struct file * file = vma->vm_file; + + if (file && (vma->vm_flags & VM_SHARED)) { + ret = filemap_sync(vma, start, end-start, flags); + + if (!ret && (flags & (MS_SYNC|MS_ASYNC))) { + struct inode * inode = file->f_dentry->d_inode; + + down(&inode->i_sem); + ret = filemap_fdatasync(inode->i_mapping); + if (flags & MS_SYNC) { + int err; + + if (file->f_op && file->f_op->fsync) { + err = file->f_op->fsync(file, file->f_dentry, 1); + if (err && !ret) + ret = err; + } + err = filemap_fdatawait(inode->i_mapping); + if (err && !ret) + ret = err; + } + up(&inode->i_sem); + } + } + return ret; +} + +asmlinkage long sys_msync(unsigned long start, size_t len, int flags) +{ + unsigned long end; + struct vm_area_struct * vma; + int unmapped_error, error = -EINVAL; + + down_read(¤t->mm->mmap_sem); + if (start & ~PAGE_MASK) + goto out; + len = (len + ~PAGE_MASK) & PAGE_MASK; + end = start + len; + if (end < start) + goto out; + if (flags & ~(MS_ASYNC | MS_INVALIDATE | MS_SYNC)) + goto out; + error = 0; + if (end == start) + goto out; + /* + * If the interval [start,end) covers some unmapped address ranges, + * just ignore them, but return -EFAULT at the end. + */ + vma = find_vma(current->mm, start); + unmapped_error = 0; + for (;;) { + /* Still start < end. */ + error = -EFAULT; + if (!vma) + goto out; + /* Here start < vma->vm_end. */ + if (start < vma->vm_start) { + unmapped_error = -EFAULT; + start = vma->vm_start; + } + /* Here vma->vm_start <= start < vma->vm_end. */ + if (end <= vma->vm_end) { + if (start < end) { + error = msync_interval(vma, start, end, flags); + if (error) + goto out; + } + error = unmapped_error; + goto out; + } + /* Here vma->vm_start <= start < vma->vm_end < end. */ + error = msync_interval(vma, start, vma->vm_end, flags); + if (error) + goto out; + start = vma->vm_end; + vma = vma->vm_next; + } +out: + up_read(¤t->mm->mmap_sem); + return error; +} + + diff -Nru a/mm/page_alloc.c b/mm/page_alloc.c --- a/mm/page_alloc.c Tue Mar 12 13:58:14 2002 +++ b/mm/page_alloc.c Tue Mar 12 13:58:14 2002 @@ -776,8 +776,8 @@ * per zone. */ zone->wait_table_size = wait_table_size(size); - zone->wait_table_shift = - BITS_PER_LONG - wait_table_bits(zone->wait_table_size); + zone->wait_table_bits = + wait_table_bits(zone->wait_table_size); zone->wait_table = (wait_queue_head_t *) alloc_bootmem_node(pgdat, zone->wait_table_size * sizeof(wait_queue_head_t)); diff -Nru a/mm/shmem.c b/mm/shmem.c --- a/mm/shmem.c Tue Mar 12 13:58:15 2002 +++ b/mm/shmem.c Tue Mar 12 13:58:15 2002 @@ -1425,20 +1425,21 @@ owner: THIS_MODULE, name: "shmem", get_sb: shmem_get_sb, - fs_flags: FS_LITTER, + kill_sb: kill_litter_super, }; static struct file_system_type tmpfs_fs_type = { owner: THIS_MODULE, name: "tmpfs", get_sb: shmem_get_sb, - fs_flags: FS_LITTER, + kill_sb: kill_litter_super, }; #else static struct file_system_type tmpfs_fs_type = { owner: THIS_MODULE, name: "tmpfs", get_sb: shmem_get_sb, - fs_flags: FS_LITTER|FS_NOMOUNT, + kill_sb: kill_litter_super, + fs_flags: FS_NOMOUNT, }; #endif static struct vfsmount *shm_mnt; diff -Nru a/net/8021q/vlan.c b/net/8021q/vlan.c --- a/net/8021q/vlan.c Tue Mar 12 13:58:15 2002 +++ b/net/8021q/vlan.c Tue Mar 12 13:58:15 2002 @@ -8,7 +8,9 @@ * * Fixes: * Fix for packet capture - Nick Eggleston ; - * + * Add HW acceleration hooks - David S. Miller ; + * Correct all the locking - David S. Miller ; + * Use hash table for VLAN groups - David S. 
Miller * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License @@ -37,12 +39,15 @@ /* Global VLAN variables */ /* Our listing of VLAN group(s) */ -struct vlan_group *p802_1Q_vlan_list; +struct vlan_group *vlan_group_hash[VLAN_GRP_HASH_SIZE]; +spinlock_t vlan_group_lock = SPIN_LOCK_UNLOCKED; +#define vlan_grp_hashfn(IDX) ((((IDX) >> VLAN_GRP_HASH_SHIFT) ^ (IDX)) & VLAN_GRP_HASH_MASK) static char vlan_fullname[] = "802.1Q VLAN Support"; static unsigned int vlan_version = 1; -static unsigned int vlan_release = 6; -static char vlan_copyright[] = " Ben Greear "; +static unsigned int vlan_release = 7; +static char vlan_copyright[] = "Ben Greear "; +static char vlan_buggyright[] = "David S. Miller "; static int vlan_device_event(struct notifier_block *, unsigned long, void *); @@ -55,9 +60,6 @@ /* Determines interface naming scheme. */ unsigned short vlan_name_type = VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD; -/* Counter for how many NON-VLAN protos we've received on a VLAN. */ -unsigned long vlan_bad_proto_recvd = 0; - /* DO reorder the header by default */ unsigned short vlan_default_dev_flags = 1; @@ -83,6 +85,8 @@ printk(VLAN_INF "%s v%u.%u %s\n", vlan_fullname, vlan_version, vlan_release, vlan_copyright); + printk(VLAN_INF "All bugs added by %s\n", + vlan_buggyright); /* proc file system initialization */ err = vlan_proc_init(); @@ -100,71 +104,83 @@ vlan_ioctl_hook = vlan_ioctl_handler; - printk(VLAN_INF "%s Initialization complete.\n", VLAN_NAME); return 0; } /* - * Cleanup of groups before exit - */ - -static void vlan_group_cleanup(void) -{ - struct vlan_group *grp = NULL; - struct vlan_group *nextgroup; - - for (grp = p802_1Q_vlan_list; (grp != NULL);) { - nextgroup = grp->next; - kfree(grp); - grp = nextgroup; - } - p802_1Q_vlan_list = NULL; -} - -/* * Module 'remove' entry point. * o delete /proc/net/router directory and static entries. */ static void __exit vlan_cleanup_module(void) { + int i; + + /* This table must be empty if there are no module + * references left. + */ + for (i = 0; i < VLAN_GRP_HASH_SIZE; i++) { + if (vlan_group_hash[i] != NULL) + BUG(); + } + /* Un-register us from receiving netdevice events */ unregister_netdevice_notifier(&vlan_notifier_block); dev_remove_pack(&vlan_packet_type); vlan_proc_cleanup(); - vlan_group_cleanup(); vlan_ioctl_hook = NULL; } module_init(vlan_proto_init); module_exit(vlan_cleanup_module); -/** Will search linearly for now, based on device index. Could - * hash, or directly link, this some day. --Ben - * TODO: Potential performance issue here. Linear search where N is - * the number of 'real' devices used by VLANs. - */ -struct vlan_group* vlan_find_group(int real_dev_ifindex) +/* Must be invoked with vlan_group_lock held. */ +static struct vlan_group *__vlan_find_group(int real_dev_ifindex) { - struct vlan_group *grp = NULL; + struct vlan_group *grp; - br_read_lock_bh(BR_NETPROTO_LOCK); - for (grp = p802_1Q_vlan_list; - ((grp != NULL) && (grp->real_dev_ifindex != real_dev_ifindex)); + for (grp = vlan_group_hash[vlan_grp_hashfn(real_dev_ifindex)]; + grp != NULL; grp = grp->next) { - /* nothing */ ; + if (grp->real_dev_ifindex == real_dev_ifindex) + break; } - br_read_unlock_bh(BR_NETPROTO_LOCK); return grp; } -/* Find the protocol handler. Assumes VID < 0xFFF. +/* Must hold vlan_group_lock. 
*/ +static void __grp_hash(struct vlan_group *grp) +{ + struct vlan_group **head; + + head = &vlan_group_hash[vlan_grp_hashfn(grp->real_dev_ifindex)]; + grp->next = *head; + *head = grp; +} + +/* Must hold vlan_group_lock. */ +static void __grp_unhash(struct vlan_group *grp) +{ + struct vlan_group *next, **pprev; + + pprev = &vlan_group_hash[vlan_grp_hashfn(grp->real_dev_ifindex)]; + next = *pprev; + while (next != grp) { + pprev = &next->next; + next = *pprev; + } + *pprev = grp->next; +} + +/* Find the protocol handler. Assumes VID < VLAN_VID_MASK. + * + * Must be invoked with vlan_group_lock held. */ -struct net_device *find_802_1Q_vlan_dev(struct net_device *real_dev, - unsigned short VID) +struct net_device *__find_vlan_dev(struct net_device *real_dev, + unsigned short VID) { - struct vlan_group *grp = vlan_find_group(real_dev->ifindex); + struct vlan_group *grp = __vlan_find_group(real_dev->ifindex); if (grp) return grp->vlan_devices[VID]; @@ -172,109 +188,143 @@ return NULL; } -/** This method will explicitly do a dev_put on the device if do_dev_put - * is TRUE. This gets around a difficulty with reference counting, and - * the unregister-by-name (below). If do_locks is true, it will grab - * a lock before un-registering. If do_locks is false, it is assumed that - * the lock has already been grabbed externally... --Ben +/* This returns 0 if everything went fine. + * It will return 1 if the group was killed as a result. + * A negative return indicates failure. + * + * The RTNL lock must be held. */ -int unregister_802_1Q_vlan_dev(int real_dev_ifindex, unsigned short vlan_id, - int do_dev_put, int do_locks) +static int unregister_vlan_dev(struct net_device *real_dev, + unsigned short vlan_id) { struct net_device *dev = NULL; + int real_dev_ifindex = real_dev->ifindex; struct vlan_group *grp; + int i, ret; #ifdef VLAN_DEBUG printk(VLAN_DBG __FUNCTION__ ": VID: %i\n", vlan_id); #endif /* sanity check */ - if ((vlan_id >= 0xFFF) || (vlan_id <= 0)) + if ((vlan_id >= VLAN_VID_MASK) || (vlan_id <= 0)) return -EINVAL; - grp = vlan_find_group(real_dev_ifindex); + spin_lock_bh(&vlan_group_lock); + grp = __vlan_find_group(real_dev_ifindex); + spin_unlock_bh(&vlan_group_lock); + + ret = 0; + if (grp) { dev = grp->vlan_devices[vlan_id]; if (dev) { /* Remove proc entry */ vlan_proc_rem_dev(dev); - /* Take it out of our own structures */ - grp->vlan_devices[vlan_id] = NULL; + /* Take it out of our own structures, but be sure to + * interlock with HW accelerating devices or SW vlan + * input packet processing. + */ + if (real_dev->features & + (NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_FILTER)) { + real_dev->vlan_rx_kill_vid(real_dev, vlan_id); + } else { + br_write_lock(BR_NETPROTO_LOCK); + grp->vlan_devices[vlan_id] = NULL; + br_write_unlock(BR_NETPROTO_LOCK); + } - /* Take it out of the global list of devices. - * NOTE: This deletes dev, don't access it again!! + /* Caller unregisters (and if necessary, puts) + * VLAN device, but we get rid of the reference to + * real_dev here. */ + dev_put(real_dev); - if (do_dev_put) - dev_put(dev); + /* If the group is now empty, kill off the + * group. + */ + for (i = 0; i < VLAN_VID_MASK; i++) + if (grp->vlan_devices[i]) + break; + + if (i == VLAN_VID_MASK) { + if (real_dev->features & NETIF_F_HW_VLAN_RX) + real_dev->vlan_rx_register(real_dev, NULL); + + spin_lock_bh(&vlan_group_lock); + __grp_unhash(grp); + spin_unlock_bh(&vlan_group_lock); - /* TODO: Please review this code. 
*/ - if (do_locks) { - rtnl_lock(); - unregister_netdevice(dev); - rtnl_unlock(); - } else { - unregister_netdevice(dev); + ret = 1; } MOD_DEC_USE_COUNT; } } - - return 0; + + return ret; } -int unregister_802_1Q_vlan_device(const char *vlan_IF_name) +static int unregister_vlan_device(const char *vlan_IF_name) { struct net_device *dev = NULL; + int ret; -#ifdef VLAN_DEBUG - printk(VLAN_DBG __FUNCTION__ ": unregister VLAN by name, name -:%s:-\n", - vlan_IF_name); -#endif dev = dev_get_by_name(vlan_IF_name); + ret = -EINVAL; if (dev) { if (dev->priv_flags & IFF_802_1Q_VLAN) { - return unregister_802_1Q_vlan_dev( - VLAN_DEV_INFO(dev)->real_dev->ifindex, - (unsigned short)(VLAN_DEV_INFO(dev)->vlan_id), - 1 /* do dev_put */, 1 /* do locking */); + rtnl_lock(); + + ret = unregister_vlan_dev(VLAN_DEV_INFO(dev)->real_dev, + VLAN_DEV_INFO(dev)->vlan_id); + + dev_put(dev); + unregister_netdevice(dev); + + rtnl_unlock(); + + if (ret == 1) + ret = 0; } else { printk(VLAN_ERR __FUNCTION__ ": ERROR: Tried to remove a non-vlan device " "with VLAN code, name: %s priv_flags: %hX\n", dev->name, dev->priv_flags); dev_put(dev); - return -EPERM; + ret = -EPERM; } } else { #ifdef VLAN_DEBUG printk(VLAN_DBG __FUNCTION__ ": WARNING: Could not find dev.\n"); #endif - return -EINVAL; + ret = -EINVAL; } + + return ret; } /* Attach a VLAN device to a mac address (ie Ethernet Card). * Returns the device that was created, or NULL if there was * an error of some kind. */ -struct net_device *register_802_1Q_vlan_device(const char* eth_IF_name, +static struct net_device *register_vlan_device(const char *eth_IF_name, unsigned short VLAN_ID) { struct vlan_group *grp; struct net_device *new_dev; struct net_device *real_dev; /* the ethernet device */ int malloc_size = 0; + int r; #ifdef VLAN_DEBUG printk(VLAN_DBG __FUNCTION__ ": if_name -:%s:- vid: %i\n", eth_IF_name, VLAN_ID); #endif - if (VLAN_ID >= 0xfff) + if (VLAN_ID >= VLAN_VID_MASK) goto out_ret_null; /* find the device relating to eth_IF_name. */ @@ -282,14 +332,47 @@ if (!real_dev) goto out_ret_null; - /* TODO: Make sure this device can really handle having a VLAN attached - * to it... + if (real_dev->features & NETIF_F_VLAN_CHALLENGED) { + printk(VLAN_DBG __FUNCTION__ ": VLANs not supported on %s.\n", + real_dev->name); + goto out_put_dev; + } + + if ((real_dev->features & NETIF_F_HW_VLAN_RX) && + (real_dev->vlan_rx_register == NULL || + real_dev->vlan_rx_kill_vid == NULL)) { + printk(VLAN_DBG __FUNCTION__ ": Device %s has buggy VLAN hw accel.\n", + real_dev->name); + goto out_put_dev; + } + + if ((real_dev->features & NETIF_F_HW_VLAN_FILTER) && + (real_dev->vlan_rx_add_vid == NULL || + real_dev->vlan_rx_kill_vid == NULL)) { + printk(VLAN_DBG __FUNCTION__ ": Device %s has buggy VLAN hw accel.\n", + real_dev->name); + goto out_put_dev; + } + + /* From this point on, all the data structures must remain + * consistent. + */ + rtnl_lock(); + + /* The real device must be up and operating in order to + * assosciate a VLAN device with it. */ - if (find_802_1Q_vlan_dev(real_dev, VLAN_ID)) { + if (!(real_dev->flags & IFF_UP)) + goto out_unlock; + + spin_lock_bh(&vlan_group_lock); + r = (__find_vlan_dev(real_dev, VLAN_ID) != NULL); + spin_unlock_bh(&vlan_group_lock); + + if (r) { /* was already registered. 
*/ printk(VLAN_DBG __FUNCTION__ ": ALREADY had VLAN registered\n"); - dev_put(real_dev); - return NULL; + goto out_unlock; } malloc_size = (sizeof(struct net_device)); @@ -298,15 +381,14 @@ new_dev, malloc_size); if (new_dev == NULL) - goto out_put_dev; + goto out_unlock; memset(new_dev, 0, malloc_size); - /* set us up to not use a Qdisc, as the underlying Hardware device + /* Set us up to have no queue, as the underlying Hardware device * can do all the queueing we could want. */ - /* new_dev->qdisc_sleeping = &noqueue_qdisc; Not needed it seems. */ - new_dev->tx_queue_len = 0; /* This should effectively give us no queue. */ + new_dev->tx_queue_len = 0; /* Gotta set up the fields for the device. */ #ifdef VLAN_DEBUG @@ -368,8 +450,11 @@ /* TODO: maybe just assign it to be ETHERNET? */ new_dev->type = real_dev->type; - /* Regular ethernet + 4 bytes (18 total). */ - new_dev->hard_header_len = VLAN_HLEN + real_dev->hard_header_len; + new_dev->hard_header_len = real_dev->hard_header_len; + if (!(real_dev->features & NETIF_F_HW_VLAN_TX)) { + /* Regular ethernet + 4 bytes (18 total). */ + new_dev->hard_header_len += VLAN_HLEN; + } new_dev->priv = kmalloc(sizeof(struct vlan_dev_info), GFP_KERNEL); @@ -377,10 +462,8 @@ new_dev->priv, sizeof(struct vlan_dev_info)); - if (new_dev->priv == NULL) { - kfree(new_dev); - goto out_put_dev; - } + if (new_dev->priv == NULL) + goto out_free_newdev; memset(new_dev->priv, 0, sizeof(struct vlan_dev_info)); @@ -390,15 +473,21 @@ new_dev->open = vlan_dev_open; new_dev->stop = vlan_dev_stop; - new_dev->hard_header = vlan_dev_hard_header; - new_dev->hard_start_xmit = vlan_dev_hard_start_xmit; - new_dev->rebuild_header = vlan_dev_rebuild_header; + if (real_dev->features & NETIF_F_HW_VLAN_TX) { + new_dev->hard_header = real_dev->hard_header; + new_dev->hard_start_xmit = vlan_dev_hwaccel_hard_start_xmit; + new_dev->rebuild_header = real_dev->rebuild_header; + } else { + new_dev->hard_header = vlan_dev_hard_header; + new_dev->hard_start_xmit = vlan_dev_hard_start_xmit; + new_dev->rebuild_header = vlan_dev_rebuild_header; + } new_dev->hard_header_parse = real_dev->hard_header_parse; new_dev->set_mac_address = vlan_dev_set_mac_address; new_dev->set_multicast_list = vlan_dev_set_multicast_list; - VLAN_DEV_INFO(new_dev)->vlan_id = VLAN_ID; /* 1 through 0xFFF */ + VLAN_DEV_INFO(new_dev)->vlan_id = VLAN_ID; /* 1 through VLAN_VID_MASK */ VLAN_DEV_INFO(new_dev)->real_dev = real_dev; VLAN_DEV_INFO(new_dev)->dent = NULL; VLAN_DEV_INFO(new_dev)->flags = vlan_default_dev_flags; @@ -411,37 +500,39 @@ /* So, got the sucker initialized, now lets place * it into our local structure. */ - grp = vlan_find_group(real_dev->ifindex); + spin_lock_bh(&vlan_group_lock); + grp = __vlan_find_group(real_dev->ifindex); + spin_unlock_bh(&vlan_group_lock); + + /* Note, we are running under the RTNL semaphore + * so it cannot "appear" on us. 
+ */ if (!grp) { /* need to add a new group */ grp = kmalloc(sizeof(struct vlan_group), GFP_KERNEL); - VLAN_MEM_DBG("grp malloc, addr: %p size: %i\n", - grp, sizeof(struct vlan_group)); - if (!grp) { - kfree(new_dev->priv); - VLAN_FMEM_DBG("new_dev->priv free, addr: %p\n", - new_dev->priv); - kfree(new_dev); - VLAN_FMEM_DBG("new_dev free, addr: %p\n", new_dev); - - goto out_put_dev; - } + if (!grp) + goto out_free_newdev_priv; - printk(KERN_ALERT "VLAN REGISTER: Allocated new group.\n"); + /* printk(KERN_ALERT "VLAN REGISTER: Allocated new group.\n"); */ memset(grp, 0, sizeof(struct vlan_group)); grp->real_dev_ifindex = real_dev->ifindex; - br_write_lock_bh(BR_NETPROTO_LOCK); - grp->next = p802_1Q_vlan_list; - p802_1Q_vlan_list = grp; - br_write_unlock_bh(BR_NETPROTO_LOCK); + spin_lock_bh(&vlan_group_lock); + __grp_hash(grp); + spin_unlock_bh(&vlan_group_lock); + + if (real_dev->features & NETIF_F_HW_VLAN_RX) + real_dev->vlan_rx_register(real_dev, grp); } grp->vlan_devices[VLAN_ID] = new_dev; + vlan_proc_add_dev(new_dev); /* create it's proc entry */ - /* TODO: Please check this: RTNL --Ben */ - rtnl_lock(); + if (real_dev->features & NETIF_F_HW_VLAN_FILTER) + real_dev->vlan_rx_add_vid(real_dev, VLAN_ID); + register_netdevice(new_dev); + rtnl_unlock(); /* NOTE: We have a reference to the real device, @@ -453,6 +544,15 @@ #endif return new_dev; +out_free_newdev_priv: + kfree(new_dev->priv); + +out_free_newdev: + kfree(new_dev); + +out_unlock: + rtnl_unlock(); + out_put_dev: dev_put(real_dev); @@ -464,78 +564,78 @@ { struct net_device *dev = (struct net_device *)(ptr); struct vlan_group *grp = NULL; - int i = 0; + int i, flgs; struct net_device *vlandev = NULL; + spin_lock_bh(&vlan_group_lock); + grp = __vlan_find_group(dev->ifindex); + spin_unlock_bh(&vlan_group_lock); + + if (!grp) + goto out; + + /* It is OK that we do not hold the group lock right now, + * as we run under the RTNL lock. + */ + switch (event) { case NETDEV_CHANGEADDR: - /* Ignore for now */ - break; - case NETDEV_GOING_DOWN: /* Ignore for now */ break; case NETDEV_DOWN: - /* TODO: Please review this code. */ - /* put all related VLANs in the down state too. */ - for (grp = p802_1Q_vlan_list; grp != NULL; grp = grp->next) { - int flgs = 0; - - for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { - vlandev = grp->vlan_devices[i]; - if (!vlandev || - (VLAN_DEV_INFO(vlandev)->real_dev != dev) || - (!(vlandev->flags & IFF_UP))) - continue; - - flgs = vlandev->flags; - flgs &= ~IFF_UP; - dev_change_flags(vlandev, flgs); - } + /* Put all VLANs for this dev in the down state too. */ + for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { + vlandev = grp->vlan_devices[i]; + if (!vlandev) + continue; + + flgs = vlandev->flags; + if (!(flgs & IFF_UP)) + continue; + + dev_change_flags(vlandev, flgs & ~IFF_UP); } break; case NETDEV_UP: - /* TODO: Please review this code. */ - /* put all related VLANs in the down state too. */ - for (grp = p802_1Q_vlan_list; grp != NULL; grp = grp->next) { - int flgs; - - for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { - vlandev = grp->vlan_devices[i]; - if (!vlandev || - (VLAN_DEV_INFO(vlandev)->real_dev != dev) || - (vlandev->flags & IFF_UP)) - continue; + /* Put all VLANs for this dev in the up state too. 
*/ + for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { + vlandev = grp->vlan_devices[i]; + if (!vlandev) + continue; - flgs = vlandev->flags; - flgs |= IFF_UP; - dev_change_flags(vlandev, flgs); - } + flgs = vlandev->flags; + if (flgs & IFF_UP) + continue; + + dev_change_flags(vlandev, flgs | IFF_UP); } break; case NETDEV_UNREGISTER: - /* TODO: Please review this code. */ - /* delete all related VLANs. */ - for (grp = p802_1Q_vlan_list; grp != NULL; grp = grp->next) { - for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { - vlandev = grp->vlan_devices[i]; - if (!vlandev || - (VLAN_DEV_INFO(vlandev)->real_dev != dev)) - continue; - - unregister_802_1Q_vlan_dev( - VLAN_DEV_INFO(vlandev)->real_dev->ifindex, - VLAN_DEV_INFO(vlandev)->vlan_id, - 0, 0); - vlandev = NULL; - } + /* Delete all VLANs for this dev. */ + for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { + int ret; + + vlandev = grp->vlan_devices[i]; + if (!vlandev) + continue; + + ret = unregister_vlan_dev(dev, + VLAN_DEV_INFO(vlandev)->vlan_id); + + unregister_netdev(vlandev); + + /* Group was destroyed? */ + if (ret == 1) + break; } break; }; +out: return NOTIFY_DONE; } @@ -612,7 +712,7 @@ * talk to: args.dev1 We also have the * VLAN ID: args.u.VID */ - if (register_802_1Q_vlan_device(args.device1, args.u.VID)) { + if (register_vlan_device(args.device1, args.u.VID)) { err = 0; } else { err = -EINVAL; @@ -623,7 +723,7 @@ /* Here, the args.dev1 is the actual VLAN we want * to get rid of. */ - err = unregister_802_1Q_vlan_device(args.device1); + err = unregister_vlan_device(args.device1); break; default: @@ -636,4 +736,4 @@ return err; } - +MODULE_LICENSE("GPL"); diff -Nru a/net/8021q/vlan.h b/net/8021q/vlan.h --- a/net/8021q/vlan.h Tue Mar 12 13:58:14 2002 +++ b/net/8021q/vlan.h Tue Mar 12 13:58:14 2002 @@ -30,14 +30,48 @@ extern unsigned short vlan_name_type; -/* Counter for how many NON-VLAN protos we've received on a VLAN. */ -extern unsigned long vlan_bad_proto_recvd; - int vlan_ioctl_handler(unsigned long arg); -/* Add some headers for the public VLAN methods. */ -int unregister_802_1Q_vlan_device(const char* vlan_IF_name); -struct net_device *register_802_1Q_vlan_device(const char* eth_IF_name, - unsigned short VID); +#define VLAN_GRP_HASH_SHIFT 5 +#define VLAN_GRP_HASH_SIZE (1 << VLAN_GRP_HASH_SHIFT) +#define VLAN_GRP_HASH_MASK (VLAN_GRP_HASH_SIZE - 1) +extern struct vlan_group *vlan_group_hash[VLAN_GRP_HASH_SIZE]; +extern spinlock_t vlan_group_lock; + +/* Find a VLAN device by the MAC address of it's Ethernet device, and + * it's VLAN ID. The default configuration is to have VLAN's scope + * to be box-wide, so the MAC will be ignored. The mac will only be + * looked at if we are configured to have a seperate set of VLANs per + * each MAC addressable interface. Note that this latter option does + * NOT follow the spec for VLANs, but may be useful for doing very + * large quantities of VLAN MUX/DEMUX onto FrameRelay or ATM PVCs. + * + * Must be invoked with vlan_group_lock held and that lock MUST NOT + * be dropped until a reference is obtained on the returned device. + * You may drop the lock earlier if you are running under the RTNL + * semaphore, however. 
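
The locking rule stated above is easy to get wrong; a caller that is not under the RTNL semaphore would follow roughly this pattern (sketch only; real_dev and vid are whatever device and VLAN ID are being looked up):

	struct net_device *vlandev;

	spin_lock_bh(&vlan_group_lock);
	vlandev = __find_vlan_dev(real_dev, vid);
	if (vlandev)
		dev_hold(vlandev);	/* take the reference before dropping the lock */
	spin_unlock_bh(&vlan_group_lock);

	if (vlandev) {
		/* ... use vlandev ... */
		dev_put(vlandev);
	}
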
+ */ +struct net_device *__find_vlan_dev(struct net_device* real_dev, + unsigned short VID); /* vlan.c */ + +/* found in vlan_dev.c */ +int vlan_dev_rebuild_header(struct sk_buff *skb); +int vlan_skb_recv(struct sk_buff *skb, struct net_device *dev, + struct packet_type* ptype); +int vlan_dev_hard_header(struct sk_buff *skb, struct net_device *dev, + unsigned short type, void *daddr, void *saddr, + unsigned len); +int vlan_dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev); +int vlan_dev_hwaccel_hard_start_xmit(struct sk_buff *skb, struct net_device *dev); +int vlan_dev_change_mtu(struct net_device *dev, int new_mtu); +int vlan_dev_set_mac_address(struct net_device *dev, void* addr); +int vlan_dev_open(struct net_device* dev); +int vlan_dev_stop(struct net_device* dev); +int vlan_dev_init(struct net_device* dev); +void vlan_dev_destruct(struct net_device* dev); +int vlan_dev_set_ingress_priority(char* dev_name, __u32 skb_prio, short vlan_prio); +int vlan_dev_set_egress_priority(char* dev_name, __u32 skb_prio, short vlan_prio); +int vlan_dev_set_vlan_flag(char* dev_name, __u32 flag, short flag_val); +void vlan_dev_set_multicast_list(struct net_device *vlan_dev); #endif /* !(__BEN_VLAN_802_1Q_INC__) */ diff -Nru a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c --- a/net/8021q/vlan_dev.c Tue Mar 12 13:58:16 2002 +++ b/net/8021q/vlan_dev.c Tue Mar 12 13:58:16 2002 @@ -38,12 +38,6 @@ #include #include -struct net_device_stats *vlan_dev_get_stats(struct net_device *dev) -{ - return &(((struct vlan_dev_info *)(dev->priv))->dev_stats); -} - - /* * Rebuild the Ethernet MAC header. This is called after an ARP * (or in future other address resolution) has completed on this @@ -78,6 +72,21 @@ return 0; } +static inline struct sk_buff *vlan_check_reorder_header(struct sk_buff *skb) +{ + if (VLAN_DEV_INFO(skb->dev)->flags & 1) { + skb = skb_share_check(skb, GFP_ATOMIC); + if (skb) { + /* Lifted from Gleb's VLAN code... */ + memmove(skb->data - ETH_HLEN, + skb->data - VLAN_ETH_HLEN, 12); + skb->mac.raw += VLAN_HLEN; + } + } + + return skb; +} + /* * Determine the packet's protocol ID. The rule here is that we * assume 802.3 if the type field is short enough to be a length. @@ -113,7 +122,7 @@ /* vlan_TCI = ntohs(get_unaligned(&vhdr->h_vlan_TCI)); */ vlan_TCI = ntohs(vhdr->h_vlan_TCI); - vid = (vlan_TCI & 0xFFF); + vid = (vlan_TCI & VLAN_VID_MASK); #ifdef VLAN_DEBUG printk(VLAN_DBG __FUNCTION__ ": skb: %p vlan_id: %hx\n", @@ -124,11 +133,18 @@ * and then go on as usual. */ - /* we have 12 bits of vlan ID. */ - /* If it's NULL, we will tag it to be junked below */ - skb->dev = find_802_1Q_vlan_dev(dev, vid); + /* We have 12 bits of vlan ID. + * + * We must not drop the vlan_group_lock until we hold a + * reference to the device (netif_rx does that) or we + * fail. + */ + spin_lock_bh(&vlan_group_lock); + skb->dev = __find_vlan_dev(dev, vid); if (!skb->dev) { + spin_unlock_bh(&vlan_group_lock); + #ifdef VLAN_DEBUG printk(VLAN_DBG __FUNCTION__ ": ERROR: No net_device for VID: %i on dev: %s [%i]\n", (unsigned int)(vid), dev->name, dev->ifindex); @@ -137,6 +153,8 @@ return -1; } + skb->dev->last_rx = jiffies; + /* Bump the rx counters for the VLAN device. 
*/ stats = vlan_dev_get_stats(skb->dev); stats->rx_packets++; @@ -149,6 +167,8 @@ */ if (dev != VLAN_DEV_INFO(skb->dev)->real_dev) { + spin_unlock_bh(&vlan_group_lock); + #ifdef VLAN_DEBUG printk(VLAN_DBG __FUNCTION__ ": dropping skb: %p because came in on wrong device, dev: %s real_dev: %s, skb_dev: %s\n", skb, dev->name, VLAN_DEV_INFO(skb->dev)->real_dev->name, skb->dev->name); @@ -161,7 +181,7 @@ /* * Deal with ingress priority mapping. */ - skb->priority = VLAN_DEV_INFO(skb->dev)->ingress_priority_map[(ntohs(vhdr->h_vlan_TCI) >> 13) & 0x7]; + skb->priority = vlan_get_ingress_priority(skb->dev, ntohs(vhdr->h_vlan_TCI)); #ifdef VLAN_DEBUG printk(VLAN_DBG __FUNCTION__ ": priority: %lu for TCI: %hu (hbo)\n", @@ -174,9 +194,12 @@ switch (skb->pkt_type) { case PACKET_BROADCAST: /* Yeah, stats collect these together.. */ // stats->broadcast ++; // no such counter :-( + break; + case PACKET_MULTICAST: stats->multicast++; break; + case PACKET_OTHERHOST: /* Our lower layer thinks this is not local, let's make sure. * This allows the VLAN to have a different MAC than the underlying @@ -215,6 +238,7 @@ /* TODO: Add a more specific counter here. */ stats->rx_errors++; } + spin_unlock_bh(&vlan_group_lock); return 0; } @@ -243,6 +267,7 @@ /* TODO: Add a more specific counter here. */ stats->rx_errors++; } + spin_unlock_bh(&vlan_group_lock); return 0; } @@ -265,6 +290,24 @@ /* TODO: Add a more specific counter here. */ stats->rx_errors++; } + spin_unlock_bh(&vlan_group_lock); + return 0; +} + +static inline unsigned short vlan_dev_get_egress_qos_mask(struct net_device* dev, + struct sk_buff* skb) +{ + struct vlan_priority_tci_mapping *mp = + VLAN_DEV_INFO(dev)->egress_priority_map[(skb->priority & 0xF)]; + + while (mp) { + if (mp->priority == skb->priority) { + return mp->vlan_qos; /* This should already be shifted to mask + * correctly with the VLAN's TCI + */ + } + mp = mp->next; + } return 0; } @@ -396,8 +439,9 @@ */ if (veth->h_vlan_proto != __constant_htons(ETH_P_8021Q)) { + unsigned short veth_TCI; + /* This is not a VLAN frame...but we can fix that! */ - unsigned short veth_TCI = 0; VLAN_DEV_INFO(dev)->cnt_encap_on_xmit++; #ifdef VLAN_DEBUG @@ -453,65 +497,44 @@ veth->h_vlan_proto, veth->h_vlan_TCI, veth->h_vlan_encapsulated_proto); #endif - dev_queue_xmit(skb); stats->tx_packets++; /* for statics only */ stats->tx_bytes += skb->len; - return 0; -} -int vlan_dev_change_mtu(struct net_device *dev, int new_mtu) -{ - /* TODO: gotta make sure the underlying layer can handle it, - * maybe an IFF_VLAN_CAPABLE flag for devices? - */ - if (VLAN_DEV_INFO(dev)->real_dev->mtu < new_mtu) - return -ERANGE; - - dev->mtu = new_mtu; + dev_queue_xmit(skb); - return new_mtu; + return 0; } -int vlan_dev_open(struct net_device *dev) +int vlan_dev_hwaccel_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) { - if (!(VLAN_DEV_INFO(dev)->real_dev->flags & IFF_UP)) - return -ENETDOWN; + struct net_device_stats *stats = vlan_dev_get_stats(dev); + struct vlan_skb_tx_cookie *cookie; - return 0; -} + stats->tx_packets++; + stats->tx_bytes += skb->len; -int vlan_dev_stop(struct net_device *dev) -{ - vlan_flush_mc_list(dev); - return 0; -} + skb->dev = VLAN_DEV_INFO(dev)->real_dev; + cookie = VLAN_TX_SKB_CB(skb); + cookie->magic = VLAN_TX_COOKIE_MAGIC; + cookie->vlan_tag = (VLAN_DEV_INFO(dev)->vlan_id | + vlan_dev_get_egress_qos_mask(dev, skb)); + + dev_queue_xmit(skb); -int vlan_dev_init(struct net_device *dev) -{ - /* TODO: figure this out, maybe do nothing?? 
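[Editor's sketch] vlan_dev_get_egress_qos_mask() above returns a value that is already shifted so it can simply be OR-ed with the VLAN ID, and the hardware-accelerated transmit path stores exactly that OR in the cookie's vlan_tag. The ingress side confirms the layout (priority taken from TCI bits 15..13, VID masked with VLAN_VID_MASK). A trivial stand-alone illustration of that TCI composition follows; the VLAN_PRIO_SHIFT name and the make_vlan_tag() helper are illustrative only, and the CFI bit is ignored.

#include <stdio.h>

#define VLAN_VID_MASK   0x0fff
#define VLAN_PRIO_SHIFT 13          /* 802.1p priority lives in TCI[15:13] */

static unsigned short make_vlan_tag(unsigned short vid, unsigned short prio)
{
	return (unsigned short)((prio << VLAN_PRIO_SHIFT) | (vid & VLAN_VID_MASK));
}

int main(void)
{
	/* VID 100, priority 5 -> 0xa064 */
	printf("vlan_tag = 0x%04x\n", make_vlan_tag(100, 5));
	return 0;
}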
*/ return 0; } -void vlan_dev_destruct(struct net_device *dev) +int vlan_dev_change_mtu(struct net_device *dev, int new_mtu) { - if (dev) { - vlan_flush_mc_list(dev); - if (dev->priv) { - dev_put(VLAN_DEV_INFO(dev)->real_dev); - if (VLAN_DEV_INFO(dev)->dent) { - printk(KERN_ERR __FUNCTION__ ": dent is NOT NULL!\n"); - - /* If we ever get here, there is a serious bug - * that must be fixed. - */ - } + /* TODO: gotta make sure the underlying layer can handle it, + * maybe an IFF_VLAN_CAPABLE flag for devices? + */ + if (VLAN_DEV_INFO(dev)->real_dev->mtu < new_mtu) + return -ERANGE; - kfree(dev->priv); + dev->mtu = new_mtu; - VLAN_FMEM_DBG("dev->priv free, addr: %p\n", dev->priv); - dev->priv = NULL; - } - } + return new_mtu; } int vlan_dev_set_ingress_priority(char *dev_name, __u32 skb_prio, short vlan_prio) @@ -642,6 +665,124 @@ return 0; } +static inline int vlan_dmi_equals(struct dev_mc_list *dmi1, + struct dev_mc_list *dmi2) +{ + return ((dmi1->dmi_addrlen == dmi2->dmi_addrlen) && + (memcmp(dmi1->dmi_addr, dmi2->dmi_addr, dmi1->dmi_addrlen) == 0)); +} + +/** dmi is a single entry into a dev_mc_list, a single node. mc_list is + * an entire list, and we'll iterate through it. + */ +static int vlan_should_add_mc(struct dev_mc_list *dmi, struct dev_mc_list *mc_list) +{ + struct dev_mc_list *idmi; + + for (idmi = mc_list; idmi != NULL; ) { + if (vlan_dmi_equals(dmi, idmi)) { + if (dmi->dmi_users > idmi->dmi_users) + return 1; + else + return 0; + } else { + idmi = idmi->next; + } + } + + return 1; +} + +static inline void vlan_destroy_mc_list(struct dev_mc_list *mc_list) +{ + struct dev_mc_list *dmi = mc_list; + struct dev_mc_list *next; + + while(dmi) { + next = dmi->next; + kfree(dmi); + dmi = next; + } +} + +static void vlan_copy_mc_list(struct dev_mc_list *mc_list, struct vlan_dev_info *vlan_info) +{ + struct dev_mc_list *dmi, *new_dmi; + + vlan_destroy_mc_list(vlan_info->old_mc_list); + vlan_info->old_mc_list = NULL; + + for (dmi = mc_list; dmi != NULL; dmi = dmi->next) { + new_dmi = kmalloc(sizeof(*new_dmi), GFP_ATOMIC); + if (new_dmi == NULL) { + printk(KERN_ERR "vlan: cannot allocate memory. " + "Multicast may not work properly from now.\n"); + return; + } + + /* Copy whole structure, then make new 'next' pointer */ + *new_dmi = *dmi; + new_dmi->next = vlan_info->old_mc_list; + vlan_info->old_mc_list = new_dmi; + } +} + +static void vlan_flush_mc_list(struct net_device *dev) +{ + struct dev_mc_list *dmi = dev->mc_list; + + while (dmi) { + dev_mc_delete(dev, dmi->dmi_addr, dmi->dmi_addrlen, 0); + printk(KERN_INFO "%s: del %.2x:%.2x:%.2x:%.2x:%.2x:%.2x mcast address from vlan interface\n", + dev->name, + dmi->dmi_addr[0], + dmi->dmi_addr[1], + dmi->dmi_addr[2], + dmi->dmi_addr[3], + dmi->dmi_addr[4], + dmi->dmi_addr[5]); + dmi = dev->mc_list; + } + + /* dev->mc_list is NULL by the time we get here. */ + vlan_destroy_mc_list(VLAN_DEV_INFO(dev)->old_mc_list); + VLAN_DEV_INFO(dev)->old_mc_list = NULL; +} + +int vlan_dev_open(struct net_device *dev) +{ + if (!(VLAN_DEV_INFO(dev)->real_dev->flags & IFF_UP)) + return -ENETDOWN; + + return 0; +} + +int vlan_dev_stop(struct net_device *dev) +{ + vlan_flush_mc_list(dev); + return 0; +} + +int vlan_dev_init(struct net_device *dev) +{ + /* TODO: figure this out, maybe do nothing?? 
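[Editor's sketch] The multicast helpers above keep a private snapshot (old_mc_list) so that vlan_dev_set_multicast_list() can work out which addresses actually need to be added to or removed from the real device. The stand-alone model below reproduces only the vlan_should_add_mc() rule: re-add an address if it is new or if its reference count grew since the snapshot. The struct is simplified (fixed 6-byte addresses, no dmi_addrlen field).

#include <stdio.h>
#include <string.h>

struct mc_entry {
	unsigned char addr[6];
	int users;
	struct mc_entry *next;
};

static int should_add(const struct mc_entry *dmi, const struct mc_entry *old_list)
{
	const struct mc_entry *i;

	for (i = old_list; i; i = i->next)
		if (!memcmp(i->addr, dmi->addr, 6))
			return dmi->users > i->users;   /* grew -> program again */
	return 1;                                       /* not seen before */
}

int main(void)
{
	struct mc_entry old   = { {1, 0, 0x5e, 0, 0, 1}, 1, NULL };
	struct mc_entry same  = { {1, 0, 0x5e, 0, 0, 1}, 1, NULL };
	struct mc_entry more  = { {1, 0, 0x5e, 0, 0, 1}, 2, NULL };
	struct mc_entry fresh = { {1, 0, 0x5e, 0, 0, 2}, 1, NULL };

	printf("same: %d, more users: %d, new addr: %d\n",
	       should_add(&same, &old), should_add(&more, &old),
	       should_add(&fresh, &old));   /* prints 0, 1, 1 */
	return 0;
}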
*/ + return 0; +} + +void vlan_dev_destruct(struct net_device *dev) +{ + if (dev) { + vlan_flush_mc_list(dev); + if (dev->priv) { + if (VLAN_DEV_INFO(dev)->dent) + BUG(); + + kfree(dev->priv); + dev->priv = NULL; + } + } +} + /** Taken from Gleb + Lennert's VLAN code, and modified... */ void vlan_dev_set_multicast_list(struct net_device *vlan_dev) { @@ -706,69 +847,4 @@ /* save multicast list */ vlan_copy_mc_list(vlan_dev->mc_list, VLAN_DEV_INFO(vlan_dev)); } -} - -/** dmi is a single entry into a dev_mc_list, a single node. mc_list is - * an entire list, and we'll iterate through it. - */ -int vlan_should_add_mc(struct dev_mc_list *dmi, struct dev_mc_list *mc_list) -{ - struct dev_mc_list *idmi; - - for (idmi = mc_list; idmi != NULL; ) { - if (vlan_dmi_equals(dmi, idmi)) { - if (dmi->dmi_users > idmi->dmi_users) - return 1; - else - return 0; - } else { - idmi = idmi->next; - } - } - - return 1; -} - -void vlan_copy_mc_list(struct dev_mc_list *mc_list, struct vlan_dev_info *vlan_info) -{ - struct dev_mc_list *dmi, *new_dmi; - - vlan_destroy_mc_list(vlan_info->old_mc_list); - vlan_info->old_mc_list = NULL; - - for (dmi = mc_list; dmi != NULL; dmi = dmi->next) { - new_dmi = kmalloc(sizeof(*new_dmi), GFP_ATOMIC); - if (new_dmi == NULL) { - printk(KERN_ERR "vlan: cannot allocate memory. " - "Multicast may not work properly from now.\n"); - return; - } - - /* Copy whole structure, then make new 'next' pointer */ - *new_dmi = *dmi; - new_dmi->next = vlan_info->old_mc_list; - vlan_info->old_mc_list = new_dmi; - } -} - -void vlan_flush_mc_list(struct net_device *dev) -{ - struct dev_mc_list *dmi = dev->mc_list; - - while (dmi) { - dev_mc_delete(dev, dmi->dmi_addr, dmi->dmi_addrlen, 0); - printk(KERN_INFO "%s: del %.2x:%.2x:%.2x:%.2x:%.2x:%.2x mcast address from vlan interface\n", - dev->name, - dmi->dmi_addr[0], - dmi->dmi_addr[1], - dmi->dmi_addr[2], - dmi->dmi_addr[3], - dmi->dmi_addr[4], - dmi->dmi_addr[5]); - dmi = dev->mc_list; - } - - /* dev->mc_list is NULL by the time we get here. 
*/ - vlan_destroy_mc_list(VLAN_DEV_INFO(dev)->old_mc_list); - VLAN_DEV_INFO(dev)->old_mc_list = NULL; } diff -Nru a/net/8021q/vlanproc.c b/net/8021q/vlanproc.c --- a/net/8021q/vlanproc.c Tue Mar 12 13:58:15 2002 +++ b/net/8021q/vlanproc.c Tue Mar 12 13:58:15 2002 @@ -272,7 +272,7 @@ { struct net_device *vlandev = NULL; struct vlan_group *grp = NULL; - int i = 0; + int h, i; char *nm_type = NULL; struct vlan_dev_info *dev_info = NULL; @@ -292,46 +292,34 @@ nm_type = "UNKNOWN"; } - cnt += sprintf(buf + cnt, "Name-Type: %s bad_proto_recvd: %lu\n", - nm_type, vlan_bad_proto_recvd); + cnt += sprintf(buf + cnt, "Name-Type: %s\n", nm_type); - for (grp = p802_1Q_vlan_list; grp != NULL; grp = grp->next) { - /* loop through all devices for this device */ -#ifdef VLAN_DEBUG - printk(VLAN_DBG __FUNCTION__ ": found a group, addr: %p\n",grp); -#endif - for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { - vlandev = grp->vlan_devices[i]; - if (!vlandev) - continue; -#ifdef VLAN_DEBUG - printk(VLAN_DBG __FUNCTION__ - ": found a vlan_dev, addr: %p\n", vlandev); -#endif - if ((cnt + 100) > VLAN_PROC_BUFSZ) { - if ((cnt+strlen(term_msg)) < VLAN_PROC_BUFSZ) - cnt += sprintf(buf+cnt, "%s", term_msg); - - return cnt; - } - if (!vlandev->priv) { - printk(KERN_ERR __FUNCTION__ - ": ERROR: vlandev->priv is NULL\n"); - continue; + spin_lock_bh(&vlan_group_lock); + for (h = 0; h < VLAN_GRP_HASH_SIZE; h++) { + for (grp = vlan_group_hash[h]; grp != NULL; grp = grp->next) { + for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { + vlandev = grp->vlan_devices[i]; + if (!vlandev) + continue; + + if ((cnt + 100) > VLAN_PROC_BUFSZ) { + if ((cnt+strlen(term_msg)) < VLAN_PROC_BUFSZ) + cnt += sprintf(buf+cnt, "%s", term_msg); + + goto out; + } + + dev_info = VLAN_DEV_INFO(vlandev); + cnt += sprintf(buf + cnt, "%-15s| %d | %s\n", + vlandev->name, + dev_info->vlan_id, + dev_info->real_dev->name); } - - dev_info = VLAN_DEV_INFO(vlandev); - -#ifdef VLAN_DEBUG - printk(VLAN_DBG __FUNCTION__ - ": got a good vlandev, addr: %p\n", - VLAN_DEV_INFO(vlandev)); -#endif - cnt += sprintf(buf + cnt, "%-15s| %d | %s\n", - vlandev->name, dev_info->vlan_id, - dev_info->real_dev->name); } } +out: + spin_unlock_bh(&vlan_group_lock); + return cnt; } @@ -365,11 +353,7 @@ int cnt = 0; int i; -#ifdef VLAN_DEBUG - printk(VLAN_DBG __FUNCTION__ ": vlandev: %p\n", vlandev); -#endif - - if ((vlandev == NULL) || (!vlandev->priv_flags & IFF_802_1Q_VLAN)) + if ((vlandev == NULL) || (!(vlandev->priv_flags & IFF_802_1Q_VLAN))) return 0; dev_info = VLAN_DEV_INFO(vlandev); @@ -426,7 +410,7 @@ cnt += sprintf(buf + cnt, "EGRESSS priority Mappings: "); - for (i = 0; i<16; i++) { + for (i = 0; i < 16; i++) { mp = dev_info->egress_priority_map[i]; while (mp) { cnt += sprintf(buf + cnt, "%lu:%hu ", diff -Nru a/net/Config.in b/net/Config.in --- a/net/Config.in Tue Mar 12 13:58:15 2002 +++ b/net/Config.in Tue Mar 12 13:58:15 2002 @@ -44,10 +44,8 @@ tristate ' Multi-Protocol Over ATM (MPOA) support' CONFIG_ATM_MPOA fi fi - - dep_tristate '802.1Q VLAN Support (EXPERIMENTAL)' CONFIG_VLAN_8021Q $CONFIG_EXPERIMENTAL - fi +tristate '802.1Q VLAN Support' CONFIG_VLAN_8021Q comment ' ' tristate 'The IPX protocol' CONFIG_IPX diff -Nru a/net/core/wireless.c b/net/core/wireless.c --- a/net/core/wireless.c Tue Mar 12 13:58:16 2002 +++ b/net/core/wireless.c Tue Mar 12 13:58:16 2002 @@ -2,7 +2,7 @@ * This file implement the Wireless Extensions APIs. * * Authors : Jean Tourrilhes - HPL - - * Copyright (c) 1997-2001 Jean Tourrilhes, All Rights Reserved. 
+ * Copyright (c) 1997-2002 Jean Tourrilhes, All Rights Reserved. * * (As all part of the Linux kernel, this file is GPL) */ @@ -25,6 +25,14 @@ * o Added iw_handler handling ;-) * o Added standard ioctl description * o Initial dumb commit strategy based on orinoco.c + * + * v3 - 19.12.01 - Jean II + * o Make sure we don't go out of standard_ioctl[] in ioctl_standard_call + * o Fix /proc/net/wireless to handle __u8 to __s8 change in iwqual + * o Add event dispatcher function + * o Add event description + * o Propagate events as rtnetlink IFLA_WIRELESS option + * o Generate event on selected SET requests */ /***************************** INCLUDES *****************************/ @@ -33,6 +41,7 @@ #include /* Not needed ??? */ #include /* off_t */ #include /* struct ifreq, dev_get_by_name() */ +#include /* rtnetlink stuff */ #include /* Pretty obvious */ #include /* New driver API */ @@ -44,14 +53,23 @@ /* Debuging stuff */ #undef WE_IOCTL_DEBUG /* Debug IOCTL API */ +#undef WE_EVENT_DEBUG /* Debug Event dispatcher */ + +/* Options */ +#define WE_EVENT_NETLINK /* Propagate events using rtnetlink */ +#define WE_SET_EVENT /* Generate an event on some set commands */ /************************* GLOBAL VARIABLES *************************/ /* * You should not use global variables, because or re-entrancy. * On our case, it's only const, so it's OK... */ +/* + * Meta-data about all the standard Wireless Extension request we + * know about. + */ static const struct iw_ioctl_description standard_ioctl[] = { - /* SIOCSIWCOMMIT (internal) */ + /* SIOCSIWCOMMIT */ { IW_HEADER_TYPE_NULL, 0, 0, 0, 0, 0}, /* SIOCGIWNAME */ { IW_HEADER_TYPE_CHAR, 0, 0, 0, 0, IW_DESCR_FLAG_DUMP}, @@ -99,10 +117,10 @@ { IW_HEADER_TYPE_NULL, 0, 0, 0, 0, 0}, /* SIOCGIWAPLIST */ { IW_HEADER_TYPE_POINT, 0, (sizeof(struct sockaddr) + sizeof(struct iw_quality)), 0, IW_MAX_AP, 0}, - /* -- hole -- */ - { IW_HEADER_TYPE_NULL, 0, 0, 0, 0, 0}, - /* -- hole -- */ - { IW_HEADER_TYPE_NULL, 0, 0, 0, 0, 0}, + /* SIOCSIWSCAN */ + { IW_HEADER_TYPE_PARAM, 0, 0, 0, 0, 0}, + /* SIOCGIWSCAN */ + { IW_HEADER_TYPE_POINT, 0, 1, 0, IW_SCAN_MAX_DATA, 0}, /* SIOCSIWESSID */ { IW_HEADER_TYPE_POINT, 0, 1, 0, IW_ESSID_MAX_SIZE, IW_DESCR_FLAG_EVENT}, /* SIOCGIWESSID */ @@ -136,7 +154,7 @@ /* SIOCGIWRETRY */ { IW_HEADER_TYPE_PARAM, 0, 0, 0, 0, 0}, /* SIOCSIWENCODE */ - { IW_HEADER_TYPE_POINT, 4, 1, 0, IW_ENCODING_TOKEN_MAX, IW_DESCR_FLAG_EVENT | IW_DESCR_FLAG_RESTRICT}, + { IW_HEADER_TYPE_POINT, 0, 1, 0, IW_ENCODING_TOKEN_MAX, IW_DESCR_FLAG_EVENT | IW_DESCR_FLAG_RESTRICT}, /* SIOCGIWENCODE */ { IW_HEADER_TYPE_POINT, 0, 1, 0, IW_ENCODING_TOKEN_MAX, IW_DESCR_FLAG_DUMP | IW_DESCR_FLAG_RESTRICT}, /* SIOCSIWPOWER */ @@ -144,9 +162,38 @@ /* SIOCGIWPOWER */ { IW_HEADER_TYPE_PARAM, 0, 0, 0, 0, 0}, }; +static const int standard_ioctl_num = (sizeof(standard_ioctl) / + sizeof(struct iw_ioctl_description)); + +/* + * Meta-data about all the additional standard Wireless Extension events + * we know about. 
+ */ +static const struct iw_ioctl_description standard_event[] = { + /* IWEVTXDROP */ + { IW_HEADER_TYPE_ADDR, 0, 0, 0, 0, 0}, + /* IWEVQUAL */ + { IW_HEADER_TYPE_QUAL, 0, 0, 0, 0, 0}, +}; +static const int standard_event_num = (sizeof(standard_event) / + sizeof(struct iw_ioctl_description)); /* Size (in bytes) of the various private data types */ -char priv_type_size[] = { 0, 1, 1, 0, 4, 4, 0, 0 }; +static const char priv_type_size[] = { 0, 1, 1, 0, 4, 4, 0, 0 }; + +/* Size (in bytes) of various events */ +static const int event_type_size[] = { + IW_EV_LCP_LEN, + 0, + IW_EV_CHAR_LEN, + 0, + IW_EV_UINT_LEN, + IW_EV_FREQ_LEN, + IW_EV_POINT_LEN, /* Without variable payload */ + IW_EV_PARAM_LEN, + IW_EV_ADDR_LEN, + IW_EV_QUAL_LEN, +}; /************************ COMMON SUBROUTINES ************************/ /* @@ -162,7 +209,8 @@ static inline iw_handler get_handler(struct net_device *dev, unsigned int cmd) { - unsigned int index; /* MUST be unsigned */ + /* Don't "optimise" the following variable, it will crash */ + unsigned int index; /* *MUST* be unsigned */ /* Check if we have some wireless handlers defined */ if(dev->wireless_handlers == NULL) @@ -269,9 +317,9 @@ stats->status, stats->qual.qual, stats->qual.updated & 1 ? '.' : ' ', - stats->qual.level, + ((__u8) stats->qual.level), stats->qual.updated & 2 ? '.' : ' ', - stats->qual.noise, + ((__u8) stats->qual.noise), stats->qual.updated & 4 ? '.' : ' ', stats->discard.nwid, stats->discard.code, @@ -423,12 +471,14 @@ int ret = -EINVAL; /* Get the description of the IOCTL */ + if((cmd - SIOCIWFIRST) >= standard_ioctl_num) + return -EOPNOTSUPP; descr = &(standard_ioctl[cmd - SIOCIWFIRST]); #ifdef WE_IOCTL_DEBUG - printk(KERN_DEBUG "%s : Found standard handler for 0x%04X\n", + printk(KERN_DEBUG "%s (WE) : Found standard handler for 0x%04X\n", ifr->ifr_name, cmd); - printk(KERN_DEBUG "Header type : %d, token type : %d, token_size : %d, max_token : %d\n", descr->header_type, descr->token_type, descr->token_size, descr->max_tokens); + printk(KERN_DEBUG "%s (WE) : Header type : %d, Token type : %d, size : %d, token : %d\n", dev->name, descr->header_type, descr->token_type, descr->token_size, descr->max_tokens); #endif /* WE_IOCTL_DEBUG */ /* Prepare the call */ @@ -437,8 +487,16 @@ /* Check if we have a pointer to user space data or not */ if(descr->header_type != IW_HEADER_TYPE_POINT) { + /* No extra arguments. Trivial to handle */ ret = handler(dev, &info, &(iwr->u), NULL); + +#ifdef WE_SET_EVENT + /* Generate an event to notify listeners of the change */ + if((descr->flags & IW_DESCR_FLAG_EVENT) && + ((ret == 0) || (ret == -EIWCOMMIT))) + wireless_send_event(dev, cmd, &(iwr->u), NULL); +#endif /* WE_SET_EVENT */ } else { char * extra; int err; @@ -466,8 +524,8 @@ } #ifdef WE_IOCTL_DEBUG - printk(KERN_DEBUG "Malloc %d bytes\n", - descr->max_tokens * descr->token_size); + printk(KERN_DEBUG "%s (WE) : Malloc %d bytes\n", + dev->name, descr->max_tokens * descr->token_size); #endif /* WE_IOCTL_DEBUG */ /* Always allocate for max space. 
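[Editor's sketch] Both get_handler() (with its "MUST be unsigned" index) and the new range check on cmd - SIOCIWFIRST in ioctl_standard_call() rely on the same trick: with unsigned arithmetic, a command number below the table base wraps to a huge value and is rejected by a single upper-bound test. A stand-alone illustration is below; the base matches the usual SIOCIWFIRST value, but TABLE_SIZE is made up and does not reflect the real table length.

#include <stdio.h>

#define TABLE_BASE 0x8B00u   /* usual SIOCIWFIRST value */
#define TABLE_SIZE 54u       /* illustrative only */

static int in_table(unsigned int cmd)
{
	unsigned int index = cmd - TABLE_BASE;  /* wraps if cmd < TABLE_BASE */

	return index < TABLE_SIZE;              /* one check covers both ends */
}

int main(void)
{
	printf("0x8B01 -> %d, 0x8AFF -> %d, 0x8BFF -> %d\n",
	       in_table(0x8B01), in_table(0x8AFF), in_table(0x8BFF)); /* 1 0 0 */
	return 0;
}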
Easier, and won't last @@ -488,7 +546,8 @@ return -EFAULT; } #ifdef WE_IOCTL_DEBUG - printk(KERN_DEBUG "Got %d bytes\n", + printk(KERN_DEBUG "%s (WE) : Got %d bytes\n", + dev->name, iwr->u.data.length * descr->token_size); #endif /* WE_IOCTL_DEBUG */ } @@ -504,11 +563,26 @@ if (err) ret = -EFAULT; #ifdef WE_IOCTL_DEBUG - printk(KERN_DEBUG "Wrote %d bytes\n", + printk(KERN_DEBUG "%s (WE) : Wrote %d bytes\n", + dev->name, iwr->u.data.length * descr->token_size); #endif /* WE_IOCTL_DEBUG */ } +#ifdef WE_SET_EVENT + /* Generate an event to notify listeners of the change */ + if((descr->flags & IW_DESCR_FLAG_EVENT) && + ((ret == 0) || (ret == -EIWCOMMIT))) { + if(descr->flags & IW_DESCR_FLAG_RESTRICT) + /* If the event is restricted, don't + * export the payload */ + wireless_send_event(dev, cmd, &(iwr->u), NULL); + else + wireless_send_event(dev, cmd, &(iwr->u), + extra); + } +#endif /* WE_SET_EVENT */ + /* Cleanup - I told you it wasn't that long ;-) */ kfree(extra); } @@ -558,11 +632,12 @@ } #ifdef WE_IOCTL_DEBUG - printk(KERN_DEBUG "%s : Found private handler for 0x%04X\n", + printk(KERN_DEBUG "%s (WE) : Found private handler for 0x%04X\n", ifr->ifr_name, cmd); if(descr) { - printk(KERN_DEBUG "Name %s, set %X, get %X\n", - descr->name, descr->set_args, descr->get_args); + printk(KERN_DEBUG "%s (WE) : Name %s, set %X, get %X\n", + dev->name, descr->name, + descr->set_args, descr->get_args); } #endif /* WE_IOCTL_DEBUG */ @@ -617,7 +692,8 @@ } #ifdef WE_IOCTL_DEBUG - printk(KERN_DEBUG "Malloc %d bytes\n", extra_size); + printk(KERN_DEBUG "%s (WE) : Malloc %d bytes\n", + dev->name, extra_size); #endif /* WE_IOCTL_DEBUG */ /* Always allocate for max space. Easier, and won't last @@ -636,7 +712,8 @@ return -EFAULT; } #ifdef WE_IOCTL_DEBUG - printk(KERN_DEBUG "Got %d elem\n", iwr->u.data.length); + printk(KERN_DEBUG "%s (WE) : Got %d elem\n", + dev->name, iwr->u.data.length); #endif /* WE_IOCTL_DEBUG */ } @@ -650,8 +727,8 @@ if (err) ret = -EFAULT; #ifdef WE_IOCTL_DEBUG - printk(KERN_DEBUG "Wrote %d elem\n", - iwr->u.data.length); + printk(KERN_DEBUG "%s (WE) : Wrote %d elem\n", + dev->name, iwr->u.data.length); #endif /* WE_IOCTL_DEBUG */ } @@ -730,4 +807,178 @@ } /* Not reached */ return -EINVAL; +} + +/************************* EVENT PROCESSING *************************/ +/* + * Process events generated by the wireless layer or the driver. + * Most often, the event will be propagated through rtnetlink + */ + +#ifdef WE_EVENT_NETLINK +/* "rtnl" is defined in net/core/rtnetlink.c, but we need it here. + * It is declared in */ + +/* ---------------------------------------------------------------- */ +/* + * Fill a rtnetlink message with our event data. + * Note that we propage only the specified event and don't dump the + * current wireless config. Dumping the wireless config is far too + * expensive (for each parameter, the driver need to query the hardware). 
+ */ +static inline int rtnetlink_fill_iwinfo(struct sk_buff * skb, + struct net_device * dev, + int type, + char * event, + int event_len) +{ + struct ifinfomsg *r; + struct nlmsghdr *nlh; + unsigned char *b = skb->tail; + + nlh = NLMSG_PUT(skb, 0, 0, type, sizeof(*r)); + r = NLMSG_DATA(nlh); + r->ifi_family = AF_UNSPEC; + r->ifi_type = dev->type; + r->ifi_index = dev->ifindex; + r->ifi_flags = dev->flags; + r->ifi_change = 0; /* Wireless changes don't affect those flags */ + + /* Add the wireless events in the netlink packet */ + RTA_PUT(skb, IFLA_WIRELESS, + event_len, event); + + nlh->nlmsg_len = skb->tail - b; + return skb->len; + +nlmsg_failure: +rtattr_failure: + skb_trim(skb, b - skb->data); + return -1; +} + +/* ---------------------------------------------------------------- */ +/* + * Create and broadcast and send it on the standard rtnetlink socket + * This is a pure clone rtmsg_ifinfo() in net/core/rtnetlink.c + * Andrzej Krzysztofowicz mandated that I used a IFLA_XXX field + * within a RTM_NEWLINK event. + */ +static inline void rtmsg_iwinfo(struct net_device * dev, + char * event, + int event_len) +{ + struct sk_buff *skb; + int size = NLMSG_GOODSIZE; + + skb = alloc_skb(size, GFP_ATOMIC); + if (!skb) + return; + + if (rtnetlink_fill_iwinfo(skb, dev, RTM_NEWLINK, + event, event_len) < 0) { + kfree_skb(skb); + return; + } + NETLINK_CB(skb).dst_groups = RTMGRP_LINK; + netlink_broadcast(rtnl, skb, 0, RTMGRP_LINK, GFP_ATOMIC); +} +#endif /* WE_EVENT_NETLINK */ + +/* ---------------------------------------------------------------- */ +/* + * Main event dispatcher. Called from other parts and drivers. + * Send the event on the apropriate channels. + * May be called from interrupt context. + */ +void wireless_send_event(struct net_device * dev, + unsigned int cmd, + union iwreq_data * wrqu, + char * extra) +{ + const struct iw_ioctl_description * descr = NULL; + int extra_len = 0; + struct iw_event *event; /* Mallocated whole event */ + int event_len; /* Its size */ + int hdr_len; /* Size of the event header */ + /* Don't "optimise" the following variable, it will crash */ + unsigned cmd_index; /* *MUST* be unsigned */ + + /* Get the description of the IOCTL */ + if(cmd <= SIOCIWLAST) { + cmd_index = cmd - SIOCIWFIRST; + if(cmd_index < standard_ioctl_num) + descr = &(standard_ioctl[cmd_index]); + } else { + cmd_index = cmd - IWEVFIRST; + if(cmd_index < standard_event_num) + descr = &(standard_event[cmd_index]); + } + /* Don't accept unknown events */ + if(descr == NULL) { + /* Note : we don't return an error to the driver, because + * the driver would not know what to do about it. It can't + * return an error to the user, because the event is not + * initiated by a user request. + * The best the driver could do is to log an error message. + * We will do it ourselves instead... 
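[Editor's sketch] The event path introduced here broadcasts each wireless event as an RTM_NEWLINK message carrying an IFLA_WIRELESS attribute on the RTMGRP_LINK group (see rtnetlink_fill_iwinfo() and rtmsg_iwinfo() just below). A minimal user-space listener for those messages might look like the sketch that follows; error handling is trimmed and it assumes kernel headers new enough to define IFLA_WIRELESS.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
	char buf[4096];
	struct sockaddr_nl snl;
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	memset(&snl, 0, sizeof(snl));
	snl.nl_family = AF_NETLINK;
	snl.nl_groups = RTMGRP_LINK;    /* same group the kernel broadcasts to */
	bind(fd, (struct sockaddr *)&snl, sizeof(snl));

	for (;;) {
		int len = recv(fd, buf, sizeof(buf), 0);
		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

		if (len <= 0)
			continue;
		for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
			struct ifinfomsg *ifi;
			struct rtattr *rta;
			int alen;

			if (nlh->nlmsg_type != RTM_NEWLINK)
				continue;
			ifi = NLMSG_DATA(nlh);
			alen = IFLA_PAYLOAD(nlh);
			for (rta = IFLA_RTA(ifi); RTA_OK(rta, alen);
			     rta = RTA_NEXT(rta, alen))
				if (rta->rta_type == IFLA_WIRELESS)
					printf("wireless event, ifindex %d, %d bytes\n",
					       ifi->ifi_index, (int)RTA_PAYLOAD(rta));
		}
	}
	return 0;
}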
+ */ + printk(KERN_ERR "%s (WE) : Invalid Wireless Event (0x%04X)\n", + dev->name, cmd); + return; + } +#ifdef WE_EVENT_DEBUG + printk(KERN_DEBUG "%s (WE) : Got event 0x%04X\n", + dev->name, cmd); + printk(KERN_DEBUG "%s (WE) : Header type : %d, Token type : %d, size : %d, token : %d\n", dev->name, descr->header_type, descr->token_type, descr->token_size, descr->max_tokens); +#endif /* WE_EVENT_DEBUG */ + + /* Check extra parameters and set extra_len */ + if(descr->header_type == IW_HEADER_TYPE_POINT) { + /* Check if number of token fits within bounds */ + if(wrqu->data.length > descr->max_tokens) { + printk(KERN_ERR "%s (WE) : Wireless Event too big (%d)\n", dev->name, wrqu->data.length); + return; + } + if(wrqu->data.length < descr->min_tokens) { + printk(KERN_ERR "%s (WE) : Wireless Event too small (%d)\n", dev->name, wrqu->data.length); + return; + } + /* Calculate extra_len - extra is NULL for restricted events */ + if(extra != NULL) + extra_len = wrqu->data.length * descr->token_size; +#ifdef WE_EVENT_DEBUG + printk(KERN_DEBUG "%s (WE) : Event 0x%04X, tokens %d, extra_len %d\n", dev->name, cmd, wrqu->data.length, extra_len); +#endif /* WE_EVENT_DEBUG */ + } + + /* Total length of the event */ + hdr_len = event_type_size[descr->header_type]; + event_len = hdr_len + extra_len; + +#ifdef WE_EVENT_DEBUG + printk(KERN_DEBUG "%s (WE) : Event 0x%04X, hdr_len %d, event_len %d\n", dev->name, cmd, hdr_len, event_len); +#endif /* WE_EVENT_DEBUG */ + + /* Create temporary buffer to hold the event */ + event = kmalloc(event_len, GFP_ATOMIC); + if(event == NULL) + return; + + /* Fill event */ + event->len = event_len; + event->cmd = cmd; + memcpy(&event->u, wrqu, hdr_len - IW_EV_LCP_LEN); + if(extra != NULL) + memcpy(((char *) event) + hdr_len, extra, extra_len); + +#ifdef WE_EVENT_NETLINK + /* rtnetlink event channel */ + rtmsg_iwinfo(dev, (char *) event, event_len); +#endif /* WE_EVENT_NETLINK */ + + /* Cleanup */ + kfree(event); + + return; /* Always success, I guess ;-) */ } diff -Nru a/net/econet/af_econet.c b/net/econet/af_econet.c --- a/net/econet/af_econet.c Tue Mar 12 13:58:15 2002 +++ b/net/econet/af_econet.c Tue Mar 12 13:58:15 2002 @@ -554,7 +554,7 @@ memset(eo, 0, sizeof(*eo)); sk->zapped=0; sk->family = PF_ECONET; - sk->num = protocol; + eo->num = protocol; sklist_insert_socket(&econet_sklist, sk); return(0); diff -Nru a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c --- a/net/ipv4/af_inet.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/af_inet.c Tue Mar 12 13:58:15 2002 @@ -270,14 +270,15 @@ static int inet_autobind(struct sock *sk) { + struct inet_opt *inet = inet_sk(sk); /* We may need to bind the socket. */ lock_sock(sk); - if (sk->num == 0) { + if (!inet->num) { if (sk->prot->get_port(sk, 0) != 0) { release_sock(sk); return -EAGAIN; } - sk->sport = htons(sk->num); + inet->sport = htons(inet->num); } release_sock(sk); return 0; @@ -397,7 +398,7 @@ inet = inet_sk(sk); if (SOCK_RAW == sock->type) { - sk->num = protocol; + inet->num = protocol; if (IPPROTO_RAW == protocol) inet->hdrincl = 1; } @@ -430,13 +431,13 @@ atomic_inc(&inet_sock_nr); #endif - if (sk->num) { + if (inet->num) { /* It assumes that any protocol which allows * the user to assign a number at socket * creation time automatically * shares. */ - sk->sport = htons(sk->num); + inet->sport = htons(inet->num); /* Add to protocol hash chains. */ sk->prot->hash(sk); @@ -551,28 +552,27 @@ /* Check these errors (active socket, double bind). 
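[Editor's sketch] For completeness, this is roughly how a driver would feed the wireless_send_event() dispatcher completed above, here pushing a link-quality sample as an IWEVQUAL event. The function below is hypothetical driver code, not part of this patch, and it assumes the usual <linux/wireless.h> definitions of union iwreq_data and struct iw_quality; it is kernel-context code and not runnable on its own.

/* Hypothetical driver helper: report signal quality to user space. */
static void example_report_quality(struct net_device *dev,
				   __u8 qual, __u8 level, __u8 noise)
{
	union iwreq_data wrqu;

	memset(&wrqu, 0, sizeof(wrqu));
	wrqu.qual.qual = qual;
	wrqu.qual.level = level;
	wrqu.qual.noise = noise;
	wrqu.qual.updated = 0x7;	/* all three fields are valid */

	/* IW_HEADER_TYPE_QUAL events carry no extra payload. */
	wireless_send_event(dev, IWEVQUAL, &wrqu, NULL);
}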
*/ err = -EINVAL; - if ((sk->state != TCP_CLOSE) || - (sk->num != 0)) + if (sk->state != TCP_CLOSE || inet->num) goto out; - sk->rcv_saddr = sk->saddr = addr->sin_addr.s_addr; + inet->rcv_saddr = inet->saddr = addr->sin_addr.s_addr; if (chk_addr_ret == RTN_MULTICAST || chk_addr_ret == RTN_BROADCAST) - sk->saddr = 0; /* Use device */ + inet->saddr = 0; /* Use device */ /* Make sure we are allowed to bind here. */ if (sk->prot->get_port(sk, snum) != 0) { - sk->saddr = sk->rcv_saddr = 0; + inet->saddr = inet->rcv_saddr = 0; err = -EADDRINUSE; goto out; } - if (sk->rcv_saddr) + if (inet->rcv_saddr) sk->userlocks |= SOCK_BINDADDR_LOCK; if (snum) sk->userlocks |= SOCK_BINDPORT_LOCK; - sk->sport = htons(sk->num); - sk->daddr = 0; - sk->dport = 0; + inet->sport = htons(inet->num); + inet->daddr = 0; + inet->dport = 0; sk_dst_reset(sk); err = 0; out: @@ -588,7 +588,7 @@ if (uaddr->sa_family == AF_UNSPEC) return sk->prot->disconnect(sk, flags); - if (sk->num==0 && inet_autobind(sk) != 0) + if (!inet_sk(sk)->num && inet_autobind(sk)) return -EAGAIN; return sk->prot->connect(sk, (struct sockaddr *)uaddr, addr_len); } @@ -627,6 +627,7 @@ int addr_len, int flags) { struct sock *sk=sock->sk; + struct inet_opt *inet = inet_sk(sk); int err; long timeo; @@ -655,10 +656,10 @@ goto out; err = -EAGAIN; - if (sk->num == 0) { + if (!inet->num) { if (sk->prot->get_port(sk, 0) != 0) goto out; - sk->sport = htons(sk->num); + inet->sport = htons(inet->num); } err = sk->prot->connect(sk, uaddr, addr_len); @@ -748,21 +749,22 @@ int *uaddr_len, int peer) { struct sock *sk = sock->sk; + struct inet_opt *inet = inet_sk(sk); struct sockaddr_in *sin = (struct sockaddr_in *)uaddr; sin->sin_family = AF_INET; if (peer) { - if (!sk->dport) + if (!inet->dport) return -ENOTCONN; if (((1<state)&(TCPF_CLOSE|TCPF_SYN_SENT)) && peer == 1) return -ENOTCONN; - sin->sin_port = sk->dport; - sin->sin_addr.s_addr = sk->daddr; + sin->sin_port = inet->dport; + sin->sin_addr.s_addr = inet->daddr; } else { - __u32 addr = sk->rcv_saddr; + __u32 addr = inet->rcv_saddr; if (!addr) - addr = sk->saddr; - sin->sin_port = sk->sport; + addr = inet->saddr; + sin->sin_port = inet->sport; sin->sin_addr.s_addr = addr; } *uaddr_len = sizeof(*sin); @@ -792,7 +794,7 @@ struct sock *sk = sock->sk; /* We may need to bind the socket. */ - if (sk->num==0 && inet_autobind(sk) != 0) + if (!inet_sk(sk)->num && inet_autobind(sk)) return -EAGAIN; return sk->prot->sendmsg(sk, msg, size); diff -Nru a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c --- a/net/ipv4/ip_input.c Tue Mar 12 13:58:14 2002 +++ b/net/ipv4/ip_input.c Tue Mar 12 13:58:14 2002 @@ -166,7 +166,7 @@ /* If socket is bound to an interface, only report * the packet if it came from that interface. 
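[Editor's sketch] Most of the ipv4 hunks in this patch apply one mechanical transformation: address and port state moves out of struct sock into the protocol-private struct inet_opt, reached through inet_sk(). The kernel-context sketch below distils that new access pattern; example_fill_peer() is a hypothetical helper that simply mirrors the converted inet_getname()/v4_addr2sockaddr() code.

/* Hypothetical helper showing the inet_sk() access pattern. */
static void example_fill_peer(struct sock *sk, struct sockaddr_in *sin)
{
	struct inet_opt *inet = inet_sk(sk);	/* was: sk->daddr, sk->dport */

	sin->sin_family = AF_INET;
	sin->sin_addr.s_addr = inet->daddr;
	sin->sin_port = inet->dport;
}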
*/ - if (sk && sk->num == protocol + if (sk && inet_sk(sk)->num == protocol && ((sk->bound_dev_if == 0) || (sk->bound_dev_if == skb->dev->ifindex))) { if (skb->nh.iph->frag_off & htons(IP_MF|IP_OFFSET)) { diff -Nru a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c --- a/net/ipv4/ip_output.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/ip_output.c Tue Mar 12 13:58:15 2002 @@ -135,9 +135,10 @@ iph->version = 4; iph->ihl = 5; iph->tos = inet->tos; - iph->frag_off = 0; if (ip_dont_fragment(sk, &rt->u.dst)) - iph->frag_off |= htons(IP_DF); + iph->frag_off = __constant_htons(IP_DF); + else + iph->frag_off = 0; iph->ttl = inet->ttl; iph->daddr = rt->rt_dst; iph->saddr = rt->rt_src; @@ -308,9 +309,6 @@ if (skb->len > rt->u.dst.pmtu) goto fragment; - if (ip_dont_fragment(sk, &rt->u.dst)) - iph->frag_off |= __constant_htons(IP_DF); - ip_select_ident(iph, &rt->u.dst, sk); /* Add an IP checksum. */ @@ -324,7 +322,6 @@ /* Reject packet ONLY if TCP might fragment * it itself, if were careful enough. */ - iph->frag_off |= __constant_htons(IP_DF); NETDEBUG(printk(KERN_DEBUG "sending pkt_too_big to self\n")); icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, @@ -360,7 +357,7 @@ u32 daddr; /* Use correct destination address if we have options. */ - daddr = sk->daddr; + daddr = inet->daddr; if(opt && opt->srr) daddr = opt->faddr; @@ -368,7 +365,7 @@ * keep trying until route appears or the connection times itself * out. */ - if (ip_route_output(&rt, daddr, sk->saddr, + if (ip_route_output(&rt, daddr, inet->saddr, RT_CONN_FLAGS(sk), sk->bound_dev_if)) goto no_route; @@ -385,7 +382,10 @@ iph = (struct iphdr *) skb_push(skb, sizeof(struct iphdr) + (opt ? opt->optlen : 0)); *((__u16 *)iph) = htons((4 << 12) | (5 << 8) | (inet->tos & 0xff)); iph->tot_len = htons(skb->len); - iph->frag_off = 0; + if (ip_dont_fragment(sk, &rt->u.dst)) + iph->frag_off = __constant_htons(IP_DF); + else + iph->frag_off = 0; iph->ttl = inet->ttl; iph->protocol = sk->protocol; iph->saddr = rt->rt_src; @@ -395,7 +395,7 @@ if(opt && opt->optlen) { iph->ihl += opt->optlen >> 2; - ip_options_build(skb, opt, sk->daddr, rt, 0); + ip_options_build(skb, opt, inet->daddr, rt, 0); } return NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, rt->u.dst.dev, @@ -452,7 +452,7 @@ mtu = rt->u.dst.pmtu; if (ip_dont_fragment(sk, &rt->u.dst)) - df = htons(IP_DF); + df = __constant_htons(IP_DF); length -= sizeof(struct iphdr); @@ -471,7 +471,7 @@ } if (length + fragheaderlen > 0xFFFF) { - ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport, mtu); + ip_local_error(sk, EMSGSIZE, rt->rt_dst, inet->dport, mtu); return -EMSGSIZE; } @@ -503,7 +503,7 @@ */ if (offset > 0 && inet->pmtudisc == IP_PMTUDISC_DO) { - ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport, mtu); + ip_local_error(sk, EMSGSIZE, rt->rt_dst, inet->dport, mtu); return -EMSGSIZE; } if (flags&MSG_PROBE) @@ -573,7 +573,7 @@ /* * Any further fragments will have MF set. */ - mf = htons(IP_MF); + mf = __constant_htons(IP_MF); } if (rt->rt_type == RTN_MULTICAST) iph->ttl = inet->mc_ttl; @@ -659,7 +659,8 @@ return ip_build_xmit_slow(sk,getfrag,frag,length,ipc,rt,flags); } else { if (length > rt->u.dst.dev->mtu) { - ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport, rt->u.dst.dev->mtu); + ip_local_error(sk, EMSGSIZE, rt->rt_dst, inet->dport, + rt->u.dst.dev->mtu); return -EMSGSIZE; } } @@ -671,7 +672,7 @@ */ df = 0; if (ip_dont_fragment(sk, &rt->u.dst)) - df = htons(IP_DF); + df = __constant_htons(IP_DF); /* * Fast path for unfragmented frames without options. 
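[Editor's sketch] The ip_output.c changes above stop OR-ing IP_DF into an already-built header on the transmit paths and instead settle frag_off once, when the IPv4 header is constructed. Distilled into a helper, the policy looks like the kernel-context sketch below; example_set_frag_off() is not a function the patch adds, just a restatement of the pattern.

/* Hypothetical helper: decide DF once, at header build time. */
static void example_set_frag_off(struct iphdr *iph, struct sock *sk,
				 struct dst_entry *dst)
{
	if (ip_dont_fragment(sk, dst))
		iph->frag_off = __constant_htons(IP_DF);
	else
		iph->frag_off = 0;
}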
@@ -775,7 +776,7 @@ */ offset = (ntohs(iph->frag_off) & IP_OFFSET) << 3; - not_last_frag = iph->frag_off & htons(IP_MF); + not_last_frag = iph->frag_off & __constant_htons(IP_MF); /* * Keep copying data until we run out. @@ -860,7 +861,7 @@ * last fragment then keep MF on each bit */ if (left > 0 || not_last_frag) - iph->frag_off |= htons(IP_MF); + iph->frag_off |= __constant_htons(IP_MF); ptr += len; offset += len; diff -Nru a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c --- a/net/ipv4/ip_sockglue.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/ip_sockglue.c Tue Mar 12 13:58:15 2002 @@ -193,7 +193,7 @@ { struct ip_ra_chain *ra, *new_ra, **rap; - if (sk->type != SOCK_RAW || sk->num == IPPROTO_RAW) + if (sk->type != SOCK_RAW || inet_sk(sk)->num == IPPROTO_RAW) return -EINVAL; new_ra = on ? kmalloc(sizeof(*new_ra), GFP_KERNEL) : NULL; @@ -435,7 +435,7 @@ #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) if (sk->family == PF_INET || (!((1<state)&(TCPF_LISTEN|TCPF_CLOSE)) - && sk->daddr != LOOPBACK4_IPV6)) { + && inet->daddr != LOOPBACK4_IPV6)) { #endif if (opt) tp->ext_header_len = opt->optlen; @@ -771,8 +771,8 @@ if (inet->cmsg_flags & IP_CMSG_PKTINFO) { struct in_pktinfo info; - info.ipi_addr.s_addr = sk->rcv_saddr; - info.ipi_spec_dst.s_addr = sk->rcv_saddr; + info.ipi_addr.s_addr = inet->rcv_saddr; + info.ipi_spec_dst.s_addr = inet->rcv_saddr; info.ipi_ifindex = inet->mc_index; put_cmsg(&msg, SOL_IP, IP_PKTINFO, sizeof(info), &info); } diff -Nru a/net/ipv4/netfilter/Config.in b/net/ipv4/netfilter/Config.in --- a/net/ipv4/netfilter/Config.in Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/netfilter/Config.in Tue Mar 12 13:58:15 2002 @@ -75,9 +75,7 @@ dep_tristate ' MARK target support' CONFIG_IP_NF_TARGET_MARK $CONFIG_IP_NF_MANGLE fi dep_tristate ' LOG target support' CONFIG_IP_NF_TARGET_LOG $CONFIG_IP_NF_IPTABLES - if [ "$CONFIG_NETLINK" != "n" ]; then - dep_tristate ' ULOG target support' CONFIG_IP_NF_TARGET_ULOG $CONFIG_NETLINK $CONFIG_IP_NF_IPTABLES - fi + dep_tristate ' ULOG target support' CONFIG_IP_NF_TARGET_ULOG $CONFIG_IP_NF_IPTABLES dep_tristate ' TCPMSS target support' CONFIG_IP_NF_TARGET_TCPMSS $CONFIG_IP_NF_IPTABLES fi diff -Nru a/net/ipv4/netfilter/Makefile b/net/ipv4/netfilter/Makefile --- a/net/ipv4/netfilter/Makefile Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/netfilter/Makefile Tue Mar 12 13:58:15 2002 @@ -31,15 +31,13 @@ # connection tracking obj-$(CONFIG_IP_NF_CONNTRACK) += ip_conntrack.o -# IRC support -obj-$(CONFIG_IP_NF_IRC) += ip_conntrack_irc.o -obj-$(CONFIG_IP_NF_NAT_IRC) += ip_nat_irc.o - # connection tracking helpers obj-$(CONFIG_IP_NF_FTP) += ip_conntrack_ftp.o +obj-$(CONFIG_IP_NF_IRC) += ip_conntrack_irc.o # NAT helpers obj-$(CONFIG_IP_NF_NAT_FTP) += ip_nat_ftp.o +obj-$(CONFIG_IP_NF_NAT_IRC) += ip_nat_irc.o # generic IP tables obj-$(CONFIG_IP_NF_IPTABLES) += ip_tables.o diff -Nru a/net/ipv4/netfilter/ip_conntrack_core.c b/net/ipv4/netfilter/ip_conntrack_core.c --- a/net/ipv4/netfilter/ip_conntrack_core.c Tue Mar 12 13:58:14 2002 +++ b/net/ipv4/netfilter/ip_conntrack_core.c Tue Mar 12 13:58:14 2002 @@ -969,9 +969,12 @@ static int getorigdst(struct sock *sk, int optval, void *user, int *len) { + struct inet_opt *inet = inet_sk(sk); struct ip_conntrack_tuple_hash *h; - struct ip_conntrack_tuple tuple = { { sk->rcv_saddr, { sk->sport } }, - { sk->daddr, { sk->dport }, + struct ip_conntrack_tuple tuple = { { inet->rcv_saddr, + { inet->sport } }, + { inet->daddr, + { inet->dport }, IPPROTO_TCP } }; /* We only do TCP at the moment: is there a better way? 
*/ diff -Nru a/net/ipv4/netfilter/ip_nat_standalone.c b/net/ipv4/netfilter/ip_nat_standalone.c --- a/net/ipv4/netfilter/ip_nat_standalone.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/netfilter/ip_nat_standalone.c Tue Mar 12 13:58:15 2002 @@ -308,6 +308,8 @@ module_exit(fini); EXPORT_SYMBOL(ip_nat_setup_info); +EXPORT_SYMBOL(ip_nat_protocol_register); +EXPORT_SYMBOL(ip_nat_protocol_unregister); EXPORT_SYMBOL(ip_nat_helper_register); EXPORT_SYMBOL(ip_nat_helper_unregister); EXPORT_SYMBOL(ip_nat_expect_register); @@ -316,4 +318,5 @@ EXPORT_SYMBOL(ip_nat_mangle_tcp_packet); EXPORT_SYMBOL(ip_nat_seq_adjust); EXPORT_SYMBOL(ip_nat_delete_sack); +EXPORT_SYMBOL(ip_nat_used_tuple); MODULE_LICENSE("GPL"); diff -Nru a/net/ipv4/netfilter/ipt_REJECT.c b/net/ipv4/netfilter/ipt_REJECT.c --- a/net/ipv4/netfilter/ipt_REJECT.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/netfilter/ipt_REJECT.c Tue Mar 12 13:58:15 2002 @@ -234,11 +234,8 @@ iph->tos=tos; iph->tot_len = htons(length); - /* This abbreviates icmp->send->ip_build_xmit->ip_dont_fragment */ - if (!ipv4_config.no_pmtu_disc - && !(rt->u.dst.mxlock&(1<frag_off = htons(IP_DF); - else iph->frag_off = 0; + /* PMTU discovery never applies to ICMP packets. */ + iph->frag_off = 0; iph->ttl = MAXTTL; ip_select_ident(iph, &rt->u.dst, NULL); diff -Nru a/net/ipv4/raw.c b/net/ipv4/raw.c --- a/net/ipv4/raw.c Tue Mar 12 13:58:14 2002 +++ b/net/ipv4/raw.c Tue Mar 12 13:58:14 2002 @@ -70,7 +70,8 @@ static void raw_v4_hash(struct sock *sk) { - struct sock **skp = &raw_v4_htable[sk->num & (RAWV4_HTABLE_SIZE - 1)]; + struct sock **skp = &raw_v4_htable[inet_sk(sk)->num & + (RAWV4_HTABLE_SIZE - 1)]; write_lock_bh(&raw_v4_lock); if ((sk->next = *skp) != NULL) @@ -103,9 +104,11 @@ struct sock *s = sk; for (s = sk; s; s = s->next) { - if (s->num == num && - !(s->daddr && s->daddr != raddr) && - !(s->rcv_saddr && s->rcv_saddr != laddr) && + struct inet_opt *inet = inet_sk(s); + + if (inet->num == num && + !(inet->daddr && inet->daddr != raddr) && + !(inet->rcv_saddr && inet->rcv_saddr != laddr) && !(s->bound_dev_if && s->bound_dev_if != dif)) break; /* gotcha */ } @@ -364,10 +367,10 @@ err = -EINVAL; if (sk->state != TCP_ESTABLISHED) goto out; - daddr = sk->daddr; + daddr = inet->daddr; } - ipc.addr = sk->saddr; + ipc.addr = inet->saddr; ipc.opt = NULL; ipc.oif = sk->bound_dev_if; @@ -458,6 +461,7 @@ /* This gets rid of all the nasties in af_inet. 
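[Editor's sketch] The __raw_v4_lookup() conversion above keeps the old wildcard semantics while reading the fields from inet_opt: the protocol number must match exactly, and a zero daddr, rcv_saddr or bound_dev_if means "match anything". The stand-alone model below reproduces just that matching rule with a simplified socket structure.

#include <stdio.h>

struct raw_sock_model {
	unsigned num;                  /* protocol number, must match */
	unsigned daddr, rcv_saddr;     /* 0 = wildcard */
	int bound_dev_if;              /* 0 = wildcard */
};

static int raw_matches(const struct raw_sock_model *s,
		       unsigned num, unsigned raddr, unsigned laddr, int dif)
{
	return s->num == num &&
	       !(s->daddr && s->daddr != raddr) &&
	       !(s->rcv_saddr && s->rcv_saddr != laddr) &&
	       !(s->bound_dev_if && s->bound_dev_if != dif);
}

int main(void)
{
	struct raw_sock_model any_icmp = { 1, 0, 0, 0 };
	struct raw_sock_model bound    = { 1, 0x0a000001, 0, 0 };

	printf("wildcard: %d, bound-to-other-peer: %d\n",
	       raw_matches(&any_icmp, 1, 0x0a000002, 0, 3),
	       raw_matches(&bound,    1, 0x0a000002, 0, 3));  /* prints 1, 0 */
	return 0;
}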
-DaveM */ static int raw_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len) { + struct inet_opt *inet = inet_sk(sk); struct sockaddr_in *addr = (struct sockaddr_in *) uaddr; int ret = -EINVAL; int chk_addr_ret; @@ -469,9 +473,9 @@ if (addr->sin_addr.s_addr && chk_addr_ret != RTN_LOCAL && chk_addr_ret != RTN_MULTICAST && chk_addr_ret != RTN_BROADCAST) goto out; - sk->rcv_saddr = sk->saddr = addr->sin_addr.s_addr; + inet->rcv_saddr = inet->saddr = addr->sin_addr.s_addr; if (chk_addr_ret == RTN_MULTICAST || chk_addr_ret == RTN_BROADCAST) - sk->saddr = 0; /* Use device */ + inet->saddr = 0; /* Use device */ sk_dst_reset(sk); ret = 0; out: return ret; @@ -534,7 +538,7 @@ static int raw_init(struct sock *sk) { struct raw_opt *tp = raw4_sk(sk); - if (sk->num == IPPROTO_ICMP) + if (inet_sk(sk)->num == IPPROTO_ICMP) memset(&tp->filter, 0, sizeof(tp->filter)); return 0; } @@ -574,7 +578,7 @@ return ip_setsockopt(sk, level, optname, optval, optlen); if (optname == ICMP_FILTER) { - if (sk->num != IPPROTO_ICMP) + if (inet_sk(sk)->num != IPPROTO_ICMP) return -EOPNOTSUPP; else return raw_seticmpfilter(sk, optval, optlen); @@ -589,7 +593,7 @@ return ip_getsockopt(sk, level, optname, optval, optlen); if (optname == ICMP_FILTER) { - if (sk->num != IPPROTO_ICMP) + if (inet_sk(sk)->num != IPPROTO_ICMP) return -EOPNOTSUPP; else return raw_geticmpfilter(sk, optval, optlen); @@ -627,13 +631,14 @@ static void get_raw_sock(struct sock *sp, char *tmpbuf, int i) { - unsigned int dest = sp->daddr, - src = sp->rcv_saddr; + struct inet_opt *inet = inet_sk(sp); + unsigned int dest = inet->daddr, + src = inet->rcv_saddr; __u16 destp = 0, - srcp = sp->num; + srcp = inet->num; sprintf(tmpbuf, "%4d: %08X:%04X %08X:%04X" - " %02X %08X:%08X %02X:%08lX %08X %5d %8d %ld %d %p", + " %02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p", i, src, srcp, dest, destp, sp->state, atomic_read(&sp->wmem_alloc), atomic_read(&sp->rmem_alloc), 0, 0L, 0, diff -Nru a/net/ipv4/tcp.c b/net/ipv4/tcp.c --- a/net/ipv4/tcp.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/tcp.c Tue Mar 12 13:58:15 2002 @@ -524,6 +524,7 @@ int tcp_listen_start(struct sock *sk) { + struct inet_opt *inet = inet_sk(sk); struct tcp_opt *tp = tcp_sk(sk); struct tcp_listen_opt *lopt; @@ -552,8 +553,8 @@ * after validation is complete. */ sk->state = TCP_LISTEN; - if (sk->prot->get_port(sk, sk->num) == 0) { - sk->sport = htons(sk->num); + if (!sk->prot->get_port(sk, inet->num)) { + inet->sport = htons(inet->num); sk_dst_reset(sk); sk->prot->hash(sk); @@ -1786,8 +1787,8 @@ /* It cannot be in hash table! 
*/ BUG_TRAP(sk->pprev==NULL); - /* If it has not 0 sk->num, it must be bound */ - BUG_TRAP(!sk->num || sk->prev!=NULL); + /* If it has not 0 inet_sk(sk)->num, it must be bound */ + BUG_TRAP(!inet_sk(sk)->num || sk->prev); #ifdef TCP_DEBUG if (sk->zapped) { @@ -1988,6 +1989,7 @@ int tcp_disconnect(struct sock *sk, int flags) { + struct inet_opt *inet = inet_sk(sk); struct tcp_opt *tp = tcp_sk(sk); int old_state; int err = 0; @@ -2015,11 +2017,10 @@ tcp_writequeue_purge(sk); __skb_queue_purge(&tp->out_of_order_queue); - sk->dport = 0; + inet->dport = 0; if (!(sk->userlocks&SOCK_BINDADDR_LOCK)) { - sk->rcv_saddr = 0; - sk->saddr = 0; + inet->rcv_saddr = inet->saddr = 0; #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) if (sk->family == PF_INET6) { struct ipv6_pinfo *np = inet6_sk(sk); @@ -2049,7 +2050,7 @@ tcp_sack_reset(tp); __sk_dst_reset(sk); - BUG_TRAP(!sk->num || sk->prev); + BUG_TRAP(!inet->num || sk->prev); sk->error_report(sk); return err; diff -Nru a/net/ipv4/tcp_diag.c b/net/ipv4/tcp_diag.c --- a/net/ipv4/tcp_diag.c Tue Mar 12 13:58:14 2002 +++ b/net/ipv4/tcp_diag.c Tue Mar 12 13:58:14 2002 @@ -44,6 +44,7 @@ static int tcpdiag_fill(struct sk_buff *skb, struct sock *sk, int ext, u32 pid, u32 seq) { + struct inet_opt *inet = inet_sk(sk); struct tcp_opt *tp = tcp_sk(sk); struct tcpdiagmsg *r; struct nlmsghdr *nlh; @@ -64,10 +65,6 @@ r->tcpdiag_timer = 0; r->tcpdiag_retrans = 0; - r->id.tcpdiag_sport = sk->sport; - r->id.tcpdiag_dport = sk->dport; - r->id.tcpdiag_src[0] = sk->rcv_saddr; - r->id.tcpdiag_dst[0] = sk->daddr; r->id.tcpdiag_if = sk->bound_dev_if; *((struct sock **)&r->id.tcpdiag_cookie) = sk; @@ -77,6 +74,10 @@ if (tmo < 0) tmo = 0; + r->id.tcpdiag_sport = tw->sport; + r->id.tcpdiag_dport = tw->dport; + r->id.tcpdiag_src[0] = tw->rcv_saddr; + r->id.tcpdiag_dst[0] = tw->daddr; r->tcpdiag_state = tw->substate; r->tcpdiag_timer = 3; r->tcpdiag_expires = (tmo*1000+HZ-1)/HZ; @@ -94,6 +95,11 @@ return skb->len; } + r->id.tcpdiag_sport = inet->sport; + r->id.tcpdiag_dport = inet->dport; + r->id.tcpdiag_src[0] = inet->rcv_saddr; + r->id.tcpdiag_dst[0] = inet->daddr; + #ifdef CONFIG_IPV6 if (r->tcpdiag_family == AF_INET6) { struct ipv6_pinfo *np = inet6_sk(sk); @@ -291,6 +297,7 @@ { while (len > 0) { int yes = 1; + struct inet_opt *inet = inet_sk(sk); struct tcpdiag_bc_op *op = (struct tcpdiag_bc_op*)bc; switch (op->code) { @@ -300,16 +307,16 @@ yes = 0; break; case TCPDIAG_BC_S_GE: - yes = (sk->num >= op[1].no); + yes = inet->num >= op[1].no; break; case TCPDIAG_BC_S_LE: - yes = (sk->num <= op[1].no); + yes = inet->num <= op[1].no; break; case TCPDIAG_BC_D_GE: - yes = (ntohs(sk->dport) >= op[1].no); + yes = ntohs(inet->dport) >= op[1].no; break; case TCPDIAG_BC_D_LE: - yes = (ntohs(sk->dport) <= op[1].no); + yes = ntohs(inet->dport) <= op[1].no; break; case TCPDIAG_BC_AUTO: yes = !(sk->userlocks&SOCK_BINDPORT_LOCK); @@ -321,7 +328,8 @@ u32 *addr; if (cond->port != -1 && - cond->port != (op->code == TCPDIAG_BC_S_COND ? sk->num : ntohs(sk->dport))) { + cond->port != (op->code == TCPDIAG_BC_S_COND ? 
+ inet->num : ntohs(inet->dport))) { yes = 0; break; } @@ -341,9 +349,9 @@ #endif { if (op->code == TCPDIAG_BC_S_COND) - addr = &sk->rcv_saddr; + addr = &inet->rcv_saddr; else - addr = &sk->daddr; + addr = &inet->daddr; } if (bitstring_match(addr, cond->addr, cond->prefix_len)) @@ -453,12 +461,14 @@ for (sk = tcp_listening_hash[i], num = 0; sk != NULL; sk = sk->next, num++) { + struct inet_opt *inet = inet_sk(sk); if (num < s_num) continue; if (!(r->tcpdiag_states&TCPF_LISTEN) || r->id.tcpdiag_dport) continue; - if (r->id.tcpdiag_sport != sk->sport && r->id.tcpdiag_sport) + if (r->id.tcpdiag_sport != inet->sport && + r->id.tcpdiag_sport) continue; if (bc && !tcpdiag_bc_run(RTA_DATA(bc), RTA_PAYLOAD(bc), sk)) continue; @@ -491,13 +501,16 @@ for (sk = head->chain, num = 0; sk != NULL; sk = sk->next, num++) { + struct inet_opt *inet = inet_sk(sk); + if (num < s_num) continue; if (!(r->tcpdiag_states&(1<state))) continue; - if (r->id.tcpdiag_sport != sk->sport && r->id.tcpdiag_sport) + if (r->id.tcpdiag_sport != inet->sport && + r->id.tcpdiag_sport) continue; - if (r->id.tcpdiag_dport != sk->dport && r->id.tcpdiag_dport) + if (r->id.tcpdiag_dport != inet->dport && r->id.tcpdiag_dport) continue; if (bc && !tcpdiag_bc_run(RTA_DATA(bc), RTA_PAYLOAD(bc), sk)) continue; @@ -513,13 +526,17 @@ for (sk = tcp_ehash[i+tcp_ehash_size].chain; sk != NULL; sk = sk->next, num++) { + struct inet_opt *inet = inet_sk(sk); + if (num < s_num) continue; if (!(r->tcpdiag_states&(1<zapped))) continue; - if (r->id.tcpdiag_sport != sk->sport && r->id.tcpdiag_sport) + if (r->id.tcpdiag_sport != inet->sport && + r->id.tcpdiag_sport) continue; - if (r->id.tcpdiag_dport != sk->dport && r->id.tcpdiag_dport) + if (r->id.tcpdiag_dport != inet->dport && + r->id.tcpdiag_dport) continue; if (bc && !tcpdiag_bc_run(RTA_DATA(bc), RTA_PAYLOAD(bc), sk)) continue; diff -Nru a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c --- a/net/ipv4/tcp_input.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/tcp_input.c Tue Mar 12 13:58:15 2002 @@ -1329,9 +1329,10 @@ #if FASTRETRANS_DEBUG > 1 static void DBGUNDO(struct sock *sk, struct tcp_opt *tp, const char *msg) { + struct inet_opt *inet = inet_sk(sk); printk(KERN_DEBUG "Undo %s %u.%u.%u.%u/%u c%u l%u ss%u/%u p%u\n", msg, - NIPQUAD(sk->daddr), ntohs(sk->dport), + NIPQUAD(inet->daddr), ntohs(inet->dport), tp->snd_cwnd, tp->left_out, tp->snd_ssthresh, tp->prior_ssthresh, tp->packets_out); } @@ -2570,15 +2571,12 @@ __set_current_state(TASK_RUNNING); local_bh_enable(); - if (skb_copy_datagram_iovec(skb, 0, tp->ucopy.iov, - chunk)) { - sk->err = EFAULT; - sk->error_report(sk); + if (!skb_copy_datagram_iovec(skb, 0, tp->ucopy.iov, chunk)) { + tp->ucopy.len -= chunk; + tp->copied_seq += chunk; + eaten = (chunk == skb->len && !th->fin); } local_bh_disable(); - tp->ucopy.len -= chunk; - tp->copied_seq += chunk; - eaten = (chunk == skb->len && !th->fin); } if (eaten <= 0) { @@ -3178,17 +3176,8 @@ tp->ucopy.iov); if (!err) { -update: - tp->ucopy.len -= chunk; + tp->ucopy.len -= chunk; tp->copied_seq += chunk; - local_bh_disable(); - return 0; - } - - if (err == -EFAULT) { - sk->err = EFAULT; - sk->error_report(sk); - goto update; } local_bh_disable(); @@ -3327,19 +3316,16 @@ tp->copied_seq == tp->rcv_nxt && len - tcp_header_len <= tp->ucopy.len && sk->lock.users) { - eaten = 1; - - NET_INC_STATS_BH(TCPHPHitsToUser); - __set_current_state(TASK_RUNNING); - if (tcp_copy_to_iovec(sk, skb, tcp_header_len)) - goto csum_error; - - __skb_pull(skb,tcp_header_len); - - tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq; - } else { + if 
(!tcp_copy_to_iovec(sk, skb, tcp_header_len)) { + __skb_pull(skb, tcp_header_len); + tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq; + NET_INC_STATS_BH(TCPHPHitsToUser); + eaten = 1; + } + } + if (!eaten) { if (tcp_checksum_complete_user(sk, skb)) goto csum_error; diff -Nru a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c --- a/net/ipv4/tcp_ipv4.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/tcp_ipv4.c Tue Mar 12 13:58:15 2002 @@ -109,10 +109,11 @@ static __inline__ int tcp_sk_hashfn(struct sock *sk) { - __u32 laddr = sk->rcv_saddr; - __u16 lport = sk->num; - __u32 faddr = sk->daddr; - __u16 fport = sk->dport; + struct inet_opt *inet = inet_sk(sk); + __u32 laddr = inet->rcv_saddr; + __u16 lport = inet->num; + __u32 faddr = inet->daddr; + __u16 fport = inet->dport; return tcp_hashfn(laddr, lport, faddr, fport); } @@ -141,7 +142,8 @@ /* Caller must disable local BH processing. */ static __inline__ void __tcp_inherit_port(struct sock *sk, struct sock *child) { - struct tcp_bind_hashbucket *head = &tcp_bhash[tcp_bhashfn(child->num)]; + struct tcp_bind_hashbucket *head = + &tcp_bhash[tcp_bhashfn(inet_sk(child)->num)]; struct tcp_bind_bucket *tb; spin_lock(&head->lock); @@ -163,7 +165,7 @@ static inline void tcp_bind_hash(struct sock *sk, struct tcp_bind_bucket *tb, unsigned short snum) { - sk->num = snum; + inet_sk(sk)->num = snum; if ((sk->bind_next = tb->owners) != NULL) tb->owners->bind_pprev = &sk->bind_next; tb->owners = sk; @@ -173,6 +175,7 @@ static inline int tcp_bind_conflict(struct sock *sk, struct tcp_bind_bucket *tb) { + struct inet_opt *inet = inet_sk(sk); struct sock *sk2 = tb->owners; int sk_reuse = sk->reuse; @@ -182,9 +185,10 @@ if (!sk_reuse || !sk2->reuse || sk2->state == TCP_LISTEN) { - if (!sk2->rcv_saddr || - !sk->rcv_saddr || - (sk2->rcv_saddr == sk->rcv_saddr)) + struct inet_opt *inet2 = inet_sk(sk2); + if (!inet2->rcv_saddr || + !inet->rcv_saddr || + (inet2->rcv_saddr == inet->rcv_saddr)) break; } } @@ -281,7 +285,8 @@ */ __inline__ void __tcp_put_port(struct sock *sk) { - struct tcp_bind_hashbucket *head = &tcp_bhash[tcp_bhashfn(sk->num)]; + struct inet_opt *inet = inet_sk(sk); + struct tcp_bind_hashbucket *head = &tcp_bhash[tcp_bhashfn(inet->num)]; struct tcp_bind_bucket *tb; spin_lock(&head->lock); @@ -290,7 +295,7 @@ sk->bind_next->bind_pprev = sk->bind_pprev; *(sk->bind_pprev) = sk->bind_next; sk->prev = NULL; - sk->num = 0; + inet->num = 0; if (tb->owners == NULL) { if (tb->next) tb->next->pprev = tb->pprev; @@ -409,8 +414,10 @@ hiscore=0; for(; sk; sk = sk->next) { - if(sk->num == hnum) { - __u32 rcv_saddr = sk->rcv_saddr; + struct inet_opt *inet = inet_sk(sk); + + if(inet->num == hnum) { + __u32 rcv_saddr = inet->rcv_saddr; score = 1; if(rcv_saddr) { @@ -442,9 +449,11 @@ read_lock(&tcp_lhash_lock); sk = tcp_listening_hash[tcp_lhashfn(hnum)]; if (sk) { - if (sk->num == hnum && + struct inet_opt *inet = inet_sk(sk); + + if (inet->num == hnum && sk->next == NULL && - (!sk->rcv_saddr || sk->rcv_saddr == daddr) && + (!inet->rcv_saddr || inet->rcv_saddr == daddr) && !sk->bound_dev_if) goto sherry_cache; sk = __tcp_v4_lookup_listener(sk, daddr, hnum, dif); @@ -531,12 +540,13 @@ static int tcp_v4_check_established(struct sock *sk) { - u32 daddr = sk->rcv_saddr; - u32 saddr = sk->daddr; + struct inet_opt *inet = inet_sk(sk); + u32 daddr = inet->rcv_saddr; + u32 saddr = inet->daddr; int dif = sk->bound_dev_if; TCP_V4_ADDR_COOKIE(acookie, saddr, daddr) - __u32 ports = TCP_COMBINED_PORTS(sk->dport, sk->num); - int hash = tcp_hashfn(daddr, sk->num, saddr, sk->dport); + __u32 ports = 
TCP_COMBINED_PORTS(inet->dport, inet->num); + int hash = tcp_hashfn(daddr, inet->num, saddr, inet->dport); struct tcp_ehash_bucket *head = &tcp_ehash[hash]; struct sock *sk2, **skp; struct tcp_tw_bucket *tw; @@ -625,7 +635,7 @@ int tcp_v4_hash_connecting(struct sock *sk) { - unsigned short snum = sk->num; + unsigned short snum = inet_sk(sk)->num; struct tcp_bind_hashbucket *head = &tcp_bhash[tcp_bhashfn(snum)]; struct tcp_bind_bucket *tb = (struct tcp_bind_bucket *)sk->prev; @@ -667,7 +677,7 @@ nexthop = inet->opt->faddr; } - tmp = ip_route_connect(&rt, nexthop, sk->saddr, + tmp = ip_route_connect(&rt, nexthop, inet->saddr, RT_CONN_FLAGS(sk), sk->bound_dev_if); if (tmp < 0) return tmp; @@ -689,11 +699,11 @@ if (buff == NULL) goto failure; - if (!sk->saddr) - sk->saddr = rt->rt_src; - sk->rcv_saddr = sk->saddr; + if (!inet->saddr) + inet->saddr = rt->rt_src; + inet->rcv_saddr = inet->saddr; - if (tp->ts_recent_stamp && sk->daddr != daddr) { + if (tp->ts_recent_stamp && inet->daddr != daddr) { /* Reset inherited state */ tp->ts_recent = 0; tp->ts_recent_stamp = 0; @@ -716,12 +726,13 @@ } } - sk->dport = usin->sin_port; - sk->daddr = daddr; + inet->dport = usin->sin_port; + inet->daddr = daddr; if (!tp->write_seq) - tp->write_seq = secure_tcp_sequence_number(sk->saddr, sk->daddr, - sk->sport, + tp->write_seq = secure_tcp_sequence_number(inet->saddr, + inet->daddr, + inet->sport, usin->sin_port); tp->ext_header_len = 0; @@ -738,7 +749,7 @@ failure: __sk_dst_reset(sk); sk->route_caps = 0; - sk->dport = 0; + inet->dport = 0; return err; } @@ -1018,11 +1029,13 @@ void tcp_v4_send_check(struct sock *sk, struct tcphdr *th, int len, struct sk_buff *skb) { + struct inet_opt *inet = inet_sk(sk); + if (skb->ip_summed == CHECKSUM_HW) { - th->check = ~tcp_v4_check(th, len, sk->saddr, sk->daddr, 0); + th->check = ~tcp_v4_check(th, len, inet->saddr, inet->daddr, 0); skb->csum = offsetof(struct tcphdr, check); } else { - th->check = tcp_v4_check(th, len, sk->saddr, sk->daddr, + th->check = tcp_v4_check(th, len, inet->saddr, inet->daddr, csum_partial((char *)th, th->doff<<2, skb->csum)); } } @@ -1448,10 +1461,10 @@ newsk->route_caps = dst->dev->features; newtp = tcp_sk(newsk); - newsk->daddr = req->af.v4_req.rmt_addr; - newsk->saddr = req->af.v4_req.loc_addr; - newsk->rcv_saddr = req->af.v4_req.loc_addr; newinet = inet_sk(newsk); + newinet->daddr = req->af.v4_req.rmt_addr; + newinet->rcv_saddr = req->af.v4_req.loc_addr; + newinet->saddr = req->af.v4_req.loc_addr; newinet->opt = req->af.v4_req.opt; req->af.v4_req.opt = NULL; newinet->mc_index = tcp_v4_iif(skb); @@ -1736,9 +1749,9 @@ struct inet_opt *inet = inet_sk(sk); int err; struct rtable *rt; - __u32 old_saddr = sk->saddr; + __u32 old_saddr = inet->saddr; __u32 new_saddr; - __u32 daddr = sk->daddr; + __u32 daddr = inet->daddr; if (inet->opt && inet->opt->srr) daddr = inet->opt->faddr; @@ -1759,14 +1772,14 @@ return 0; if (sysctl_ip_dynaddr > 1) { - printk(KERN_INFO "tcp_v4_rebuild_header(): shifting sk->saddr " - "from %d.%d.%d.%d to %d.%d.%d.%d\n", + printk(KERN_INFO "tcp_v4_rebuild_header(): shifting inet->" + "saddr from %d.%d.%d.%d to %d.%d.%d.%d\n", NIPQUAD(old_saddr), NIPQUAD(new_saddr)); } - sk->saddr = new_saddr; - sk->rcv_saddr = new_saddr; + inet->saddr = new_saddr; + inet->rcv_saddr = new_saddr; /* XXX The only one ugly spot where we need to * XXX really change the sockets identity after @@ -1791,11 +1804,11 @@ return 0; /* Reroute. 
*/ - daddr = sk->daddr; + daddr = inet->daddr; if (inet->opt && inet->opt->srr) daddr = inet->opt->faddr; - err = ip_route_output(&rt, daddr, sk->saddr, + err = ip_route_output(&rt, daddr, inet->saddr, RT_CONN_FLAGS(sk), sk->bound_dev_if); if (!err) { __sk_dst_set(sk, &rt->u.dst); @@ -1818,10 +1831,11 @@ static void v4_addr2sockaddr(struct sock *sk, struct sockaddr * uaddr) { struct sockaddr_in *sin = (struct sockaddr_in *) uaddr; + struct inet_opt *inet = inet_sk(sk); sin->sin_family = AF_INET; - sin->sin_addr.s_addr = sk->daddr; - sin->sin_port = sk->dport; + sin->sin_addr.s_addr = inet->daddr; + sin->sin_port = inet->dport; } /* VJ's idea. Save last timestamp seen from this destination @@ -1832,13 +1846,14 @@ int tcp_v4_remember_stamp(struct sock *sk) { + struct inet_opt *inet = inet_sk(sk); struct tcp_opt *tp = tcp_sk(sk); struct rtable *rt = (struct rtable*)__sk_dst_get(sk); struct inet_peer *peer = NULL; int release_it = 0; - if (rt == NULL || rt->rt_dst != sk->daddr) { - peer = inet_getpeer(sk->daddr, 1); + if (rt == NULL || rt->rt_dst != inet->daddr) { + peer = inet_getpeer(inet->daddr, 1); release_it = 1; } else { if (rt->peer == NULL) @@ -1979,7 +1994,7 @@ " %02X %08X:%08X %02X:%08X %08X %5d %8d %u %d %p", i, req->af.v4_req.loc_addr, - ntohs(sk->sport), + ntohs(inet_sk(sk)->sport), req->af.v4_req.rmt_addr, ntohs(req->rmt_port), TCP_SYN_RECV, @@ -2002,11 +2017,12 @@ int timer_active; unsigned long timer_expires; struct tcp_opt *tp = tcp_sk(sp); + struct inet_opt *inet = inet_sk(sp); - dest = sp->daddr; - src = sp->rcv_saddr; - destp = ntohs(sp->dport); - srcp = ntohs(sp->sport); + dest = inet->daddr; + src = inet->rcv_saddr; + destp = ntohs(inet->dport); + srcp = ntohs(inet->sport); if (tp->pending == TCP_TIME_RETRANS) { timer_active = 1; timer_expires = tp->timeout; diff -Nru a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c --- a/net/ipv4/tcp_minisocks.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/tcp_minisocks.c Tue Mar 12 13:58:15 2002 @@ -75,17 +75,16 @@ /* Disassociate with bind bucket. */ bhead = &tcp_bhash[tcp_bhashfn(tw->num)]; spin_lock(&bhead->lock); - if ((tb = tw->tb) != NULL) { - if(tw->bind_next) - tw->bind_next->bind_pprev = tw->bind_pprev; - *(tw->bind_pprev) = tw->bind_next; - tw->tb = NULL; - if (tb->owners == NULL) { - if (tb->next) - tb->next->pprev = tb->pprev; - *(tb->pprev) = tb->next; - kmem_cache_free(tcp_bucket_cachep, tb); - } + tb = tw->tb; + if(tw->bind_next) + tw->bind_next->bind_pprev = tw->bind_pprev; + *(tw->bind_pprev) = tw->bind_next; + tw->tb = NULL; + if (tb->owners == NULL) { + if (tb->next) + tb->next->pprev = tb->pprev; + *(tb->pprev) = tb->next; + kmem_cache_free(tcp_bucket_cachep, tb); } spin_unlock(&bhead->lock); @@ -304,9 +303,23 @@ struct tcp_bind_hashbucket *bhead; struct sock **head, *sktw; + /* Step 1: Put TW into bind hash. Original socket stays there too. + Note, that any socket with inet_sk(sk)->num != 0 MUST be bound in + binding cache, even if it is closed. + */ + bhead = &tcp_bhash[tcp_bhashfn(inet_sk(sk)->num)]; + spin_lock(&bhead->lock); + tw->tb = (struct tcp_bind_bucket *)sk->prev; + BUG_TRAP(sk->prev!=NULL); + if ((tw->bind_next = tw->tb->owners) != NULL) + tw->tb->owners->bind_pprev = &tw->bind_next; + tw->tb->owners = (struct sock*)tw; + tw->bind_pprev = &tw->tb->owners; + spin_unlock(&bhead->lock); + write_lock(&ehead->lock); - /* Step 1: Remove SK from established hash. */ + /* Step 2: Remove SK from established hash. 
*/ if (sk->pprev) { if(sk->next) sk->next->pprev = sk->pprev; @@ -315,7 +328,7 @@ sock_prot_dec_use(sk->prot); } - /* Step 2: Hash TW into TIMEWAIT half of established hash table. */ + /* Step 3: Hash TW into TIMEWAIT half of established hash table. */ head = &(ehead + tcp_ehash_size)->chain; sktw = (struct sock *)tw; if((sktw->next = *head) != NULL) @@ -325,20 +338,6 @@ atomic_inc(&tw->refcnt); write_unlock(&ehead->lock); - - /* Step 3: Put TW into bind hash. Original socket stays there too. - Note, that any socket with sk->num!=0 MUST be bound in binding - cache, even if it is closed. - */ - bhead = &tcp_bhash[tcp_bhashfn(sk->num)]; - spin_lock(&bhead->lock); - tw->tb = (struct tcp_bind_bucket *)sk->prev; - BUG_TRAP(sk->prev!=NULL); - if ((tw->bind_next = tw->tb->owners) != NULL) - tw->tb->owners->bind_pprev = &tw->bind_next; - tw->tb->owners = (struct sock*)tw; - tw->bind_pprev = &tw->tb->owners; - spin_unlock(&bhead->lock); } /* @@ -357,17 +356,18 @@ tw = kmem_cache_alloc(tcp_timewait_cachep, SLAB_ATOMIC); if(tw != NULL) { + struct inet_opt *inet = inet_sk(sk); int rto = (tp->rto<<2) - (tp->rto>>1); /* Give us an identity. */ - tw->daddr = sk->daddr; - tw->rcv_saddr = sk->rcv_saddr; + tw->daddr = inet->daddr; + tw->rcv_saddr = inet->rcv_saddr; tw->bound_dev_if= sk->bound_dev_if; - tw->num = sk->num; + tw->num = inet->num; tw->state = TCP_TIME_WAIT; tw->substate = state; - tw->sport = sk->sport; - tw->dport = sk->dport; + tw->sport = inet->sport; + tw->dport = inet->dport; tw->family = sk->family; tw->reuse = sk->reuse; tw->rcv_wscale = tp->rcv_wscale; @@ -660,7 +660,7 @@ newsk->prev = NULL; /* Clone the TCP header template */ - newsk->dport = req->rmt_port; + inet_sk(newsk)->dport = req->rmt_port; sock_lock_init(newsk); bh_lock_sock(newsk); diff -Nru a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c --- a/net/ipv4/tcp_output.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/tcp_output.c Tue Mar 12 13:58:15 2002 @@ -188,6 +188,7 @@ int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb) { if(skb != NULL) { + struct inet_opt *inet = inet_sk(sk); struct tcp_opt *tp = tcp_sk(sk); struct tcp_skb_cb *tcb = TCP_SKB_CB(skb); int tcp_header_size = tp->tcp_header_len; @@ -227,8 +228,8 @@ skb_set_owner_w(skb, sk); /* Build TCP header and checksum it. */ - th->source = sk->sport; - th->dest = sk->dport; + th->source = inet->sport; + th->dest = inet->dport; th->seq = htonl(tcb->seq); th->ack_seq = htonl(tp->rcv_nxt); *(((__u16 *)th) + 6) = htons(((tcp_header_size >> 2) << 12) | tcb->flags); @@ -1120,7 +1121,7 @@ th->syn = 1; th->ack = 1; TCP_ECN_make_synack(req, th); - th->source = sk->sport; + th->source = inet_sk(sk)->sport; th->dest = req->rmt_port; TCP_SKB_CB(skb)->seq = req->snt_isn; TCP_SKB_CB(skb)->end_seq = TCP_SKB_CB(skb)->seq + 1; diff -Nru a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c --- a/net/ipv4/tcp_timer.c Tue Mar 12 13:58:16 2002 +++ b/net/ipv4/tcp_timer.c Tue Mar 12 13:58:16 2002 @@ -334,10 +334,12 @@ * we cannot allow such beasts to hang infinitely. */ #ifdef TCP_DEBUG - if (net_ratelimit()) + if (net_ratelimit()) { + struct inet_opt *inet = inet_sk(sk); printk(KERN_DEBUG "TCP: Treason uncloaked! Peer %u.%u.%u.%u:%u/%u shrinks window %u:%u. 
Repaired.\n", - NIPQUAD(sk->daddr), htons(sk->dport), sk->num, - tp->snd_una, tp->snd_nxt); + NIPQUAD(inet->daddr), htons(inet->dport), + inet->num, tp->snd_una, tp->snd_nxt); + } #endif if (tcp_time_stamp - tp->rcv_tstamp > TCP_RTO_MAX) { tcp_write_err(sk); diff -Nru a/net/ipv4/udp.c b/net/ipv4/udp.c --- a/net/ipv4/udp.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv4/udp.c Tue Mar 12 13:58:15 2002 @@ -108,6 +108,8 @@ static int udp_v4_get_port(struct sock *sk, unsigned short snum) { + struct inet_opt *inet = inet_sk(sk); + write_lock_bh(&udp_hash_lock); if (snum == 0) { int best_size_so_far, best, result, i; @@ -118,11 +120,11 @@ best_size_so_far = 32767; best = result = udp_port_rover; for (i = 0; i < UDP_HTABLE_SIZE; i++, result++) { - struct sock *sk; + struct sock *sk2; int size; - sk = udp_hash[result & (UDP_HTABLE_SIZE - 1)]; - if (!sk) { + sk2 = udp_hash[result & (UDP_HTABLE_SIZE - 1)]; + if (!sk2) { if (result > sysctl_local_port_range[1]) result = sysctl_local_port_range[0] + ((result - sysctl_local_port_range[0]) & @@ -133,7 +135,7 @@ do { if (++size >= best_size_so_far) goto next; - } while ((sk = sk->next) != NULL); + } while ((sk2 = sk2->next) != NULL); best_size_so_far = size; best = result; next:; @@ -157,17 +159,19 @@ for (sk2 = udp_hash[snum & (UDP_HTABLE_SIZE - 1)]; sk2 != NULL; sk2 = sk2->next) { - if (sk2->num == snum && + struct inet_opt *inet2 = inet_sk(sk2); + + if (inet2->num == snum && sk2 != sk && sk2->bound_dev_if == sk->bound_dev_if && - (!sk2->rcv_saddr || - !sk->rcv_saddr || - sk2->rcv_saddr == sk->rcv_saddr) && + (!inet2->rcv_saddr || + !inet->rcv_saddr || + inet2->rcv_saddr == inet->rcv_saddr) && (!sk2->reuse || !sk->reuse)) goto fail; } } - sk->num = snum; + inet->num = snum; if (sk->pprev == NULL) { struct sock **skp = &udp_hash[snum & (UDP_HTABLE_SIZE - 1)]; if ((sk->next = *skp) != NULL) @@ -198,7 +202,7 @@ sk->next->pprev = sk->pprev; *sk->pprev = sk->next; sk->pprev = NULL; - sk->num = 0; + inet_sk(sk)->num = 0; sock_prot_dec_use(sk->prot); __sock_put(sk); } @@ -215,20 +219,22 @@ int badness = -1; for(sk = udp_hash[hnum & (UDP_HTABLE_SIZE - 1)]; sk != NULL; sk = sk->next) { - if(sk->num == hnum) { + struct inet_opt *inet = inet_sk(sk); + + if (inet->num == hnum) { int score = 0; - if(sk->rcv_saddr) { - if(sk->rcv_saddr != daddr) + if (inet->rcv_saddr) { + if (inet->rcv_saddr != daddr) continue; score++; } - if(sk->daddr) { - if(sk->daddr != saddr) + if (inet->daddr) { + if (inet->daddr != saddr) continue; score++; } - if(sk->dport) { - if(sk->dport != sport) + if (inet->dport) { + if (inet->dport != sport) continue; score++; } @@ -269,10 +275,12 @@ struct sock *s = sk; unsigned short hnum = ntohs(loc_port); for(; s; s = s->next) { - if ((s->num != hnum) || - (s->daddr && s->daddr!=rmt_addr) || - (s->dport != rmt_port && s->dport != 0) || - (s->rcv_saddr && s->rcv_saddr != loc_addr) || + struct inet_opt *inet = inet_sk(s); + + if (inet->num != hnum || + (inet->daddr && inet->daddr != rmt_addr) || + (inet->dport != rmt_port && inet->dport) || + (inet->rcv_saddr && inet->rcv_saddr != loc_addr) || (s->bound_dev_if && s->bound_dev_if != dif)) continue; break; @@ -469,15 +477,15 @@ } else { if (sk->state != TCP_ESTABLISHED) return -ENOTCONN; - ufh.daddr = sk->daddr; - ufh.uh.dest = sk->dport; + ufh.daddr = inet->daddr; + ufh.uh.dest = inet->dport; /* Open fast path for connected socket. Route will not be used, if at least one option is set. 
*/ connected = 1; } - ipc.addr = sk->saddr; - ufh.uh.source = sk->sport; + ipc.addr = inet->saddr; + ufh.uh.source = inet->sport; ipc.opt = NULL; ipc.oif = sk->bound_dev_if; @@ -728,7 +736,7 @@ sk_dst_reset(sk); - err = ip_route_connect(&rt, usin->sin_addr.s_addr, sk->saddr, + err = ip_route_connect(&rt, usin->sin_addr.s_addr, inet->saddr, RT_CONN_FLAGS(sk), sk->bound_dev_if); if (err) return err; @@ -736,12 +744,12 @@ ip_rt_put(rt); return -EACCES; } - if(!sk->saddr) - sk->saddr = rt->rt_src; /* Update source address */ - if(!sk->rcv_saddr) - sk->rcv_saddr = rt->rt_src; - sk->daddr = rt->rt_dst; - sk->dport = usin->sin_port; + if (!inet->saddr) + inet->saddr = rt->rt_src; /* Update source address */ + if (!inet->rcv_saddr) + inet->rcv_saddr = rt->rt_src; + inet->daddr = rt->rt_dst; + inet->dport = usin->sin_port; sk->state = TCP_ESTABLISHED; inet->id = jiffies; @@ -751,17 +759,17 @@ int udp_disconnect(struct sock *sk, int flags) { + struct inet_opt *inet = inet_sk(sk); /* * 1003.1g - break association. */ sk->state = TCP_CLOSE; - sk->daddr = 0; - sk->dport = 0; + inet->daddr = 0; + inet->dport = 0; sk->bound_dev_if = 0; if (!(sk->userlocks&SOCK_BINDADDR_LOCK)) { - sk->rcv_saddr = 0; - sk->saddr = 0; + inet->rcv_saddr = inet->saddr = 0; #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) if (sk->family == PF_INET6) { struct ipv6_pinfo *np = inet6_sk(sk); @@ -773,7 +781,7 @@ } if (!(sk->userlocks&SOCK_BINDPORT_LOCK)) { sk->prot->unhash(sk); - sk->sport = 0; + inet->sport = 0; } sk_dst_reset(sk); return 0; @@ -962,15 +970,16 @@ static void get_udp_sock(struct sock *sp, char *tmpbuf, int i) { + struct inet_opt *inet = inet_sk(sp); unsigned int dest, src; __u16 destp, srcp; - dest = sp->daddr; - src = sp->rcv_saddr; - destp = ntohs(sp->dport); - srcp = ntohs(sp->sport); + dest = inet->daddr; + src = inet->rcv_saddr; + destp = ntohs(inet->dport); + srcp = ntohs(inet->sport); sprintf(tmpbuf, "%4d: %08X:%04X %08X:%04X" - " %02X %08X:%08X %02X:%08lX %08X %5d %8d %ld %d %p", + " %02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p", i, src, srcp, dest, destp, sp->state, atomic_read(&sp->wmem_alloc), atomic_read(&sp->rmem_alloc), 0, 0L, 0, diff -Nru a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c --- a/net/ipv6/af_inet6.c Tue Mar 12 13:58:14 2002 +++ b/net/ipv6/af_inet6.c Tue Mar 12 13:58:14 2002 @@ -200,7 +200,7 @@ inet = inet_sk(sk); if (SOCK_RAW == sock->type) { - sk->num = protocol; + inet->num = protocol; if (IPPROTO_RAW == protocol) inet->hdrincl = 1; } @@ -241,12 +241,12 @@ #endif MOD_INC_USE_COUNT; - if (sk->num) { + if (inet->num) { /* It assumes that any protocol which allows * the user to assign a number at socket * creation time automatically shares. */ - sk->sport = ntohs(sk->num); + inet->sport = ntohs(inet->num); sk->prot->hash(sk); } if (sk->prot->init) { @@ -278,6 +278,7 @@ { struct sockaddr_in6 *addr=(struct sockaddr_in6 *)uaddr; struct sock *sk = sock->sk; + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); __u32 v4addr = 0; unsigned short snum; @@ -318,8 +319,7 @@ lock_sock(sk); /* Check these errors (active socket, double bind). */ - if ((sk->state != TCP_CLOSE) || - (sk->num != 0)) { + if (sk->state != TCP_CLOSE || inet->num) { release_sock(sk); return -EINVAL; } @@ -340,8 +340,8 @@ } } - sk->rcv_saddr = v4addr; - sk->saddr = v4addr; + inet->rcv_saddr = v4addr; + inet->saddr = v4addr; ipv6_addr_copy(&np->rcv_saddr, &addr->sin6_addr); @@ -350,8 +350,7 @@ /* Make sure we are allowed to bind here. 
*/ if (sk->prot->get_port(sk, snum) != 0) { - sk->rcv_saddr = 0; - sk->saddr = 0; + inet->rcv_saddr = inet->saddr = 0; memset(&np->rcv_saddr, 0, sizeof(struct in6_addr)); memset(&np->saddr, 0, sizeof(struct in6_addr)); @@ -363,9 +362,9 @@ sk->userlocks |= SOCK_BINDADDR_LOCK; if (snum) sk->userlocks |= SOCK_BINDPORT_LOCK; - sk->sport = ntohs(sk->num); - sk->dport = 0; - sk->daddr = 0; + inet->sport = ntohs(inet->num); + inet->dport = 0; + inet->daddr = 0; release_sock(sk); return 0; @@ -421,17 +420,18 @@ { struct sockaddr_in6 *sin=(struct sockaddr_in6 *)uaddr; struct sock *sk = sock->sk; + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); sin->sin6_family = AF_INET6; sin->sin6_flowinfo = 0; sin->sin6_scope_id = 0; if (peer) { - if (!sk->dport) return -ENOTCONN; if (((1<<sk->state)&(TCPF_CLOSE|TCPF_SYN_SENT)) && peer == 1) return -ENOTCONN; - sin->sin6_port = sk->dport; + sin->sin6_port = inet->dport; memcpy(&sin->sin6_addr, &np->daddr, sizeof(struct in6_addr)); if (np->sndflow) sin->sin6_flowinfo = np->flow_label; @@ -443,7 +443,7 @@ memcpy(&sin->sin6_addr, &np->rcv_saddr, sizeof(struct in6_addr)); - sin->sin6_port = sk->sport; + sin->sin6_port = inet->sport; } if (ipv6_addr_type(&sin->sin6_addr) & IPV6_ADDR_LINKLOCAL) sin->sin6_scope_id = sk->bound_dev_if; @@ -675,6 +675,11 @@ */ inet6_register_protosw(&rawv6_protosw); + /* Register the family here so that the init calls below will + * be able to create sockets. (?? is this dangerous ??) + */ + (void) sock_register(&inet6_family_ops); + /* * ipngwg API draft makes clear that the correct semantics * for TCP and UDP is to consider one TCP and UDP instance @@ -719,9 +724,6 @@ udpv6_init(); tcpv6_init(); - /* Now the userspace is allowed to create INET6 sockets. */ - (void) sock_register(&inet6_family_ops); - return 0; #ifdef CONFIG_PROC_FS diff -Nru a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c --- a/net/ipv6/ipv6_sockglue.c Tue Mar 12 13:58:14 2002 +++ b/net/ipv6/ipv6_sockglue.c Tue Mar 12 13:58:14 2002 @@ -79,7 +79,7 @@ struct ip6_ra_chain *ra, *new_ra, **rap; /* RA packet may be delivered ONLY to IPPROTO_RAW socket */ - if (sk->type != SOCK_RAW || sk->num != IPPROTO_RAW) + if (sk->type != SOCK_RAW || inet_sk(sk)->num != IPPROTO_RAW) return -EINVAL; new_ra = (sel>=0) ?
kmalloc(sizeof(*new_ra), GFP_KERNEL) : NULL; @@ -283,7 +283,7 @@ if (opt) { struct tcp_opt *tp = tcp_sk(sk); if (!((1<<sk->state)&(TCPF_LISTEN|TCPF_CLOSE)) - && sk->daddr != LOOPBACK4_IPV6) { + && inet_sk(sk)->daddr != LOOPBACK4_IPV6) { tp->ext_header_len = opt->opt_flen + opt->opt_nflen; tcp_sync_mss(sk, tp->pmtu_cookie); } } diff -Nru a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c --- a/net/ipv6/ndisc.c Tue Mar 12 13:58:14 2002 +++ b/net/ipv6/ndisc.c Tue Mar 12 13:58:14 2002 @@ -84,58 +84,63 @@ static void pndisc_destructor(struct pneigh_entry *n); static void pndisc_redo(struct sk_buff *skb); -static struct neigh_ops ndisc_generic_ops = -{ - AF_INET6, - NULL, - ndisc_solicit, - ndisc_error_report, - neigh_resolve_output, - neigh_connected_output, - dev_queue_xmit, - dev_queue_xmit +static struct neigh_ops ndisc_generic_ops = { + family: AF_INET6, + solicit: ndisc_solicit, + error_report: ndisc_error_report, + output: neigh_resolve_output, + connected_output: neigh_connected_output, + hh_output: dev_queue_xmit, + queue_xmit: dev_queue_xmit, }; -static struct neigh_ops ndisc_hh_ops = -{ - AF_INET6, - NULL, - ndisc_solicit, - ndisc_error_report, - neigh_resolve_output, - neigh_resolve_output, - dev_queue_xmit, - dev_queue_xmit +static struct neigh_ops ndisc_hh_ops = { + family: AF_INET6, + solicit: ndisc_solicit, + error_report: ndisc_error_report, + output: neigh_resolve_output, + connected_output: neigh_resolve_output, + hh_output: dev_queue_xmit, + queue_xmit: dev_queue_xmit, }; -static struct neigh_ops ndisc_direct_ops = -{ - AF_INET6, - NULL, - NULL, - NULL, - dev_queue_xmit, - dev_queue_xmit, - dev_queue_xmit, - dev_queue_xmit +static struct neigh_ops ndisc_direct_ops = { + family: AF_INET6, + output: dev_queue_xmit, + connected_output: dev_queue_xmit, + hh_output: dev_queue_xmit, + queue_xmit: dev_queue_xmit, }; -struct neigh_table nd_tbl = -{ - NULL, - AF_INET6, - sizeof(struct neighbour) + sizeof(struct in6_addr), - sizeof(struct in6_addr), - ndisc_hash, - ndisc_constructor, - pndisc_constructor, - pndisc_destructor, - pndisc_redo, - "ndisc_cache", - { NULL, NULL, &nd_tbl, 0, NULL, NULL, - 30*HZ, 1*HZ, 60*HZ, 30*HZ, 5*HZ, 3, 3, 0, 3, 1*HZ, (8*HZ)/10, 64, 0 }, - 30*HZ, 128, 512, 1024, +struct neigh_table nd_tbl = { + family: AF_INET6, + entry_size: sizeof(struct neighbour) + sizeof(struct in6_addr), + key_len: sizeof(struct in6_addr), + hash: ndisc_hash, + constructor: ndisc_constructor, + pconstructor: pndisc_constructor, + pdestructor: pndisc_destructor, + proxy_redo: pndisc_redo, + id: "ndisc_cache", + parms: { + tbl: &nd_tbl, + base_reachable_time: 30 * HZ, + retrans_time: 1 * HZ, + gc_staletime: 60 * HZ, + reachable_time: 30 * HZ, + delay_probe_time: 5 * HZ, + queue_len: 3, + ucast_probes: 3, + mcast_probes: 3, + anycast_delay: 1 * HZ, + proxy_delay: (8 * HZ) / 10, + proxy_qlen: 64, + }, + gc_interval: 30 * HZ, + gc_thresh1: 128, + gc_thresh2: 512, + gc_thresh3: 1024, }; #define NDISC_OPT_SPACE(len) (((len)+2+7)&~7) diff -Nru a/net/ipv6/raw.c b/net/ipv6/raw.c --- a/net/ipv6/raw.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv6/raw.c Tue Mar 12 13:58:15 2002 @@ -50,7 +50,8 @@ static void raw_v6_hash(struct sock *sk) { - struct sock **skp = &raw_v6_htable[sk->num & (RAWV6_HTABLE_SIZE - 1)]; + struct sock **skp = &raw_v6_htable[inet_sk(sk)->num & + (RAWV6_HTABLE_SIZE - 1)]; write_lock_bh(&raw_v6_lock); if ((sk->next = *skp) != NULL) @@ -85,7 +86,7 @@ int addr_type = ipv6_addr_type(loc_addr); for(s = sk; s; s = s->next) { - if(s->num == num) { + if (inet_sk(s)->num == num) { struct ipv6_pinfo *np =
inet6_sk(s); if (!ipv6_addr_any(&np->daddr) && @@ -186,6 +187,7 @@ /* This cleans up af_inet6 a bit. -DaveM */ static int rawv6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len) { + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); struct sockaddr_in6 *addr = (struct sockaddr_in6 *) uaddr; __u32 v4addr = 0; @@ -233,8 +235,7 @@ } } - sk->rcv_saddr = v4addr; - sk->saddr = v4addr; + inet->rcv_saddr = inet->saddr = v4addr; ipv6_addr_copy(&np->rcv_saddr, &addr->sin6_addr); if (!(addr_type & IPV6_ADDR_MULTICAST)) ipv6_addr_copy(&np->saddr, &addr->sin6_addr); @@ -439,6 +440,7 @@ { struct ipv6_txoptions opt_space; struct sockaddr_in6 * sin6 = (struct sockaddr_in6 *) msg->msg_name; + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); struct ipv6_txoptions *opt = NULL; struct ip6_flowlabel *flowlabel = NULL; @@ -478,7 +480,7 @@ proto = ntohs(sin6->sin6_port); if (!proto) - proto = sk->num; + proto = inet->num; if (proto > 255) return(-EINVAL); @@ -507,7 +509,7 @@ if (sk->state != TCP_ESTABLISHED) return(-EINVAL); - proto = sk->num; + proto = inet->num; daddr = &np->daddr; fl.fl6_flowlabel = np->flow_label; } @@ -635,7 +637,7 @@ break; case SOL_ICMPV6: - if (sk->num != IPPROTO_ICMPV6) + if (inet_sk(sk)->num != IPPROTO_ICMPV6) return -EOPNOTSUPP; return rawv6_seticmpfilter(sk, level, optname, optval, optlen); @@ -678,7 +680,7 @@ break; case SOL_ICMPV6: - if (sk->num != IPPROTO_ICMPV6) + if (inet_sk(sk)->num != IPPROTO_ICMPV6) return -EOPNOTSUPP; return rawv6_geticmpfilter(sk, level, optname, optval, optlen); @@ -741,7 +743,7 @@ static void rawv6_close(struct sock *sk, long timeout) { - if (sk->num == IPPROTO_RAW) + if (inet_sk(sk)->num == IPPROTO_RAW) ip6_ra_control(sk, -1, NULL); inet_sock_release(sk); @@ -764,10 +766,10 @@ dest = &np->daddr; src = &np->rcv_saddr; destp = 0; - srcp = sp->num; + srcp = inet_sk(sp)->num; sprintf(tmpbuf, "%4d: %08X%08X%08X%08X:%04X %08X%08X%08X%08X:%04X " - "%02X %08X:%08X %02X:%08lX %08X %5d %8d %ld %d %p", + "%02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p", i, src->s6_addr32[0], src->s6_addr32[1], src->s6_addr32[2], src->s6_addr32[3], srcp, diff -Nru a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c --- a/net/ipv6/tcp_ipv6.c Tue Mar 12 13:58:14 2002 +++ b/net/ipv6/tcp_ipv6.c Tue Mar 12 13:58:14 2002 @@ -76,11 +76,12 @@ static __inline__ int tcp_v6_sk_hashfn(struct sock *sk) { + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); struct in6_addr *laddr = &np->rcv_saddr; struct in6_addr *faddr = &np->daddr; - __u16 lport = sk->num; - __u16 fport = sk->dport; + __u16 lport = inet->num; + __u16 fport = inet->dport; return tcp_v6_hashfn(laddr, lport, faddr, fport); } @@ -153,14 +154,15 @@ !sk2->reuse || sk2->state == TCP_LISTEN) { /* NOTE: IPv6 tw bucket have different format */ - if (!sk2->rcv_saddr || + if (!inet_sk(sk2)->rcv_saddr || addr_type == IPV6_ADDR_ANY || !ipv6_addr_cmp(&np->rcv_saddr, sk2->state != TCP_TIME_WAIT ? 
&np2->rcv_saddr : &((struct tcp_tw_bucket*)sk)->v6_rcv_saddr) || (addr_type==IPV6_ADDR_MAPPED && sk2->family==AF_INET && - sk->rcv_saddr==sk2->rcv_saddr)) + inet_sk(sk)->rcv_saddr == + inet_sk(sk2)->rcv_saddr)) break; } } @@ -185,7 +187,7 @@ tb->fastreuse = 0; success: - sk->num = snum; + inet_sk(sk)->num = snum; if (sk->prev == NULL) { if ((sk->bind_next = tb->owners) != NULL) tb->owners->bind_pprev = &sk->bind_next; @@ -255,7 +257,7 @@ read_lock(&tcp_lhash_lock); sk = tcp_listening_hash[tcp_lhashfn(hnum)]; for(; sk; sk = sk->next) { - if((sk->num == hnum) && (sk->family == PF_INET6)) { + if (inet_sk(sk)->num == hnum && sk->family == PF_INET6) { struct ipv6_pinfo *np = inet6_sk(sk); score = 1; @@ -313,9 +315,11 @@ } /* Must check for a TIME_WAIT'er before going to listener hash. */ for(sk = (head + tcp_ehash_size)->chain; sk; sk = sk->next) { - if(*((__u32 *)&(sk->dport)) == ports && + /* FIXME: acme: check this... */ + struct tcp_tw_bucket *tw = (struct tcp_tw_bucket *)sk; + + if(*((__u32 *)&(tw->dport)) == ports && sk->family == PF_INET6) { - struct tcp_tw_bucket *tw = (struct tcp_tw_bucket *)sk; if(!ipv6_addr_cmp(&tw->v6_daddr, saddr) && !ipv6_addr_cmp(&tw->v6_rcv_saddr, daddr) && (!sk->bound_dev_if || sk->bound_dev_if == dif)) @@ -424,12 +428,13 @@ static int tcp_v6_check_established(struct sock *sk) { + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); struct in6_addr *daddr = &np->rcv_saddr; struct in6_addr *saddr = &np->daddr; int dif = sk->bound_dev_if; - u32 ports = TCP_COMBINED_PORTS(sk->dport, sk->num); - int hash = tcp_v6_hashfn(daddr, sk->num, saddr, sk->dport); + u32 ports = TCP_COMBINED_PORTS(inet->dport, inet->num); + int hash = tcp_v6_hashfn(daddr, inet->num, saddr, inet->dport); struct tcp_ehash_bucket *head = &tcp_ehash[hash]; struct sock *sk2, **skp; struct tcp_tw_bucket *tw; @@ -439,7 +444,7 @@ for(skp = &(head + tcp_ehash_size)->chain; (sk2=*skp)!=NULL; skp = &sk2->next) { tw = (struct tcp_tw_bucket*)sk2; - if(*((__u32 *)&(sk2->dport)) == ports && + if(*((__u32 *)&(tw->dport)) == ports && sk2->family == PF_INET6 && !ipv6_addr_cmp(&tw->v6_daddr, saddr) && !ipv6_addr_cmp(&tw->v6_rcv_saddr, daddr) && @@ -496,7 +501,7 @@ static int tcp_v6_hash_connecting(struct sock *sk) { - unsigned short snum = sk->num; + unsigned short snum = inet_sk(sk)->num; struct tcp_bind_hashbucket *head = &tcp_bhash[tcp_bhashfn(snum)]; struct tcp_bind_bucket *tb = head->chain; @@ -522,6 +527,7 @@ int addr_len) { struct sockaddr_in6 *usin = (struct sockaddr_in6 *) uaddr; + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); struct tcp_opt *tp = tcp_sk(sk); struct in6_addr *saddr = NULL; @@ -618,9 +624,9 @@ goto failure; } else { ipv6_addr_set(&np->saddr, 0, 0, htonl(0x0000FFFF), - sk->saddr); + inet->saddr); ipv6_addr_set(&np->rcv_saddr, 0, 0, htonl(0x0000FFFF), - sk->rcv_saddr); + inet->rcv_saddr); } return err; @@ -634,7 +640,7 @@ fl.fl6_src = saddr; fl.oif = sk->bound_dev_if; fl.uli_u.ports.dport = usin->sin6_port; - fl.uli_u.ports.sport = sk->sport; + fl.uli_u.ports.sport = inet->sport; if (np->opt && np->opt->srcrt) { struct rt0_hdr *rt0 = (struct rt0_hdr *)np->opt->srcrt; @@ -662,7 +668,7 @@ /* set the source address */ ipv6_addr_copy(&np->rcv_saddr, saddr); ipv6_addr_copy(&np->saddr, saddr); - sk->rcv_saddr= LOOPBACK4_IPV6; + inet->rcv_saddr = LOOPBACK4_IPV6; tp->ext_header_len = 0; if (np->opt) @@ -675,7 +681,7 @@ if (buff == NULL) goto failure; - sk->dport = usin->sin6_port; + inet->dport = usin->sin6_port; /* * Init variables @@ -684,7 
+690,8 @@ if (!tp->write_seq) tp->write_seq = secure_tcpv6_sequence_number(np->saddr.s6_addr32, np->daddr.s6_addr32, - sk->sport, sk->dport); + inet->sport, + inet->dport); err = tcp_connect(sk, buff); if (err == 0) @@ -692,7 +699,7 @@ failure: __sk_dst_reset(sk); - sk->dport = 0; + inet->dport = 0; sk->route_caps = 0; return err; } @@ -750,6 +757,7 @@ dst = __sk_dst_check(sk, np->dst_cookie); if (dst == NULL) { + struct inet_opt *inet = inet_sk(sk); struct flowi fl; /* BUGGG_FUTURE: Again, it is not clear how @@ -760,8 +768,8 @@ fl.nl_u.ip6_u.daddr = &np->daddr; fl.nl_u.ip6_u.saddr = &np->saddr; fl.oif = sk->bound_dev_if; - fl.uli_u.ports.dport = sk->dport; - fl.uli_u.ports.sport = sk->sport; + fl.uli_u.ports.dport = inet->dport; + fl.uli_u.ports.sport = inet->sport; dst = ip6_route_output(sk, &fl); } else @@ -850,7 +858,7 @@ fl.fl6_flowlabel = 0; fl.oif = req->af.v6_req.iif; fl.uli_u.ports.dport = req->rmt_port; - fl.uli_u.ports.sport = sk->sport; + fl.uli_u.ports.sport = inet_sk(sk)->sport; if (dst == NULL) { opt = np->opt; @@ -1245,14 +1253,15 @@ if (newsk == NULL) return NULL; + newinet = inet_sk(newsk); newnp = inet6_sk(newsk); newtp = tcp_sk(newsk); ipv6_addr_set(&newnp->daddr, 0, 0, htonl(0x0000FFFF), - newsk->daddr); + newinet->daddr); ipv6_addr_set(&newnp->saddr, 0, 0, htonl(0x0000FFFF), - newsk->saddr); + newinet->saddr); ipv6_addr_copy(&newnp->rcv_saddr, &newnp->saddr); @@ -1303,7 +1312,7 @@ fl.fl6_flowlabel = 0; fl.oif = sk->bound_dev_if; fl.uli_u.ports.dport = req->rmt_port; - fl.uli_u.ports.sport = sk->sport; + fl.uli_u.ports.sport = inet_sk(sk)->sport; dst = ip6_route_output(sk, &fl); } @@ -1376,9 +1385,7 @@ newtp->advmss = dst->advmss; tcp_initialize_rcv_mss(newsk); - newsk->daddr = LOOPBACK4_IPV6; - newsk->saddr = LOOPBACK4_IPV6; - newsk->rcv_saddr = LOOPBACK4_IPV6; + newinet->daddr = newinet->saddr = newinet->rcv_saddr = LOOPBACK4_IPV6; __tcp_v6_hash(newsk); tcp_inherit_port(sk, newsk); @@ -1680,6 +1687,7 @@ dst = __sk_dst_check(sk, np->dst_cookie); if (dst == NULL) { + struct inet_opt *inet = inet_sk(sk); struct flowi fl; fl.proto = IPPROTO_TCP; @@ -1687,8 +1695,8 @@ fl.nl_u.ip6_u.saddr = &np->saddr; fl.fl6_flowlabel = np->flow_label; fl.oif = sk->bound_dev_if; - fl.uli_u.ports.dport = sk->dport; - fl.uli_u.ports.sport = sk->sport; + fl.uli_u.ports.dport = inet->dport; + fl.uli_u.ports.sport = inet->sport; if (np->opt && np->opt->srcrt) { struct rt0_hdr *rt0 = (struct rt0_hdr *) np->opt->srcrt; @@ -1714,6 +1722,7 @@ static int tcp_v6_xmit(struct sk_buff *skb) { struct sock *sk = skb->sk; + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); struct flowi fl; struct dst_entry *dst; @@ -1724,8 +1733,8 @@ fl.fl6_flowlabel = np->flow_label; IP6_ECN_flow_xmit(sk, fl.fl6_flowlabel); fl.oif = sk->bound_dev_if; - fl.uli_u.ports.sport = sk->sport; - fl.uli_u.ports.dport = sk->dport; + fl.uli_u.ports.sport = inet->sport; + fl.uli_u.ports.dport = inet->dport; if (np->opt && np->opt->srcrt) { struct rt0_hdr *rt0 = (struct rt0_hdr *) np->opt->srcrt; @@ -1761,7 +1770,7 @@ sin6->sin6_family = AF_INET6; memcpy(&sin6->sin6_addr, &np->daddr, sizeof(struct in6_addr)); - sin6->sin6_port = sk->dport; + sin6->sin6_port = inet_sk(sk)->dport; /* We do not store received flowlabel for TCP */ sin6->sin6_flowinfo = 0; sin6->sin6_scope_id = 0; @@ -1903,7 +1912,7 @@ i, src->s6_addr32[0], src->s6_addr32[1], src->s6_addr32[2], src->s6_addr32[3], - ntohs(sk->sport), + ntohs(inet_sk(sk)->sport), dest->s6_addr32[0], dest->s6_addr32[1], dest->s6_addr32[2], dest->s6_addr32[3], 
ntohs(req->rmt_port), @@ -1924,13 +1933,14 @@ __u16 destp, srcp; int timer_active; unsigned long timer_expires; + struct inet_opt *inet = inet_sk(sp); struct tcp_opt *tp = tcp_sk(sp); struct ipv6_pinfo *np = inet6_sk(sp); dest = &np->daddr; src = &np->rcv_saddr; - destp = ntohs(sp->dport); - srcp = ntohs(sp->sport); + destp = ntohs(inet->dport); + srcp = ntohs(inet->sport); if (tp->pending == TCP_TIME_RETRANS) { timer_active = 1; timer_expires = tp->timeout; diff -Nru a/net/ipv6/udp.c b/net/ipv6/udp.c --- a/net/ipv6/udp.c Tue Mar 12 13:58:15 2002 +++ b/net/ipv6/udp.c Tue Mar 12 13:58:15 2002 @@ -65,11 +65,11 @@ best_size_so_far = 32767; best = result = udp_port_rover; for (i = 0; i < UDP_HTABLE_SIZE; i++, result++) { - struct sock *sk; + struct sock *sk2; int size; - sk = udp_hash[result & (UDP_HTABLE_SIZE - 1)]; - if (!sk) { + sk2 = udp_hash[result & (UDP_HTABLE_SIZE - 1)]; + if (!sk2) { if (result > sysctl_local_port_range[1]) result = sysctl_local_port_range[0] + ((result - sysctl_local_port_range[0]) & @@ -80,7 +80,7 @@ do { if (++size >= best_size_so_far) goto next; - } while ((sk = sk->next) != NULL); + } while ((sk2 = sk2->next) != NULL); best_size_so_far = size; best = result; next:; @@ -104,23 +104,24 @@ for (sk2 = udp_hash[snum & (UDP_HTABLE_SIZE - 1)]; sk2 != NULL; sk2 = sk2->next) { + struct inet_opt *inet2 = inet_sk(sk2); struct ipv6_pinfo *np2 = inet6_sk(sk2); - if (sk2->num == snum && + if (inet2->num == snum && sk2 != sk && sk2->bound_dev_if == sk->bound_dev_if && - (!sk2->rcv_saddr || + (!inet2->rcv_saddr || addr_type == IPV6_ADDR_ANY || !ipv6_addr_cmp(&np->rcv_saddr, &np2->rcv_saddr) || (addr_type == IPV6_ADDR_MAPPED && sk2->family == AF_INET && - sk->rcv_saddr == sk2->rcv_saddr)) && + inet_sk(sk)->rcv_saddr == inet2->rcv_saddr)) && (!sk2->reuse || !sk->reuse)) goto fail; } } - sk->num = snum; + inet_sk(sk)->num = snum; if (sk->pprev == NULL) { struct sock **skp = &udp_hash[snum & (UDP_HTABLE_SIZE - 1)]; if ((sk->next = *skp) != NULL) @@ -151,7 +152,7 @@ sk->next->pprev = sk->pprev; *sk->pprev = sk->next; sk->pprev = NULL; - sk->num = 0; + inet_sk(sk)->num = 0; sock_prot_dec_use(sk->prot); __sock_put(sk); } @@ -167,12 +168,13 @@ read_lock(&udp_hash_lock); for(sk = udp_hash[hnum & (UDP_HTABLE_SIZE - 1)]; sk != NULL; sk = sk->next) { - if((sk->num == hnum) && - (sk->family == PF_INET6)) { + struct inet_opt *inet = inet_sk(sk); + + if (inet->num == hnum && sk->family == PF_INET6) { struct ipv6_pinfo *np = inet6_sk(sk); int score = 0; - if(sk->dport) { - if(sk->dport != sport) + if (inet->dport) { + if (inet->dport != sport) continue; score++; } @@ -213,6 +215,7 @@ int udpv6_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) { struct sockaddr_in6 *usin = (struct sockaddr_in6 *) uaddr; + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); struct in6_addr *daddr; struct in6_addr saddr; @@ -268,16 +271,16 @@ if (err < 0) return err; - ipv6_addr_set(&np->daddr, 0, 0, htonl(0x0000ffff), sk->daddr); + ipv6_addr_set(&np->daddr, 0, 0, htonl(0x0000ffff), inet->daddr); if (ipv6_addr_any(&np->saddr)) { ipv6_addr_set(&np->saddr, 0, 0, htonl(0x0000ffff), - sk->saddr); + inet->saddr); } if (ipv6_addr_any(&np->rcv_saddr)) { ipv6_addr_set(&np->rcv_saddr, 0, 0, htonl(0x0000ffff), - sk->rcv_saddr); + inet->rcv_saddr); } return 0; } @@ -300,7 +303,7 @@ ipv6_addr_copy(&np->daddr, daddr); np->flow_label = fl.fl6_flowlabel; - sk->dport = usin->sin6_port; + inet->dport = usin->sin6_port; /* * Check for a route to destination an obtain the @@ -311,8 +314,8 @@ 
fl.fl6_dst = &np->daddr; fl.fl6_src = &saddr; fl.oif = sk->bound_dev_if; - fl.uli_u.ports.dport = sk->dport; - fl.uli_u.ports.sport = sk->sport; + fl.uli_u.ports.dport = inet->dport; + fl.uli_u.ports.sport = inet->sport; if (flowlabel) { if (flowlabel->opt && flowlabel->opt->srcrt) { @@ -344,7 +347,7 @@ if (ipv6_addr_any(&np->rcv_saddr)) { ipv6_addr_copy(&np->rcv_saddr, &saddr); - sk->rcv_saddr = LOOPBACK4_IPV6; + inet->rcv_saddr = LOOPBACK4_IPV6; } sk->state = TCP_ESTABLISHED; } @@ -528,10 +531,12 @@ struct sock *s = sk; unsigned short num = ntohs(loc_port); for(; s; s = s->next) { - if(s->num == num) { + struct inet_opt *inet = inet_sk(s); + + if (inet->num == num) { struct ipv6_pinfo *np = inet6_sk(s); - if(s->dport) { - if(s->dport != rmt_port) + if (inet->dport) { + if (inet->dport != rmt_port) continue; } if (!ipv6_addr_any(&np->daddr) && @@ -757,6 +762,7 @@ { struct ipv6_txoptions opt_space; struct udpv6fakehdr udh; + struct inet_opt *inet = inet_sk(sk); struct ipv6_pinfo *np = inet6_sk(sk); struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *) msg->msg_name; struct ipv6_txoptions *opt = NULL; @@ -818,7 +824,7 @@ if (sk->state != TCP_ESTABLISHED) return -ENOTCONN; - udh.uh.dest = sk->dport; + udh.uh.dest = inet->dport; daddr = &np->daddr; fl.fl6_flowlabel = np->flow_label; } @@ -867,7 +873,7 @@ if (opt && opt->srcrt) udh.daddr = daddr; - udh.uh.source = sk->sport; + udh.uh.source = inet->sport; udh.uh.len = len < 0x10000 ? htons(len) : 0; udh.uh.check = 0; udh.iov = msg->msg_iov; @@ -905,17 +911,18 @@ static void get_udp6_sock(struct sock *sp, char *tmpbuf, int i) { + struct inet_opt *inet = inet_sk(sp); struct ipv6_pinfo *np = inet6_sk(sp); struct in6_addr *dest, *src; __u16 destp, srcp; dest = &np->daddr; src = &np->rcv_saddr; - destp = ntohs(sp->dport); - srcp = ntohs(sp->sport); + destp = ntohs(inet->dport); + srcp = ntohs(inet->sport); sprintf(tmpbuf, "%4d: %08X%08X%08X%08X:%04X %08X%08X%08X%08X:%04X " - "%02X %08X:%08X %02X:%08lX %08X %5d %8d %ld %d %p", + "%02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p", i, src->s6_addr32[0], src->s6_addr32[1], src->s6_addr32[2], src->s6_addr32[3], srcp, diff -Nru a/net/irda/af_irda.c b/net/irda/af_irda.c --- a/net/irda/af_irda.c Tue Mar 12 13:58:15 2002 +++ b/net/irda/af_irda.c Tue Mar 12 13:58:15 2002 @@ -1700,7 +1700,7 @@ if (sk->state == TCP_ESTABLISHED) { if ((self->tx_flow == FLOW_START) && - (sk->sndbuf - (int)atomic_read(&sk->wmem_alloc) >= SOCK_MIN_WRITE_SPACE)) + sock_writeable(sk)) { mask |= POLLOUT | POLLWRNORM | POLLWRBAND; } @@ -1708,13 +1708,13 @@ break; case SOCK_SEQPACKET: if ((self->tx_flow == FLOW_START) && - (sk->sndbuf - (int)atomic_read(&sk->wmem_alloc) >= SOCK_MIN_WRITE_SPACE)) + sock_writeable(sk)) { mask |= POLLOUT | POLLWRNORM | POLLWRBAND; } break; case SOCK_DGRAM: - if (sk->sndbuf - (int)atomic_read(&sk->wmem_alloc) >= SOCK_MIN_WRITE_SPACE) + if (sock_writeable(sk)) mask |= POLLOUT | POLLWRNORM | POLLWRBAND; break; default: diff -Nru a/net/netsyms.c b/net/netsyms.c --- a/net/netsyms.c Tue Mar 12 13:58:15 2002 +++ b/net/netsyms.c Tue Mar 12 13:58:15 2002 @@ -588,4 +588,11 @@ EXPORT_SYMBOL(net_call_rx_atomic); EXPORT_SYMBOL(softnet_data); +#if defined(CONFIG_NET_RADIO) || defined(CONFIG_NET_PCMCIA_RADIO) +/* Don't include the whole header mess for a single function */ +union iwreq_data; +extern void wireless_send_event(struct net_device *dev, unsigned int cmd, union iwreq_data *wrqu, char *extra); +EXPORT_SYMBOL(wireless_send_event); +#endif /* CONFIG_NET_RADIO || CONFIG_NET_PCMCIA_RADIO */ + #endif /* CONFIG_NET */ 
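
The af_irda hunks just above (and the sunrpc/xprt.c hunk further down) replace the open-coded "sk->sndbuf - (int)atomic_read(&sk->wmem_alloc) >= SOCK_MIN_WRITE_SPACE" test with a single call to sock_writeable(sk). As a rough sketch only (this is not the in-tree definition of sock_writeable(); the helper name and the "half of sndbuf" threshold are assumptions for illustration), a writability predicate of this kind compares the memory already charged to the send buffer against the buffer size:

	#include <net/sock.h>	/* struct sock, sk->wmem_alloc, sk->sndbuf */

	/* Illustrative only; not the kernel's sock_writeable(). */
	static inline int example_sock_writeable(struct sock *sk)
	{
		/* treat the socket as writable while less than half of
		 * the send buffer is consumed by queued skbs */
		return atomic_read(&sk->wmem_alloc) < (sk->sndbuf >> 1);
	}

Centralising the test lets IrDA and the RPC transport ask the same question instead of each hard-coding its own SOCK_MIN_WRITE_SPACE-style constant.
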
diff -Nru a/net/packet/af_packet.c b/net/packet/af_packet.c --- a/net/packet/af_packet.c Tue Mar 12 13:58:14 2002 +++ b/net/packet/af_packet.c Tue Mar 12 13:58:14 2002 @@ -180,6 +180,7 @@ spinlock_t bind_lock; char running; /* prot_hook is attached*/ int ifindex; /* bound device */ + unsigned short num; struct tpacket_stats stats; #ifdef CONFIG_PACKET_MULTICAST struct packet_mclist *mclist; @@ -678,8 +679,10 @@ */ if (saddr == NULL) { - ifindex = pkt_sk(sk)->ifindex; - proto = sk->num; + struct packet_opt *po = pkt_sk(sk); + + ifindex = po->ifindex; + proto = po->num; addr = NULL; } else { err = -EINVAL; @@ -839,7 +842,7 @@ po->running = 0; } - sk->num = protocol; + po->num = protocol; po->prot_hook.type = protocol; po->prot_hook.dev = dev; @@ -894,7 +897,7 @@ dev = dev_get_by_name(name); if (dev) { - err = packet_do_bind(sk, dev, sk->num); + err = packet_do_bind(sk, dev, pkt_sk(sk)->num); dev_put(dev); } return err; @@ -924,7 +927,7 @@ if (dev == NULL) goto out; } - err = packet_do_bind(sk, dev, sll->sll_protocol ? : sk->num); + err = packet_do_bind(sk, dev, sll->sll_protocol ? : pkt_sk(sk)->num); if (dev) dev_put(dev); @@ -972,7 +975,7 @@ goto out_free; memset(po, 0, sizeof(*po)); sk->family = PF_PACKET; - sk->num = protocol; + po->num = protocol; sk->destruct = packet_sock_destruct; atomic_inc(&packet_socks_nr); @@ -1131,7 +1134,7 @@ sll->sll_family = AF_PACKET; sll->sll_ifindex = po->ifindex; - sll->sll_protocol = sk->num; + sll->sll_protocol = po->num; dev = dev_get_by_index(po->ifindex); if (dev) { sll->sll_hatype = dev->type; @@ -1410,7 +1413,8 @@ break; case NETDEV_UP: spin_lock(&po->bind_lock); - if (dev->ifindex == po->ifindex && sk->num && po->running==0) { + if (dev->ifindex == po->ifindex && po->num && + !po->running) { dev_add_pack(&po->prot_hook); sock_hold(sk); po->running = 1; @@ -1861,7 +1865,7 @@ s, atomic_read(&s->refcnt), s->type, - ntohs(s->num), + ntohs(po->num), po->ifindex, po->running, atomic_read(&s->rmem_alloc), diff -Nru a/net/sched/sch_gred.c b/net/sched/sch_gred.c --- a/net/sched/sch_gred.c Tue Mar 12 13:58:15 2002 +++ b/net/sched/sch_gred.c Tue Mar 12 13:58:15 2002 @@ -7,7 +7,7 @@ * as published by the Free Software Foundation; either version * 2 of the License, or (at your option) any later version. * - * Authors: J Hadi Salim (hadi@nortelnetworks.com) 1998,1999 + * Authors: J Hadi Salim (hadi@cyberus.ca) 1998-2002 * * 991129: - Bug fix with grio mode * - a better sing. 
AvgQ mode with Grio(WRED) @@ -436,7 +436,7 @@ if (table->tab[table->def] == NULL) { table->tab[table->def]= kmalloc(sizeof(struct gred_sched_data), GFP_KERNEL); - if (NULL == table->tab[ctl->DP]) + if (NULL == table->tab[table->def]) return -ENOMEM; memset(table->tab[table->def], 0, @@ -498,7 +498,7 @@ { unsigned long qave; struct rtattr *rta; - struct tc_gred_qopt *opt; + struct tc_gred_qopt *opt = NULL ; struct tc_gred_qopt *dst; struct gred_sched *table = (struct gred_sched *)sch->data; struct gred_sched_data *q; @@ -520,7 +520,6 @@ if (!table->initd) { DPRINTK("NO GRED Queues setup!\n"); - return -1; } for (i=0;irta_len = skb->tail - b; + kfree(opt); return skb->len; rtattr_failure: + if (opt) + kfree(opt); DPRINTK("gred_dump: FAILURE!!!!\n"); /* also free the opt struct here */ diff -Nru a/net/socket.c b/net/socket.c --- a/net/socket.c Tue Mar 12 13:58:15 2002 +++ b/net/socket.c Tue Mar 12 13:58:15 2002 @@ -359,6 +359,7 @@ static struct file_system_type sock_fs_type = { name: "sockfs", get_sb: sockfs_get_sb, + kill_sb: kill_anon_super, fs_flags: FS_NOMOUNT, }; static int sockfs_delete_dentry(struct dentry *dentry) diff -Nru a/net/sunrpc/sunrpc_syms.c b/net/sunrpc/sunrpc_syms.c --- a/net/sunrpc/sunrpc_syms.c Tue Mar 12 13:58:16 2002 +++ b/net/sunrpc/sunrpc_syms.c Tue Mar 12 13:58:16 2002 @@ -77,6 +77,7 @@ EXPORT_SYMBOL(svc_recv); EXPORT_SYMBOL(svc_wake_up); EXPORT_SYMBOL(svc_makesock); +EXPORT_SYMBOL(svc_reserve); /* RPC statistics */ #ifdef CONFIG_PROC_FS diff -Nru a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c --- a/net/sunrpc/svcsock.c Tue Mar 12 13:58:14 2002 +++ b/net/sunrpc/svcsock.c Tue Mar 12 13:58:14 2002 @@ -1161,7 +1161,8 @@ /* Register socket with portmapper */ if (*errp >= 0 && pmap_register) - *errp = svc_register(serv, inet->protocol, ntohs(inet->sport)); + *errp = svc_register(serv, inet->protocol, + ntohs(inet_sk(inet)->sport)); if (*errp < 0) { inet->user_data = NULL; diff -Nru a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c --- a/net/sunrpc/xprt.c Tue Mar 12 13:58:15 2002 +++ b/net/sunrpc/xprt.c Tue Mar 12 13:58:15 2002 @@ -67,9 +67,6 @@ #include -/* Following value should be > 32k + RPC overhead */ -#define XPRT_MIN_WRITE_SPACE (35000 + SOCK_MIN_WRITE_SPACE) - extern spinlock_t rpc_queue_lock; /* @@ -1099,9 +1096,8 @@ if (xprt->shutdown) return; - - /* Wait until we have enough socket memory */ - if (sock_wspace(sk) < min_t(int, sk->sndbuf,XPRT_MIN_WRITE_SPACE)) + /* Wait until we have enough socket memory. */ + if (sock_writeable(sk)) return; if (!xprt_test_and_set_wspace(xprt)) { diff -Nru a/net/unix/af_unix.c b/net/unix/af_unix.c --- a/net/unix/af_unix.c Tue Mar 12 13:58:15 2002 +++ b/net/unix/af_unix.c Tue Mar 12 13:58:15 2002 @@ -1767,7 +1767,7 @@ struct unix_sock *u = unix_sk(s); unix_state_rlock(s); - len+=sprintf(buffer+len,"%p: %08X %08X %08X %04X %02X %5ld", + len+=sprintf(buffer+len,"%p: %08X %08X %08X %04X %02X %5lu", s, atomic_read(&s->refcnt), 0, diff -Nru a/sound/oss/es1370.c b/sound/oss/es1370.c --- a/sound/oss/es1370.c Tue Mar 12 13:58:15 2002 +++ b/sound/oss/es1370.c Tue Mar 12 13:58:15 2002 @@ -374,6 +374,10 @@ unsigned subdivision; } dma_dac1, dma_dac2, dma_adc; + /* The following buffer is used to point the phantom write channel to. */ + unsigned char *bugbuf_cpu; + dma_addr_t bugbuf_dma; + /* midi stuff */ struct { unsigned ird, iwr, icnt; @@ -392,13 +396,6 @@ static LIST_HEAD(devs); -/* - * The following buffer is used to point the phantom write channel to, - * so that it cannot wreak havoc. 
The attribute makes sure it doesn't - * cross a page boundary and ensures dword alignment for the DMA engine - */ -static unsigned char bugbuf[16] __attribute__ ((aligned (16))); - /* --------------------------------------------------------------------- */ static inline unsigned ld2(unsigned int x) @@ -2653,8 +2650,9 @@ outl(s->ctrl, s->io+ES1370_REG_CONTROL); outl(s->sctrl, s->io+ES1370_REG_SERIAL_CONTROL); /* point phantom write channel to "bugbuf" */ + s->bugbuf_cpu = pci_alloc_consistent(pcidev,16,&s->bugbuf_dma); outl((ES1370_REG_PHANTOM_FRAMEADR >> 8) & 15, s->io+ES1370_REG_MEMPAGE); - outl(virt_to_bus(bugbuf), s->io+(ES1370_REG_PHANTOM_FRAMEADR & 0xff)); + outl(s->bugbuf_dma, s->io+(ES1370_REG_PHANTOM_FRAMEADR & 0xff)); outl(0, s->io+(ES1370_REG_PHANTOM_FRAMECNT & 0xff)); pci_set_master(pcidev); /* enable bus mastering */ wrcodec(s, 0x16, 3); /* no RST, PD */ @@ -2721,6 +2719,7 @@ unregister_sound_mixer(s->dev_mixer); unregister_sound_dsp(s->dev_dac); unregister_sound_midi(s->dev_midi); + pci_free_consistent(dev, 16, s->bugbuf_cpu, s->bugbuf_dma); kfree(s); pci_set_drvdata(dev, NULL); }
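
The es1370.c change above drops the static 16-byte bugbuf[] that was handed to the chip through virt_to_bus() and instead allocates the phantom-write buffer with pci_alloc_consistent(), programs the returned DMA handle into the device, and releases it with pci_free_consistent() when the card goes away. A minimal sketch of that allocation pattern, assuming hypothetical phantom_* names and with error handling cut to the bare minimum:

	#include <linux/pci.h>
	#include <linux/errno.h>

	static unsigned char *phantom_cpu;	/* CPU-visible address */
	static dma_addr_t phantom_dma;		/* bus address the device DMAs to */

	/* Allocate a small consistent DMA buffer for a PCI device. */
	static int phantom_buf_alloc(struct pci_dev *pcidev)
	{
		phantom_cpu = pci_alloc_consistent(pcidev, 16, &phantom_dma);
		if (phantom_cpu == NULL)
			return -ENOMEM;
		/* write phantom_dma, not virt_to_bus(phantom_cpu), to the chip */
		return 0;
	}

	static void phantom_buf_free(struct pci_dev *pcidev)
	{
		pci_free_consistent(pcidev, 16, phantom_cpu, phantom_dma);
	}

Unlike taking virt_to_bus() of a static array, the consistent allocation returns an address the device's DMA engine is guaranteed to be able to reach, including on platforms where bus addresses go through an IOMMU.
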